WorldWideScience

Sample records for preparation running monte

  1. ATLAS Data Preparation in Run 2

    CERN Document Server

    Laycock, Paul; The ATLAS collaboration

    2016-01-01

    In this presentation, the data preparation workflows for Run 2 are presented. Online data quality uses a new hybrid software release that incorporates the latest offline data quality monitoring software for the online environment. This is used to provide fast feedback in the control room during a data acquisition (DAQ) run, via a histogram-based monitoring framework as well as the online Event Display. Data are sent to several streams for offline processing at the dedicated Tier-0 computing facility, including dedicated calibration streams and an "express" physics stream containing approximately 2% of the main physics stream. This express stream is processed as data arrives, allowing a first look at the offline data quality within hours of the end of a run. A prompt calibration loop starts once an ATLAS DAQ run ends, nominally defining a 48-hour period in which calibrations and alignments can be derived using the dedicated calibration and express streams. The bulk processing of the main physics stream starts on expi...

  2. ATLAS data preparation in run 2

    CERN Document Server

    The ATLAS collaboration; Chelstowska, Magda Anna; Cuhadar Donszelmann, Tulay; Guenther, Jaroslav; Nairz, Armin Michael; Nicolaidou, Rosy; Shabalina, Elizaveta; Strandberg, Jonas; Taffard, Anyes; Wang, Song-Ming

    2017-01-01

    In this contribution, the data preparation workflows for Run 2 are presented. The challenges posed by the excellent performance and high live time fraction of the LHC are discussed, and the solutions implemented by ATLAS are described. The prompt calibration loop procedures are described and examples are given. Several levels of data quality assessment are used to quickly spot problems in the control room and prevent data loss, and to provide the final selection used for physics analysis. Finally the data quality efficiency for physics analysis is shown.

  3. Running the EGS4 Monte Carlo code with Fortran 90 on a pentium computer

    International Nuclear Information System (INIS)

    Caon, M.; Bibbo, G.; Pattison, J.

    1996-01-01

    The possibility of running the EGS4 Monte Carlo radiation transport code system for medical radiation modelling on a microcomputer is discussed. This has previously been done using a Fortran 77 compiler with a 32-bit memory addressing system running under a memory-extender operating system; in addition, a virtual memory manager such as QEMM386 was required. The code has also run successfully on a SUN Sparcstation2. In 1995, faster Pentium-based microcomputers became available, as did the Windows 95 operating system, which can handle 32-bit programs and multitasking and provides its own virtual memory management. The paper describes how, with simple modifications to the batch files, it was possible to run EGS4 on a Pentium under Fortran 90 and Windows 95. This combination of software and hardware is cheaper and faster than running it on a SUN Sparcstation2. 8 refs., 1 tab

  4. Running the EGS4 Monte Carlo code with Fortran 90 on a pentium computer

    Energy Technology Data Exchange (ETDEWEB)

    Caon, M. [Flinders Univ. of South Australia, Bedford Park, SA (Australia); University of South Australia, SA (Australia)]; Bibbo, G. [Women's and Children's Hospital, SA (Australia)]; Pattison, J. [University of South Australia, SA (Australia)]

    1996-09-01

    The possibility of running the EGS4 Monte Carlo radiation transport code system for medical radiation modelling on a microcomputer is discussed. This has previously been done using a Fortran 77 compiler with a 32-bit memory addressing system running under a memory-extender operating system; in addition, a virtual memory manager such as QEMM386 was required. The code has also run successfully on a SUN Sparcstation2. In 1995, faster Pentium-based microcomputers became available, as did the Windows 95 operating system, which can handle 32-bit programs and multitasking and provides its own virtual memory management. The paper describes how, with simple modifications to the batch files, it was possible to run EGS4 on a Pentium under Fortran 90 and Windows 95. This combination of software and hardware is cheaper and faster than running it on a SUN Sparcstation2. 8 refs., 1 tab.

  5. Massively parallel Monte Carlo. Experiences running nuclear simulations on a large condor cluster

    International Nuclear Information System (INIS)

    Tickner, James; O'Dwyer, Joel; Roach, Greg; Uher, Josef; Hitchen, Greg

    2010-01-01

    The trivially parallel nature of Monte Carlo (MC) simulations makes them ideally suited to running on a distributed, heterogeneous computing environment. We report on the setup and operation of a large, cycle-harvesting Condor computer cluster, used to run MC simulations of nuclear instruments ('jobs') on approximately 4,500 desktop PCs. Successful operation must balance the competing goals of maximizing the availability of machines for running jobs whilst minimizing the impact on users' PC performance. This requires classification of jobs according to anticipated run-time and priority, and careful optimization of the parameters used to control job allocation to host machines. To maximize use of a large Condor cluster, we have created a powerful suite of tools to handle job submission and analysis, as the manual creation, submission and evaluation of large numbers (hundreds to thousands) of jobs would be too arduous. We describe some of the key aspects of this suite, which has been interfaced to the well-known MCNP and EGSnrc nuclear codes and our in-house PHOTON optical MC code. We report on our practical experiences of operating our Condor cluster and present examples of several large-scale instrument design problems that have been solved using this tool. (author)
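
    The authors' job-management suite is not public, but the shape of the bulk-submission step can be sketched with standard HTCondor primitives. The following Python sketch is illustrative only: the wrapper script run_mcnp.sh, the directory layout and the priority value are assumptions, not the authors' actual tooling.

        # Minimal sketch of bulk job generation for a Condor/HTCondor cluster.
        # Illustrative only: run_mcnp.sh and the directory layout are hypothetical.
        import os
        import subprocess

        def write_submit_file(n_jobs, run_dir="runs", priority=0):
            """Write one HTCondor submit description that queues n_jobs
            instances, each running in its own subdirectory."""
            for i in range(n_jobs):
                os.makedirs(os.path.join(run_dir, "job_%d" % i), exist_ok=True)
            submit = os.path.join(run_dir, "mc_jobs.sub")
            with open(submit, "w") as f:
                f.write("universe   = vanilla\n")
                f.write("executable = run_mcnp.sh\n")    # hypothetical wrapper script
                f.write("initialdir = %s/job_$(Process)\n" % run_dir)
                f.write("priority   = %d\n" % priority)  # supports job classification
                f.write("queue %d\n" % n_jobs)
            return submit

        # Submit, e.g., 1000 independent MC jobs in one call (requires HTCondor).
        submit_file = write_submit_file(n_jobs=1000, priority=5)
        subprocess.run(["condor_submit", submit_file], check=True)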

  6. Lower Three Runs Remediation Safety Preparation Strategy - 13318

    International Nuclear Information System (INIS)

    Mackay, Alexander; Fryar, Scotty; Doane, Alan

    2013-01-01

    The Savannah River Site (SRS) is a 310-square-mile United States Department of Energy (US DOE) nuclear facility located along the Savannah River near Aiken, South Carolina that contains six primary stream/river systems. The Lower Three Runs Stream (LTR), one of the primary streams within the site, is located in the southeast portion of the Savannah River Site. It is a large blackwater stream system that originates in the northeast portion of SRS and follows a southerly direction before it enters the Savannah River. During reactor operations, secondary reactor cooling water, storm sewer discharges and miscellaneous wastewater were discharged into the stream, contaminating a 20-mile stretch of Lower Three Runs Stream along which the US DOE property narrows to a limited buffer around the stream and floodplain. Based on data collected during 2009 and 2010 under American Recovery and Reinvestment Act funding, the stream was determined to be contaminated with cesium-137 at levels that exceeded acceptable risk-based limits. In agreement with the Environmental Protection Agency and the South Carolina Department of Health and Environmental Control, three areas were identified for remediation [1] (SRNS April 2012). A comprehensive safety preparation strategy was developed for safe execution of the LTR remediation project. Contract incentives for safety encouraged the contractor to perform a complete evaluation of the work and develop an implementation plan to perform it. Safety coverage was controlled to ensure that all work was observed and assessed by one person per work area within the project; this was necessary because the fence work and the three transects being worked were spread over approximately 20 miles. Contractor Management field observations were performed along with DOE assessments to ensure contractor focus on safe performance of the work. Dedicated ambulance coverage for remote work activities was provided. This effort was augmented with

  7. Lower Three Runs Remediation Safety Preparation Strategy - 13318

    Energy Technology Data Exchange (ETDEWEB)

    Mackay, Alexander; Fryar, Scotty; Doane, Alan [United States Department of Energy, Building 730-B, Aiken, SC 29808 (United States)

    2013-07-01

    The Savannah River Site (SRS) is a 310-square-mile United States Department of Energy (US DOE) nuclear facility located along the Savannah River near Aiken, South Carolina that contains six primary stream/river systems. The Lower Three Runs Stream (LTR), one of the primary streams within the site, is located in the southeast portion of the Savannah River Site. It is a large blackwater stream system that originates in the northeast portion of SRS and follows a southerly direction before it enters the Savannah River. During reactor operations, secondary reactor cooling water, storm sewer discharges and miscellaneous wastewater were discharged into the stream, contaminating a 20-mile stretch of Lower Three Runs Stream along which the US DOE property narrows to a limited buffer around the stream and floodplain. Based on data collected during 2009 and 2010 under American Recovery and Reinvestment Act funding, the stream was determined to be contaminated with cesium-137 at levels that exceeded acceptable risk-based limits. In agreement with the Environmental Protection Agency and the South Carolina Department of Health and Environmental Control, three areas were identified for remediation [1] (SRNS April 2012). A comprehensive safety preparation strategy was developed for safe execution of the LTR remediation project. Contract incentives for safety encouraged the contractor to perform a complete evaluation of the work and develop an implementation plan to perform it. Safety coverage was controlled to ensure that all work was observed and assessed by one person per work area within the project; this was necessary because the fence work and the three transects being worked were spread over approximately 20 miles. Contractor Management field observations were performed along with DOE assessments to ensure contractor focus on safe performance of the work. Dedicated ambulance coverage for remote work activities was provided. This effort was augmented with

  8. One-run Monte Carlo calculation of effective delayed neutron fraction and area-ratio reactivity

    Energy Technology Data Exchange (ETDEWEB)

    Zhaopeng Zhong; Talamo, Alberto; Gohar, Yousry, E-mail: zzhong@anl.gov, E-mail: alby@anl.gov, E-mail: gohar@anl.gov [Nuclear Engineering Division, Argonne National Laboratory, IL (United States)

    2011-07-01

    The Monte Carlo code MCNPX has been utilized to calculate the effective delayed neutron fraction and reactivity by using the area-ratio method. The effective delayed neutron fraction β_eff has been calculated with the fission probability method proposed by Meulekamp and van der Marck. MCNPX was used to calculate separately the fission probability of the delayed and the prompt neutrons by using the TALLYX user subroutine of MCNPX. In this way, β_eff was obtained from one criticality (k-code) calculation without performing an adjoint calculation. The traditional k-ratio method requires two criticality calculations to calculate β_eff, while this approach utilizes only one MCNPX criticality calculation. Therefore, the approach described here is referred to as a one-run method. In subcritical systems driven by a pulsed neutron source, the area-ratio method is used to calculate reactivity (in dollar units) as the ratio between the prompt and delayed areas. These areas represent the integral of the reaction rates induced by the prompt and delayed neutrons during the pulse period. Traditionally, application of the area-ratio method requires two separate fixed-source MCNPX simulations: one with delayed neutrons and the other without. The number of source particles in these two simulations must be extremely high in order to obtain accurate results with low statistical errors because the values of the total and prompt areas are very close. Consequently, this approach is time consuming and suffers from the statistical errors of the two simulations. The present paper introduces a more efficient method for estimating the reactivity calculated with the area method by taking advantage of the TALLYX user subroutine of MCNPX. This subroutine has been developed for separately scoring the reaction rates caused by the delayed and the prompt neutrons during a single simulation. Therefore the method is referred to as a one-run calculation. These methodologies have

  9. One-run Monte Carlo calculation of effective delayed neutron fraction and area-ratio reactivity

    International Nuclear Information System (INIS)

    Zhaopeng Zhong; Talamo, Alberto; Gohar, Yousry

    2011-01-01

    The Monte Carlo code MCNPX has been utilized to calculate the effective delayed neutron fraction and reactivity by using the area-ratio method. The effective delayed neutron fraction β_eff has been calculated with the fission probability method proposed by Meulekamp and van der Marck. MCNPX was used to calculate separately the fission probability of the delayed and the prompt neutrons by using the TALLYX user subroutine of MCNPX. In this way, β_eff was obtained from one criticality (k-code) calculation without performing an adjoint calculation. The traditional k-ratio method requires two criticality calculations to calculate β_eff, while this approach utilizes only one MCNPX criticality calculation. Therefore, the approach described here is referred to as a one-run method. In subcritical systems driven by a pulsed neutron source, the area-ratio method is used to calculate reactivity (in dollar units) as the ratio between the prompt and delayed areas. These areas represent the integral of the reaction rates induced by the prompt and delayed neutrons during the pulse period. Traditionally, application of the area-ratio method requires two separate fixed-source MCNPX simulations: one with delayed neutrons and the other without. The number of source particles in these two simulations must be extremely high in order to obtain accurate results with low statistical errors because the values of the total and prompt areas are very close. Consequently, this approach is time consuming and suffers from the statistical errors of the two simulations. The present paper introduces a more efficient method for estimating the reactivity calculated with the area method by taking advantage of the TALLYX user subroutine of MCNPX. This subroutine has been developed for separately scoring the reaction rates caused by the delayed and the prompt neutrons during a single simulation. Therefore the method is referred to as a one-run calculation. These methodologies have been
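
    Both records describe the same area-ratio (Sjostrand) idea: over one pulse period, the reactivity in dollars is the negative ratio of the prompt area to the delayed area of the time-integrated detector response. A toy numerical sketch, assuming the prompt and total time histograms have already been tallied (for instance by a TALLYX-style user routine); the histograms below are made up:

        # Toy illustration of the area-ratio method: rho($) = -A_prompt / A_delayed.
        # Sign conventions and normalisation vary between references.
        import numpy as np

        def reactivity_dollars(total_counts, prompt_counts):
            """Reactivity in dollars from time-binned detector responses
            integrated over one source pulse period."""
            a_prompt = np.sum(prompt_counts)
            a_delayed = np.sum(total_counts) - a_prompt  # delayed = total - prompt
            return -a_prompt / a_delayed

        # Made-up histograms: a decaying prompt burst over a nearly flat delayed level.
        t = np.linspace(0.0, 1.0, 200)          # one pulse period, arbitrary units
        prompt = 1e6 * np.exp(-t / 0.02)        # prompt die-away
        delayed = np.full_like(t, 50.0)         # slowly varying delayed response
        print(reactivity_dollars(prompt + delayed, prompt))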

  10. LHCb : First years of running for the LHCb calorimeter system and preparation for run 2

    CERN Multimedia

    Chefdeville, Maximilien

    2015-01-01

    The LHCb experiment is dedicated to precision measurements of CP violation and rare decays of B hadrons at the Large Hadron Collider (LHC) at CERN (Geneva). It comprises a calorimeter system composed of four subdetectors: a Scintillating Pad Detector (SPD) and a Pre-Shower detector (PS) in front of an electromagnetic calorimeter (ECAL), which is followed by a hadron calorimeter (HCAL). They are used to select high transverse-energy hadron, electron and photon candidates for the first trigger level, and they provide the identification of electrons, photons and hadrons as well as the measurement of their energies and positions. The calorimeter was pre-calibrated before its installation in the pit. The calibration techniques were tested with data taken in 2010 and used regularly during Run 1. For Run 2, new calibration methods have been devised to follow and correct online the calorimeter response. The design and construction characteristics of the LHCb calorimeter will be recalled. Strategies for...

  11. ALICE installs new hardware in preparation for the 2012 run

    CERN Multimedia

    CERN Bulletin and ALICE Matters

    2012-01-01

    2011 was a fantastic year for the heavy-ion run at ALICE despite unprecedented challenges and difficult conditions. The volume of data collected is at least one order of magnitude greater than in 2010. Thanks to a planned upgrade of two subdetectors during the 2011/2012 winter shutdown and a reorganisation of ALICE’s Physics Working Groups that should allow them to better deal with the greater challenges imposed by the LHC, the collaboration is confident that the 2012 run will allow ALICE to extend its physics reach and improve its performance. Photograph of ALICE taken by Antonio Saba during this year's winter shutdown. The annual winter shutdown has been a very intense period for the ALICE collaboration. In conjunction with the general maintenance, modifications and tests of the experiment, two major projects – the installation of 3 supermodules of the Transition Radiation Detector (TRD) and 2 supermodules of the Electromagnetic Calorimeter (EMCal) – hav...

  12. Spent Fuel Drying System Test Results (Dry-Run in Preparation for Run 8)

    International Nuclear Information System (INIS)

    Oliver, B.M.; Klinger, G.S.; Abrefah, J.; Marschman, S.C.; MacFarlan, P.J.; Ritter, G.A.

    1999-01-01

    The water-filled K-Basins in the Hanford 100 Area have been used to store N-Reactor spent nuclear fuel (SNF) since the 1970s. Because some leaks in the basin have been detected and some of the fuel is breached due to handling damage and corrosion, efforts are underway to remove the fuel elements from wet storage. An Integrated Process Strategy (IPS) has been developed to package, dry, transport, and store these metallic uranium fuel elements in an interim storage facility on the Hanford Site (WHC 1995). Information required to support the development of the drying processes, and the required safety analyses, is being obtained from characterization tests conducted on fuel elements removed from the K-Basins. A series of whole element drying tests (reported in separate documents, see Section 7.0) has been conducted by Pacific Northwest National Laboratory (PNNL) on several intact and damaged fuel elements recovered from both the K-East and K-West Basins. This report documents the results of a test "dry-run" conducted prior to the eighth and last of those tests, which was conducted on an N-Reactor outer fuel element removed from K-West canister 6513U. The system used for the dry-run test was the Whole Element Furnace Testing System, described in Section 2.0, located in the Postirradiation Testing Laboratory (PTL, 327 Building). The test conditions and methodologies are given in Section 3.0. The experimental results are provided in Section 4.0 and discussed in Section 5.0

  13. RMCgui: a new interface for the workflow associated with running Reverse Monte Carlo simulations

    International Nuclear Information System (INIS)

    Dove, Martin T; Rigg, Gary

    2013-01-01

    The Reverse Monte Carlo method enables construction and refinement of large atomic models of materials that are tuned to give the best agreement with experimental data, such as neutron and x-ray total scattering data, capturing both the average structure and the fluctuations. The practical drawback of the current implementations of this approach is the relatively complex workflow required, from setting up the configuration and simulation details through to checking the final outputs and analysing the resultant configurations. In order to make this workflow more accessible to users, we have developed an end-to-end workflow wrapped within a graphical user interface, RMCgui, designed to make the Reverse Monte Carlo method more widely accessible. (paper)
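
    For readers unfamiliar with the algorithm behind this workflow, the core of a Reverse Monte Carlo refinement is a simple acceptance test. The sketch below is the generic textbook form, not the RMCgui implementation: displace one random atom, recompute the goodness-of-fit chi-squared against the experimental curve, keep improving moves, and accept worsening moves with probability exp(-delta_chi2/2).

        # Generic Reverse Monte Carlo step (textbook form; not the RMCgui code).
        import numpy as np

        rng = np.random.default_rng(42)

        def chi2(model_curve, data, sigma):
            """Goodness of fit between a computed curve and experimental data."""
            return np.sum(((model_curve - data) / sigma) ** 2)

        def rmc_step(positions, data, sigma, calc_curve, max_disp=0.1):
            """Displace one random atom; accept or reject per the RMC criterion.
            calc_curve maps an (N, 3) position array to a model curve."""
            old = chi2(calc_curve(positions), data, sigma)
            trial = positions.copy()
            i = rng.integers(len(trial))
            trial[i] += rng.uniform(-max_disp, max_disp, size=3)
            new = chi2(calc_curve(trial), data, sigma)
            if new < old or rng.random() < np.exp(-(new - old) / 2.0):
                return trial, new    # move accepted
            return positions, old    # move rejected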

  14. Validation of Monte Carlo event generators in the ATLAS Collaboration for LHC Run 2

    CERN Document Server

    The ATLAS collaboration

    2016-01-01

    This note reviews the main steps followed by the ATLAS Collaboration to validate the properties of particle-level simulated events from Monte Carlo event generators in order to ensure the correctness of all event generator configurations and production samples used in physics analyses. A central validation procedure is adopted which permits the continual validation of the functionality and the performance of the ATLAS event simulation infrastructure. Revisions and updates of the Monte Carlo event generators are also monitored. The methodology behind the validation and tools developed for that purpose, as well as various usage cases, are presented. The strategy has proven to play an essential role in identifying possible problems or unwanted features within a restricted timescale, verifying their origin and pointing to possible bug fixes before full-scale processing is initiated.

  15. Monte Carlo Generators for the Production of a $W$ or $Z/\gamma^*$ Boson in Association with Jets at ATLAS in Run 2

    CERN Document Server

    The ATLAS collaboration

    2016-01-01

    This note documents the Monte Carlo generators used by the ATLAS collaboration at the start of Run 2 for processes where a $W$ or $Z/\gamma^*$ boson is produced in association with jets. The available event generators are briefly described and comparisons are made with ATLAS measurements of $W$ or $Z/\gamma^*$+jets performed with Run 1 data, collected at the centre-of-mass energy of 7 TeV. The model predictions are then compared at the Run 2 centre-of-mass energy of 13 TeV. A comparison is also made with an early Run 2 ATLAS $Z/\gamma^*$+jets data measurement. Investigations into tuning the parameters of the models and evaluating systematic uncertainties on the Monte Carlo predictions are also presented.

  16. Managing the CMS Data and Monte Carlo Processing during LHC Run 2

    Science.gov (United States)

    Wissing, C.; CMS Collaboration

    2017-10-01

    In order to cope with the challenges expected during LHC Run 2, CMS put in place a number of enhancements to the main software packages and the tools used for centrally managed processing. In this presentation we highlight the improvements that allow CMS to deal with the increased trigger output rate, the increased pileup and the evolution in computing technology. The overall system aims at high flexibility, improved operational efficiency and largely automated procedures. The tight coupling of workflow classes to types of sites has been drastically relaxed. Reliable and high-performing networking between most of the computing sites and the successful deployment of a data federation allow the execution of workflows using remote data access. That required the development of a largely automated system to assign workflows and to handle the necessary pre-staging of data. Another step towards flexibility has been the introduction of one large global HTCondor pool for all types of processing workflows and analysis jobs. Besides classical Grid resources, some opportunistic resources as well as Cloud resources have been integrated into that pool, which gives access to more than 200k CPU cores.

  17. Validation of uncertainty of weighing in the preparation of radionuclide standards by Monte Carlo Method

    International Nuclear Information System (INIS)

    Cacais, F.L.; Delgado, J.U.; Loayza, V.M.

    2016-01-01

    In preparing solutions for the production of radionuclide metrology standards, it is necessary to measure the quantity activity by mass. The gravimetric method by elimination is applied to perform the weighings with smaller uncertainties. In this work, the uncertainty calculation approach implemented by Lourenco and Bobin according to the ISO GUM for the elimination method is validated by the Monte Carlo method. The results obtained by the two uncertainty calculation methods were consistent, indicating that the conditions for the application of the ISO GUM in the preparation of radioactive standards were fulfilled. (author)
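
    As background, validating a GUM uncertainty budget by the Monte Carlo method (in the spirit of GUM Supplement 1) means propagating distributions numerically and checking that the result agrees with the analytic propagation. A minimal sketch for a simple mass-difference model m = m1 - m2, with made-up numbers rather than the paper's actual weighing model:

        # Monte Carlo check of a GUM uncertainty estimate for m = m1 - m2.
        # Illustrative model and numbers only.
        import numpy as np

        rng = np.random.default_rng(0)
        N = 1_000_000

        m1, u1 = 10.000123, 2e-6   # grams; standard uncertainties (made up)
        m2, u2 = 9.000045, 2e-6

        # GUM (analytic) propagation for a linear model: u^2 = u1^2 + u2^2.
        u_gum = np.hypot(u1, u2)

        # Monte Carlo propagation: sample the inputs, evaluate the model.
        samples = rng.normal(m1, u1, N) - rng.normal(m2, u2, N)
        u_mc = samples.std(ddof=1)

        print("u_GUM = %.3e g, u_MC = %.3e g" % (u_gum, u_mc))  # should agree closely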

  18. CMS operations for Run II preparation and commissioning of the offline infrastructure

    CERN Document Server

    Cerminara, Gianluca

    2016-01-01

    The restart of the LHC coincided with an intense period of activity for the CMS experiment. Both at the beginning of Run II in 2015 and at the restart of operations in 2016, the collaboration was engaged in an extensive re-commissioning of CMS data-taking operations. After the long stop, the detector was fully aligned and calibrated. Data streams were redesigned to fit the priorities dictated by the physics program for 2015 and 2016. New reconstruction software (both online and offline) was commissioned with early collisions and further developed during the year. A massive campaign of Monte Carlo production was launched to assist physics analyses. This presentation reviews the main events of this commissioning journey and describes the status of CMS physics performance for 2016.

  19. Incidence and risk factors of running-related injuries during preparation for a 4-mile recreational running event

    NARCIS (Netherlands)

    Buist, I.; Bredeweg, S. W.; Bessem, B.; van Mechelen, W.; Lemmink, K. A. P. M.; Diercks, R. L.

    Objective: In this study, the incidence and the sex-specific predictors of running-related injury (RRI) among a group of recreational runners training for a 4-mile running event were determined and identified, respectively. Design: Prospective cohort study. Methods: Several potential risk factors were

  20. The PDF4LHC report on PDFs and LHC data: results from Run I and preparation for Run II

    International Nuclear Information System (INIS)

    Rojo, Juan; Accardi, Alberto; Ball, Richard D; Cooper-Sarkar, Amanda; Gwenlan, Claire; Roeck, Albert de; Mangano, Michelangelo; Farry, Stephen; Ferrando, James; Forte, Stefano; Gao, Jun; Harland-Lang, Lucian; Huston, Joey; Glazov, Alexander; Lipka, Katerina; Gouzevitch, Maxime; Lisovyi, Mykhailo; Nadolsky, Pavel

    2015-01-01

    The accurate determination of the parton distribution functions (PDFs) of the proton is an essential ingredient of the Large Hadron Collider (LHC) program. PDF uncertainties impact a wide range of processes, from Higgs boson characterization and precision Standard Model measurements to new physics searches. A major recent development in modern PDF analyses has been to exploit the wealth of new information contained in precision measurements from the LHC Run I, as well as progress in tools and methods to include these data in PDF fits. In this report we summarize the information that PDF-sensitive measurements at the LHC have provided so far, and review the prospects for further constraining PDFs with data from the recently started Run II. This document aims to provide useful input to the LHC collaborations to prioritize their PDF-sensitive measurements at Run II, as well as a comprehensive reference for the PDF-fitting collaborations. (topical review)

  1. The PDF4LHC report on PDFs and LHC data. Results from Run I and preparation for Run II

    International Nuclear Information System (INIS)

    Rojo, Juan; Ball, Richard D.; CERN, Geneva

    2015-07-01

    The accurate determination of the Parton Distribution Functions (PDFs) of the proton is an essential ingredient of the Large Hadron Collider (LHC) program. PDF uncertainties impact a wide range of processes, from Higgs boson characterisation and precision Standard Model measurements to New Physics searches. A major recent development in modern PDF analyses has been to exploit the wealth of new information contained in precision measurements from the LHC Run I, as well as progress in tools and methods to include these data in PDF fits. In this report we summarise the information that PDF-sensitive measurements at the LHC have provided so far, and review the prospects for further constraining PDFs with data from the recently started Run II. This document aims to provide useful input to the LHC collaborations to prioritise their PDF-sensitive measurements at Run II, as well as a comprehensive reference for the PDF-fitting collaborations.

  2. The PDF4LHC report on PDFs and LHC data: Results from Run I and preparation for Run II

    CERN Document Server

    Rojo, Juan; Ball, Richard D; Cooper-Sarkar, Amanda; de Roeck, Albert; Farry, Stephen; Ferrando, James; Forte, Stefano; Gao, Jun; Harland-Lang, Lucian; Huston, Joey; Glazov, Alexander; Gouzevitch, Maxime; Gwenlan, Claire; Lipka, Katerina; Lisovyi, Mykhailo; Mangano, Michelangelo; Nadolsky, Pavel; Perrozzi, Luca; Placakyte, Ringaile; Radescu, Voica; Salam, Gavin P; Thorne, Robert

    2015-01-01

    The accurate determination of the Parton Distribution Functions (PDFs) of the proton is an essential ingredient of the Large Hadron Collider (LHC) program. PDF uncertainties impact a wide range of processes, from Higgs boson characterisation and precision Standard Model measurements to New Physics searches. A major recent development in modern PDF analyses has been to exploit the wealth of new information contained in precision measurements from the LHC Run I, as well as progress in tools and methods to include these data in PDF fits. In this report we summarise the information that PDF-sensitive measurements at the LHC have provided so far, and review the prospects for further constraining PDFs with data from the recently started Run II. This document aims to provide useful input to the LHC collaborations to prioritise their PDF-sensitive measurements at Run II, as well as a comprehensive reference for the PDF-fitting collaborations.

  3. Using Static Percentiles of AE9/AP9 to Approximate Dynamic Monte Carlo Runs for Radiation Analysis of Spiral Transfer Orbits

    Science.gov (United States)

    Kwan, Betty P.; O'Brien, T. Paul

    2015-06-01

    The Aerospace Corporation performed a study to determine whether static percentiles of AE9/AP9 can be used to approximate dynamic Monte Carlo runs for radiation analysis of spiral transfer orbits. Solar panel degradation is a major concern for solar-electric propulsion because solar-electric propulsion depends on the power output of the solar panel. Different spiral trajectories have different radiation environments that could lead to solar panel degradation. Because the spiral transfer orbits only last weeks to months, an average environment does not adequately address the possible transient enhancements of the radiation environment that must be accounted for in optimizing the transfer orbit trajectory. Therefore, to optimize the trajectory, an ensemble of Monte Carlo simulations of AE9/AP9 would normally be run for every spiral trajectory to determine the 95th percentile radiation environment. To avoid performing lengthy Monte Carlo dynamic simulations for every candidate spiral trajectory in the optimization, we found a static percentile that would be an accurate representation of the full Monte Carlo simulation for a representative set of spiral trajectories. For 3 LEO to GEO and 1 LEO to MEO trajectories, a static 90th percentile AP9 is a good approximation of the 95th percentile fluence with dynamics for 4-10 MeV protons, and a static 80th percentile AE9 is a good approximation of the 95th percentile fluence with dynamics for 0.5-2 MeV electrons. While the specific percentiles chosen cannot necessarily be used in general for other orbit trade studies, the concept of determining a static percentile as a quick approximation to a full Monte Carlo ensemble of simulations can likely be applied to other orbit trade studies. We expect the static percentile to depend on the region of space traversed, the mission duration, and the radiation effect considered.
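
    The matching step described above reduces to a simple search once the fluences are in hand: compute the 95th percentile of the Monte Carlo ensemble and pick the static percentile whose fluence comes closest. A sketch with placeholder arrays (the real fluences would come from AE9/AP9 runs, which are external to this snippet):

        # Match a static percentile to the 95th-percentile Monte Carlo fluence.
        # Placeholder data; AE9/AP9 itself is run externally.
        import numpy as np

        def best_static_percentile(mc_fluences, static_fluence_by_pct):
            """mc_fluences: mission fluences from the Monte Carlo scenario runs.
            static_fluence_by_pct: {percentile: mission fluence} from static runs.
            Returns the static percentile closest to the MC 95th percentile."""
            target = np.percentile(mc_fluences, 95)
            return min(static_fluence_by_pct,
                       key=lambda p: abs(static_fluence_by_pct[p] - target))

        rng = np.random.default_rng(1)
        mc = rng.lognormal(mean=0.0, sigma=0.5, size=40)    # 40 fake MC scenarios
        static = {50: 1.0, 80: 1.5, 90: 1.9, 95: 2.3}       # made-up fluences
        print(best_static_percentile(mc, static))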

  4. Effects of Cycling vs. Running Training on Endurance Performance in Preparation for Inline Speed Skating.

    Science.gov (United States)

    Stangier, Carolin; Abel, Thomas; Hesse, Clemens; Claßen, Stephanie; Mierau, Julia; Hollmann, Wildor; Strüder, Heiko K

    2016-06-01

    Winter weather conditions restrict regular sport-specific endurance training in inline speed skating. As a result, this study was designed to compare the effects of cycling and running training programs on inline speed skaters' endurance performance. Sixteen (8 men, 8 women) high-level athletes (mean ± SD 24 ± 8 years) were randomly assigned to 1 of 2 groups (running and cycling). Both groups trained twice a week for 8 weeks, one group on a treadmill and the other on a cycle ergometer. Training intensity and duration were individually calculated (maximal fat oxidation: ∼52% of V̇O2peak; 500 kcal per session). Before and after the training intervention, all athletes performed an incremental specific (inline speed skating) step test and 1 nonspecific (cycling or running) step test according to group affiliation. In addition to blood lactate concentration, oxygen uptake (V̇O2), ventilatory equivalent (VE/V̇O2), respiratory exchange ratio (RER), and heart rate were measured. The specific posttest revealed significantly increased absolute V̇O2peak values (2.9 ± 0.4, 3.4 ± 0.7, p = 0.01) and submaximal V̇O2 values (p ≤ 0.01). VE/V̇O2 and RER significantly decreased at maximal (46.6 ± 6.6, 38.5 ± 3.4, p = 0.005; 1.1 ± 0.03, 1.0 ± 0.04, p = 0.001) and submaximal intensities (p ≤ 0.04). None of the analyses revealed a significant group effect (p ≥ 0.15). The results indicate that both cycling and running exercise at ∼52% of V̇O2peak had a positive effect on the athletes' endurance performance. The increased submaximal V̇O2 values indicate a decline in athletes' inline speed skating technique. Therefore, athletes would benefit from a focus on technique training in the subsequent period.

  5. Preparing for the gypsy moth - design and analysis for stand management Dorr Run, Wayne National Forest

    Science.gov (United States)

    J. J. Colbert; Phil Perry; Bradley Onken

    1997-01-01

    As the advancing front of the gypsy moth continues its spread throughout Ohio, silviculturists on the Wayne National Forest are preparing themselves for potential gypsy moth outbreaks in the coming decade. Through a cooperative effort between the Northeastern Forest Experiment Station and Northeastern Area, Forest Health Protection, the Wayne National Forest, Ohio, is...

  6. GridPP - Preparing for LHC Run 2 and the Wider Context

    Science.gov (United States)

    Coles, Jeremy

    2015-12-01

    This paper elaborates upon the operational status and directions within the UK Computing for Particle Physics (GridPP) project as it approaches LHC Run 2. It details the pressures that have been gradually reshaping the deployed hardware and middleware environments at GridPP sites - from the increasing adoption of larger multicore nodes to the move towards alternative batch systems and cloud alternatives - as well as changes being driven by funding considerations. The paper highlights work being done with non-LHC communities and describes some of the early outcomes of adopting a generic DIRAC based job submission and management framework. The paper presents results from an analysis of how GridPP effort is distributed across various deployment and operations tasks and how this may be used to target further improvements in efficiency.

  7. SU-E-T-180: Fano Cavity Test of Proton Transport in Monte Carlo Codes Running On GPU and Xeon Phi

    International Nuclear Information System (INIS)

    Sterpin, E; Sorriaux, J; Souris, K; Lee, J; Vynckier, S; Schuemann, J; Paganetti, H; Jia, X; Jiang, S

    2014-01-01

    Purpose: In proton dose calculation, clinically compatible speeds are now achieved with Monte Carlo codes (MC) that combine 1) adequate simplifications in the physics of transport and 2) the use of hardware architectures enabling massive parallel computing (like GPUs). However, the uncertainties related to the transport algorithms used in these codes must be kept minimal. Such algorithms can be checked with the so-called “Fano cavity test”. We implemented the test in two codes that run on specific hardware: gPMC on an nVidia GPU and MCsquare on an Intel Xeon Phi (60 cores). Methods: gPMC and MCsquare are designed for transporting protons in CT geometries. Both codes use the method of fictitious interaction to sample the step-length for each transport step. The considered geometry is a water cavity (2×2×0.2 cm³, 0.001 g/cm³) in a 10×10×50 cm³ water phantom (1 g/cm³). CPE in the cavity is established by generating protons over the phantom volume with a uniform momentum (energy E) and a uniform intensity per unit mass I. Assuming no nuclear reactions and no generation of other secondaries, the computed cavity dose should equal IE, according to Fano's theorem. Both codes were tested for initial proton energies of 50, 100, and 200 MeV. Results: For all energies, gPMC and MCsquare are within 0.3% and 0.2% of the theoretical value IE, respectively (0.1% standard deviation). Single-precision computations (instead of double) increased the error by about 0.1% in MCsquare. Conclusion: Despite the simplifications in the physics of transport, both gPMC and MCsquare successfully pass the Fano test. This ensures optimal accuracy of the codes for clinical applications within the uncertainties on the underlying physical models. It also opens the path to other applications of these codes, like the simulation of ion chamber response

  8. SU-E-T-180: Fano Cavity Test of Proton Transport in Monte Carlo Codes Running On GPU and Xeon Phi

    Energy Technology Data Exchange (ETDEWEB)

    Sterpin, E; Sorriaux, J; Souris, K; Lee, J; Vynckier, S [Universite catholique de Louvain, Brussels, Brussels (Belgium); Schuemann, J; Paganetti, H [Massachusetts General Hospital, Boston, MA (United States); Jia, X; Jiang, S [The University of Texas Southwestern Medical Ctr, Dallas, TX (United States)

    2014-06-01

    Purpose: In proton dose calculation, clinically compatible speeds are now achieved with Monte Carlo codes (MC) that combine 1) adequate simplifications in the physics of transport and 2) the use of hardware architectures enabling massive parallel computing (like GPUs). However, the uncertainties related to the transport algorithms used in these codes must be kept minimal. Such algorithms can be checked with the so-called “Fano cavity test”. We implemented the test in two codes that run on specific hardware: gPMC on an nVidia GPU and MCsquare on an Intel Xeon Phi (60 cores). Methods: gPMC and MCsquare are designed for transporting protons in CT geometries. Both codes use the method of fictitious interaction to sample the step-length for each transport step. The considered geometry is a water cavity (2×2×0.2 cm³, 0.001 g/cm³) in a 10×10×50 cm³ water phantom (1 g/cm³). CPE in the cavity is established by generating protons over the phantom volume with a uniform momentum (energy E) and a uniform intensity per unit mass I. Assuming no nuclear reactions and no generation of other secondaries, the computed cavity dose should equal IE, according to Fano's theorem. Both codes were tested for initial proton energies of 50, 100, and 200 MeV. Results: For all energies, gPMC and MCsquare are within 0.3% and 0.2% of the theoretical value IE, respectively (0.1% standard deviation). Single-precision computations (instead of double) increased the error by about 0.1% in MCsquare. Conclusion: Despite the simplifications in the physics of transport, both gPMC and MCsquare successfully pass the Fano test. This ensures optimal accuracy of the codes for clinical applications within the uncertainties on the underlying physical models. It also opens the path to other applications of these codes, like the simulation of ion chamber response.
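
    The pass criterion of the test is easy to state numerically: under charged-particle equilibrium with a uniform source intensity per unit mass I and initial energy E, Fano's theorem predicts a cavity dose of exactly I*E. A minimal check, assuming the cavity dose has already been scored by the transport code (the numbers below are illustrative, not the gPMC/MCsquare results):

        # Fano-test check: computed cavity dose vs. the theoretical value I*E.
        # Illustrative numbers; real doses come from the Monte Carlo runs.
        def fano_check(dose_cavity, i_per_mass, energy, rel_tolerance=0.003):
            """Return the relative deviation from D = I*E and a pass flag."""
            d_theory = i_per_mass * energy
            deviation = (dose_cavity - d_theory) / d_theory
            return deviation, abs(deviation) <= rel_tolerance

        dev, ok = fano_check(dose_cavity=1.002e-10, i_per_mass=1.0e-12, energy=100.0)
        print("deviation = %+.2f%%, within 0.3%%: %s" % (100 * dev, ok))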

  9. Herwig: The Evolution of a Monte Carlo Simulation

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    Monte Carlo event generation has seen significant developments in the last 10 years, starting with the preparation for the LHC and continuing through the first LHC run. I will discuss the basic ideas behind Monte Carlo event generators and then go on to discuss these developments, focussing on the Herwig(++) event generator. I will conclude by presenting the current status of event generation, together with some results from the forthcoming new version of Herwig, Herwig 7.

  10. MCBIS2 - Monte-Carlo package for preparing and analyzing experiments with the BIS-2 spectrometer

    International Nuclear Information System (INIS)

    Nowak, H.; Nowak, V.-D.

    1978-01-01

    The MCBIS2 user package is designed to simulate the diffraction-dissociation reaction np → K⁰Λp and related background reactions. It was written at JINR for the BIS-2 spectrometer, which consists of multiwire proportional chambers, a multichannel Cherenkov counter and scintillator hodoscopes. The package is divided into three sections: initial, working and final; each section is a group of subprograms belonging to the corresponding GEANT stage. The generation of the full primary-vertex kinematics for the reaction np → K⁰Λp and the tracking in space are considered in detail. Problems of preparing the necessary information about the detectors are discussed

  11. Validation of uncertainty of weighing in the preparation of radionuclide standards by Monte Carlo Method; Validacao da incerteza de pesagens no preparo de padroes de radionuclideos por Metodo de Monte Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Cacais, F.L.; Delgado, J.U., E-mail: facacais@gmail.com [Instituto de Radioprotecao e Dosimetria (IRD/CNEN-RJ), Rio de Janeiro, RJ (Brazil); Loayza, V.M. [Instituto Nacional de Metrologia (INMETRO), Rio de Janeiro, RJ (Brazil). Qualidade e Tecnologia

    2016-07-01

    In preparing solutions for the production of radionuclide metrology standards, it is necessary to measure the quantity activity by mass. The gravimetric method by elimination is applied to perform the weighings with smaller uncertainties. In this work, the uncertainty calculation approach implemented by Lourenco and Bobin according to the ISO GUM for the elimination method is validated by the Monte Carlo method. The results obtained by the two uncertainty calculation methods were consistent, indicating that the conditions for the application of the ISO GUM in the preparation of radioactive standards were fulfilled. (author)

  12. Running the running

    OpenAIRE

    Cabass, Giovanni; Di Valentino, Eleonora; Melchiorri, Alessandro; Pajer, Enrico; Silk, Joseph

    2016-01-01

    We use the recent observations of Cosmic Microwave Background temperature and polarization anisotropies provided by the Planck satellite experiment to place constraints on the running $\alpha_\mathrm{s} = \mathrm{d}n_\mathrm{s} / \mathrm{d}\log k$ and the running of the running $\beta_\mathrm{s} = \mathrm{d}\alpha_\mathrm{s} / \mathrm{d}\log k$ of the spectral index $n_\mathrm{s}$ of primordial scalar fluctuations. We find $\alpha_\mathrm{s}=0.011\pm0.010$ and $\beta_\mathrm{s}=0.027\...
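
    For context, these constraints refer to the standard expansion of the primordial scalar power spectrum around a pivot scale $k_*$, in which $\alpha_\mathrm{s}$ and $\beta_\mathrm{s}$ enter as the first and second logarithmic runnings of the spectral index:

        % Standard parametrisation (as used in the Planck analyses);
        % k_* denotes the pivot scale.
        \mathcal{P}_\zeta(k) = A_\mathrm{s}
          \left(\frac{k}{k_*}\right)^{\, n_\mathrm{s} - 1
            + \frac{1}{2}\,\alpha_\mathrm{s} \ln(k/k_*)
            + \frac{1}{6}\,\beta_\mathrm{s} \ln^2(k/k_*)}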

  13. 76 FR 41819 - Notice of Intent To Prepare a Resource Management Plan Amendment for the Glade Run Recreation...

    Science.gov (United States)

    2011-07-15

    ... Office, New Mexico, and Associated Environmental Assessment. AGENCY: Bureau of Land Management, Interior. ... an Environmental Assessment (EA) to address recreation and travel management in the Glade Run Recreation Area (the ...) ... with American Indian tribes to identify strategies for protecting recognized traditional uses ...

  14. Latest LHCf results and preparation to the LHC run for 13 TeV proton–proton interactions

    Directory of Open Access Journals (Sweden)

    Bonechi L.

    2015-01-01

    The LHCf experiment is a CERN experiment dedicated to forward physics, optimized to measure the neutral-particle flow at extreme pseudo-rapidity values, ranging from 8.4 up to infinity. LHCf results are extremely important for the calibration of the hadronic interaction models used in the study of the development of atmospheric showers in the Earth's atmosphere. Starting from the recent run of proton-lead interactions at the LHC, the LHCf and ATLAS collaborations have performed common data taking, which allows a combined study of the central and forward regions of the interaction. The latest results of LHCf, the upgrade of the detectors for the next 6.5 TeV + 6.5 TeV proton-proton run, and the status of the LHCf-ATLAS common activities are summarized in this paper.

  15. Commissioning with low-intensity beams helps prepare CMS for this year’s physics run. This event is one of the first low-intensity collisions recorded in the CMS detector, during the early hours of 23 April 2016

    CERN Multimedia

    AUTHOR|(CDS)2068005

    2016-01-01

    Commissioning with low-intensity beams helps prepare CMS for this year’s physics run. This event is one of the first low-intensity collisions recorded in the CMS detector, during the early hours of 23 April 2016

  16. Heat acclimation responses of an ultra-endurance running group preparing for hot desert-based competition.

    Science.gov (United States)

    Costa, Ricardo J S; Crockford, Michael J; Moore, Jonathan P; Walsh, Neil P

    2014-01-01

    Heat acclimation induces adaptations that improve exercise tolerance in hot conditions. Here we report novel findings on the effects of an ultra-marathon-specific exercise load in increasingly hot ambient conditions on indices of heat acclimation. Six male ultra-endurance runners completed a standard pre-acclimation protocol at 20°C ambient temperature (Tamb), followed by a heat acclimation protocol consisting of six 2 h running exercise-heat exposures (EH) at 60% VO2max on a motorised treadmill in an environmental chamber. Three EH were performed at 30°C Tamb, followed by another three EH at 35°C Tamb. EH were separated by 48 h within Tamb and 72 h between Tamb. Nude body mass (NBM), blood and urine samples were collected pre-exercise, while NBM and urine were collected post-exercise. Rectal temperature (Tre), heart rate (HR), thermal comfort rating (TCR) and rating of perceived exertion were measured pre-exercise and monitored every 5 min during exercise. Water was provided ad libitum during exercise. Data were analysed using repeated-measures and one-way analysis of variance (ANOVA), with post hoc Tukey's HSD. Significance was accepted at P < 0.05. Indices of heat acclimation were observed in all ultra-endurance runners. Further, heat acclimation responses occurred with increasing EH to 35°C Tamb. Preventing exertional heat illnesses and optimising performance outcomes in ultra-endurance runners may occur with exposure to at least 2 h of exercise-heat stress on at least two occasions in the days leading up to multi-stage ultra-marathon competition in the heat.

  17. Zinc Enolate/Sulfinate Prepared from a Single-Run Reaction Using Zinc Dust with O-Tosylated 4-Hydroxy Coumarin and Pyrone

    Energy Technology Data Exchange (ETDEWEB)

    Shin, Ueon Sang; Joo, Seong-Ryu; Kim, Seung-Hoi [Dankook University, Cheonan (Korea, Republic of)

    2016-07-15

    We demonstrated the preparation of two new zinc complexes, 2-oxo-2H-chromen-4-yloxy tosylzinc (I) and 6-methyl-2-oxo-2H-pyran-4-yloxy tosylzinc (II), by the oxidative addition of readily available zinc dust into the corresponding 4-tosylated coumarin (A) and pyrone (B), respectively. Of special interest, the zinc complexes thus obtained showed an electrophile-dependent reactivity. The subsequent coupling reactions of I and II with a variety of acid chlorides provided the O-acylation products in moderate yields. More interestingly, the complexes (I and II) functioned both as zinc enolate and as zinc sulfinate, providing the C(3)-disubstituted product (b) and the sulfone (c), respectively, from a single-run reaction when I or II was treated with benzyl halides. Even though somewhat low yields were achieved under the non-optimized conditions, the novel zinc complexes present another potential application for zinc reagents. Versatile applications of this discovery are currently underway.

  18. Zinc Enolate/Sulfinate Prepared from a Single-Run Reaction Using Zinc Dust with O-Tosylated 4-Hydroxy Coumarin and Pyrone

    International Nuclear Information System (INIS)

    Shin, Ueon Sang; Joo, Seong-Ryu; Kim, Seung-Hoi

    2016-01-01

    We demonstrated the preparation of two new zinc complexes, 2-oxo-2H-chromen-4-yloxy tosylzinc (I) and 6-methyl-2-oxo-2H-pyran-4-yloxy tosylzinc (II), by the oxidative addition of readily available zinc dust into the corresponding 4-tosylated coumarin (A) and pyrone (B), respectively. Of special interest, the zinc complexes thus obtained showed an electrophile-dependent reactivity. The subsequent coupling reactions of I and II with a variety of acid chlorides provided the O-acylation products in moderate yields. More interestingly, the complexes (I and II) functioned both as zinc enolate and as zinc sulfinate, providing the C(3)-disubstituted product (b) and the sulfone (c), respectively, from a single-run reaction when I or II was treated with benzyl halides. Even though somewhat low yields were achieved under the non-optimized conditions, the novel zinc complexes present another potential application for zinc reagents. Versatile applications of this discovery are currently underway.

  19. Homogenized group cross sections by Monte Carlo

    International Nuclear Information System (INIS)

    Van Der Marck, S. C.; Kuijper, J. C.; Oppe, J.

    2006-01-01

    Homogenized group cross sections play a large role in making reactor calculations efficient. Because of this significance, many codes exist that can calculate these cross sections under certain assumptions. However, for the application to the High Flux Reactor (HFR) in Petten, the Netherlands, the limitations of such codes imply that the core calculations would become less accurate when using homogenized group cross sections (HGCS). Therefore we developed a method to calculate HGCS based on a Monte Carlo program, for which we chose MCNP. The implementation involves an addition to MCNP and a set of small executables that perform suitable averaging after the MCNP run(s) have completed. Here we briefly describe the details of the method, and we report on two tests we performed to demonstrate the accuracy of the method and its implementation. By now, this method is routinely used in preparation of the cycle-to-cycle core calculations for the HFR. (authors)
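
    The averaging performed by those post-processing executables is, in essence, the standard flux-weighted collapse: for each broad group, the homogenized cross section is the flux-weighted mean of the fine-group values over the region. A generic sketch of that step (the arrays are placeholders; real input would be parsed from MCNP tallies):

        # Flux-weighted collapse of fine-group cross sections to broad groups.
        # Generic post-processing sketch; not the authors' executables.
        import numpy as np

        def collapse(sigma_fine, flux_fine, group_edges):
            """sigma_fine, flux_fine: per-fine-group cross sections and fluxes.
            group_edges: indices delimiting the broad groups."""
            sigma_broad = []
            for lo, hi in zip(group_edges[:-1], group_edges[1:]):
                phi = flux_fine[lo:hi]
                sigma_broad.append(np.sum(sigma_fine[lo:hi] * phi) / np.sum(phi))
            return np.array(sigma_broad)

        sigma = np.array([10.0, 8.0, 5.0, 2.0, 1.0, 0.5])   # barns (made up)
        flux = np.array([1.0, 2.0, 4.0, 8.0, 4.0, 2.0])     # arbitrary units
        print(collapse(sigma, flux, group_edges=[0, 3, 6])) # two broad groups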

  20. Liquidity Runs

    NARCIS (Netherlands)

    Matta, R.; Perotti, E.

    2016-01-01

    Can the risk of losses upon premature liquidation produce bank runs? We show how a unique run equilibrium driven by asset liquidity risk arises even under minimal fundamental risk. To study the role of illiquidity we introduce realistic norms on bank default, such that mandatory stay is triggered

  1. High throughput and accurate serum proteome profiling by integrated sample preparation technology and single-run data independent mass spectrometry analysis.

    Science.gov (United States)

    Lin, Lin; Zheng, Jiaxin; Yu, Quan; Chen, Wendong; Xing, Jinchun; Chen, Chenxi; Tian, Ruijun

    2018-03-01

    Mass spectrometry (MS)-based serum proteome analysis is extremely challenging due to the serum's high complexity and dynamic range of protein abundances. Developing a high-throughput and accurate serum proteomic profiling approach capable of analyzing large cohorts is urgently needed for biomarker discovery. Herein, we report a streamlined workflow for fast and accurate proteomic profiling from 1 μL of blood serum. The workflow combines an integrated technique for highly sensitive and reproducible sample preparation with a new data-independent acquisition (DIA)-based MS method. Compared with the standard data-dependent acquisition (DDA) approach, the optimized DIA method doubled the number of detected peptides and proteins with better reproducibility. Without protein immunodepletion and prefractionation, the single-run DIA analysis enables quantitative profiling of over 300 proteins with a 50 min gradient time. The quantified proteins span more than five orders of magnitude of abundance and include over 50 FDA-approved disease markers. The workflow allowed us to analyze 20 serum samples per day, with about 358 protein groups per sample being identified. A proof-of-concept study on renal cell carcinoma (RCC) serum samples confirmed the feasibility of the workflow for large-scale serum proteomic profiling and disease-related biomarker discovery. Blood serum or plasma is the predominant specimen for clinical proteomic studies, while its analysis is extremely challenging owing to its high complexity. Many past efforts in serum proteomics have aimed at maximizing protein identifications, whereas few have been concerned with throughput and reproducibility. Here, we establish a rapid, robust and highly reproducible DIA-based workflow for streamlined serum proteomic profiling from 1 μL of serum. The workflow does not need protein depletion or pre-fractionation, while still being able to detect disease-relevant proteins accurately. The workflow is promising for clinical application

  2. Running Linux

    CERN Document Server

    Dalheimer, Matthias Kalle

    2006-01-01

    The fifth edition of Running Linux is greatly expanded, reflecting the maturity of the operating system and the teeming wealth of software available for it. Hot consumer topics such as audio and video playback applications, groupware functionality, and spam filtering are covered, along with the basics in configuration and management that always made the book popular.

  3. RUN COORDINATION

    CERN Multimedia

    C. Delaere

    2013-01-01

    Since the LHC ceased operations in February, a lot has been going on at Point 5, and Run Coordination continues to monitor closely the advance of maintenance and upgrade activities. In the last months, the Pixel detector was extracted and is now stored in the pixel lab in SX5; the beam pipe has been removed and ME1/1 removal has started. We regained access to the vactank and some work on the RBX of HB has started. Since mid-June, electricity and cooling are back in S1 and S2, allowing us to turn equipment back on, at least during the day. 24/7 shifts are not foreseen in the next weeks, and safety tours are mandatory to keep equipment on overnight, but re-commissioning activities are slowly being resumed. Given the (slight) delays accumulated in LS1, it was decided to merge the two global runs initially foreseen into a single exercise during the week of 4 November 2013. The aim of the global run is to check that we can run (parts of) CMS after several months switched off, with the new VME PCs installed, th...

  4. Running Club

    CERN Multimedia

    Running Club

    2011-01-01

    The cross country running season has started well this autumn with two events: the traditional CERN Road Race organized by the Running Club, which took place on Tuesday 5th October, followed by the ‘Cross Interentreprises’, a team event at the Evaux Sports Center, which took place on Saturday 8th October. The participation at the CERN Road Race was slightly down on last year, with 65 runners, however the participants maintained the tradition of a competitive yet friendly atmosphere. An ample supply of refreshments before the prize giving was appreciated by all after the race. Many thanks to all the runners and volunteers who ensured another successful race. The results can be found here: https://espace.cern.ch/Running-Club/default.aspx CERN participated successfully at the cross interentreprises with very good results. The teams succeeded in obtaining 2nd and 6th place in the Mens category, and 2nd place in the Mixed category. Congratulations to all. See results here: http://www.c...

  5. RUN COORDINATION

    CERN Multimedia

    Christophe Delaere

    2013-01-01

    The focus of Run Coordination during LS1 is to monitor closely the advance of maintenance and upgrade activities, to smooth interactions between subsystems and to ensure that all are ready in time to resume operations in 2015 with a fully calibrated and understood detector. After electricity and cooling were restored to all equipment, at about the time of the last CMS week, recommissioning activities were resumed for all subsystems. On 7 October, DCS shifts began 24/7 to allow subsystems to remain on to facilitate operations. That culminated with the Global Run in November (GriN), which took place as scheduled during the week of 4 November. The GriN was the first centrally managed operation since the beginning of LS1, and involved all subdetectors but the Pixel Tracker, which is presently in a lab upstairs. All nights were therefore dedicated to long stable runs with as many subdetectors as possible. Among the many achievements of that week, three items may be highlighted. First, the Strip...

  6. RUN COORDINATION

    CERN Multimedia

    M. Chamizo

    2012-01-01

      On 17th January, as soon as the services were restored after the technical stop, sub-systems started powering on. Since then, we have been running 24/7 with reduced shift crew — Shift Leader and DCS shifter — to allow sub-detectors to perform calibration, noise studies, test software upgrades, etc. On 15th and 16th February, we had the first Mid-Week Global Run (MWGR) with the participation of most sub-systems. The aim was to bring CMS back to operation and to ensure that we could run after the winter shutdown. All sub-systems participated in the readout and the trigger was provided by a fraction of the muon systems (CSC and the central RPC wheel). The calorimeter triggers were not available due to work on the optical link system. Initial checks of different distributions from Pixels, Strips, and CSC confirmed things look all right (signal/noise, number of tracks, phi distribution…). High-rate tests were done to test the new CSC firmware to cure the low efficiency ...

  7. Statistical implications in Monte Carlo depletions - 051

    International Nuclear Information System (INIS)

    Zhiwen, Xu; Rhodes, J.; Smith, K.

    2010-01-01

    As a result of steady advances in computer power, continuous-energy Monte Carlo depletion analysis is attracting considerable attention for reactor burnup calculations. The typical Monte Carlo analysis is set up as a combination of a Monte Carlo neutron transport solver and a fuel burnup solver. Note that the burnup solver is a deterministic module. The statistical errors in Monte Carlo solutions are introduced into nuclide number densities and propagated along fuel burnup. This paper aims at understanding the statistical implications in Monte Carlo depletions, including both statistical bias and statistical variations in depleted fuel number densities. The deterministic Studsvik lattice physics code, CASMO-5, is modified to model the Monte Carlo depletion. The statistical bias in depleted number densities is found to be negligible compared to the statistical variations, which, in turn, demonstrates the correctness of the Monte Carlo depletion method. Meanwhile, the statistical variation in number densities generally increases with burnup. Several possible ways of reducing the statistical errors are discussed: 1) increasing the number of individual Monte Carlo histories; 2) increasing the number of time steps; 3) running additional independent Monte Carlo depletion cases. Finally, a new Monte Carlo depletion methodology, called the batch depletion method, is proposed, which consists of performing a set of independent Monte Carlo depletions and is thus capable of estimating the overall statistical errors, including both the local statistical error and the propagated statistical error. (authors)
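
    A minimal sketch of the batch depletion idea, assuming a hypothetical run_depletion(seed) function that stands in for one complete, independent Monte Carlo depletion case and returns the depleted number density of a nuclide of interest; the overall statistical error (local plus propagated) is then estimated from the spread across the independent batches:

      import statistics
      import random

      def run_depletion(seed):
          # Placeholder for one independent Monte Carlo depletion case
          # (transport + burnup) driven by its own random number seed.
          rng = random.Random(seed)
          return 1.0e-4 * (1.0 + rng.gauss(0.0, 0.01))

      # The batch depletion method: a set of independent depletion cases.
      densities = [run_depletion(seed) for seed in range(20)]

      mean = statistics.mean(densities)
      # The standard error of the mean over independent batches captures
      # both the local and the propagated statistical error.
      stderr = statistics.stdev(densities) / len(densities) ** 0.5
      print(f"N = {mean:.6e} +/- {stderr:.1e}")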

  8. RUN COORDINATION

    CERN Multimedia

    G. Rakness.

    2013-01-01

    After three years of running, in February 2013 the era of sub-10-TeV LHC collisions drew to an end. Recall, the 2012 run had been extended by about three months to achieve the full complement of high-energy and heavy-ion physics goals prior to the start of Long Shutdown 1 (LS1), which is now underway. The LHC performance during these exciting years was excellent, delivering a total of 23.3 fb⁻¹ of proton-proton collisions at a centre-of-mass energy of 8 TeV, 6.2 fb⁻¹ at 7 TeV, and 5.5 pb⁻¹ at 2.76 TeV. They also delivered 170 μb⁻¹ lead-lead collisions at 2.76 TeV/nucleon and 32 nb⁻¹ proton-lead collisions at 5 TeV/nucleon. During these years the CMS operations teams and shift crews made tremendous strides to commission the detector, repeatedly stepping up to meet the challenges at every increase of instantaneous luminosity and energy. Although it does not fully cover the achievements of the teams, a way to quantify their success is the fact that...

  9. TART 2000: A Coupled Neutron-Photon, 3-D, Combinatorial Geometry, Time Dependent, Monte Carlo Transport Code

    International Nuclear Information System (INIS)

    Cullen, D.E

    2000-01-01

    TART2000 is a coupled neutron-photon, 3-dimensional, combinatorial geometry, time-dependent Monte Carlo radiation transport code. This code can run on any modern computer. It is a complete system to assist you with input preparation, running Monte Carlo calculations, and analysis of output results. TART2000 is also incredibly FAST; if you have used similar codes, you will be amazed at how fast this code is. Use of the entire system can save you a great deal of time and energy. TART2000 is distributed on CD. This CD contains on-line documentation for all codes included in the system, the codes configured to run on a variety of computers, and many example problems that you can use to familiarize yourself with the system. TART2000 completely supersedes all older versions of TART, and it is strongly recommended that users only use the most recent version of TART2000 and its data files.

  10. TART 2000 A Coupled Neutron-Photon, 3-D, Combinatorial Geometry, Time Dependent, Monte Carlo Transport Code

    CERN Document Server

    Cullen, D

    2000-01-01

    TART2000 is a coupled neutron-photon, 3-dimensional, combinatorial geometry, time-dependent Monte Carlo radiation transport code. This code can run on any modern computer. It is a complete system to assist you with input preparation, running Monte Carlo calculations, and analysis of output results. TART2000 is also incredibly FAST; if you have used similar codes, you will be amazed at how fast this code is. Use of the entire system can save you a great deal of time and energy. TART2000 is distributed on CD. This CD contains on-line documentation for all codes included in the system, the codes configured to run on a variety of computers, and many example problems that you can use to familiarize yourself with the system. TART2000 completely supersedes all older versions of TART, and it is strongly recommended that users only use the most recent version of TART2000 and its data files.

  11. Running Club

    CERN Multimedia

    Running Club

    2010-01-01

    The 2010 edition of the annual CERN Road Race will be held on Wednesday 29th September at 18h. The 5.5km race takes place over 3 laps of a 1.8 km circuit in the West Area of the Meyrin site, and is open to everyone working at CERN and their families. There are runners of all speeds, with times ranging from under 17 to over 34 minutes, and the race is run on a handicap basis, by staggering the starting times so that (in theory) all runners finish together. Children (< 15 years) have their own race over 1 lap of 1.8km. As usual, there will be a “best family” challenge (judged on best parent + best child). Trophies are awarded in the usual men’s, women’s and veterans’ categories, and there is a challenge for the best age/performance. Every adult will receive a souvenir prize, financed by a registration fee of 10 CHF. Children enter free (each child will receive a medal). More information, and the online entry form, can be found at http://cern.ch/club...

  12. RUN COORDINATION

    CERN Multimedia

    Christophe Delaere

    2012-01-01

    On Wednesday 14 March, the machine group successfully injected beams into the LHC for the first time this year. Within 48 hours they managed to ramp the beams to 4 TeV and proceeded to squeeze to β*=0.6 m, settings that have been used routinely since then. This brought to an end the CMS Cosmic Run at ~Four Tesla (CRAFT), during which we collected 800k cosmic ray events with a track crossing the central Tracker. That sample has since been topped up to two million, allowing further refinements of the Tracker Alignment. The LHC started delivering the first collisions on 5 April with two bunches colliding in CMS, giving a pile-up of ~27 interactions per crossing at the beginning of the fill. Since then the machine has increased the number of colliding bunches to reach 1380 bunches and peak instantaneous luminosities around 6.5E33 at the beginning of fills. The average bunch charges reached ~1.5E11 protons per bunch, which results in an initial pile-up of ~30 interactions per crossing. During the ...

  13. RUN COORDINATION

    CERN Multimedia

    C. Delaere

    2012-01-01

    With the analysis of the first 5 fb⁻¹ culminating in the announcement of the observation of a new particle with mass of around 126 GeV/c², the CERN directorate decided to extend the LHC run until February 2013. This adds three months to the original schedule. Since then the LHC has continued to perform extremely well, and the total luminosity delivered so far this year is 22 fb⁻¹. CMS also continues to perform excellently, recording data with efficiency higher than 95% for fills with the magnetic field at nominal value. The highest instantaneous luminosity achieved by LHC to date is 7.6×10³³ cm⁻²s⁻¹, which translates into 35 interactions per crossing. On the CMS side there has been a lot of work to handle these extreme conditions, such as a new DAQ computer farm and trigger menus to handle the pile-up, automation of recovery procedures to minimise the lost luminosity, better training for the shift crews, etc. We did suffer from a couple of infrastructure ...

  14. ATLAS strip detector: Operational Experience and Run1 → Run2 transition

    CERN Document Server

    NAGAI, K; The ATLAS collaboration

    2014-01-01

    The ATLAS SCT operational experience and the detector performance during the Run 1 period of the LHC will be reported. Additionally, the preparations for Run 2 made during Long Shutdown 1 will be described.

  15. Importance iteration in MORSE Monte Carlo calculations

    International Nuclear Information System (INIS)

    Kloosterman, J.L.; Hoogenboom, J.E.

    1994-01-01

    An expression to calculate point values (the expected detector response of a particle emerging from a collision or the source) is derived and implemented in the MORSE-SGC/S Monte Carlo code. It is outlined how these point values can be smoothed as a function of energy and as a function of the optical thickness between the detector and the source. The smoothed point values are subsequently used to calculate the biasing parameters of the Monte Carlo runs to follow. The method is illustrated by an example that shows that the obtained biasing parameters lead to a more efficient Monte Carlo calculation
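
    As a rough illustration of how smoothed point values can steer the next run, the sketch below (a generic importance-splitting scheme under assumed region importances, not the MORSE-SGC/S implementation) splits or roulettes a particle according to the ratio of importances when it changes region:

      import random

      # Hypothetical smoothed point values (importances) per region, as
      # might be derived from the point values of a previous run.
      importance = {"source": 1.0, "shield": 4.0, "detector": 16.0}

      def cross_boundary(particle, new_region):
          """Split or roulette on entering a region of different importance."""
          ratio = importance[new_region] / importance[particle["region"]]
          particle["region"] = new_region
          if ratio >= 1.0:
              n = int(round(ratio))          # split into n copies
              particle["weight"] /= n
              return [dict(particle) for _ in range(n)]
          if random.random() < ratio:        # Russian roulette
              particle["weight"] /= ratio
              return [particle]
          return []                          # particle killed

      p = {"region": "source", "weight": 1.0}
      bank = cross_boundary(p, "shield")
      print(len(bank), "particles of weight", bank[0]["weight"])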

  16. Importance iteration in MORSE Monte Carlo calculations

    International Nuclear Information System (INIS)

    Kloosterman, J.L.; Hoogenboom, J.E.

    1994-02-01

    An expression to calculate point values (the expected detector response of a particle emerging from a collision or the source) is derived and implemented in the MORSE-SGC/S Monte Carlo code. It is outlined how these point values can be smoothed as a function of energy and as a function of the optical thickness between the detector and the source. The smoothed point values are subsequently used to calculate the biasing parameters of the Monte Carlo runs to follow. The method is illustrated by an example, which shows that the obtained biasing parameters lead to a more efficient Monte Carlo calculation. (orig.)

  17. Feasibility study for the preparation of a twin-hole disposal configuration test at the Mont Terri URL - MACH-2. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Jobmann, M.; Wolf, J.

    2009-08-15

    The German Federal Government has announced that research and development activities concerning final repositories for high-level waste are to focus on clay formations as host rock, in order to investigate alternatives to salt rock, which is favoured in the current reference concept. Within the scope of design calculations for a final repository in clay formations, thermo-hydro-mechanical interaction effects have been studied, but so far only by numerical calculation. The currently preferred disposal concept is based on the emplacement of heat-generating waste in vertical boreholes with a depth of 50 m maximum. How strongly thermally induced interaction between adjacent emplacement boreholes affects a system of vertical boreholes in claystone has not yet been investigated in situ. However, these interaction effects need to be considered, as in a real repository the emplacement boreholes are drilled successively and, depending on the delivery and necessary cooling-off time of the containers at the interim storage facility, are filled at corresponding intervals. The main goal of the suggested in-situ experiment is to investigate the THM interaction of two adjacent emplacement boreholes that are filled and heated at different times. The project is to be planned and carried out jointly by DBE TECHNOLOGY GmbH and GRS. The Federal Institute for Geosciences and Natural Resources (BGR) has declared its interest in participating. A location for this experiment has been found at the underground research laboratory in Mont Terri, Switzerland, which is located in an Opalinus-clay formation. The project has been presented to the Mont Terri consortium, and a positive vote to carry out the experiment was obtained. The exact location of the experiment has been set within the so-called "sandy facies" of the rock lab, which is similar to the German part of the Opalinus clay. Since time and costs involved in such a major project cannot be reliably estimated

  18. Is Monte Carlo embarrassingly parallel?

    Energy Technology Data Exchange (ETDEWEB)

    Hoogenboom, J. E. [Delft Univ. of Technology, Mekelweg 15, 2629 JB Delft (Netherlands); Delft Nuclear Consultancy, IJsselzoom 2, 2902 LB Capelle aan den IJssel (Netherlands)

    2012-07-01

    Monte Carlo is often stated as being embarrassingly parallel. However, running a Monte Carlo calculation, especially a reactor criticality calculation, in parallel using tens of processors shows a serious limitation in speedup, and the execution time may even increase beyond a certain number of processors. In this paper the main causes of the loss of efficiency when using many processors are analyzed using a simple Monte Carlo program for criticality. The basic mechanism for parallel execution is MPI. One of the bottlenecks turns out to be the rendez-vous points in the parallel calculation used for synchronization and exchange of data between processors. This happens at least at the end of each cycle for fission source generation, in order to collect the full fission source distribution for the next cycle and to estimate the effective multiplication factor, which is not only part of the requested results but also input to the next cycle for population control. Basic improvements to overcome this limitation are suggested and tested. Other time losses in the parallel calculation are also identified. Moreover, the threading mechanism, which allows the parallel execution of tasks based on shared memory using OpenMP, is analyzed in detail. Recommendations are given to get the maximum efficiency out of a parallel Monte Carlo calculation. (authors)
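
    The cycle-end rendez-vous described here can be pictured with a small mpi4py sketch (a toy stand-in for the author's test program, with a made-up neutron-multiplication model): each rank tracks its share of histories independently, but all ranks must meet in the allreduce before k_eff can be formed and the next cycle started:

      from mpi4py import MPI
      import random

      comm = MPI.COMM_WORLD
      rng = random.Random(1234 + comm.Get_rank())
      histories_per_rank = 10_000

      for cycle in range(10):
          # Embarrassingly parallel part: each rank tracks its histories.
          produced = sum(rng.choice((0, 1, 2)) for _ in range(histories_per_rank))
          # Rendez-vous point: global sums are needed for k_eff and for
          # population control, so every rank must stop and synchronize here.
          total_produced = comm.allreduce(produced, op=MPI.SUM)
          total_started = comm.allreduce(histories_per_rank, op=MPI.SUM)
          k_eff = total_produced / total_started
          if comm.Get_rank() == 0:
              print(f"cycle {cycle}: k_eff = {k_eff:.4f}")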

  19. Is Monte Carlo embarrassingly parallel?

    International Nuclear Information System (INIS)

    Hoogenboom, J. E.

    2012-01-01

    Monte Carlo is often stated as being embarrassingly parallel. However, running a Monte Carlo calculation, especially a reactor criticality calculation, in parallel using tens of processors shows a serious limitation in speedup, and the execution time may even increase beyond a certain number of processors. In this paper the main causes of the loss of efficiency when using many processors are analyzed using a simple Monte Carlo program for criticality. The basic mechanism for parallel execution is MPI. One of the bottlenecks turns out to be the rendez-vous points in the parallel calculation used for synchronization and exchange of data between processors. This happens at least at the end of each cycle for fission source generation, in order to collect the full fission source distribution for the next cycle and to estimate the effective multiplication factor, which is not only part of the requested results but also input to the next cycle for population control. Basic improvements to overcome this limitation are suggested and tested. Other time losses in the parallel calculation are also identified. Moreover, the threading mechanism, which allows the parallel execution of tasks based on shared memory using OpenMP, is analyzed in detail. Recommendations are given to get the maximum efficiency out of a parallel Monte Carlo calculation. (authors)

  20. Scalable Domain Decomposed Monte Carlo Particle Transport

    Energy Technology Data Exchange (ETDEWEB)

    O'Brien, Matthew Joseph [Univ. of California, Davis, CA (United States)

    2013-12-05

    In this dissertation, we present the parallel algorithms necessary to run domain decomposed Monte Carlo particle transport on large numbers of processors (millions of processors). Previous algorithms were not scalable, and the parallel overhead became more computationally costly than the numerical simulation.

  1. Monte Carlo simulation of the microcanonical ensemble

    International Nuclear Information System (INIS)

    Creutz, M.

    1984-01-01

    We consider simulating statistical systems with a random walk on a constant energy surface. This combines features of deterministic molecular dynamics techniques and conventional Monte Carlo simulations. For discrete systems the method can be programmed to run an order of magnitude faster than other approaches. It does not require high quality random numbers and may also be useful for nonequilibrium studies. 10 references
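
    This method is commonly known as the "demon" algorithm; a minimal sketch for a 1-D Ising chain (an illustration of the idea, not Creutz's original code) lets a demon carry the small energy surplus so that system plus demon stays on a constant-energy surface:

      import random

      N = 100
      spins = [1] * N        # 1-D Ising chain with periodic boundaries
      demon = 20             # demon energy, absorbs/supplies flip energy

      def delta_E(i):
          # Energy change for flipping spin i (J = 1 nearest-neighbour model).
          return 2 * spins[i] * (spins[i - 1] + spins[(i + 1) % N])

      for sweep in range(1000):
          for _ in range(N):
              i = random.randrange(N)
              dE = delta_E(i)
              # Accept only if the demon can pay: total energy is conserved
              # exactly, so no Boltzmann factor (and no high-quality random
              # number) is needed for the accept/reject decision itself.
              if demon - dE >= 0:
                  spins[i] = -spins[i]
                  demon -= dE

      # The mean demon energy acts as a thermometer for the system.
      print("final demon energy:", demon)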

  2. Dr. Sheehan on Running.

    Science.gov (United States)

    Sheehan, George A.

    This book is both a personal and technical account of the experience of running by a heart specialist who began a running program at the age of 45. In its seventeen chapters, information is presented on the spiritual, psychological, and physiological results of running; the treatment of athletic injuries resulting from running; and the effects of diet…

  3. Research on perturbation based Monte Carlo reactor criticality search

    International Nuclear Information System (INIS)

    Li Zeguang; Wang Kan; Li Yangliu; Deng Jingkang

    2013-01-01

    Criticality search is a very important aspect of reactor physics analysis. Due to the advantages of the Monte Carlo method and the development of computer technologies, Monte Carlo criticality search is becoming more and more necessary and feasible. The traditional Monte Carlo criticality search method suffers from the large number of individual criticality runs required and from the uncertainty and fluctuation of Monte Carlo results. A new Monte Carlo criticality search method based on perturbation calculation is put forward in this paper to overcome the disadvantages of the traditional method. By using only one criticality run to get the initial k_eff and the differential coefficients with respect to the parameter of interest, the polynomial estimator of the k_eff response function is solved to get the critical value of that parameter. The feasibility of this method was tested. The results show that the accuracy and efficiency of the perturbation-based criticality search method are quite promising, and the method overcomes the disadvantages of the traditional one. (authors)
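
    Schematically (with illustrative numbers in place of estimates from a real criticality run), the search amounts to building a low-order polynomial k_eff(p) from the single run's k_eff and differential coefficients and solving k_eff(p) = 1:

      import numpy as np

      # Assumed outputs of one criticality run with differential-coefficient
      # estimators (all values are illustrative only):
      p0 = 700.0         # nominal value of the searched parameter
      k0 = 1.01250       # k_eff at p0
      dk_dp = -8.0e-5    # dk/dp at p0
      d2k_dp2 = 1.0e-8   # d2k/dp2 at p0

      # Second-order polynomial estimator of k_eff(p) - 1 in x = p - p0.
      roots = np.roots([0.5 * d2k_dp2, dk_dp, k0 - 1.0])
      real_roots = [r.real for r in roots if abs(r.imag) < 1e-12]
      dp = min(real_roots, key=abs)      # physical root nearest p0
      print(f"estimated critical parameter: {p0 + dp:.1f}")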

  4. ATLAS detector performance in Run1: Calorimeters

    CERN Document Server

    Burghgrave, B; The ATLAS collaboration

    2014-01-01

    ATLAS operated with excellent efficiency during the Run 1 data-taking period, recording integrated luminosities of 5.3 fb⁻¹ at √s = 7 TeV in 2011 and 21.6 fb⁻¹ at √s = 8 TeV in 2012. The Liquid Argon and Tile Calorimeters contributed to this effort by operating with good data-quality efficiency, improving over the course of Run 1. This poster presents the overall Run 1 status and performance, LS1 work, and preparations for Run 2.

  5. ATLAS Strip Detector: Operational Experience and Run1-> Run2 Transition

    CERN Document Server

    Nagai, Koichi; The ATLAS collaboration

    2014-01-01

    The Large Hadron Collider operated very successfully during Run 1 and provided many opportunities for physics studies. It is currently undergoing consolidation work toward operation at √s = 14 TeV in Run 2. The ATLAS experiment achieved excellent performance in Run 1 operation, delivering remarkable physics results. The SemiConductor Tracker contributed to the precise measurement of the momentum of charged particles. This paper describes the operational experience of the SemiConductor Tracker in Run 1 and the preparations toward Run 2 operation during LS1.

  6. Running and osteoarthritis.

    Science.gov (United States)

    Willick, Stuart E; Hansen, Pamela A

    2010-07-01

    The overall health benefits of cardiovascular exercise, such as running, are well established. However, it is also well established that in certain circumstances running can lead to overload injuries of muscle, tendon, and bone. In contrast, it has not been established that running leads to degeneration of articular cartilage, which is the hallmark of osteoarthritis. This article reviews the available literature on the association between running and osteoarthritis, with a focus on clinical epidemiologic studies. The preponderance of clinical reports refutes an association between running and osteoarthritis. Copyright 2010 Elsevier Inc. All rights reserved.

  7. Exploring Monte Carlo methods

    CERN Document Server

    Dunn, William L

    2012-01-01

    Exploring Monte Carlo Methods is a basic text that describes the numerical methods that have come to be known as "Monte Carlo." The book treats the subject generically through the first eight chapters and, thus, should be of use to anyone who wants to learn to use Monte Carlo. The next two chapters focus on applications in nuclear engineering, which are illustrative of uses in other fields. Five appendices are included, which provide useful information on probability distributions, general-purpose Monte Carlo codes for radiation transport, and other matters. The famous "Buffon's needle proble

  8. Monte Carlo methods

    Directory of Open Access Journals (Sweden)

    Bardenet Rémi

    2013-07-01

    Bayesian inference often requires integrating some function with respect to a posterior distribution. Monte Carlo methods are sampling algorithms that allow one to compute these integrals numerically when they are not analytically tractable. We review here the basic principles and the most common Monte Carlo algorithms, among which rejection sampling, importance sampling and Markov chain Monte Carlo (MCMC) methods. We give intuition on the theoretical justification of the algorithms as well as practical advice, trying to relate both. We discuss the application of Monte Carlo in experimental physics, and point to landmarks in the literature for the curious reader.
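
    Of the algorithms listed, MCMC is the hardest to picture from a one-line description; a minimal random-walk Metropolis sampler targeting a standard normal posterior (a textbook illustration, not from the article) fits in a few lines:

      import math
      import random

      def log_post(x):
          # Log of an unnormalized target density (standard normal here).
          return -0.5 * x * x

      x, samples = 0.0, []
      for _ in range(50_000):
          proposal = x + random.gauss(0.0, 1.0)
          # Metropolis rule: accept with probability min(1, pi(x')/pi(x)).
          if math.log(random.random()) < log_post(proposal) - log_post(x):
              x = proposal
          samples.append(x)

      burn_in = samples[10_000:]
      print("posterior mean ~ 0:", sum(burn_in) / len(burn_in))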

  9. Monte Carlo: Basics

    OpenAIRE

    Murthy, K. P. N.

    2001-01-01

    An introduction to the basics of Monte Carlo is given. The topics covered include sample space, events, probabilities, random variables, mean, variance, covariance, characteristic function, Chebyshev inequality, law of large numbers, central limit theorem (stable distribution, Levy distribution), random numbers (generation and testing), random sampling techniques (inversion, rejection, sampling from a Gaussian, Metropolis sampling), analogue Monte Carlo and importance sampling (exponential b...
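
    As a concrete instance of the inversion technique mentioned above (a standard textbook example, not taken from the article): for an exponential distribution with rate lambda, the inverse CDF is x = -ln(1 - u)/lambda, so uniform random numbers map directly to exponential samples:

      import math
      import random

      def sample_exponential(lam):
          # Inversion: u ~ U(0,1), x = F^{-1}(u) = -ln(1 - u) / lam.
          u = random.random()
          return -math.log(1.0 - u) / lam

      lam = 2.0
      xs = [sample_exponential(lam) for _ in range(100_000)]
      print("sample mean:", sum(xs) / len(xs), "expected:", 1.0 / lam)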

  10. Electron run-away

    International Nuclear Information System (INIS)

    Levinson, I.B.

    1975-01-01

    The run-away effect of electrons under Coulomb scattering has been studied by Dreicer, but the question has not yet been studied for other scattering mechanisms. Meanwhile, if the scattering is quasielastic, a general criterion for run-away may be formulated; in this case the influence of run-away on the distribution function may also be studied in a somewhat general and qualitative manner. (Auth.)

  11. Triathlon: running injuries.

    Science.gov (United States)

    Spiker, Andrea M; Dixit, Sameer; Cosgarea, Andrew J

    2012-12-01

    The running portion of the triathlon represents the final leg of the competition and, by some reports, the most important part in determining a triathlete's overall success. Although most triathletes spend most of their training time on cycling, running injuries are the most common injuries encountered. Common causes of running injuries include overuse, lack of rest, and activities that aggravate biomechanical predisposers of specific injuries. We discuss the running-associated injuries in the hip, knee, lower leg, ankle, and foot of the triathlete, and the causes, presentation, evaluation, and treatment of each.

  12. Monte Carlo lattice program KIM

    International Nuclear Information System (INIS)

    Cupini, E.; De Matteis, A.; Simonini, R.

    1980-01-01

    The Monte Carlo program KIM solves the steady-state linear neutron transport equation for a fixed-source problem or, by successive fixed-source runs, for the eigenvalue problem, in a two-dimensional thermal reactor lattice. Fluxes and reaction rates are the main quantities computed by the program, from which power distribution and few-group averaged cross sections are derived. The simulation ranges from 10 MeV to zero and includes anisotropic and inelastic scattering in the fast energy region, the epithermal Doppler broadening of the resonances of some nuclides, and the thermalization phenomenon by taking into account the thermal velocity distribution of some molecules. Besides the well known combinatorial geometry, the program allows complex configurations to be represented by a discrete set of points, an approach greatly improving calculation speed
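
    The eigenvalue-by-successive-fixed-source-runs strategy is, in essence, power iteration: the fission source produced by one fixed-source run feeds the next, and the ratio of successive source strengths converges to k_eff. A matrix toy model of that loop (the 2x2 fission matrix below is a stand-in, not anything from KIM):

      import numpy as np

      # Toy fission matrix: entry (i, j) is the number of next-generation
      # source neutrons born in zone i per source neutron started in zone j.
      F = np.array([[0.60, 0.30],
                    [0.25, 0.55]])

      source = np.array([1.0, 1.0])
      for it in range(50):
          new_source = F @ source            # one "fixed-source run"
          k = new_source.sum() / source.sum()
          source = new_source / k            # renormalize for the next run
      print(f"k_eff ~ {k:.5f}")              # dominant eigenvalue of F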

  13. Overcoming the "Run" Response

    Science.gov (United States)

    Swanson, Patricia E.

    2013-01-01

    Recent research suggests that it is not simply experiencing anxiety that affects mathematics performance but also how one responds to and regulates that anxiety (Lyons and Beilock 2011). Most people have faced mathematics problems that have triggered their "run response." The issue is not whether one wants to run, but rather…

  14. Overuse injuries in running

    DEFF Research Database (Denmark)

    Larsen, Lars Henrik; Rasmussen, Sten; Jørgensen, Jens Erik

    2016-01-01

    What is an overuse injury in running? This question is a cornerstone of clinical documentation and research-based evidence.

  15. PRECIS Runs at IITM

    Indian Academy of Sciences (India)

    PRECIS Runs at IITM. Evaluation experiment using LBCs derived from ERA-15 (1979–93). Runs (3 ensembles in each experiment) already completed with LBCs having a length of 30 years each, for: Baseline (1961–90); A2 scenario (2071–2100); B2 scenario ...

  16. The LHCb Run Control

    CERN Document Server

    Alessio, F; Callot, O; Duval, P-Y; Franek, B; Frank, M; Galli, D; Gaspar, C; v Herwijnen, E; Jacobsson, R; Jost, B; Neufeld, N; Sambade, A; Schwemmer, R; Somogyi, P

    2010-01-01

    LHCb has designed and implemented an integrated Experiment Control System. The Control System uses the same concepts and the same tools to control and monitor all parts of the experiment: the Data Acquisition System, the Timing and the Trigger Systems, the High Level Trigger Farm, the Detector Control System, the Experiment's Infrastructure and the interaction with the CERN Technical Services and the Accelerator. LHCb's Run Control, the main interface used by the experiment's operator, provides access in a hierarchical, coherent and homogeneous manner to all areas of the experiment and to all its sub-detectors. It allows for automated (or manual) configuration and control, including error recovery, of the full experiment in its different running modes. Different instances of the same Run Control interface are used by the various sub-detectors for their stand-alone activities: test runs, calibration runs, etc. The architecture and the tools used to build the control system, the guidelines and components provid...

  17. Automatic fission source convergence criteria for Monte Carlo criticality calculations

    International Nuclear Information System (INIS)

    Shim, Hyung Jin; Kim, Chang Hyo

    2005-01-01

    The Monte Carlo criticality calculations for the multiplication factor and the power distribution in a nuclear system require knowledge of stationary or fundamental-mode fission source distribution (FSD) in the system. Because it is a priori unknown, so-called inactive cycle Monte Carlo (MC) runs are performed to determine it. The inactive cycle MC runs should be continued until the FSD converges to the stationary FSD. Obviously, if one stops them prematurely, the MC calculation results may have biases because the followup active cycles may be run with the non-stationary FSD. Conversely, if one performs the inactive cycle MC runs more than necessary, one is apt to waste computing time because inactive cycle MC runs are used to elicit the fundamental-mode FSD only. In the absence of suitable criteria for terminating the inactive cycle MC runs, one cannot but rely on empiricism in deciding how many inactive cycles one should conduct for a given problem. Depending on the problem, this may introduce biases into Monte Carlo estimates of the parameters one tries to calculate. The purpose of this paper is to present new fission source convergence criteria designed for the automatic termination of inactive cycle MC runs
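
    The paper's own criteria are not reproduced here, but a widely used diagnostic of the same kind is the Shannon entropy of the binned fission source: inactive cycles continue until the entropy settles. A sketch, assuming bin counts of the fission source are available after each cycle:

      import math

      def shannon_entropy(counts):
          """Shannon entropy of a binned fission source distribution."""
          total = sum(counts)
          return -sum((c / total) * math.log2(c / total)
                      for c in counts if c > 0)

      def source_converged(entropies, window=10, tol=0.01):
          """Terminate inactive cycles once the last `window` entropies
          all lie within `tol` of their window average."""
          if len(entropies) < window:
              return False
          recent = entropies[-window:]
          avg = sum(recent) / window
          return all(abs(h - avg) <= tol for h in recent)

      # Inside the cycle loop: append shannon_entropy(bin_counts) after
      # each inactive cycle and switch to active cycles as soon as
      # source_converged(history) first returns True.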

  18. MORSE Monte Carlo code

    International Nuclear Information System (INIS)

    Cramer, S.N.

    1984-01-01

    The MORSE code is a large general-use multigroup Monte Carlo code system. Although no claims can be made regarding its superiority in either theoretical details or Monte Carlo techniques, MORSE has been, since its inception at ORNL in the late 1960s, the most widely used Monte Carlo radiation transport code. The principal reason for this popularity is that MORSE is relatively easy to use, independent of any installation or distribution center, and it can be easily customized to fit almost any specific need. Features of the MORSE code are described

  19. Variational Monte Carlo Technique

    Indian Academy of Sciences (India)

    Home; Journals; Resonance – Journal of Science Education; Volume 19; Issue 8. Variational Monte Carlo Technique: Ground State Energies of Quantum Mechanical Systems. Sukanta Deb. General Article Volume 19 Issue 8 August 2014 pp 713-739 ...

  20. Symmetry in running.

    Science.gov (United States)

    Raibert, M H

    1986-03-14

    Symmetry plays a key role in simplifying the control of legged robots and in giving them the ability to run and balance. The symmetries studied describe motion of the body and legs in terms of even and odd functions of time. A legged system running with these symmetries travels with a fixed forward speed and a stable upright posture. The symmetries used for controlling legged robots may help in elucidating the legged behavior of animals. Measurements of running in the cat and human show that the feet and body sometimes move as predicted by the even and odd symmetry functions.

  1. RUNNING INJURY DEVELOPMENT

    DEFF Research Database (Denmark)

    Johansen, Karen Krogh; Hulme, Adam; Damsted, Camma

    2017-01-01

    BACKGROUND: Behavioral science methods have rarely been used in running injury research. Therefore, the attitudes amongst runners and their coaches regarding factors leading to running injuries warrant formal investigation. PURPOSE: To investigate the attitudes of middle- and long-distance runners able to compete in national championships, and of their coaches, about factors associated with running injury development. METHODS: A link to an online survey was distributed to middle- and long-distance runners and their coaches across 25 Danish Athletics Clubs. The main research question was: "Which factors do you believe influence the risk of running injuries?". In response to this question, the athletes and coaches had to click "Yes" or "No" to 19 predefined factors. In addition, they had the possibility to submit a free-text response. RESULTS: A total of 68 athletes and 19 coaches were included...

  2. Running Injury Development

    DEFF Research Database (Denmark)

    Krogh Johansen, Karen; Hulme, Adam; Damsted, Camma

    2017-01-01

    Background: Behavioral science methods have rarely been used in running injury research. Therefore, the attitudes amongst runners and their coaches regarding factors leading to running injuries warrant formal investigation. Purpose: To investigate the attitudes of middle- and long-distance runners able to compete in national championships, and of their coaches, about factors associated with running injury development. Methods: A link to an online survey was distributed to middle- and long-distance runners and their coaches across 25 Danish Athletics Clubs. The main research question was: “Which factors do you believe influence the risk of running injuries?”. In response to this question, the athletes and coaches had to click “Yes” or “No” to 19 predefined factors. In addition, they had the possibility to submit a free-text response. Results: A total of 68 athletes and 19 coaches were included...

  3. Running Club - Nocturne des Evaux

    CERN Multimedia

    Running club

    2017-01-01

    CERN's runners once again climbed onto the top steps of the podium at the inter-company race. This team race, run at night in teams of 3 to 4 runners, is unique in the region for its original format: a staggered group start every 30 seconds, and the first 3 runners of each team must cross the finish line together. A double victory for the Running Club at the Nocturne! 1st place for the women's team and 22nd overall; 1st place for the mixed team and 4th overall, beating the mixed-team record for the event by about 1 minute in the process; 10th place for the men's team. All the results can be found at http://www.chp-geneve.ch/web-cms/index.php/nocturne-des-evaux

  4. The LHCb Run Control

    Energy Technology Data Exchange (ETDEWEB)

    Alessio, F; Barandela, M C; Frank, M; Gaspar, C; Herwijnen, E v; Jacobsson, R; Jost, B; Neufeld, N; Sambade, A; Schwemmer, R; Somogyi, P [CERN, 1211 Geneva 23 (Switzerland); Callot, O [LAL, IN2P3/CNRS and Universite Paris 11, Orsay (France); Duval, P-Y [Centre de Physique des Particules de Marseille, Aix-Marseille Universite, CNRS/IN2P3, Marseille (France); Franek, B [Rutherford Appleton Laboratory, Chilton, Didcot, OX11 0QX (United Kingdom); Galli, D, E-mail: Clara.Gaspar@cern.c [Universita di Bologna and INFN, Bologna (Italy)

    2010-04-01

    LHCb has designed and implemented an integrated Experiment Control System. The Control System uses the same concepts and the same tools to control and monitor all parts of the experiment: the Data Acquisition System, the Timing and the Trigger Systems, the High Level Trigger Farm, the Detector Control System, the Experiment's Infrastructure and the interaction with the CERN Technical Services and the Accelerator. LHCb's Run Control, the main interface used by the experiment's operator, provides access in a hierarchical, coherent and homogeneous manner to all areas of the experiment and to all its sub-detectors. It allows for automated (or manual) configuration and control, including error recovery, of the full experiment in its different running modes. Different instances of the same Run Control interface are used by the various sub-detectors for their stand-alone activities: test runs, calibration runs, etc. The architecture and the tools used to build the control system, the guidelines and components provided to the developers, as well as the first experience with the usage of the Run Control will be presented

  5. Parallel Monte Carlo simulations on an ARC-enabled computing grid

    International Nuclear Information System (INIS)

    Nilsen, Jon K; Samset, Bjørn H

    2011-01-01

    Grid computing opens new possibilities for running heavy Monte Carlo simulations of physical systems in parallel. The presentation gives an overview of GaMPI, a system for running an MPI-based random walker simulation on grid resources. Integrating the ARC middleware and the new storage system Chelonia with the Ganga grid job submission and control system, we show that MPI jobs can be run on a world-wide computing grid with good performance and promising scaling properties. Results for relatively communication-heavy Monte Carlo simulations run on multiple heterogeneous, ARC-enabled computing clusters in several countries are presented.

  6. Monte Carlo codes and Monte Carlo simulator program

    International Nuclear Information System (INIS)

    Higuchi, Kenji; Asai, Kiyoshi; Suganuma, Masayuki.

    1990-03-01

    Four typical Monte Carlo codes, KENO-IV, MORSE, MCNP and VIM, have been vectorized on the VP-100 at the Computing Center, JAERI. The problems in vector processing of Monte Carlo codes on vector processors have become clear through this work. As a result, it is recognized that there are difficulties in obtaining good performance in vector processing of Monte Carlo codes. A Monte Carlo computing machine, which processes Monte Carlo codes with high performance, has been under development at our Computing Center since 1987. The concept of the Monte Carlo computing machine and its performance have been investigated and estimated by using a software simulator. In this report the problems in vectorization of Monte Carlo codes, the Monte Carlo pipelines proposed to mitigate these difficulties, and the results of the performance estimation of the Monte Carlo computing machine by the simulator are described. (author)

  7. Vectorized Monte Carlo

    International Nuclear Information System (INIS)

    Brown, F.B.

    1981-01-01

    Examination of the global algorithms and local kernels of conventional general-purpose Monte Carlo codes shows that multigroup Monte Carlo methods have sufficient structure to permit efficient vectorization. A structured multigroup Monte Carlo algorithm for vector computers is developed in which many particle events are treated at once on a cell-by-cell basis. Vectorization of kernels for tracking and variance reduction is described, and a new method for discrete sampling is developed to facilitate the vectorization of collision analysis. To demonstrate the potential of the new method, a vectorized Monte Carlo code for multigroup radiation transport analysis was developed. This code incorporates many features of conventional general-purpose production codes, including general geometry, splitting and Russian roulette, survival biasing, variance estimation via batching, a number of cutoffs, and generalized tallies of collision, tracklength, and surface crossing estimators with response functions. Predictions of vectorized performance characteristics for the CYBER-205 were made using emulated coding and a dynamic model of vector instruction timing. Computation rates were examined for a variety of test problems to determine sensitivities to batch size and vector lengths. Significant speedups are predicted for even a few hundred particles per batch, and asymptotic speedups by about 40 over equivalent Amdahl 470V/8 scalar codes are predicted for a few thousand particles per batch. The principal conclusion is that vectorization of a general-purpose multigroup Monte Carlo code is well worth the significant effort required for stylized coding and major algorithmic changes
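
    The "many particle events at once" idea maps naturally onto array programming; the numpy sketch below (numpy standing in for the vector hardware, with made-up one-group cross sections for a slab) advances a whole batch of particles per step and replaces per-particle IF tests with boolean masks:

      import numpy as np

      rng = np.random.default_rng(42)
      sigma_t, sigma_a = 1.0, 0.3    # illustrative one-group cross sections
      slab = 5.0                     # slab thickness in mean free paths

      n = 100_000
      x = np.zeros(n)                # particle positions
      mu = np.ones(n)                # direction cosines
      alive = np.ones(n, dtype=bool)
      leaked = 0

      while alive.any():
          # One event for every live particle at once (vectorized flight).
          dist = rng.exponential(1.0 / sigma_t, size=alive.sum())
          x[alive] += mu[alive] * dist
          # Branching via masks instead of nested IF tests per particle.
          out = alive & ((x < 0.0) | (x > slab))
          leaked += int(out.sum())
          alive &= ~out
          absorbed = alive & (rng.random(n) < sigma_a / sigma_t)
          alive &= ~absorbed
          mu[alive] = rng.uniform(-1.0, 1.0, size=alive.sum())

      print("leakage probability ~", leaked / n)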

  8. Neutrino astronomy at Mont Blanc: from LSD to LSD-2

    International Nuclear Information System (INIS)

    Saavedra, O.; Aglietta, M.; Badino, G.

    1988-01-01

    In this paper we present the upgrading of the LSD experiment, presently running in the Mont Blanc Laboratory. The data recorded during the period when supernova 1987A exploded are analysed in detail. The research program of LSD-2, the same experiment as LSD but with a higher sensitivity in the search for neutrino bursts from collapsing stars, is also discussed.

  9. Monte Carlo methods of PageRank computation

    NARCIS (Netherlands)

    Litvak, Nelli

    2004-01-01

    We describe and analyze an on-line Monte Carlo method of PageRank computation. The PageRank is estimated based on the results of a large number of short, independent simulation runs initiated from each page that contains outgoing hyperlinks. The method does not require any storage of the hyperlink
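
    The flavor of such a method can be sketched with a generic random-walk PageRank estimator (a simplified illustration on a toy adjacency list; the paper's on-line variant differs in detail): start many short walks from every page, continue each step with the damping probability c, and tally where the walks terminate:

      import random
      from collections import Counter

      links = {0: [1, 2], 1: [2], 2: [0], 3: [2]}   # toy web graph
      c = 0.85                  # continuation (damping) probability
      runs_per_page = 10_000

      end_counts = Counter()
      for page in links:
          for _ in range(runs_per_page):
              node = page
              while random.random() < c:
                  out = links[node]
                  if not out:           # dangling node: stop the walk
                      break
                  node = random.choice(out)
              end_counts[node] += 1     # the walk ends here

      total = sum(end_counts.values())
      print({p: round(end_counts[p] / total, 3) for p in sorted(links)})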

  10. Running Boot Camp

    CERN Document Server

    Toporek, Chuck

    2008-01-01

    When Steve Jobs jumped on stage at Macworld San Francisco 2006 and announced the new Intel-based Macs, the question wasn't if, but when someone would figure out a hack to get Windows XP running on these new "Mactels." Enter Boot Camp, a new system utility that helps you partition and install Windows XP on your Intel Mac. Boot Camp does all the heavy lifting for you. You won't need to open the Terminal and hack on system files or wave a chicken bone over your iMac to get XP running. This free program makes it easy for anyone to turn their Mac into a dual-boot Windows/OS X machine. Running Bo

  11. Fermilab DART run control

    International Nuclear Information System (INIS)

    Oleynik, G.; Engelfried, J.; Mengel, L.

    1996-01-01

    DART is the high speed, Unix based data acquisition system being developed by Fermilab in collaboration with seven High Energy Physics Experiments. This paper describes DART run control, which has been developed over the past year and is a flexible, distributed, extensible system for the control and monitoring of the data acquisition systems. The authors discuss the unique and interesting concepts of the run control and some of the experiences in developing it. They also give a brief update and status of the whole DART system

  12. Fermilab DART run control

    International Nuclear Information System (INIS)

    Oleynik, G.; Engelfried, J.; Mengel, L.

    1995-05-01

    DART is the high speed, Unix based data acquisition system being developed by Fermilab in collaboration with seven High Energy Physics Experiments. This paper describes DART run control, which has been developed over the past year and is a flexible, distributed, extensible system for the control and monitoring of the data acquisition systems. We discuss the unique and interesting concepts of the run control and some of our experiences in developing it. We also give a brief update and status of the whole DART system

  13. 'Outrunning' the running ear

    African Journals Online (AJOL)

    Chantel

    In even the most experienced hands, an adequate physical examination of the ears can be difficult to perform because of common problems such as cerumen blockage of the auditory canal, an uncooperative toddler or an exasperated parent. The most common cause for a running ear in a child is acute purulent otitis.

  14. Markov Chain Monte Carlo

    Indian Academy of Sciences (India)

    Markov Chain Monte Carlo – Examples. Arnab Chakraborty. General Article, Volume 7, Issue 3, March 2002, pp 25–34. Permanent link: https://www.ias.ac.in/article/fulltext/reso/007/03/0025-0034

  15. Monte Carlo method for array criticality calculations

    International Nuclear Information System (INIS)

    Dickinson, D.; Whitesides, G.E.

    1976-01-01

    The Monte Carlo method for solving neutron transport problems consists of mathematically tracing paths of individual neutrons collision by collision until they are lost by absorption or leakage. The fate of the neutron after each collision is determined by the probability distribution functions that are formed from the neutron cross-section data. These distributions are sampled statistically to establish the successive steps in the neutron's path. The resulting data, accumulated from following a large number of batches, are analyzed to give estimates of k_eff and other collision-related quantities. The use of electronic computers to produce the simulated neutron histories, initiated at Los Alamos Scientific Laboratory, made the use of the Monte Carlo method practical for many applications. In analog Monte Carlo simulation, the calculation follows the physical events of neutron scattering, absorption, and leakage. To increase calculational efficiency, modifications such as the use of statistical weights are introduced. The Monte Carlo method permits the use of a three-dimensional geometry description and a detailed cross-section representation. Some of the problems in using the method are the selection of the spatial distribution for the initial batch, the preparation of the geometry description for complex units, and the calculation of error estimates for region-dependent quantities such as fluxes. The Monte Carlo method is especially appropriate for criticality safety calculations since it permits an accurate representation of interacting units of fissile material. Dissimilar units, units of complex shape, moderators between units, and reflected arrays may be calculated. Monte Carlo results must be correlated with relevant experimental data, and caution must be used to ensure that a representative set of neutron histories is produced

  16. Monte Carlo and Quasi-Monte Carlo Sampling

    CERN Document Server

    Lemieux, Christiane

    2009-01-01

    Presents essential tools for using quasi-Monte Carlo sampling in practice. This book focuses on issues related to Monte Carlo methods - uniform and non-uniform random number generation, variance reduction techniques. It covers several aspects of quasi-Monte Carlo methods.

  17. Monte Carlo simulation of neutron scattering instruments

    International Nuclear Information System (INIS)

    Seeger, P.A.; Daemen, L.L.; Hjelm, R.P. Jr.

    1998-01-01

    A code package consisting of the Monte Carlo Library MCLIB, the executing code MC RUN, the web application MC Web, and various ancillary codes is proposed as an open standard for simulation of neutron scattering instruments. The architecture of the package includes structures to define surfaces, regions, and optical elements contained in regions. A particle is defined by its vector position and velocity, its time of flight, its mass and charge, and a polarization vector. The MC RUN code handles neutron transport and bookkeeping, while the action on the neutron within any region is computed using algorithms that may be deterministic, probabilistic, or a combination. Complete versatility is possible because the existing library may be supplemented by any procedures a user is able to code. Some examples are shown
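
    The particle state listed in this abstract translates almost directly into a data structure; a Python sketch (the field names are guesses for illustration, not MCLIB's actual layout):

      from dataclasses import dataclass

      @dataclass
      class Particle:
          """Neutron state as described for MCLIB/MC_RUN: vector position
          and velocity, time of flight, mass, charge, polarization."""
          position: tuple = (0.0, 0.0, 0.0)       # cm
          velocity: tuple = (0.0, 0.0, 1.0e5)     # cm/s
          tof: float = 0.0                        # time of flight, s
          mass: float = 1.675e-24                 # g (neutron)
          charge: float = 0.0
          polarization: tuple = (0.0, 0.0, 1.0)

      n = Particle()
      # Free flight over 1 m along z: advance position and time of flight.
      dt = 100.0 / n.velocity[2]
      n.position = tuple(r + v * dt for r, v in zip(n.position, n.velocity))
      n.tof += dt
      print(n.tof, n.position)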

  18. Cost effective distributed computing for Monte Carlo radiation dosimetry

    International Nuclear Information System (INIS)

    Wise, K.N.; Webb, D.V.

    2000-01-01

    An inexpensive computing facility has been established for performing repetitive Monte Carlo simulations with the BEAM and EGS4/EGSnrc codes of linear accelerator beams, for calculating effective dose from diagnostic imaging procedures, and of ion chambers and phantoms used for the Australian high energy absorbed dose standards. The facility currently consists of 3 dual-processor 450 MHz PCs linked by a high speed LAN. The 3 PCs can be accessed either locally from a single keyboard/monitor/mouse combination using a SwitchView controller or remotely via a computer network from PCs with suitable communications software (e.g. Telnet, Kermit etc). All 3 PCs are identically configured to have the Red Hat Linux 6.0 operating system. A Fortran compiler and the BEAM and EGS4/EGSnrc codes are available on the 3 PCs. The preparation of sequences of jobs utilising the Monte Carlo codes is simplified using load-distributing software (enFuzion 6.0, marketed by TurboLinux Inc, formerly Cluster from Active Tools), which efficiently distributes the computing load amongst all 6 processors. We describe 3 applications of the system: (a) energy spectra from radiotherapy sources, (b) mean mass-energy absorption coefficients and stopping powers for absolute absorbed dose standards, and (c) dosimetry for diagnostic procedures; (a) and (b) are based on the transport codes BEAM and FLURZnrc while (c) is a Fortran/EGS code developed at ARPANSA. Efficiency gains ranged from 3 for (c) to close to the theoretical maximum of 6 for (a) and (b), with the gain depending on the amount of 'bookkeeping' to begin each task and the time taken to complete a single task. We have found the use of a load-balancing batch processing system with many PCs to be an economical way of achieving greater productivity for Monte Carlo calculations or of any computer intensive task requiring many runs with different parameters. Copyright (2000) Australasian College of Physical Scientists and

  19. FERMILAB: Preparing to collide

    International Nuclear Information System (INIS)

    Anon.

    1992-01-01

    Against the background of stringent Environment, Safety and Health (ES&H) regulations mandated by the US Department of Energy for all national Labs, Fermilab prepared to mount the next major Tevatron proton-antiproton collider run

  20. Vectorization of phase space Monte Carlo code in FACOM vector processor VP-200

    International Nuclear Information System (INIS)

    Miura, Kenichi

    1986-01-01

    This paper describes the vectorization techniques for Monte Carlo codes in Fujitsu's Vector Processor System. The phase space Monte Carlo code FOWL is selected as a benchmark, and scalar and vector performances are compared. The vectorized kernel Monte Carlo routine which contains heavily nested IF tests runs up to 7.9 times faster in vector mode than in scalar mode. The overall performance improvement of the vectorized FOWL code over the original scalar code reaches 3.3. The results of this study strongly indicate that supercomputer can be a powerful tool for Monte Carlo simulations in high energy physics. (Auth.)

  1. Computer system for Monte Carlo experimentation

    International Nuclear Information System (INIS)

    Grier, D.A.

    1986-01-01

    A new computer system for Monte Carlo experimentation is presented. The new system speeds and simplifies the process of coding and preparing a Monte Carlo experiment; it also encourages the proper design of Monte Carlo experiments, and the careful analysis of the experimental results. A new functional language is the core of this system. Monte Carlo experiments, and their experimental designs, are programmed in this new language; those programs are compiled into Fortran output. The Fortran output is then compiled and executed. The experimental results are analyzed with a standard statistics package such as Si, Isp, or Minitab, or with a user-supplied program. Both the experimental results and the experimental design may be directly loaded into the workspace of those packages. The new functional language frees programmers from many of the details of programming an experiment. Experimental designs such as factorial, fractional factorial, or Latin square are easily described by the control structures and expressions of the language. Specific mathematical models are generated by the routines of the language

  2. Running economy and energy cost of running with backpacks.

    Science.gov (United States)

    Scheer, Volker; Cramer, Leoni; Heitkamp, Hans-Christian

    2018-05-02

    Running is a popular recreational activity and additional weight is often carried in backpacks on longer runs. Our aim was to examine running economy and other physiological parameters while running with 1 kg and 3 kg backpacks at different submaximal running velocities. 10 male recreational runners (age 25 ± 4.2 years, VO2peak 60.5 ± 3.1 ml·kg⁻¹·min⁻¹) performed 5-minute runs on a motorized treadmill at three different submaximal speeds of 70, 80 and 90% of the anaerobic lactate threshold (LT), without additional weight and carrying a 1 kg and a 3 kg backpack. Oxygen consumption, heart rate, lactate and RPE were measured and analysed. Oxygen consumption, energy cost of running and heart rate increased significantly while running with a backpack weighing 3 kg compared to running without additional weight at 80% of speed at lactate threshold (sLT) (p=0.026, p=0.009 and p=0.003) and at 90% sLT (p<0.001, p=0.001 and p=0.001). Running with a 1 kg backpack showed a significant increase in heart rate at 80% sLT (p=0.008) and a significant increase in oxygen consumption and heart rate at 90% sLT (p=0.045 and p=0.007) compared to running without additional weight. While running at 70% sLT, running economy and cardiovascular effort increased with weighted backpack running compared to running without additional weight; however, these increases did not reach statistical significance. Running economy deteriorates and cardiovascular effort increases while running with additional backpack weight, especially at higher submaximal running speeds. Backpack weight should therefore be kept to a minimum.

  3. Monte Carlo principles and applications

    Energy Technology Data Exchange (ETDEWEB)

    Raeside, D E [Oklahoma Univ., Oklahoma City (USA). Health Sciences Center

    1976-03-01

    The principles underlying the use of Monte Carlo methods are explained, for readers who may not be familiar with the approach. The generation of random numbers is discussed, and the connection between Monte Carlo methods and random numbers is indicated. Outlines of two well established Monte Carlo sampling techniques are given, together with examples illustrating their use. The general techniques for improving the efficiency of Monte Carlo calculations are considered. The literature relevant to the applications of Monte Carlo calculations in medical physics is reviewed.
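
    As a concrete instance of one such sampling technique, here is rejection sampling of the triangular density f(x) = 2x on [0, 1] under a uniform envelope (a textbook illustration, not taken from the article):

      import random

      def sample_triangular():
          # Target f(x) = 2x on [0, 1]; envelope g(x) = 1 with bound M = 2,
          # so a uniform candidate x is accepted with probability
          # f(x) / (M * g(x)) = x.
          while True:
              x = random.random()
              if random.random() < x:      # acceptance test
                  return x

      xs = [sample_triangular() for _ in range(100_000)]
      print("sample mean:", sum(xs) / len(xs), "expected: 2/3")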

  4. Bayesian Monte Carlo method

    International Nuclear Information System (INIS)

    Rajabalinejad, M.

    2010-01-01

    To reduce the cost of Monte Carlo (MC) simulations for time-consuming processes, Bayesian Monte Carlo (BMC) is introduced in this paper. The BMC method reduces the number of realizations in MC according to the desired accuracy level. BMC also provides the possibility of considering more priors. In other words, different priors can be integrated into one model by using BMC to further reduce the cost of simulations. This study suggests speeding up the simulation process by considering the logical dependence of neighboring points as prior information. This information is used in the BMC method to produce a predictive tool through the simulation process. The general methodology and algorithm of the BMC method are presented in this paper. The BMC method is applied to a simplified breakwater model as well as the finite element model of the 17th Street Canal in New Orleans, and the results are compared with the MC and Dynamic Bounds methods.

  5. Contributon Monte Carlo

    International Nuclear Information System (INIS)

    Dubi, A.; Gerstl, S.A.W.

    1979-05-01

    The contributon Monte Carlo method is based on a new recipe to calculate target responses by means of a volume integral of the contributon current in a region between the source and the detector. A comprehensive description of the method, its implementation in the general-purpose MCNP code, and results of the method for realistic nonhomogeneous, energy-dependent problems are presented. 23 figures, 10 tables

  6. Fundamentals of Monte Carlo

    International Nuclear Information System (INIS)

    Wollaber, Allan Benton

    2016-01-01

    This is a powerpoint presentation which serves as lecture material for the Parallel Computing summer school. It goes over the fundamentals of the Monte Carlo calculation method. The material is presented according to the following outline: Introduction (background, a simple example: estimating π), Why does this even work? (The Law of Large Numbers, The Central Limit Theorem), How to sample (inverse transform sampling, rejection), and An example from particle transport.
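
    The simple example named in the outline is conventionally the dartboard estimate of π: the fraction of uniform points in the unit square falling inside the quarter circle tends to π/4 by the Law of Large Numbers, with an error shrinking like 1/√n by the Central Limit Theorem:

      import random

      n = 1_000_000
      hits = sum(1 for _ in range(n)
                 if random.random() ** 2 + random.random() ** 2 <= 1.0)
      print(f"pi ~ {4.0 * hits / n:.4f}  (n = {n})")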

  7. Microcanonical Monte Carlo

    International Nuclear Information System (INIS)

    Creutz, M.

    1986-01-01

    The author discusses a recently developed algorithm for simulating statistical systems. The procedure interpolates between molecular dynamics methods and canonical Monte Carlo. The primary advantages are extremely fast simulations of discrete systems such as the Ising model and a relative insensitivity to random number quality. A variation of the algorithm gives rise to a deterministic dynamics for Ising spins. This model may be useful for high speed simulation of non-equilibrium phenomena

  8. Fundamentals of Monte Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Wollaber, Allan Benton [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)]

    2016-06-16

    This is a powerpoint presentation which serves as lecture material for the Parallel Computing summer school. It goes over the fundamentals of the Monte Carlo calculation method. The material is presented according to the following outline: Introduction (background, a simple example: estimating π), Why does this even work? (The Law of Large Numbers, The Central Limit Theorem), How to sample (inverse transform sampling, rejection), and An example from particle transport.

  9. Monte Carlo alpha calculation

    Energy Technology Data Exchange (ETDEWEB)

    Brockway, D.; Soran, P.; Whalen, P.

    1985-01-01

    A Monte Carlo algorithm to efficiently calculate static alpha eigenvalues, N = n·e^(αt), for supercritical systems has been developed and tested. A direct Monte Carlo approach to calculating a static alpha is to simply follow the buildup in time of neutrons in a supercritical system and evaluate the logarithmic derivative of the neutron population with respect to time. This procedure is expensive, and the solution is very noisy and almost useless for a system near critical. The modified approach is to convert the time-dependent problem to a static α-eigenvalue problem and regress α on solutions of a k-eigenvalue problem. In practice, this procedure is much more efficient than the direct calculation, and produces much more accurate results. Because the Monte Carlo codes are intrinsically three-dimensional and use elaborate continuous-energy cross sections, this technique is now used as a standard for evaluating other calculational techniques in odd geometries or with group cross sections.

  10. Ubuntu Up and Running

    CERN Document Server

    Nixon, Robin

    2010-01-01

    Ubuntu for everyone! This popular Linux-based operating system is perfect for people with little technical background. It's simple to install, and easy to use -- with a strong focus on security. Ubuntu: Up and Running shows you the ins and outs of this system with a complete hands-on tour. You'll learn how Ubuntu works, how to quickly configure and maintain Ubuntu 10.04, and how to use this unique operating system for networking, business, and home entertainment. This book includes a DVD with the complete Ubuntu system and several specialized editions -- including the Mythbuntu multimedia re

  11. Monte Carlo calculations of electron transport on microcomputers

    International Nuclear Information System (INIS)

    Chung, Manho; Jester, W.A.; Levine, S.H.; Foderaro, A.H.

    1990-01-01

    In the work described in this paper, the Monte Carlo program ZEBRA, developed by Berber and Buxton, was converted to run on the Macintosh computer using Microsoft BASIC to reduce the cost of Monte Carlo calculations using microcomputers. Then the Eltran2 program was transferred to an IBM-compatible computer. Turbo BASIC and Microsoft Quick BASIC have been used on the IBM-compatible Tandy 4000SX computer. The paper shows the running speed of the Monte Carlo programs on the different computers, normalized to one for Eltran2 on the Macintosh-SE or Macintosh-Plus computer; higher values indicate proportionally faster running times. Since Eltran2 is a one-dimensional program, it calculates energy deposited in a semi-infinite multilayer slab. Eltran2 has been modified into a two-dimensional program called Eltran3 to compute more accurately the case with a point source, a small detector, and a short source-to-detector distance. The running time of Eltran3 is about twice that of Eltran2 for a similar case.

  12. Implications of Monte Carlo Statistical Errors in Criticality Safety Assessments

    International Nuclear Information System (INIS)

    Pevey, Ronald E.

    2005-01-01

    Most criticality safety calculations are performed using Monte Carlo techniques because of Monte Carlo's ability to handle complex three-dimensional geometries. For Monte Carlo calculations, the more histories sampled, the lower the standard deviation of the resulting estimates. The common intuition is, therefore, that the more histories, the better; as a result, analysts tend to run Monte Carlo analyses as long as possible (or at least to a minimum acceptable uncertainty). For Monte Carlo criticality safety analyses, however, the optimization situation is complicated by the fact that procedures usually require that an extra margin of safety be added because of the statistical uncertainty of the Monte Carlo calculations. This additional safety margin affects the impact of the choice of the calculational standard deviation, both on production and on safety. This paper shows that, under the assumptions of normally distributed benchmarking calculational errors and exact compliance with the upper subcritical limit (USL), the standard deviation that optimizes production is zero, but there is a non-zero value of the calculational standard deviation that minimizes the risk of inadvertently labeling a supercritical configuration as subcritical. Furthermore, this value is shown to be a simple function of the typical benchmarking step outcomes: the bias, the standard deviation of the bias, the upper subcritical limit, and the number of standard deviations added to calculated k-effectives before comparison to the USL.
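
    The final comparison step described above reduces to a one-line inequality; the following Python sketch uses hypothetical numbers purely for illustration.

        def acceptably_subcritical(k_calc, sigma, usl, n_sigma=2.0):
            """Criticality-safety style acceptance test: the calculated
            k-effective plus n_sigma standard deviations must stay below
            the upper subcritical limit (USL)."""
            return k_calc + n_sigma * sigma < usl

        # Hypothetical cases: a comfortably subcritical result and a marginal one.
        print(acceptably_subcritical(k_calc=0.9312, sigma=0.0008, usl=0.95))  # True
        print(acceptably_subcritical(k_calc=0.9490, sigma=0.0015, usl=0.95))  # False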

  13. Alternative implementations of the Monte Carlo power method

    International Nuclear Information System (INIS)

    Blomquist, R.N.; Gelbard, E.M.

    2002-01-01

    We compare nominal efficiencies, i.e. variances in power shapes for equal running time, of different versions of the Monte Carlo eigenvalue computation, as applied to criticality safety analysis calculations. The two main methods considered here are "conventional" Monte Carlo and the superhistory method, and both are used in criticality safety codes. Within each of these major methods, different variants are available for the main steps of the basic Monte Carlo algorithm. Thus, for example, different treatments of the fission process may vary in the extent to which they follow, in analog fashion, the details of real-world fission, or may vary in details of the methods by which they choose next-generation source sites. In general the same options are available in both the superhistory method and conventional Monte Carlo, but there seems not to have been much examination of the special properties of the two major methods and their minor variants. We find, first, that the superhistory method is just as efficient as conventional Monte Carlo and, secondly, that use of different variants of the basic algorithms may, in special cases, have a surprisingly large effect on Monte Carlo computational efficiency.

  14. ATLAS people can run!

    CERN Multimedia

    Claudia Marcelloni de Oliveira; Pauline Gagnon

    It must be all the training we are getting every day, running around trying to get everything ready for the start of the LHC next year. This year, the ATLAS runners were in fine form and came in force. Nine ATLAS teams signed up for the 37th Annual CERN Relay Race with six runners per team. Under a blasting sun on Wednesday 23rd May 2007, each team covered the distances of 1000m, 800m, 800m, 500m, 500m and 300m taking the runners around the whole Meyrin site, hills included. A small reception took place in the ATLAS secretariat a week later to award the ATLAS Cup to the best ATLAS team. For the details on this complex calculation which takes into account the age of each runner, their gender and the color of their shoes, see the July 2006 issue of ATLAS e-news. The ATLAS Running Athena Team, the only all-women team enrolled this year, won the much coveted ATLAS Cup for the second year in a row. In fact, they are so good that Peter Schmid and Patrick Fassnacht are wondering about reducing the women's bonus in...

  15. Underwater running device

    International Nuclear Information System (INIS)

    Kogure, Sumio; Matsuo, Takashiro; Yoshida, Yoji

    1996-01-01

    An underwater running device for an underwater inspection device, used to inspect the inner surfaces of a reactor or a water vessel, has an outer frame and an inner frame; the two are connected slidably by an air cylinder and rotatably by a shaft. The outer frame has four legs, each equipped with a sucker at the top end, and the inner frame likewise has four legs each equipped with a sucker at the top end. The outer and inner frame legs are connected to their frames by air cylinders and can be elevated or lowered (extended or contracted) by them. Each sucker is connected to a jet pump-type negative pressure generator. The device can run and move by alternately repeating attachment and release of the outer frame legs and the inner frame legs while stably maintaining the posture of the inspection device. (I.N.)

  16. ATLAS Run 1 Pythia8 tunes

    CERN Document Server

    The ATLAS collaboration

    2014-01-01

    We present tunes of the Pythia8 Monte Carlo event generator's parton shower and multiple parton interaction parameters to a range of data observables from ATLAS Run 1. Four new tunes have been constructed, corresponding to the four leading-order parton density functions, CTEQ6L1, MSTW2008LO, NNPDF23LO, and HERAPDF15LO, each simultaneously tuning ten generator parameters. A set of systematic variations is provided for the NNPDF tune, based on the eigentune method. These tunes improve the modeling of observables that can be described by leading-order + parton shower simulation, and are primarily intended for use in situations where next-to-leading-order and/or multileg parton-showered simulations are unavailable or impractical.

  17. ATLAS Distributed Computing in LHC Run2

    CERN Document Server

    Campana, Simone; The ATLAS collaboration

    2015-01-01

    The ATLAS Distributed Computing infrastructure has evolved after the first period of LHC data taking in order to cope with the challenges of the upcoming LHC Run2. An increase in both the data rate and the computing demands of the Monte-Carlo simulation, as well as new approaches to ATLAS analysis, dictated a more dynamic workload management system (ProdSys2) and data management system (Rucio), overcoming the boundaries imposed by the design of the old computing model. In particular, the commissioning of new central computing system components was the core part of the migration toward the flexible computing model. Flexible computing utilization exploiting opportunistic resources such as HPC, cloud, and volunteer computing is embedded in the new computing model, the data access mechanisms have been enhanced with remote access, and the network topology and performance is deeply integrated into the core of the system. Moreover a new data management strategy, based on a defined lifetime for each dataset, has been defin...

  18. The design of the run Clever randomized trial: running volume, -intensity and running-related injuries.

    Science.gov (United States)

    Ramskov, Daniel; Nielsen, Rasmus Oestergaard; Sørensen, Henrik; Parner, Erik; Lind, Martin; Rasmussen, Sten

    2016-04-23

    Injury incidence and prevalence in running populations have been investigated and documented in several studies. However, knowledge about injury etiology and prevention is needed. Training errors in running are modifiable risk factors, and people engaged in recreational running need evidence-based running schedules to minimize the risk of injury. The existing literature on running volume, running intensity and the development of injuries shows conflicting results. This may be related to previously applied study designs, the methods used to quantify the performed running and the statistical analysis of the collected data. The aim of the Run Clever trial is to investigate whether a focus on running intensity compared with a focus on running volume in a running schedule influences the overall injury risk differently. The Run Clever trial is a randomized trial with a 24-week follow-up. Healthy recreational runners between 18 and 65 years with an average of 1-3 running sessions per week over the past 6 months are included. Participants are randomized into two intervention groups: Schedule-I and Schedule-V. Schedule-I emphasizes a progression in running intensity by increasing the weekly volume of running at a hard pace, while Schedule-V emphasizes a progression in running volume by increasing the weekly overall volume. Data on the running performed are collected by GPS. Participants who sustain running-related injuries are diagnosed by a diagnostic team of physiotherapists using standardized diagnostic criteria; the members of the diagnostic team are blinded. The study design, procedures and informed consent were approved by the Ethics Committee Northern Denmark Region (N-20140069). The Run Clever trial will provide insight into possible differences in injury risk between running schedules emphasizing either running intensity or running volume. The risk of sustaining volume- and intensity-related injuries will be compared in the two intervention groups using a competing

  19. Neutrino oscillation parameter sampling with MonteCUBES

    Science.gov (United States)

    Blennow, Mattias; Fernandez-Martinez, Enrique

    2010-01-01

    used in GLoBES [1,2]. Solution method: MonteCUBES is written as a plug-in to the GLoBES software [1,2] and provides the necessary methods to perform Markov Chain Monte Carlo sampling of the parameter space. This allows an efficient sampling of the parameter space and has a complexity which does not grow exponentially with the parameter space dimension. The integration of the MonteCUBES package with the GLoBES software makes sure that the experimental definitions already in use by the community can also be used with MonteCUBES, while also lowering the learning threshold for users who already know GLoBES. Additional comments: A Matlab GUI for interpretation of results is included in the distribution. Running time: The typical running time varies depending on the dimensionality of the parameter space, the complexity of the experiment, and how well the parameter space should be sampled. The running time for our simulations [3] with 15 free parameters at a Neutrino Factory with O(10) samples varied from a few hours to tens of hours. References:P. Huber, M. Lindner, W. Winter, Comput. Phys. Comm. 167 (2005) 195, hep-ph/0407333. P. Huber, J. Kopp, M. Lindner, M. Rolinec, W. Winter, Comput. Phys. Comm. 177 (2007) 432, hep-ph/0701187. S. Antusch, M. Blennow, E. Fernandez-Martinez, J. Lopez-Pavon, arXiv:0903.3986 [hep-ph].

  20. Monts Jura Jazz Festival

    CERN Multimedia

    Jazz Club

    2012-01-01

    The 5th edition of the "Monts Jura Jazz Festival" will take place on September 21st and 22nd 2012 at the Esplanade du Lac in Divonne-les-Bains. This festival is organized by the "CERN Jazz Club" with the support of the "CERN Staff Association". It is a major musical event in the French/Swiss area and proposes a world-class program with jazz artists such as D. Lockwood and D. Reinhardt. More information on http://www.jurajazz.com.

  1. Monts Jura Jazz Festival

    CERN Document Server

    2012-01-01

    The 5th edition of the "Monts Jura Jazz Festival" will take place at the Esplanade du Lac in Divonne-les-Bains, France on September 21 and 22. This festival organized by the CERN Jazz Club and supported by the CERN Staff Association is becoming a major musical event in the Geneva region. International Jazz artists like Didier Lockwood and David Reinhardt are part of this year outstanding program. Full program and e-tickets are available on the festival website. Don't miss this great festival!

  2. Barefoot running: biomechanics and implications for running injuries.

    Science.gov (United States)

    Altman, Allison R; Davis, Irene S

    2012-01-01

    Despite the technological developments in modern running footwear, up to 79% of runners today get injured in a given year. As we evolved barefoot, examining this mode of running is insightful. Barefoot running encourages a forefoot strike pattern that is associated with a reduction in impact loading and stride length. Studies have shown a reduction in injuries to shod forefoot strikers as compared with rearfoot strikers. In addition to a forefoot strike pattern, barefoot running also affords the runner increased sensory feedback from the foot-ground contact, as well as increased energy storage in the arch. Minimal footwear is being used to mimic barefoot running, but it is not clear whether it truly does. The purpose of this article is to review current and past research on shod and barefoot/minimal footwear running and their implications for running injuries. Clearly more research is needed, and areas for future study are suggested.

  3. Similar Running Economy With Different Running Patterns Along the Aerial-Terrestrial Continuum.

    Science.gov (United States)

    Lussiana, Thibault; Gindre, Cyrille; Hébert-Losier, Kim; Sagawa, Yoshimasa; Gimenez, Philippe; Mourot, Laurent

    2017-04-01

    No unique or ideal running pattern is the most economical for all runners. Classifying the global running patterns of individuals into 2 categories (aerial and terrestrial) using the Volodalen method could permit a better understanding of the relationship between running economy (RE) and biomechanics. The main purpose was to compare the RE of aerial and terrestrial runners. Two coaches classified 58 runners into aerial (n = 29) or terrestrial (n = 29) running patterns on the basis of visual observations. RE, muscle activity, kinematics, and spatiotemporal parameters of both groups were measured during a 5-min run at 12 km/h on a treadmill. Maximal oxygen uptake (V̇O2max) and peak treadmill speed (PTS) were assessed during an incremental running test. No differences were observed between aerial and terrestrial patterns for RE, V̇O2max, and PTS. However, at 12 km/h, aerial runners exhibited earlier gastrocnemius lateralis activation in preparation for contact, less dorsiflexion at ground contact, higher coactivation indexes, and greater leg stiffness during stance phase than terrestrial runners. Terrestrial runners had more pronounced semitendinosus activation at the start and end of the running cycle, shorter flight time, greater leg compression, and a more rear-foot strike. Different running patterns were associated with similar RE. Aerial runners appear to rely more on elastic energy utilization with a rapid eccentric-concentric coupling time, whereas terrestrial runners appear to propel the body more forward rather than upward to limit work against gravity. Excluding runners with a mixed running pattern from analyses did not affect study interpretation.

  4. The MCLIB library: Monte Carlo simulation of neutron scattering instruments

    Energy Technology Data Exchange (ETDEWEB)

    Seeger, P.A.

    1995-09-01

    Monte Carlo is a method to integrate over a large number of variables. Random numbers are used to select a value for each variable, and the integrand is evaluated. The process is repeated a large number of times and the resulting values are averaged. For a neutron transport problem, first select a neutron from the source distribution, and project it through the instrument using either deterministic or probabilistic algorithms to describe its interaction whenever it hits something, and then (if it hits the detector) tally it in a histogram representing where and when it was detected. This is intended to simulate the process of running an actual experiment (but it is much slower). This report describes the philosophy and structure of MCLIB, a Fortran library of Monte Carlo subroutines which has been developed for design of neutron scattering instruments. A pair of programs (LQDGEOM and MC_RUN) which use the library are shown as an example.
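
    The integrate-by-averaging loop described in the opening sentences can be written in a few lines of Python; the integrand and dimension are illustrative, and this sketch is of course independent of the MCLIB Fortran routines themselves.

        import math
        import random

        def mc_integrate(f, dim, n):
            """Average f over n random points in the unit hypercube [0,1]^dim:
            pick a value for each variable, evaluate the integrand, repeat."""
            total = 0.0
            for _ in range(n):
                x = [random.random() for _ in range(dim)]
                total += f(x)
            return total / n

        # Example integrand: product of the coordinates over [0,1]^5;
        # the exact integral is (1/2)^5 = 0.03125.
        print(mc_integrate(math.prod, dim=5, n=200_000))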

  5. Monte Carlo simulations on a 9-node PC cluster

    International Nuclear Information System (INIS)

    Gouriou, J.

    2001-01-01

    Monte Carlo simulation methods are frequently used in the fields of medical physics, dosimetry and metrology of ionising radiation. Nevertheless, the main drawback of this technique is that it is computationally slow, because the statistical uncertainty of the result improves only as the square root of the computational time. We present a method which reduces the effective running time by a factor of 10 to 20. In practice, the aim was to reduce the calculation time in the LNHB metrological applications from several weeks to a few days. This approach includes the use of a PC cluster, under the Linux operating system and the PVM parallel library (version 3.4). The Monte Carlo codes EGS4, MCNP and PENELOPE have been implemented on this platform, and the latter two adapted to run under the PVM environment. The maximum observed speedup ranges from a factor of 13 to 18, depending on the code and the problem to be simulated. (orig.)
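
    The master/worker pattern behind such a cluster can be sketched with Python's multiprocessing module standing in for PVM; the worker count, per-worker seeds, and the toy estimator are illustrative assumptions.

        import random
        from multiprocessing import Pool

        def batch_hits(args):
            """Worker: one independent batch of hit-or-miss samples for pi."""
            n, seed = args
            rng = random.Random(seed)
            return sum(1 for _ in range(n)
                       if rng.random() ** 2 + rng.random() ** 2 <= 1.0)

        if __name__ == "__main__":
            workers, n_per = 8, 250_000
            # Distinct seeds stand in for the independent random-number
            # streams a PVM-style master/worker Monte Carlo farm would use.
            jobs = [(n_per, 1234 + i) for i in range(workers)]
            with Pool(workers) as pool:
                hits = sum(pool.map(batch_hits, jobs))
            print(4.0 * hits / (workers * n_per))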

  6. Mesh-based weight window approach for Monte Carlo simulation

    International Nuclear Information System (INIS)

    Liu, L.; Gardner, R.P.

    1997-01-01

    The Monte Carlo method has been increasingly used to solve particle transport problems. Statistical fluctuation from random sampling is the major limiting factor of its application. To obtain the desired precision, variance reduction techniques are indispensable for most practical problems. Among various variance reduction techniques, the weight window method proves to be one of the most general, powerful, and robust. The method is implemented in the current MCNP code. An importance map is estimated during a regular Monte Carlo run, and then the map is used in the subsequent run for splitting and Russian roulette games. The major drawback of this weight window method is its lack of user-friendliness. It normally requires that users divide the large geometric cells into smaller ones by introducing additional surfaces to ensure an acceptable spatial resolution of the importance map. In this paper, we present a new weight window approach to overcome this drawback.
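
    The splitting and Russian roulette games played against a weight window can be sketched as follows; this is a generic textbook version with an assumed survival-weight convention, not the MCNP implementation.

        import random

        def apply_weight_window(weight, w_low, w_high, w_survive=None):
            """Weight-window game: split above the window, play Russian
            roulette below it, pass through unchanged inside it. Returns
            the list of weights of the surviving particle copies."""
            if w_survive is None:
                w_survive = 0.5 * (w_low + w_high)
            if weight > w_high:                    # split into n copies
                n = int(weight / w_high) + 1
                return [weight / n] * n
            if weight < w_low:                     # Russian roulette
                if random.random() < weight / w_survive:
                    return [w_survive]             # survivor carries more weight
                return []                          # particle killed
            return [weight]                        # inside the window

        # Expected total weight is preserved by both games, which is what
        # keeps the tallies unbiased.
        print(apply_weight_window(2.5, w_low=0.25, w_high=1.0))  # three copies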

  7. The MCLIB library: Monte Carlo simulation of neutron scattering instruments

    International Nuclear Information System (INIS)

    Seeger, P.A.

    1995-01-01

    Monte Carlo is a method to integrate over a large number of variables. Random numbers are used to select a value for each variable, and the integrand is evaluated. The process is repeated a large number of times and the resulting values are averaged. For a neutron transport problem, first select a neutron from the source distribution, and project it through the instrument using either deterministic or probabilistic algorithms to describe its interaction whenever it hits something, and then (if it hits the detector) tally it in a histogram representing where and when it was detected. This is intended to simulate the process of running an actual experiment (but it is much slower). This report describes the philosophy and structure of MCLIB, a Fortran library of Monte Carlo subroutines which has been developed for design of neutron scattering instruments. A pair of programs (LQDGEOM and MC_RUN) which use the library are shown as an example

  8. Monte Carlo calculation with unquenched Wilson-Fermions

    International Nuclear Information System (INIS)

    Montvay, I.

    1984-01-01

    A Monte Carlo updating procedure taking into account the virtual quark loops is described. It is based on a high-order hopping parameter expansion of the quark determinant for Wilson fermions. In a first test run, Wilson-loop expectation values are measured on a 6⁴ lattice at β=5.70 using the 16th-order hopping parameter expansion for the quark determinant. (orig.)

  9. ATLAS simulation of boson plus jets processes in Run 2

    CERN Document Server

    The ATLAS collaboration

    2017-01-01

    This note describes the ATLAS simulation setup used to model the production of single electroweak bosons ($W$, $Z/\gamma^\ast$ and prompt $\gamma$) in association with jets in proton-proton collisions at centre-of-mass energies of 8 and 13 TeV. Several Monte Carlo generator predictions are compared in regions of phase space relevant for data analyses during the LHC Run-2, or compared to unfolded data distributions measured in previous Run-1 or early Run-2 ATLAS analyses. Comparisons are made for regions of phase space with or without additional requirements on the heavy-flavour content of the accompanying jets, as well as for electroweak $Vjj$ production processes. Both higher-order corrections and systematic uncertainties are also discussed.

  10. Darlington up and running

    International Nuclear Information System (INIS)

    Show, Don

    1993-01-01

    We've built some of the largest and most successful generating stations in the world. Nonetheless, we cannot take our knowledge and understanding of the technology for granted. Still, I do believe that we are getting better, building safer, more efficient plants, and introducing significant improvements to our existing stations. Ontario Hydro is a large and technically rich organization. Even so, we realize that partnerships with others in the industry are absolutely vital. I am thinking particularly of Atomic Energy of Canada Limited (AECL). We enjoy a very close relationship with AECL, and their support was never more important than during the N/A investigations. In recent years, we've strengthened our relationship with AECL considerably. For example, we recently signed an agreement with AECL, making available all of the Darlington 900 MWe design. Much of the cooperation between Ontario Hydro and AECL occurs through the CANDU Engineering Authority and the CANDU Owners Group (COG). These organizations are helping both of us to greatly improve cooperation and efficiency, and they are helping ensure we get the biggest return on our CANDU investments. COG also provides an important information network which links CANDU operators in Canada, here in Korea, Argentina, India, Pakistan and Romania. In many respects, it is helping to develop the strong partnerships to support CANDU technology worldwide. We all benefit in the long run from sharing information and resources

  11. Backward running or absence of running from Creutz ratios

    International Nuclear Information System (INIS)

    Giedt, Joel; Weinberg, Evan

    2011-01-01

    We extract the running coupling based on Creutz ratios in SU(2) lattice gauge theory with two Dirac fermions in the adjoint representation. Depending on how the extrapolation to zero fermion mass is performed, either backward running or an absence of running is observed at strong bare coupling. This behavior is consistent with other findings which indicate that this theory has an infrared fixed point.
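
    For reference, the Creutz ratio is built from a 2x2 block of Wilson-loop expectation values, chi(R,T) = -ln[W(R,T)W(R-1,T-1)/(W(R,T-1)W(R-1,T))]; a small Python helper, checked against a synthetic area-law table, might look like this.

        import math

        def creutz_ratio(W, R, T):
            """Creutz ratio chi(R,T) from Wilson-loop expectation values
            W[(r,t)]; with an area law W ~ exp(-sigma*R*T) it returns the
            string tension sigma."""
            num = W[(R, T)] * W[(R - 1, T - 1)]
            den = W[(R, T - 1)] * W[(R - 1, T)]
            return -math.log(num / den)

        # Synthetic check with an exact area law, sigma = 0.25:
        sigma = 0.25
        W = {(r, t): math.exp(-sigma * r * t)
             for r in range(1, 5) for t in range(1, 5)}
        print(creutz_ratio(W, 3, 3))  # recovers 0.25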

  12. Physiological demands of running during long distance runs and triathlons.

    Science.gov (United States)

    Hausswirth, C; Lehénaff, D

    2001-01-01

    The aim of this review article is to identify the main metabolic factors which have an influence on the energy cost of running (Cr) during prolonged exercise runs and triathlons. This article proposes a physiological comparison of these 2 exercises and the relationship between running economy and performance. Many terms are used as the equivalent of 'running economy' such as 'oxygen cost', 'metabolic cost', 'energy cost of running', and 'oxygen consumption'. It has been suggested that these expressions may be defined by the rate of oxygen uptake (VO2) at a steady state (i.e. between 60 to 90% of maximal VO2) at a submaximal running speed. Endurance events such as triathlon or marathon running are known to modify biological constants of athletes and should have an influence on their running efficiency. The Cr appears to contribute to the variation found in distance running performance among runners of homogeneous level. This has been shown to be important in sports performance, especially in events like long distance running. In addition, many factors are known or hypothesised to influence Cr such as environmental conditions, participant specificity, and metabolic modifications (e.g. training status, fatigue). The decrease in running economy during a triathlon and/or a marathon could be largely linked to physiological factors such as the enhancement of core temperature and a lack of fluid balance. Moreover, the increase in circulating free fatty acids and glycerol at the end of these long exercise durations bear witness to the decrease in Cr values. The combination of these factors alters the Cr during exercise and hence could modify the athlete's performance in triathlons or a prolonged run.

  13. MONTE and ANAL1

    International Nuclear Information System (INIS)

    Lupton, L.R.; Keller, N.A.

    1982-09-01

    The design of a positron emission tomography (PET) ring camera involves trade-offs between such things as sensitivity, resolution and cost. As a design aid, a Monte Carlo simulation of a single-ring camera system has been developed. The model includes a source-filled phantom, collimators, detectors, and optional shadow shields and inter-crystal septa. Individual gamma rays are tracked within the system materials until they escape, are absorbed, or are detected. Compton and photelectric interactions are modelled. All system dimensions are variable within the computation. Coincidence and singles data are recorded according to type (true or scattered), annihilation origin, and detected energy. Photon fluxes at various points of interest, such as the edge of the phantom and the collimator, are available. This report reviews the basics of PET, describes the physics involved in the simulation, and provides detailed outlines of the routines

  14. Frost in Charitum Montes

    Science.gov (United States)

    2003-01-01

    MGS MOC Release No. MOC2-387, 10 June 2003. This is a Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) wide angle view of the Charitum Montes, south of Argyre Planitia, in early June 2003. The seasonal south polar frost cap, composed of carbon dioxide, has been retreating southward through this area since spring began a month ago. The bright features toward the bottom of this picture are surfaces covered by frost. The picture is located near 57°S, 43°W. North is at the top, south is at the bottom. Sunlight illuminates the scene from the upper left. The area shown is about 217 km (135 miles) wide.

  15. Monte Carlo Methods in Physics

    International Nuclear Information System (INIS)

    Santoso, B.

    1997-01-01

    The method of Monte Carlo integration is reviewed briefly and some of its applications in physics are explained. A numerical experiment on the random generators used in Monte Carlo techniques is carried out to show the behavior of the randomness of the various generation methods. To account for the weight function involved in the Monte Carlo, the Metropolis method is used. The results of the experiment show no regular pattern in the numbers generated, indicating that the program generators are reasonably good, while the experimental results follow the expected statistical distribution law. Some applications of Monte Carlo methods in physics are then given. The physical problems are chosen such that the models have available solutions, either exact or approximate, so that comparisons can be made with the calculations using the Monte Carlo method. The comparisons show good agreement for the models considered.
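
    The Metropolis step mentioned in the abstract can be sketched in a few lines of Python; the Gaussian toy target and the step size are illustrative assumptions.

        import math
        import random

        def metropolis(log_weight, x0, step, n):
            """Metropolis sampling of a target weight function w(x):
            propose a symmetric move, accept with min(1, w(x')/w(x))."""
            x, samples = x0, []
            for _ in range(n):
                x_new = x + random.uniform(-step, step)
                delta = log_weight(x_new) - log_weight(x)
                if delta >= 0.0 or random.random() < math.exp(delta):
                    x = x_new
                samples.append(x)
            return samples

        # Toy target: weight exp(-x^2/2), i.e. a standard normal density.
        xs = metropolis(lambda x: -0.5 * x * x, x0=0.0, step=1.0, n=50_000)
        print(sum(xs) / len(xs))  # ~0 by symmetry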

  16. Voluntary Wheel Running in Mice.

    Science.gov (United States)

    Goh, Jorming; Ladiges, Warren

    2015-12-02

    Voluntary wheel running in the mouse is used to assess physical performance and endurance and to model exercise training as a way to enhance health. Wheel running is a voluntary activity in contrast to other experimental exercise models in mice, which rely on aversive stimuli to force active movement. This protocol consists of allowing mice to run freely on the open surface of a slanted, plastic saucer-shaped wheel placed inside a standard mouse cage. Rotations are electronically transmitted to a USB hub so that frequency and rate of running can be captured via a software program for data storage and analysis for variable time periods. Mice are individually housed so that accurate recordings can be made for each animal. Factors such as mouse strain, gender, age, and individual motivation, which affect running activity, must be considered in the design of experiments using voluntary wheel running.

  17. Effective action and brane running

    International Nuclear Information System (INIS)

    Brevik, Iver; Ghoroku, Kazuo; Yahiro, Masanobu

    2004-01-01

    We address the renormalized effective action for a Randall-Sundrum brane running in 5D bulk space. The running behavior of the brane action is obtained by shifting the brane position without changing the background and fluctuations. After an appropriate renormalization, we obtain an effective, low-energy brane-world action in which the effective 4D Planck mass is independent of the running position. We then discuss some implications of this effective action.

  18. Asymmetric information and bank runs

    OpenAIRE

    Gu, Chao

    2007-01-01

    It is known that sunspots can trigger panic-based bank runs and that the optimal banking contract can tolerate panic-based runs. The existing literature assumes that these sunspots are based on a publicly observed extrinsic randomizing device. In this paper, I extend the analysis of panic-based runs to include an asymmetric-information, extrinsic randomizing device. Depositors observe different, but correlated, signals on the stability of the bank. I find that if the signals that depositors o...

  19. How to run 100 meters ?

    OpenAIRE

    Aftalion, Amandine

    2016-01-01

    To appear in SIAP. The aim of this paper is to bring a mathematical justification to the optimal way of organizing one's effort when running. It is well known from physiologists that all running exercises of duration less than 3 min are run with a strong initial acceleration and a decelerating end; on the contrary, long races are run with a final sprint. This can be explained using a mathematical model describing the evolution of the velocity, the anaerobic energy, and the propulsive force: ...

  20. A Running Start: Resource Guide for Youth Running Programs

    Science.gov (United States)

    Jenny, Seth; Becker, Andrew; Armstrong, Tess

    2016-01-01

    The lack of physical activity is an epidemic problem among American youth today. In order to combat this, many schools are incorporating youth running programs as a part of their comprehensive school physical activity programs. These youth running programs are being implemented before or after school, at school during recess at the elementary…

  1. Changes in Running Mechanics During a 6-Hour Running Race.

    Science.gov (United States)

    Giovanelli, Nicola; Taboga, Paolo; Lazzer, Stefano

    2017-05-01

    To investigate changes in running mechanics during a 6-h running race. Twelve ultraendurance runners (age 41.9 ± 5.8 y, body mass 68.3 ± 12.6 kg, height 1.72 ± 0.09 m) were asked to run as many 874-m flat loops as possible in 6 h. Running speed, contact time (t_c), and aerial time (t_a) were measured in the first lap and every 30 ± 2 min during the race. Peak vertical ground-reaction force (F_max), stride length (SL), vertical downward displacement of the center of mass (Δz), leg-length change (ΔL), vertical stiffness (k_vert), and leg stiffness (k_leg) were then estimated. Mean distance covered by the athletes during the race was 62.9 ± 7.9 km. Compared with the 1st lap, running speed decreased significantly from 4 h 30 min onward (mean -5.6% ± 0.3%), while t_c increased during running, reaching the maximum difference after 5 h 30 min (+6.1%, P = .015). Conversely, k_vert decreased after 4 h, reaching the lowest value after 5 h 30 min (-6.5%, P = .008); t_a and F_max decreased after 4 h 30 min through to the end of the race (mean -29.2% and -5.1%, respectively). These changes in running mechanics suggest a possible time threshold that could affect performance regardless of absolute running speed.

  2. CDF run II run control and online monitor

    International Nuclear Information System (INIS)

    Arisawa, T.; Ikado, K.; Badgett, W.; Chlebana, F.; Maeshima, K.; McCrory, E.; Meyer, A.; Patrick, J.; Wenzel, H.; Stadie, H.; Wagner, W.; Veramendi, G.

    2001-01-01

    The authors discuss the CDF Run II Run Control and online event monitoring system. Run Control is the top-level application that controls the data acquisition activities across 150 front-end VME crates and related service processes. Run Control is a real-time multi-threaded application implemented in Java with flexible state machines, using JDBC database connections to configure clients, and including a user-friendly and powerful graphical user interface. The CDF online event monitoring system consists of several parts: the event monitoring programs, the display to browse their results, the server program which communicates with the display via socket connections, the error receiver which displays error messages and communicates with Run Control, and the state manager which monitors the state of the monitor programs.

  3. Monte Carlo Techniques for Nuclear Systems - Theory Lectures

    International Nuclear Information System (INIS)

    Brown, Forrest B.; Univ. of New Mexico, Albuquerque, NM

    2016-01-01

    These are lecture notes for a Monte Carlo class given at the University of New Mexico. The following topics are covered: course information; nuclear eng. review & MC; random numbers and sampling; computational geometry; collision physics; tallies and statistics; eigenvalue calculations I; eigenvalue calculations II; eigenvalue calculations III; variance reduction; parallel Monte Carlo; parameter studies; fission matrix and higher eigenmodes; doppler broadening; Monte Carlo depletion; HTGR modeling; coupled MC and T/H calculations; fission energy deposition. Solving particle transport problems with the Monte Carlo method is simple - just simulate the particle behavior. The devil is in the details, however. These lectures provide a balanced approach to the theory and practice of Monte Carlo simulation codes. The first lectures provide an overview of Monte Carlo simulation methods, covering the transport equation, random sampling, computational geometry, collision physics, and statistics. The next lectures focus on the state-of-the-art in Monte Carlo criticality simulations, covering the theory of eigenvalue calculations, convergence analysis, dominance ratio calculations, bias in Keff and tallies, bias in uncertainties, a case study of a realistic calculation, and Wielandt acceleration techniques. The remaining lectures cover advanced topics, including HTGR modeling and stochastic geometry, temperature dependence, fission energy deposition, depletion calculations, parallel calculations, and parameter studies. This portion of the class focuses on using MCNP to perform criticality calculations for reactor physics and criticality safety applications. It is an intermediate level class, intended for those with at least some familiarity with MCNP. Class examples provide hands-on experience at running the code, plotting both geometry and results, and understanding the code output. The class includes lectures & hands-on computer use for a variety of Monte Carlo calculations

  4. Monte Carlo Techniques for Nuclear Systems - Theory Lectures

    Energy Technology Data Exchange (ETDEWEB)

    Brown, Forrest B. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Monte Carlo Methods, Codes, and Applications Group; Univ. of New Mexico, Albuquerque, NM (United States). Nuclear Engineering Dept.]

    2016-11-29

    These are lecture notes for a Monte Carlo class given at the University of New Mexico. The following topics are covered: course information; nuclear eng. review & MC; random numbers and sampling; computational geometry; collision physics; tallies and statistics; eigenvalue calculations I; eigenvalue calculations II; eigenvalue calculations III; variance reduction; parallel Monte Carlo; parameter studies; fission matrix and higher eigenmodes; doppler broadening; Monte Carlo depletion; HTGR modeling; coupled MC and T/H calculations; fission energy deposition. Solving particle transport problems with the Monte Carlo method is simple - just simulate the particle behavior. The devil is in the details, however. These lectures provide a balanced approach to the theory and practice of Monte Carlo simulation codes. The first lectures provide an overview of Monte Carlo simulation methods, covering the transport equation, random sampling, computational geometry, collision physics, and statistics. The next lectures focus on the state-of-the-art in Monte Carlo criticality simulations, covering the theory of eigenvalue calculations, convergence analysis, dominance ratio calculations, bias in Keff and tallies, bias in uncertainties, a case study of a realistic calculation, and Wielandt acceleration techniques. The remaining lectures cover advanced topics, including HTGR modeling and stochastic geometry, temperature dependence, fission energy deposition, depletion calculations, parallel calculations, and parameter studies. This portion of the class focuses on using MCNP to perform criticality calculations for reactor physics and criticality safety applications. It is an intermediate level class, intended for those with at least some familiarity with MCNP. Class examples provide hands-on experience at running the code, plotting both geometry and results, and understanding the code output. The class includes lectures & hands-on computer use for a variety of Monte Carlo calculations

  5. Perturbation based Monte Carlo criticality search in density, enrichment and concentration

    International Nuclear Information System (INIS)

    Li, Zeguang; Wang, Kan; Deng, Jingkang

    2015-01-01

    Highlights: • A new perturbation-based Monte Carlo criticality search method is proposed. • The method obtains accurate results with only one individual criticality run. • The method is used to solve density, enrichment and concentration search problems. • Results show the feasibility and good performance of this method. • The relationship between the results' accuracy and the perturbation order is discussed. - Abstract: Criticality search is a very important aspect of reactor physics analysis. Due to the advantages of the Monte Carlo method and the development of computer technologies, Monte Carlo criticality search is becoming more and more necessary and feasible. Existing Monte Carlo criticality search methods need a large number of individual criticality runs and may give unstable results because of the uncertainties of criticality results. In this paper, a new perturbation-based Monte Carlo criticality search method is proposed and discussed. This method needs only one individual criticality calculation with perturbation tallies to estimate the k_eff changing function, using the initial k_eff and differential-coefficient results, and solves polynomial equations to get the criticality search results. The new perturbation-based Monte Carlo criticality search method is implemented in the Monte Carlo code RMC, and criticality search problems in density, enrichment and concentration are carried out. Results show that this method is promising in both accuracy and efficiency, and has advantages compared with other criticality search methods.
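
    The final step of such a method, solving for the parameter value that gives a target k_eff from a truncated expansion around the nominal point, can be sketched as follows (using numpy for the root finding); the expansion form and the example coefficient are illustrative assumptions, since the paper's coefficients come from RMC's perturbation tallies.

        import math
        import numpy as np

        def criticality_search(k0, derivs, x0, k_target=1.0):
            """Solve the truncated Taylor expansion
            k(x) = k0 + sum_m d_m (x - x0)^m / m! = k_target
            for x, returning the real root closest to x0."""
            u_coeffs = [derivs[m] / math.factorial(m + 1)
                        for m in range(len(derivs))]
            # numpy.roots wants the highest power first; the constant
            # term of the polynomial in u = x - x0 is k0 - k_target.
            poly = list(reversed(u_coeffs)) + [k0 - k_target]
            real = [x0 + r.real for r in np.roots(poly) if abs(r.imag) < 1e-9]
            return min(real, key=lambda x: abs(x - x0))

        # Hypothetical first-order example: k0 = 0.98 at a boron
        # concentration of 500 ppm, with dk/dx = -0.002 per ppm.
        print(criticality_search(k0=0.98, derivs=[-0.002], x0=500.0))  # 490.0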

  6. Lectures on Monte Carlo methods

    CERN Document Server

    Madras, Neal

    2001-01-01

    Monte Carlo methods form an experimental branch of mathematics that employs simulations driven by random number generators. These methods are often used when others fail, since they are much less sensitive to the "curse of dimensionality", which plagues deterministic methods in problems with a large number of variables. Monte Carlo methods are used in many fields: mathematics, statistics, physics, chemistry, finance, computer science, and biology, for instance. This book is an introduction to Monte Carlo methods for anyone who would like to use these methods to study various kinds of mathemati

  7. CMS Software and Computing Ready for Run 2

    CERN Document Server

    Bloom, Kenneth

    2015-01-01

    In Run 1 of the Large Hadron Collider, software and computing was a strategic strength of the Compact Muon Solenoid experiment. The timely processing of data and simulation samples and the excellent performance of the reconstruction algorithms played an important role in the preparation of the full suite of searches used for the observation of the Higgs boson in 2012. In Run 2, the LHC will run at higher intensities and CMS will record data at a higher trigger rate. These new running conditions will provide new challenges for the software and computing systems. Over the two years of Long Shutdown 1, CMS has built upon the successes of Run 1 to improve the software and computing to meet these challenges. In this presentation we will describe the new features in software and computing that will once again put CMS in a position of physics leadership.

  8. Monte Carlo Production Management at CMS

    CERN Document Server

    Boudoul, G.; Pol, A; Srimanobhas, P; Vlimant, J R; Franzoni, Giovanni

    2015-01-01

    The analysis of the LHC data at the Compact Muon Solenoid (CMS) experiment requires the production of a large number of simulated events. During Run I of the LHC (2010-2012), CMS produced over 12 billion simulated events, organized in approximately sixty different campaigns, each emulating specific detector conditions and LHC running conditions (pile-up). In order to aggregate the information needed for the configuration and prioritization of the event production, assure the book-keeping of all the processing requests placed by the physics analysis groups, and interface with the CMS production infrastructure, the web-based service Monte Carlo Management (McM) has been developed and put in production in 2012. McM is based on recent server infrastructure technology (CherryPy + Java) and relies on a CouchDB database back-end. This contribution will cover the one and a half years of operational experience managing samples of simulated events for CMS, the evolution of its functionalities and the extension of its capabi...

  9. Advanced Multilevel Monte Carlo Methods

    KAUST Repository

    Jasra, Ajay; Law, Kody; Suciu, Carina

    2017-01-01

    This article reviews the application of advanced Monte Carlo techniques in the context of Multilevel Monte Carlo (MLMC). MLMC is a strategy employed to compute expectations which can be biased in some sense, for instance, by using the discretization of an associated probability law. The MLMC approach works with a hierarchy of biased approximations which become progressively more accurate and more expensive. Using a telescoping representation of the most accurate approximation, the method is able to reduce the computational cost for a given level of error versus i.i.d. sampling from this latter approximation. All of these ideas originated for cases where exact sampling from couples in the hierarchy is possible. This article considers the case where such exact sampling is not currently possible. We consider Markov chain Monte Carlo and sequential Monte Carlo methods which have been introduced in the literature and we describe different strategies which facilitate the application of MLMC within these methods.
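
    A minimal sketch of the telescoping MLMC estimator, using geometric Brownian motion with Euler discretization as the hierarchy of progressively more accurate, more expensive approximations; all model parameters and sample allocations are illustrative assumptions.

        import math
        import random

        def euler_pair(level, T=1.0, x0=1.0, mu=0.05, sigma=0.2, M=2):
            """One coupled sample of P_level - P_(level-1) for the payoff
            P = X_T under geometric Brownian motion, using M**level Euler
            steps. Fine and coarse paths share the same Brownian
            increments, which keeps the correction terms small."""
            n_f = M ** level
            dt_f = T / n_f
            xf = xc = x0
            dw_c = 0.0
            for i in range(n_f):
                dw = random.gauss(0.0, math.sqrt(dt_f))
                xf += mu * xf * dt_f + sigma * xf * dw
                dw_c += dw
                if (i + 1) % M == 0:        # a full coarse step completed
                    xc += mu * xc * (M * dt_f) + sigma * xc * dw_c
                    dw_c = 0.0
            return xf if level == 0 else xf - xc

        def mlmc(L, n_per_level):
            """Telescoping sum E[P_L] = E[P_0] + sum_l E[P_l - P_(l-1)]."""
            return sum(sum(euler_pair(l) for _ in range(n_per_level[l]))
                       / n_per_level[l] for l in range(L + 1))

        # Many samples on cheap coarse levels, few on expensive fine ones.
        print(mlmc(L=4, n_per_level=[40_000, 10_000, 2_500, 600, 150]))
        # ~exp(0.05) = 1.0513 for E[X_T]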

  10. Advanced Multilevel Monte Carlo Methods

    KAUST Repository

    Jasra, Ajay

    2017-04-24

    This article reviews the application of advanced Monte Carlo techniques in the context of Multilevel Monte Carlo (MLMC). MLMC is a strategy employed to compute expectations which can be biased in some sense, for instance, by using the discretization of an associated probability law. The MLMC approach works with a hierarchy of biased approximations which become progressively more accurate and more expensive. Using a telescoping representation of the most accurate approximation, the method is able to reduce the computational cost for a given level of error versus i.i.d. sampling from this latter approximation. All of these ideas originated for cases where exact sampling from couples in the hierarchy is possible. This article considers the case where such exact sampling is not currently possible. We consider Markov chain Monte Carlo and sequential Monte Carlo methods which have been introduced in the literature and we describe different strategies which facilitate the application of MLMC within these methods.

  11. Monte Carlo simulation for IRRMA

    International Nuclear Information System (INIS)

    Gardner, R.P.; Liu Lianyan

    2000-01-01

    Monte Carlo simulation is fast becoming a standard approach for many radiation applications that were previously treated almost entirely by experimental techniques. This is certainly true for Industrial Radiation and Radioisotope Measurement Applications - IRRMA. The reasons for this include: (1) the increased cost and inadequacy of experimentation for design and interpretation purposes; (2) the availability of low cost, large memory, and fast personal computers; and (3) the general availability of general purpose Monte Carlo codes that are increasingly user-friendly, efficient, and accurate. This paper discusses the history and present status of Monte Carlo simulation for IRRMA including the general purpose (GP) and specific purpose (SP) Monte Carlo codes and future needs - primarily from the experience of the authors

  12. Geology of Maxwell Montes, Venus

    Science.gov (United States)

    Head, J. W.; Campbell, D. B.; Peterfreund, A. R.; Zisk, S. A.

    1984-01-01

    Maxwell Montes represent the most distinctive topography on the surface of Venus, rising some 11 km above mean planetary radius. The multiple data sets of the Pioneer mission and Earth-based radar observations are analyzed to characterize Maxwell Montes. Maxwell Montes is a porkchop-shaped feature located at the eastern end of Lakshmi Planum. The main massif trends about North 20 deg West for approximately 1000 km, and the narrow handle extends several hundred km west-southwest (WSW) from the north end of the main massif, descending down toward Lakshmi Planum. The main massif is rectilinear and approximately 500 km wide. The southern and northern edges of Maxwell Montes coincide with major topographic boundaries defining the edge of Ishtar Terra.

  13. Measurement of top-quark pair production cross sections and calibration of the top-quark Monte-Carlo mass using LHC run I proton-proton collision data at √(s) = 7 and 8 TeV with the CMS experiment

    International Nuclear Information System (INIS)

    Kieseler, Jan

    2015-12-01

    In this thesis, measurements of the production cross sections for top-quark pairs and the determination of the top-quark mass are presented. Dileptonic decays of top-quark pairs (tt̄) with two opposite-charged lepton (electron and muon) candidates in the final state are considered. The studied data samples are collected in proton-proton collisions at the CERN Large Hadron Collider with the CMS detector and correspond to integrated luminosities of 5.0 fb⁻¹ and 19.7 fb⁻¹ at center-of-mass energies of √(s) = 7 TeV and √(s) = 8 TeV, respectively. The cross sections, σ_tt̄, are measured in the fiducial detector volume (visible phase space), defined by the kinematics of the top-quark decay products, and are extrapolated to the full phase space. The visible cross sections are extracted in a simultaneous binned-likelihood fit to multi-differential distributions of final-state observables, categorized according to the multiplicity of jets associated to b quarks (b jets) and other jets in each event. The fit is performed with emphasis on a consistent treatment of correlations between systematic uncertainties and taking into account features of the tt̄ event topology. By comparison with predictions from the Standard Model at next-to-next-to-leading order (NNLO) accuracy, the top-quark pole mass, m_t^pole, is extracted from the measured cross sections for different state-of-the-art PDF sets. Furthermore, the top-quark mass parameter used in Monte-Carlo simulations, m_t^MC, is determined using the distribution of the invariant mass of a lepton candidate and the leading b jet in the event, m_lb. Being defined by the kinematics of the top-quark decay, this observable is unaffected by the description of the top-quark production mechanism. Events are selected from the data collected at √(s) = 8 TeV that contain at least two jets and one b jet in addition to the lepton candidate pair. A novel technique is presented, in which fixed-order calculations in

  14. Adjoint electron Monte Carlo calculations

    International Nuclear Information System (INIS)

    Jordan, T.M.

    1986-01-01

    Adjoint Monte Carlo is the most efficient method for accurate analysis of space systems exposed to natural and artificially enhanced electron environments. Recent adjoint calculations for isotropic electron environments include: comparative data for experimental measurements on electronics boxes; benchmark problem solutions for comparing total dose prediction methodologies; preliminary assessment of sectoring methods used during space system design; and total dose predictions on an electronics package. Adjoint Monte Carlo, forward Monte Carlo, and experiment are in excellent agreement for electron sources that simulate space environments. For electron space environments, adjoint Monte Carlo is clearly superior to forward Monte Carlo, requiring one to two orders of magnitude less computer time for relatively simple geometries. The solid-angle sectoring approximations used for routine design calculations can err by more than a factor of 2 on dose in simple shield geometries. For critical space systems exposed to severe electron environments, these potential sectoring errors demand the establishment of large design margins and/or verification of shield design by adjoint Monte Carlo/experiment

  15. Monte Carlo theory and practice

    International Nuclear Information System (INIS)

    James, F.

    1987-01-01

    Historically, the first large-scale calculations to make use of the Monte Carlo method were studies of neutron scattering and absorption, random processes for which it is quite natural to employ random numbers. Such calculations, a subset of Monte Carlo calculations, are known as direct simulation, since the 'hypothetical population' of the narrower definition above corresponds directly to the real population being studied. The Monte Carlo method may be applied wherever it is possible to establish equivalence between the desired result and the expected behaviour of a stochastic system. The problem to be solved may already be of a probabilistic or statistical nature, in which case its Monte Carlo formulation will usually be a straightforward simulation, or it may be of a deterministic or analytic nature, in which case an appropriate Monte Carlo formulation may require some imagination and may appear contrived or artificial. In any case, the suitability of the method chosen will depend on its mathematical properties and not on its superficial resemblance to the problem to be solved. The authors show how Monte Carlo techniques may be compared with other methods of solution of the same physical problem

  16. Running continuous academic adoption programmes

    DEFF Research Database (Denmark)

    Nielsen, Tobias Alsted

    Running successful academic adoption programmes requires executive support, clear strategies, tactical resources and organisational agility. These two presentations will discuss the implementation of strategic academic adoption programs down to very concrete tool customisations to meet specific...

  17. Turkey Run Landfill Emissions Dataset

    Data.gov (United States)

    U.S. Environmental Protection Agency — landfill emissions measurements for the Turkey run landfill in Georgia. This dataset is associated with the following publication: De la Cruz, F., R. Green, G....

  18. Phthalate SHEDS-HT runs

    Data.gov (United States)

    U.S. Environmental Protection Agency — Inputs and outputs for SHEDS-HT runs of DiNP, DEHP, DBP. This dataset is associated with the following publication: Moreau, M., J. Leonard, K. Phillips, J. Campbell,...

  19. ATLAS Distributed Computing in LHC Run2

    International Nuclear Information System (INIS)

    Campana, Simone

    2015-01-01

    The ATLAS Distributed Computing infrastructure has evolved after the first period of LHC data taking in order to cope with the challenges of the upcoming LHC Run-2. An increase in both the data rate and the computing demands of the Monte Carlo simulation, as well as new approaches to ATLAS analysis, dictated a more dynamic workload management system (Prodsys-2) and data management system (Rucio), overcoming the boundaries imposed by the design of the old computing model. In particular, the commissioning of new central computing system components was the core part of the migration toward a flexible computing model. Flexible use of opportunistic resources such as HPC, cloud, and volunteer computing is embedded in the new computing model; the data access mechanisms have been enhanced with remote access, and the network topology and performance are deeply integrated into the core of the system. Moreover, a new data management strategy, based on a defined lifetime for each dataset, has been introduced to better manage the lifecycle of the data. In this note, an overview of the operational experience of the new system and its evolution is presented. (paper)

  20. Weighted-delta-tracking for Monte Carlo particle transport

    International Nuclear Information System (INIS)

    Morgan, L.W.G.; Kotlyar, D.

    2015-01-01

    Highlights: • This paper presents an alteration to the Monte Carlo Woodcock tracking technique. • The alteration improves computational efficiency within regions of high absorbers. • The rejection technique is replaced by a statistical weighting mechanism. • The modified Woodcock method is shown to be faster than standard Woodcock tracking. • The modified Woodcock method achieves a lower variance, given a specified accuracy. - Abstract: Monte Carlo particle transport (MCPT) codes are incredibly powerful and versatile tools to simulate particle behavior in a multitude of scenarios, such as core/criticality studies, radiation protection, shielding, medicine and fusion research, to name just a small subset of applications. However, MCPT codes can be very computationally expensive to run when the model geometry contains large attenuation depths and/or many components. This paper proposes a simple modification to the Woodcock tracking method used by some Monte Carlo particle transport codes. The Woodcock method uses rejection sampling of virtual collisions to avoid resampling the collision distance at material boundaries. However, it suffers from poor computational efficiency when the sample acceptance rate is low. The proposed method replaces rejection sampling in the Woodcock method with a statistical weighting scheme, which improves the computational efficiency of a Monte Carlo particle tracking code. It is shown that the modified Woodcock method is less computationally expensive than standard ray-tracing and rejection-based Woodcock tracking methods and achieves a lower variance, given a specified accuracy.
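
    The abstract does not spell out the weighting scheme, but its flavor can be sketched: instead of rejecting virtual collisions, the particle weight absorbs the survival probability. The toy below estimates transmission through a slab with a spatially varying cross section; the cross-section profile, majorant and weight cutoff are invented for illustration, and this is not the authors' exact algorithm.

    ```python
    import math
    import random

    rng = random.Random(7)

    def sigma_t(x):
        """Hypothetical spatially varying total cross section (1/cm)."""
        return 0.5 + 0.4 * math.sin(x) ** 2

    SIG_MAJ = 0.9   # majorant cross section, >= sigma_t(x) everywhere
    L = 5.0         # slab thickness (cm)

    def transmission_history():
        """One history of a slab-transmission estimate: flights are sampled
        with the majorant, and each tentative collision multiplies the
        weight by the virtual-collision probability instead of triggering
        rejection sampling."""
        x, w = 0.0, 1.0
        while True:
            x += -math.log(1.0 - rng.random()) / SIG_MAJ
            if x >= L:
                return w                       # escaped: score the weight
            w *= 1.0 - sigma_t(x) / SIG_MAJ    # survive the tentative collision
            if w < 1e-6:                       # crude cutoff (roulette in practice)
                return 0.0

    n = 200_000
    print(sum(transmission_history() for _ in range(n)) / n)
    ```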

  1. PDF4LHC recommendations for LHC Run II

    CERN Document Server

    Butterworth, Jon; Cooper-Sarkar, Amanda; De Roeck, Albert; Feltesse, Joel; Forte, Stefano; Gao, Jun; Glazov, Sasha; Huston, Joey; Kassabov, Zahari; McNulty, Ronan; Morsch, Andreas; Nadolsky, Pavel; Radescu, Voica; Rojo, Juan; Thorne, Robert

    2016-01-01

    We provide an updated recommendation for the usage of sets of parton distribution functions (PDFs) and the assessment of PDF and PDF+α_s uncertainties suitable for applications at the LHC Run II. We review developments since the previous PDF4LHC recommendation, and discuss and compare the new generation of PDFs, which include substantial information from experimental data from Run I of the LHC. We then propose a new prescription for the combination of a suitable subset of the available PDF sets, which is presented in terms of a single combined PDF set. We finally discuss tools which allow for the delivery of this combined set in terms of optimized sets of Hessian eigenvectors or Monte Carlo replicas, and their usage, and provide some examples of their application to LHC phenomenology.
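
    As a rough illustration of the two delivery formats mentioned at the end of the abstract, the master formulas for propagating PDF uncertainties to an observable are short: mean and standard deviation over Monte Carlo replicas, or a quadrature sum over Hessian eigenvector shifts. The numbers below are toys, and this is not the full PDF4LHC combination prescription.

    ```python
    import numpy as np

    def mc_replica_uncertainty(predictions):
        """Central value and PDF uncertainty from Monte Carlo replicas:
        the mean and standard deviation over replica predictions."""
        p = np.asarray(predictions)
        return p.mean(), p.std(ddof=1)

    def hessian_uncertainty(central, eigenvector_predictions):
        """Symmetric Hessian PDF uncertainty: eigenvector shifts added
        in quadrature around the central prediction."""
        shifts = np.asarray(eigenvector_predictions) - central
        return float(np.sqrt(np.sum(shifts ** 2)))

    # Toy cross-section predictions (pb) standing in for real PDF members:
    replicas = np.random.default_rng(0).normal(100.0, 2.5, size=100)
    print(mc_replica_uncertainty(replicas))
    ```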

  2. LHCb computing in Run II and its evolution towards Run III

    CERN Document Server

    Falabella, Antonio

    2016-01-01

    This contribution reports on the experience of the LHCb computing team during LHC Run 2 and its preparation for Run 3. Furthermore, a brief introduction to LHCbDIRAC, i.e. the tool that interfaces to the experiment's distributed computing resources for its data processing and data management operations, is given. Run 2, which started in 2015, has already seen several changes in the data processing workflows of the experiment, most notably the ability to align and calibrate the detector between two different stages of the data processing in the high level trigger farm, eliminating the need for a second pass of offline data processing. In addition, a fraction of the data is immediately reconstructed to its final physics format in the high level trigger, and only this format is exported from the experiment site for physics analysis. This concept has successfully been tested and will continue to be used for the rest of Run 2. Furthermore the distributed data processing has been improved with new concepts and techn...

  3. Multilevel sequential Monte Carlo samplers

    KAUST Repository

    Beskos, Alexandros; Jasra, Ajay; Law, Kody; Tempone, Raul; Zhou, Yan

    2016-01-01

    In this article we consider the approximation of expectations w.r.t. probability distributions associated to the solution of partial differential equations (PDEs); this scenario appears routinely in Bayesian inverse problems. In practice, one often has to solve the associated PDE numerically, using, for instance, finite element methods which depend on the step-size level h_L. In addition, the expectation cannot be computed analytically and one often resorts to Monte Carlo methods. In the context of this problem, it is known that the introduction of the multilevel Monte Carlo (MLMC) method can reduce the amount of computational effort to estimate expectations, for a given level of error. This is achieved via a telescoping identity associated to a Monte Carlo approximation of a sequence of probability distributions with discretization levels ∞ > h_0 > h_1 > ⋯ > h_L. In many practical problems of interest, one cannot achieve an i.i.d. sampling of the associated sequence and a sequential Monte Carlo (SMC) version of the MLMC method is introduced to deal with this problem. It is shown that under appropriate assumptions, the attractive property of a reduction of the amount of computational effort to estimate expectations, for a given level of error, can be maintained within the SMC context, that is, relative to exact sampling and Monte Carlo for the distribution at the finest level h_L. The approach is numerically illustrated on a Bayesian inverse problem. © 2016 Elsevier B.V.
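
    For readers unfamiliar with the plain MLMC telescoping identity that the SMC variant builds on, a minimal sketch may help. It uses an Euler-Maruyama toy for geometric Brownian motion rather than the PDE/Bayesian setting of the paper, and the per-level sample sizes are arbitrary rather than optimized.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def level_pair(level, n_paths, T=1.0, x0=1.0, mu=0.05, sig=0.2):
        """Euler-Maruyama estimates of X_T for geometric Brownian motion on
        level `level` (2**level steps) and on the next coarser level, driven
        by the same Brownian increments (the MLMC coupling)."""
        nf = 2 ** level
        dt = T / nf
        dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, nf))
        xf = np.full(n_paths, x0)
        for i in range(nf):
            xf = xf + mu * xf * dt + sig * xf * dW[:, i]
        if level == 0:
            return xf, None
        xc = np.full(n_paths, x0)
        dWc = dW[:, 0::2] + dW[:, 1::2]   # pairwise-summed increments
        for i in range(nf // 2):
            xc = xc + mu * xc * (2 * dt) + sig * xc * dWc[:, i]
        return xf, xc

    # Telescoping identity: E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}]
    estimate = 0.0
    for level, n in [(0, 100_000), (1, 50_000), (2, 25_000), (3, 12_500)]:
        xf, xc = level_pair(level, n)
        estimate += xf.mean() if xc is None else (xf - xc).mean()
    print(estimate)   # ~ exp(mu * T) = 1.051...
    ```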

  4. Multilevel sequential Monte Carlo samplers

    KAUST Repository

    Beskos, Alexandros

    2016-08-29

    In this article we consider the approximation of expectations w.r.t. probability distributions associated to the solution of partial differential equations (PDEs); this scenario appears routinely in Bayesian inverse problems. In practice, one often has to solve the associated PDE numerically, using, for instance, finite element methods which depend on the step-size level h_L. In addition, the expectation cannot be computed analytically and one often resorts to Monte Carlo methods. In the context of this problem, it is known that the introduction of the multilevel Monte Carlo (MLMC) method can reduce the amount of computational effort to estimate expectations, for a given level of error. This is achieved via a telescoping identity associated to a Monte Carlo approximation of a sequence of probability distributions with discretization levels ∞ > h_0 > h_1 > ⋯ > h_L. In many practical problems of interest, one cannot achieve an i.i.d. sampling of the associated sequence and a sequential Monte Carlo (SMC) version of the MLMC method is introduced to deal with this problem. It is shown that under appropriate assumptions, the attractive property of a reduction of the amount of computational effort to estimate expectations, for a given level of error, can be maintained within the SMC context, that is, relative to exact sampling and Monte Carlo for the distribution at the finest level h_L. The approach is numerically illustrated on a Bayesian inverse problem. © 2016 Elsevier B.V.

  5. Concepts and Plans towards fast large scale Monte Carlo production for the ATLAS Experiment

    CERN Document Server

    Chapman, J; Duehrssen, M; Elsing, M; Froidevaux, D; Harrington, R; Jansky, R; Langenberg, R; Mandrysch, R; Marshall, Z; Ritsch, E; Salzburger, A

    2014-01-01

    The huge success of the physics program of the ATLAS experiment at the Large Hadron Collider (LHC) during run I relies upon a great number of simulated Monte Carlo events. This Monte Carlo production takes the biggest part of the computing resources being in use by ATLAS as of now. In this document we describe the plans to overcome the computing resource limitations for large scale Monte Carlo production in the ATLAS Experiment for run II, and beyond. A number of fast detector simulation, digitization and reconstruction techniques are being discussed, based upon a new flexible detector simulation framework. To optimally benefit from these developments, a redesigned ATLAS MC production chain is presented at the end of this document.

  6. Concepts and Plans towards fast large scale Monte Carlo production for the ATLAS Experiment

    Science.gov (United States)

    Ritsch, E.; Atlas Collaboration

    2014-06-01

    The huge success of the physics program of the ATLAS experiment at the Large Hadron Collider (LHC) during Run 1 relies upon a great number of simulated Monte Carlo events. This Monte Carlo production takes the biggest part of the computing resources being in use by ATLAS as of now. In this document we describe the plans to overcome the computing resource limitations for large scale Monte Carlo production in the ATLAS Experiment for Run 2, and beyond. A number of fast detector simulation, digitization and reconstruction techniques are being discussed, based upon a new flexible detector simulation framework. To optimally benefit from these developments, a redesigned ATLAS MC production chain is presented at the end of this document.

  7. Monte Carlo simulation of experiments

    International Nuclear Information System (INIS)

    Opat, G.I.

    1977-07-01

    An outline of the technique of computer simulation of particle physics experiments by the Monte Carlo method is presented. Useful special purpose subprograms are listed and described. At each stage the discussion is made concrete by direct reference to the program SIMUL8 and its variant MONTE-PION, written to assist in the analysis of the radiative decay experiments μ+ → e+ ν_e ν̄ γ and π+ → e+ ν_e γ, respectively. These experiments were based on the use of two large sodium iodide crystals, TINA and MINA, as e and γ detectors. Instructions for the use of SIMUL8 and MONTE-PION are given. (author)

  8. Influence of the Lower Jaw Position on the Running Pattern.

    Directory of Open Access Journals (Sweden)

    Christian Maurer

    The effects of manipulated dental occlusion on body posture have been investigated quite often and discussed controversially in the literature. Far less attention has been paid to the influence of the dental occlusion position on human movement, and where movement was analysed, it was mostly during walking rather than running. This study was therefore designed to identify the effect of lower jaw positions on running behaviour according to different dental occlusion positions. Twenty healthy young recreational runners (mean age = 33.9±5.8 years) participated in this study. Kinematic data were collected using an eight-camera Vicon motion capture system (VICON Motion Systems, Oxford, UK). Subjects were consecutively prepared with four different dental occlusion conditions in random order and performed five running trials per test condition on a level walkway with their preferred running shoes. Vector-based pattern recognition methods, in particular cluster analysis and support vector machines (SVM), were used for movement pattern identification. Subjects exhibited unique movement patterns, leading to 18 clusters for the 20 subjects. No overall classification of the splint condition could be observed. Within individual subjects, different running patterns could be identified for the four splint conditions. The splint conditions led to a more symmetrical running pattern than the control condition. The influence of an occlusal splint on running pattern is confirmed in this study. Wearing a splint increases the symmetry of the running pattern, and a more symmetrical running pattern might help to reduce the risk of injuries or improve performance. The change of the movement pattern between the neutral condition and any of the three splint conditions was significant within subjects but not across subjects. Therefore the dental splint has a measurable influence on the running pattern of subjects; however, subjects' individuality has to be considered when choosing the

  9. Does a crouched leg posture enhance running stability and robustness?

    Science.gov (United States)

    Blum, Yvonne; Birn-Jeffery, Aleksandra; Daley, Monica A; Seyfarth, Andre

    2011-07-21

    Humans and birds both walk and run bipedally on compliant legs. However, differences in leg architecture may result in species-specific leg control strategies as indicated by the observed gait patterns. In this work, control strategies for stable running are derived based on a conceptual model and compared with experimental data on running humans and pheasants (Phasianus colchicus). From a model perspective, running with compliant legs can be represented by the planar spring mass model and stabilized by applying swing leg control. Here, linear adaptations of the three leg parameters, leg angle, leg length and leg stiffness during late swing phase are assumed. Experimentally observed kinematic control parameters (leg rotation and leg length change) of human and avian running are compared, and interpreted within the context of this model, with specific focus on stability and robustness characteristics. The results suggest differences in stability characteristics and applied control strategies of human and avian running, which may relate to differences in leg posture (straight leg posture in humans, and crouched leg posture in birds). It has been suggested that crouched leg postures may improve stability. However, as the system of control strategies is overdetermined, our model findings suggest that a crouched leg posture does not necessarily enhance running stability. The model also predicts different leg stiffness adaptation rates for human and avian running, and suggests that a crouched avian leg posture, which is capable of both leg shortening and lengthening, allows for stable running without adjusting leg stiffness. In contrast, in straight-legged human running, the preparation of the ground contact seems to be more critical, requiring leg stiffness adjustment to remain stable. Finally, analysis of a simple robustness measure, the normalized maximum drop, suggests that the crouched leg posture may provide greater robustness to changes in terrain height.

  10. Monte Carlo Tree Search Strategies

    OpenAIRE

    VODOPIVEC, TOM

    2018-01-01

    After the breakthrough at the game of Go, Monte Carlo tree search (MCTS) methods triggered rapid progress in game-playing agents: the research community has since developed many variants and improvements of the MCTS algorithm, thereby advancing artificial intelligence not only in games but also in numerous other domains. Although MCTS methods combine the generality of random sampling with the precision of tree search, in practice they can suffer from slow conv...

  11. Application of inactive cycle stopping criteria for Monte Carlo Wielandt calculations

    International Nuclear Information System (INIS)

    Shim, H. J.; Kim, C. H.

    2009-01-01

    The Wielandt method is incorporated into Monte Carlo (MC) eigenvalue calculations as a way to speed up fission source convergence. To make the most of the MC Wielandt method, however, it is highly desirable to halt inactive cycle runs in a timely manner, because executing a single-cycle MC run takes much longer than in conventional MC eigenvalue calculations. This paper presents an algorithm to detect the onset of the active cycles and thereby stop the inactive-cycle MC runs automatically, based on two anterior stopping criteria. The effectiveness of the algorithm is demonstrated by applying it to a slow convergence problem. (authors)
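
    The two stopping criteria are not spelled out in the abstract. As a stand-in, a widely used diagnostic of fission-source convergence, the Shannon entropy of the binned source distribution, can be sketched; the window length and tolerance below are illustrative, not necessarily the authors' criteria.

    ```python
    import math

    def shannon_entropy(source_counts):
        """Shannon entropy of a binned fission-source distribution."""
        total = sum(source_counts)
        h = 0.0
        for c in source_counts:
            if c > 0:
                p = c / total
                h -= p * math.log2(p)
        return h

    def source_converged(entropies, window=10, tol=0.01):
        """Declare the source converged (inactive cycles over) once the
        entropy of the last `window` cycles stays within `tol` of its mean."""
        if len(entropies) < window:
            return False
        recent = entropies[-window:]
        mean = sum(recent) / window
        return all(abs(h - mean) <= tol for h in recent)
    ```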

  12. Reconstruction, Energy Calibration, and Identification of Hadronically Decaying Tau Leptons in the ATLAS Experiment for Run-2 of the LHC

    CERN Document Server

    The ATLAS collaboration

    2015-01-01

    The reconstruction algorithm, energy calibration, and identification methods for hadronically decaying tau leptons in ATLAS used at the start of Run-2 of the Large Hadron Collider are described in this note. All algorithms have been optimised for Run-2 conditions. The energy calibration relies on Monte Carlo samples with hadronic tau lepton decays, and applies multiplicative factors based on the pT of the reconstructed tau lepton to the energy measurements in the calorimeters. The identification employs boosted decision trees. Systematic uncertainties on the energy scale, reconstruction efficiency and identification efficiency of hadronically decaying tau leptons are determined using Monte Carlo samples that simulate varying conditions.
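
    A pT-binned multiplicative energy calibration of the kind described reduces to a table lookup; the bin edges and factors below are invented placeholders, not ATLAS values.

    ```python
    import bisect

    # Hypothetical calibration table: pT bin edges (GeV) and one
    # multiplicative factor per bin (one more factor than edges).
    PT_EDGES = [20.0, 40.0, 80.0, 160.0]
    FACTORS = [1.06, 1.03, 1.01, 0.99, 0.98]

    def calibrate_tau_energy(e_calo, pt_reco):
        """Scale the calorimeter energy measurement by the factor of the
        pT bin containing the reconstructed tau lepton."""
        i = bisect.bisect_right(PT_EDGES, pt_reco)
        return e_calo * FACTORS[i]

    print(calibrate_tau_energy(e_calo=55.0, pt_reco=35.0))   # 55.0 * 1.03
    ```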

  13. Quantum computational finance: Monte Carlo pricing of financial derivatives

    OpenAIRE

    Rebentrost, Patrick; Gupt, Brajesh; Bromley, Thomas R.

    2018-01-01

    Financial derivatives are contracts that can have a complex payoff dependent upon underlying benchmark assets. In this work, we present a quantum algorithm for the Monte Carlo pricing of financial derivatives. We show how the relevant probability distributions can be prepared in quantum superposition, the payoff functions can be implemented via quantum circuits, and the price of financial derivatives can be extracted via quantum measurements. We show how the amplitude estimation algorithm can...
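
    For contrast, the classical Monte Carlo pricing baseline that amplitude estimation aims to accelerate fits in a few lines; Black-Scholes dynamics and the parameters below are illustrative.

    ```python
    import numpy as np

    def mc_european_call(s0, strike, r, sigma, T, n_paths=500_000, seed=0):
        """Classical Monte Carlo price of a European call under
        Black-Scholes dynamics: the discounted mean of the terminal payoff,
        converging as O(1/sqrt(n_paths))."""
        rng = np.random.default_rng(seed)
        z = rng.standard_normal(n_paths)
        sT = s0 * np.exp((r - 0.5 * sigma ** 2) * T + sigma * np.sqrt(T) * z)
        payoff = np.maximum(sT - strike, 0.0)
        return np.exp(-r * T) * payoff.mean()

    print(mc_european_call(s0=100.0, strike=105.0, r=0.02, sigma=0.2, T=1.0))
    ```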

  14. Load Balancing of Parallel Monte Carlo Transport Calculations

    International Nuclear Information System (INIS)

    Procassini, R J; O'Brien, M J; Taylor, J M

    2005-01-01

    The performance of parallel Monte Carlo transport calculations which use both spatial and particle parallelism is increased by dynamically assigning processors to the most worked domains. Since the particle work load varies over the course of the simulation, this algorithm determines each cycle if dynamic load balancing would speed up the calculation. If load balancing is required, a small number of particle communications are initiated in order to achieve load balance. This method has decreased the parallel run time by more than a factor of three for certain criticality calculations.

  15. Dynamic Load Balancing of Parallel Monte Carlo Transport Calculations

    International Nuclear Information System (INIS)

    O'Brien, M; Taylor, J; Procassini, R

    2004-01-01

    The performance of parallel Monte Carlo transport calculations which use both spatial and particle parallelism is increased by dynamically assigning processors to the most worked domains. Since the particle work load varies over the course of the simulation, this algorithm determines each cycle if dynamic load balancing would speed up the calculation. If load balancing is required, a small number of particle communications are initiated in order to achieve load balance. This method has decreased the parallel run time by more than a factor of three for certain criticality calculations.
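
    A toy version of the cycle-by-cycle decision described in these two records might look as follows; the proportional processor assignment and the 10% improvement threshold are invented for illustration, and a real scheduler would also weigh the cost of the particle communications and repair the rounding.

    ```python
    def predicted_cycle_time(domain_loads, procs_per_domain):
        """The cycle time is set by the slowest domain: its particle load
        divided by the processors assigned to it (unit work per particle)."""
        return max(load / procs for load, procs in zip(domain_loads, procs_per_domain))

    def rebalance(domain_loads, total_procs):
        """Assign processors roughly proportionally to load, at least one
        each (rounding may gain or lose a processor in this sketch)."""
        total = sum(domain_loads)
        return [max(1, round(total_procs * load / total)) for load in domain_loads]

    loads = [120_000, 30_000, 10_000]   # particles per spatial domain this cycle
    current = [2, 2, 2]                 # current processor assignment
    proposal = rebalance(loads, sum(current))
    if predicted_cycle_time(loads, proposal) < 0.9 * predicted_cycle_time(loads, current):
        current = proposal              # worth the particle communications
    print(current)
    ```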

  16. Entropic sampling in the path integral Monte Carlo method

    International Nuclear Information System (INIS)

    Vorontsov-Velyaminov, P N; Lyubartsev, A P

    2003-01-01

    We have extended the entropic sampling Monte Carlo method to the case of the path integral representation of a quantum system. A two-dimensional density of states is introduced into the path integral form of the quantum canonical partition function. The entropic sampling technique within the algorithm suggested recently by Wang and Landau (Wang F and Landau D P 2001 Phys. Rev. Lett. 86 2050) is then applied to calculate the corresponding entropy distribution. A three-dimensional quantum oscillator is considered as an example. Canonical distributions for a wide range of temperatures are obtained in a single simulation run, and exact data for the energy are reproduced.
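
    The record applies Wang-Landau entropic sampling to a two-dimensional path-integral density of states; the underlying iteration is easier to see on a classical toy. The sketch below estimates ln g(E) for a small one-dimensional Ising chain (the histogram flatness check of the full algorithm is omitted for brevity) and is not the path-integral extension of the paper.

    ```python
    import math
    import random

    rng = random.Random(3)
    N = 12   # spins in a periodic 1D Ising chain

    def energy(s):
        return -sum(s[i] * s[(i + 1) % N] for i in range(N))

    ln_g, hist = {}, {}
    s = [rng.choice((-1, 1)) for _ in range(N)]
    e = energy(s)
    f = 1.0                    # ln of the Wang-Landau modification factor
    while f > 1e-4:
        for _ in range(20_000):
            i = rng.randrange(N)
            e_new = e + 2 * s[i] * (s[i - 1] + s[(i + 1) % N])
            # Accept with min(1, g(e)/g(e_new)) to flatten the energy histogram.
            diff = ln_g.get(e, 0.0) - ln_g.get(e_new, 0.0)
            if diff >= 0 or rng.random() < math.exp(diff):
                s[i] = -s[i]
                e = e_new
            ln_g[e] = ln_g.get(e, 0.0) + f
            hist[e] = hist.get(e, 0) + 1
        f /= 2.0               # reduce the refinement; flatness check omitted
        hist = {}
    print(sorted(ln_g.items()))   # relative ln g(E), up to a constant
    ```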

  17. Children's Fitness. Managing a Running Program.

    Science.gov (United States)

    Hinkle, J. Scott; Tuckman, Bruce W.

    1987-01-01

    A running program to increase the cardiovascular fitness levels of fourth-, fifth-, and sixth-grade children is described. Discussed are the running environment, implementation of a running program, feedback, and reinforcement. (MT)

  18. Barefoot running survey: Evidence from the field

    OpenAIRE

    David Hryvniak; Jay Dicharry; Robert Wilder

    2014-01-01

    Background: Running is becoming an increasingly popular activity among Americans with over 50 million participants. Running shoe research and technology has continued to advance with no decrease in overall running injury rates. A growing group of runners are making the choice to try the minimal or barefoot running styles of the pre-modern running shoe era. There is some evidence of decreased forces and torques on the lower extremities with barefoot running, but no clear data regarding how thi...

  19. Usage of burnt fuel isotopic compositions from engineering codes in Monte-Carlo code calculations

    International Nuclear Information System (INIS)

    Aleshin, Sergey S.; Gorodkov, Sergey S.; Shcherenko, Anna I.

    2015-01-01

    A burn-up calculation of VVER cores with a Monte Carlo code is a complex process that requires large computational costs, which complicates the use of Monte Carlo codes for design and operational calculations. It is proposed to use previously prepared isotopic compositions in Monte Carlo code (MCU) calculations of different states of a VVER core with burnt fuel. The isotopic compositions are calculated with an approximation method based on a spectral functional and on reference isotopic compositions calculated by engineering codes (TVS-M, PERMAK-A). In this work, the multiplication factors and power distributions of a fuel assembly (FA) and of a VVER core with infinite height are calculated by the Monte Carlo code MCU using the previously prepared isotopic compositions. The MCU results are compared with the data obtained by the engineering codes.

  20. Exact Monte Carlo for molecules

    International Nuclear Information System (INIS)

    Lester, W.A. Jr.; Reynolds, P.J.

    1985-03-01

    A brief summary of the fixed-node quantum Monte Carlo method is presented. Results obtained for binding energies, the classical barrier height for H + H2, and the singlet-triplet splitting in methylene are presented and discussed. 17 refs

  1. The Monte Carlo photoionization and moving-mesh radiation hydrodynamics code CMACIONIZE

    Science.gov (United States)

    Vandenbroucke, B.; Wood, K.

    2018-04-01

    We present the public Monte Carlo photoionization and moving-mesh radiation hydrodynamics code CMACIONIZE, which can be used to simulate the self-consistent evolution of HII regions surrounding young O and B stars, or other sources of ionizing radiation. The code combines a Monte Carlo photoionization algorithm, which uses a complex mix of hydrogen, helium and several coolants in order to self-consistently solve for the ionization and temperature balance at any given time, with a standard first-order hydrodynamics scheme. The code can be run as a post-processing tool to get the line emission from an existing simulation snapshot, but can also be used to run full radiation hydrodynamical simulations. Both the radiation transfer and the hydrodynamics are implemented in a general way that is independent of the grid structure used to discretize the system, allowing it to be run both as a standard fixed-grid code and as a moving-mesh code.

  2. Monte Carlo - Advances and Challenges

    International Nuclear Information System (INIS)

    Brown, Forrest B.; Mosteller, Russell D.; Martin, William R.

    2008-01-01

    Abstract only, full text follows: With ever-faster computers and mature Monte Carlo production codes, there has been tremendous growth in the application of Monte Carlo methods to the analysis of reactor physics and reactor systems. In the past, Monte Carlo methods were used primarily for calculating k_eff of a critical system. More recently, Monte Carlo methods have been increasingly used for determining reactor power distributions and many design parameters, such as β_eff, l_eff, τ, reactivity coefficients, Doppler defect, dominance ratio, etc. These advanced applications of Monte Carlo methods are now becoming common, not just feasible, but bring new challenges to both developers and users: Convergence of 3D power distributions must be assured; confidence interval bias must be eliminated; iterated fission probabilities are required, rather than single-generation probabilities; temperature effects including Doppler and feedback must be represented; isotopic depletion and fission product buildup must be modeled. This workshop focuses on recent advances in Monte Carlo methods and their application to reactor physics problems, and on the resulting challenges faced by code developers and users. The workshop is partly tutorial, partly a review of the current state-of-the-art, and partly a discussion of future work that is needed. It should benefit both novice and expert Monte Carlo developers and users. In each of the topic areas, we provide an overview of needs, perspective on past and current methods, a review of recent work, and discussion of further research and capabilities that are required. Electronic copies of all workshop presentations and material will be available. The workshop is structured as 2 morning and 2 afternoon segments: - Criticality Calculations I - convergence diagnostics, acceleration methods, confidence intervals, and the iterated fission probability, - Criticality Calculations II - reactor kinetics parameters, dominance ratio, temperature

  3. Red light running camera assessment.

    Science.gov (United States)

    2011-04-01

    In the 2004-2007 period, the Mission Street SE and 25th Street SE intersection in Salem, Oregon showed relatively few crashes attributable to red light running (RLR) but, since a high number of RLR violations were observed, the intersection was ident...

  4. Teaching Bank Runs through Films

    Science.gov (United States)

    Flynn, David T.

    2009-01-01

    The author advocates the use of films to supplement textbook treatments of bank runs and panics in money and banking or general banking classes. Modern students, particularly those in developed countries, tend to be unfamiliar with potential fragilities of financial systems such as a lack of deposit insurance or other safety net mechanisms. Films…

  5. Running and Breathing in Mammals

    Science.gov (United States)

    Bramble, Dennis M.; Carrier, David R.

    1983-01-01

    Mechanical constraints appear to require that locomotion and breathing be synchronized in running mammals. Phase locking of limb and respiratory frequency has now been recorded during treadmill running in jackrabbits and during locomotion on solid ground in dogs, horses, and humans. Quadrupedal species normally synchronize the locomotor and respiratory cycles at a constant ratio of 1:1 (strides per breath) in both the trot and gallop. Human runners differ from quadrupeds in that while running they employ several phase-locked patterns (4:1, 3:1, 2:1, 1:1, 5:2, and 3:2), although a 2:1 coupling ratio appears to be favored. Even though the evolution of bipedal gait has reduced the mechanical constraints on respiration in man, thereby permitting greater flexibility in breathing pattern, it has seemingly not eliminated the need for the synchronization of respiration and body motion during sustained running. Flying birds have independently achieved phase-locked locomotor and respiratory cycles. This hints that strict locomotor-respiratory coupling may be a vital factor in the sustained aerobic exercise of endothermic vertebrates, especially those in which the stresses of locomotion tend to deform the thoracic complex.

  6. Does Addiction Run in Families?

    Science.gov (United States)

    Many children whose parents had drug problems don't become addicted when they grow up. The chances of addiction are higher, but it doesn't have to ...

  7. Running codes through the web

    International Nuclear Information System (INIS)

    Clark, R.E.H.

    2001-01-01

    Dr. Clark presented a report and demonstration of running atomic physics codes through the WWW. The atomic physics data are generated from Los Alamos National Laboratory (LANL) codes that calculate electron impact excitation, ionization, photoionization, and autoionization, and the inverse processes through detailed balance. Samples of Web interfaces, input and output are given in the report.

  8. Interactive three-dimensional visualization and creation of geometries for Monte Carlo calculations

    International Nuclear Information System (INIS)

    Theis, C.; Buchegger, K.H.; Brugger, M.; Forkel-Wirth, D.; Roesler, S.; Vincke, H.

    2006-01-01

    The implementation of three-dimensional geometries for the simulation of radiation transport problems is a very time-consuming task. Each particle transport code supplies its own scripting language and syntax for creating the geometries. All of them are based on the Constructive Solid Geometry scheme requiring textual description. This makes the creation a tedious and error-prone task, which is especially hard to master for novice users. The Monte Carlo code FLUKA comes with built-in support for creating two-dimensional cross-sections through the geometry and FLUKACAD, a custom-built converter to the commercial Computer Aided Design package AutoCAD, exists for 3D visualization. For other codes, like MCNPX, a couple of different tools are available, but they are often specifically tailored to the particle transport code and its approach used for implementing geometries. Complex constructive solid modeling usually requires very fast and expensive special purpose hardware, which is not widely available. In this paper SimpleGeo is presented, which is an implementation of a generic versatile interactive geometry modeler using off-the-shelf hardware. It is running on Windows, with a Linux version currently under preparation. This paper describes its functionality, which allows for rapid interactive visualization as well as generation of three-dimensional geometries, and also discusses critical issues regarding common CAD systems

  9. HEXANN-EVALU - a Monte Carlo program system for pressure vessel neutron irradiation calculation

    International Nuclear Information System (INIS)

    Lux, Ivan

    1983-08-01

    The Monte Carlo program HEXANN and the evaluation program EVALU are intended to calculate Monte Carlo estimates of reaction rates and currents in segments of concentric angular regions around a hexagonal reactor-core region. The report describes the theoretical basis, structure and operation of the programs. Input data preparation guides and a sample problem are also included. Theoretical considerations as well as numerical experimental results suggest to the user a nearly optimal way of making use of the Monte Carlo efficiency-increasing options included in the program.

  10. MMCTP: a radiotherapy research environment for Monte Carlo and patient-specific treatment planning

    International Nuclear Information System (INIS)

    Alexander, A; DeBlois, F; Stroian, G; Al-Yahya, K; Heath, E; Seuntjens, J

    2007-01-01

    Radiotherapy research lacks a flexible computational research environment for Monte Carlo (MC) and patient-specific treatment planning. The purpose of this study was to develop a flexible software package on low-cost hardware with the aim of integrating new patient-specific treatment planning with MC dose calculations suitable for large-scale prospective and retrospective treatment planning studies. We designed the software package 'McGill Monte Carlo treatment planning' (MMCTP) for the research development of MC and patient-specific treatment planning. The MMCTP design consists of a graphical user interface (GUI), which runs on a simple workstation connected through standard secure-shell protocol to a cluster for lengthy MC calculations. Treatment planning information (e.g., images, structures, beam geometry properties and dose distributions) is converted into a convenient MMCTP local file storage format designated the McGill RT format. MMCTP features include (a) DICOM RT, RTOG and CADPlan CART format imports; (b) 2D and 3D visualization views for images, structure contours, and dose distributions; (c) contouring tools; (d) DVH analysis and dose matrix comparison tools; (e) external beam editing; (f) MC transport calculation from beam source to patient geometry for photon and electron beams. The MC input files, which are prepared from the beam geometry properties and patient information (e.g., images and structure contours), are uploaded and run on a cluster using shell commands controlled from the MMCTP GUI. The visualization, dose matrix operation and DVH tools offer extensive options for plan analysis and comparison between MC plans and plans imported from commercial treatment planning systems. The MMCTP GUI provides a flexible research platform for the development of patient-specific MC treatment planning for photon and electron external beam radiation therapy. The impact of this tool lies in the fact that it allows for systematic, platform

  11. Parallel Monte Carlo simulation of aerosol dynamics

    KAUST Repository

    Zhou, K.

    2014-01-01

    A highly efficient Monte Carlo (MC) algorithm is developed for the numerical simulation of aerosol dynamics, that is, nucleation, surface growth, and coagulation. Nucleation and surface growth are handled with deterministic means, while coagulation is simulated with a stochastic method (the Marcus-Lushnikov stochastic process). Operator splitting techniques are used to synthesize the deterministic and stochastic parts of the algorithm. The algorithm is parallelized using the Message Passing Interface (MPI). The parallel computing efficiency is investigated through numerical examples. Nearly 60% parallel efficiency is achieved for the largest test case, with 3.7 million MC particles running on 93 parallel computing nodes. The algorithm is verified by simulating various test cases and comparing the simulation results with available analytical and/or other numerical solutions. Generally, it is found that only a small number (hundreds or thousands) of MC particles is necessary to accurately predict the aerosol particle number density, volume fraction, and so forth, that is, the low-order moments of the Particle Size Distribution (PSD) function. Accurately predicting the high-order moments of the PSD requires a dramatic increase in the number of MC particles. © 2014 Kun Zhou et al.

  12. (U) Introduction to Monte Carlo Methods

    Energy Technology Data Exchange (ETDEWEB)

    Hungerford, Aimee L. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-03-20

    Monte Carlo methods are very valuable for representing solutions to particle transport problems. Here we describe a “cook book” approach to handling the terms in a transport equation using Monte Carlo methods. Focus is on the mechanics of a numerical Monte Carlo code, rather than the mathematical foundations of the method.
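
    One of the "cook book" ingredients for a transport equation, sampling the distance to the next collision and then the interaction channel, follows directly from the exponential attenuation law; the cross-section values here are arbitrary.

    ```python
    import math
    import random

    rng = random.Random(5)

    def sample_free_path(sigma_total):
        """Distance to the next collision in a uniform medium, by inverting
        the CDF 1 - exp(-sigma_total * d) with a uniform random number."""
        return -math.log(1.0 - rng.random()) / sigma_total

    def sample_channel(sigma_scatter, sigma_absorb):
        """Choose the interaction channel in proportion to its cross section."""
        if rng.random() < sigma_scatter / (sigma_scatter + sigma_absorb):
            return "scatter"
        return "absorb"

    # One step of a random walk: fly, then decide what happens.
    d = sample_free_path(sigma_total=0.8)
    print(d, sample_channel(sigma_scatter=0.6, sigma_absorb=0.2))
    ```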

  13. European Decommissioning Academy (EDA) - successful 1. run in june 2015

    International Nuclear Information System (INIS)

    Slugen, V.; Hornacek, M.

    2015-01-01

    Experiences from the first run of the European Decommissioning Academy (EDA) are reported in detail. EDA was created at the Slovak University of Technology in Bratislava, Slovakia, based on discussions and needs expressed at many international meetings, including ECED2013. The first run was successfully completed by 14 participants on 7-20 June 2015. The Academy focused on decommissioning issues via lessons, practical exercises in laboratories, and on-site training at NPP V-1 in Jaslovske Bohunice, Slovakia, as well as a four-day technical tour to other European decommissioning facilities (Switzerland, Italy). Detailed information can be found at http://kome.snus.sk/inpe/. (authors)

  14. Preventing Running Injuries through Barefoot Activity

    Science.gov (United States)

    Hart, Priscilla M.; Smith, Darla R.

    2008-01-01

    Running has become a very popular lifetime physical activity even though there are numerous reports of running injuries. Although common theories have pointed to impact forces and overpronation as the main contributors to chronic running injuries, the increased use of cushioning and orthotics has done little to decrease running injuries. A new…

  15. Running: Improving Form to Reduce Injuries.

    Science.gov (United States)

    2015-08-01

    Running is often perceived as a good option for "getting into shape," with little thought given to the form, or mechanics, of running. However, as many as 79% of all runners will sustain a running-related injury during any given year. If you are a runner, casual or serious, you should be aware that poor running mechanics may contribute to these injuries. A study published in the August 2015 issue of JOSPT reviewed the existing research to determine whether running mechanics could be improved, which could be important in treating running-related injuries and helping injured runners return to pain-free running.

  16. Run-off from roofs

    International Nuclear Information System (INIS)

    Roed, J.

    1985-01-01

    In order to find the run-off from roof material, a roof has been constructed with two different slopes (30° and 45°). Beryllium-7 and caesium-137 have been used as tracers. For new roof material, the pollution removed by run-off processes has been shown to differ strongly between roof materials. The pollution is much more easily removed from silicon-treated material than from porous red-tile roof material, and caesium is removed more easily than beryllium. The content of caesium in old roof materials is greater in red tile than in other, less porous materials. However, the measured removal from new material does not correspond to the amount accumulated in the old. This could be explained by weathering and by saturation effects, the latter probably being the more important. The measurements on old material indicate a removal of 44-86% of the caesium pollution by run-off, whereas the measurements on new material showed a removal of only 31-50%. It has been demonstrated that the pollution concentration in the run-off water can be very different from that in rainwater. The work was part of the EEC Radiation Protection Programme and done under a subcontract with Association Euratom-C.E.A. No. SC-014-BIO-F-423-DK(SD) under contract No. BIO-F-423-81-F. (author)

  17. Better in the long run

    CERN Multimedia

    CERN Bulletin

    2010-01-01

    Last week, the Chamonix workshop once again proved its worth as a place where all the stakeholders in the LHC can come together, take difficult decisions and reach a consensus on important issues for the future of particle physics. The most important decision we reached last week is to run the LHC for 18 to 24 months at a collision energy of 7 TeV (3.5 TeV per beam). After that, we’ll go into a long shutdown in which we’ll do all the necessary work to allow us to reach the LHC’s design collision energy of 14 TeV for the next run. This means that when beams go back into the LHC later this month, we’ll be entering the longest phase of accelerator operation in CERN’s history, scheduled to take us into summer or autumn 2011. What led us to this conclusion? Firstly, the LHC is unlike any previous CERN machine. Because it is a cryogenic facility, each run is accompanied by lengthy cool-down and warm-up phases. For that reason, CERN’s traditional &...

  18. LHC Report: Positive ion run!

    CERN Multimedia

    Mike Lamont for the LHC Team

    2011-01-01

    The current LHC ion run has been progressing very well. The first fill with 358 bunches per beam - the maximum number for the year - was on Tuesday, 15 November and was followed by an extended period of steady running. The quality of the beam delivered by the heavy-ion injector chain has been excellent, and this is reflected in both the peak and the integrated luminosity. The peak luminosity in ATLAS reached 5×10²⁶ cm⁻²s⁻¹, which is a factor of ~16 more than last year's peak of 3×10²⁵ cm⁻²s⁻¹. The integrated luminosity in each of ALICE, ATLAS and CMS is now around 100 inverse microbarn, already comfortably over the nominal target for the run. The polarity of the ALICE spectrometer and solenoid magnets was reversed on Monday, 28 November with the aim of delivering another sizeable amount of luminosity in this configuration. On the whole, the LHC has been behaving very well recently, ensuring good machine availability. On Monday evening, however, a faulty level sensor in the cooling towe...

  19. GASIFICATION TEST RUN TC06

    Energy Technology Data Exchange (ETDEWEB)

    Southern Company Services, Inc.

    2003-08-01

    This report discusses test campaign TC06 of the Kellogg Brown & Root, Inc. (KBR) Transport Reactor train with a Siemens Westinghouse Power Corporation (Siemens Westinghouse) particle filter system at the Power Systems Development Facility (PSDF) located in Wilsonville, Alabama. The Transport Reactor is an advanced circulating fluidized-bed reactor designed to operate as either a combustor or a gasifier using a particulate control device (PCD). The Transport Reactor was operated as a pressurized gasifier during TC06. Test run TC06 was started on July 4, 2001, and completed on September 24, 2001, with an interruption in service between July 25, 2001, and August 19, 2001, due to a filter element failure in the PCD caused by abnormal operating conditions while tuning the main air compressor. The reactor temperature was varied between 1,725 and 1,825°F at pressures from 190 to 230 psig. In TC06, 1,214 hours of solid circulation and 1,025 hours of coal feed were attained, with 797 hours of coal feed after the filter element failure. Both reactor and PCD operations were stable during the test run, with a stable baseline pressure drop. Due to its length and stability, the TC06 test run provided valuable data necessary to analyze long-term reactor operations and to identify modifications needed to improve equipment and process performance, as well as advancing the goal of many thousands of hours of filter element exposure.

  20. Running jobs in the vacuum

    International Nuclear Information System (INIS)

    McNab, A; Stagni, F; Garcia, M Ubeda

    2014-01-01

    We present a model for the operation of computing nodes at a site using Virtual Machines (VMs), in which VMs are created and contextualized for experiments by the site itself. For the experiment, these VMs appear to be produced spontaneously 'in the vacuum' rather than having to ask the site to create each one. This model takes advantage of the existing pilot job frameworks adopted by many experiments. In the Vacuum model, the contextualization process starts a job agent within the VM and real jobs are fetched from the central task queue as normal. An implementation of the Vacuum scheme, Vac, is presented in which a VM factory runs on each physical worker node to create and contextualize its set of VMs. With this system, each node's VM factory can decide which experiments' VMs to run, based on site-wide target shares and on a peer-to-peer protocol in which the site's VM factories query each other to discover which VM types they are running. A property of this system is that there is no gatekeeper service, head node, or batch system accepting and then directing jobs to particular worker nodes, avoiding several central points of failure. Finally, we describe tests of the Vac system using jobs from the central LHCb task queue, using the same contextualization procedure for VMs developed by LHCb for clouds.

  1. Isotopic depletion with Monte Carlo

    International Nuclear Information System (INIS)

    Martin, W.R.; Rathkopf, J.A.

    1996-06-01

    This work considers a method to deplete isotopes during a time- dependent Monte Carlo simulation of an evolving system. The method is based on explicitly combining a conventional estimator for the scalar flux with the analytical solutions to the isotopic depletion equations. There are no auxiliary calculations; the method is an integral part of the Monte Carlo calculation. The method eliminates negative densities and reduces the variance in the estimates for the isotope densities, compared to existing methods. Moreover, existing methods are shown to be special cases of the general method described in this work, as they can be derived by combining a high variance estimator for the scalar flux with a low-order approximation to the analytical solution to the depletion equation
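
    In the single-isotope case with no production, the idea of pairing a Monte Carlo flux estimate with the analytic solution of the depletion equation reduces to an exponential decay law over each time step; the numbers below are toys, and production chains (full Bateman solutions) are ignored.

    ```python
    import math

    def deplete(n0, sigma_a_cm2, flux, dt):
        """Analytic single-isotope depletion over one step, treating the
        Monte Carlo scalar-flux estimate as constant:
        dN/dt = -sigma_a * phi * N  =>  N(t+dt) = N(t) * exp(-sigma_a*phi*dt)."""
        return n0 * math.exp(-sigma_a_cm2 * flux * dt)

    # Toy values: number density (atoms/b-cm), cross section (2.7 b in cm^2),
    # scalar flux (n/cm^2-s) from a tally, one-day steps.
    n = 0.045
    for _ in range(10):
        n = deplete(n, sigma_a_cm2=2.7e-24, flux=3.0e13, dt=86_400.0)
    print(n)
    ```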

  2. Monte Carlo Methods in ICF

    Science.gov (United States)

    Zimmerman, George B.

    Monte Carlo methods appropriate to simulate the transport of x-rays, neutrons, ions and electrons in Inertial Confinement Fusion targets are described and analyzed. The Implicit Monte Carlo method of x-ray transport handles symmetry within indirect drive ICF hohlraums well, but can be improved 50X in efficiency by angular biasing the x-rays towards the fuel capsule. Accurate simulation of thermonuclear burn and burn diagnostics involves detailed particle source spectra, charged particle ranges, inflight reaction kinematics, corrections for bulk and thermal Doppler effects and variance reduction to obtain adequate statistics for rare events. It is found that the effects of angular Coulomb scattering must be included in models of charged particle transport through heterogeneous materials.

  3. Monte Carlo methods in ICF

    International Nuclear Information System (INIS)

    Zimmerman, G.B.

    1997-01-01

    Monte Carlo methods appropriate to simulate the transport of x-rays, neutrons, ions and electrons in Inertial Confinement Fusion targets are described and analyzed. The Implicit Monte Carlo method of x-ray transport handles symmetry within indirect drive ICF hohlraums well, but can be improved 50X in efficiency by angular biasing the x-rays towards the fuel capsule. Accurate simulation of thermonuclear burn and burn diagnostics involves detailed particle source spectra, charged particle ranges, inflight reaction kinematics, corrections for bulk and thermal Doppler effects and variance reduction to obtain adequate statistics for rare events. It is found that the effects of angular Coulomb scattering must be included in models of charged particle transport through heterogeneous materials. copyright 1997 American Institute of Physics

  4. Monte Carlo methods in ICF

    International Nuclear Information System (INIS)

    Zimmerman, George B.

    1997-01-01

    Monte Carlo methods appropriate to simulate the transport of x-rays, neutrons, ions and electrons in Inertial Confinement Fusion targets are described and analyzed. The Implicit Monte Carlo method of x-ray transport handles symmetry within indirect drive ICF hohlraums well, but can be improved 50X in efficiency by angular biasing the x-rays towards the fuel capsule. Accurate simulation of thermonuclear burn and burn diagnostics involves detailed particle source spectra, charged particle ranges, inflight reaction kinematics, corrections for bulk and thermal Doppler effects and variance reduction to obtain adequate statistics for rare events. It is found that the effects of angular Coulomb scattering must be included in models of charged particle transport through heterogeneous materials

  5. Shell model Monte Carlo methods

    International Nuclear Information System (INIS)

    Koonin, S.E.; Dean, D.J.; Langanke, K.

    1997-01-01

    We review quantum Monte Carlo methods for dealing with large shell model problems. These methods reduce the imaginary-time many-body evolution operator to a coherent superposition of one-body evolutions in fluctuating one-body fields; the resultant path integral is evaluated stochastically. We first discuss the motivation, formalism, and implementation of such Shell Model Monte Carlo (SMMC) methods. There then follows a sampler of results and insights obtained from a number of applications. These include the ground state and thermal properties of pf-shell nuclei, the thermal and rotational behavior of rare-earth and γ-soft nuclei, and the calculation of double beta-decay matrix elements. Finally, prospects for further progress in such calculations are discussed. (orig.)

  6. A contribution Monte Carlo method

    International Nuclear Information System (INIS)

    Aboughantous, C.H.

    1994-01-01

    A Contribution Monte Carlo method is developed and successfully applied to a sample deep-penetration shielding problem. The random walk is simulated in most of its parts as in conventional Monte Carlo methods. The probability density functions (pdf's) are expressed in terms of spherical harmonics and are continuous functions in the direction cosine and azimuthal angle variables as well as in the position coordinates; the energy is discretized in the multigroup approximation. The transport pdf is an unusual exponential kernel strongly dependent on the incident and emergent directions and energies and on the position of the collision site. The method produces the same results as those obtained with the deterministic method, with a very small standard deviation, with as few as 1,000 Contribution particles in both analog and nonabsorption biasing modes, and with only a few minutes of CPU time.

  7. Shell model Monte Carlo methods

    International Nuclear Information System (INIS)

    Koonin, S.E.

    1996-01-01

    We review quantum Monte Carlo methods for dealing with large shell model problems. These methods reduce the imaginary-time many-body evolution operator to a coherent superposition of one-body evolutions in fluctuating one-body fields; the resultant path integral is evaluated stochastically. We first discuss the motivation, formalism, and implementation of such Shell Model Monte Carlo methods. There then follows a sampler of results and insights obtained from a number of applications. These include the ground state and thermal properties of pf-shell nuclei, the thermal behavior of γ-soft nuclei, and the calculation of double beta-decay matrix elements. Finally, prospects for further progress in such calculations are discussed. 87 refs

  8. Monte Carlo modelling of TRIGA research reactor

    Science.gov (United States)

    El Bakkari, B.; Nacir, B.; El Bardouni, T.; El Younoussi, C.; Merroun, O.; Htet, A.; Boulaich, Y.; Zoubair, M.; Boukhal, H.; Chakir, M.

    2010-10-01

    The Moroccan 2 MW TRIGA MARK II research reactor at Centre des Etudes Nucléaires de la Maâmora (CENM) achieved initial criticality on May 2, 2007. The reactor is designed to effectively support various fields of basic nuclear research, manpower training, and production of radioisotopes for use in agriculture, industry, and medicine. This study deals with the neutronic analysis of the 2-MW TRIGA MARK II research reactor at CENM and the validation of the results by comparison with experimental, operational, and available final safety analysis report (FSAR) values. The study was prepared in collaboration between the Laboratory of Radiation and Nuclear Systems (ERSN-LMR) of the Faculty of Sciences of Tetuan (Morocco) and CENM. The 3-D continuous-energy Monte Carlo code MCNP (version 5) was used to develop a versatile and accurate full model of the TRIGA core. The model represents in detail all components of the core with literally no physical approximation. Continuous-energy cross-section data from the more recent nuclear data evaluations (ENDF/B-VI.8, ENDF/B-VII.0, JEFF-3.1, and JENDL-3.3) as well as S(α,β) thermal neutron scattering functions distributed with the MCNP code were used. The cross-section libraries were generated using the NJOY99 system updated to its more recent patch file "up259". The consistency and accuracy of both the Monte Carlo simulation and the neutron transport physics were established by benchmarking against the TRIGA experiments. Core excess reactivity, total and integral control rod worths, as well as power peaking factors were used in the validation process. Results of the calculations are analysed and discussed.

  9. CMS Monte Carlo production in the WLCG computing grid

    International Nuclear Information System (INIS)

    Hernandez, J M; Kreuzer, P; Hof, C; Khomitch, A; Mohapatra, A; Filippis, N D; Pompili, A; My, S; Abbrescia, M; Maggi, G; Donvito, G; Weirdt, S D; Maes, J; Mulders, P v; Villella, I; Wakefield, S; Guan, W; Fanfani, A; Evans, D; Flossdorf, A

    2008-01-01

    Monte Carlo production in CMS has received a major boost in performance and scale since the past CHEP06 conference. The production system has been re-engineered in order to incorporate the experience gained in running the previous system and to integrate production with the new CMS event data model, data management system and data processing framework. The system is interfaced to the two major computing Grids used by CMS, the LHC Computing Grid (LCG) and the Open Science Grid (OSG). Operational experience and integration aspects of the new CMS Monte Carlo production system are presented together with an analysis of production statistics. The new system automatically handles job submission, resource monitoring, job queuing, job distribution according to the available resources, data merging, and registration of data into the data bookkeeping, data location, data transfer and placement systems. Compared to the previous production system, automation, reliability and performance have been considerably improved. A more efficient use of computing resources and a better handling of the inherent Grid unreliability have resulted in an increase of the production scale by about an order of magnitude, capable of running of the order of ten thousand jobs in parallel and yielding more than two million events per day.

  10. Understanding the T2 traffic in CMS during Run-1

    CERN Document Server

    Wildish, T.

    2015-01-01

    In the run-up to Run-1, CMS was operating its facilities according to the MONARC model, where data transfers were strictly hierarchical in nature. Direct transfers between Tier-2 nodes were excluded, being perceived as operationally intensive and risky in an era where the network was expected to be a major source of errors. By the end of Run-1, wide-area networks were more capable and stable than originally anticipated; the original data-placement model was relaxed, and traffic was allowed between Tier-2 nodes. Tier-2 to Tier-2 traffic in 2012 already exceeded the amount of Tier-2 to Tier-1 traffic, so it clearly has the potential to become important in the future. Moreover, while Tier-2 to Tier-1 traffic is mostly upload of Monte Carlo data, the Tier-2 to Tier-2 traffic represents data moved in direct response to requests from the physics analysis community. As such, problems or delays there are more likely to have a direct impact on the user community. Tier-2 to Tier-2 traffic may also traverse parts of the WAN ...

  11. Parallel Monte Carlo reactor neutronics

    International Nuclear Information System (INIS)

    Blomquist, R.N.; Brown, F.B.

    1994-01-01

    The issues affecting implementation of parallel algorithms for large-scale engineering Monte Carlo neutron transport simulations are discussed. For nuclear reactor calculations, these include load balancing, recoding effort, reproducibility, domain decomposition techniques, I/O minimization, and strategies for different parallel architectures. Two codes were parallelized and tested for performance. The architectures employed include SIMD, MIMD-distributed memory, and workstation network with uneven interactive load. Speedups linear with the number of nodes were achieved

  12. Elements of Monte Carlo techniques

    International Nuclear Information System (INIS)

    Nagarajan, P.S.

    2000-01-01

    The Monte Carlo method essentially mimics real-world physical processes at the microscopic level. With the incredible increase in computing speeds and ever-decreasing computing costs, the method is in widespread use for practical problems. The method is used in calculating algorithm-generated sequences known as pseudo-random sequences (prs), probability density functions (pdf), tests for randomness, extensions to multidimensional integration, etc.

  13. Adaptive Multilevel Monte Carlo Simulation

    KAUST Repository

    Hoel, H

    2011-08-23

    This work generalizes a multilevel forward Euler Monte Carlo method introduced by Michael B. Giles (Oper. Res. 56(3):607–617, 2008) for the approximation of expected values depending on the solution to an Itô stochastic differential equation. That work proposed and analyzed a forward Euler multilevel Monte Carlo method based on a hierarchy of uniform time discretizations and control variates to reduce the computational effort required by a standard, single-level forward Euler Monte Carlo method. This work introduces an adaptive hierarchy of non-uniform time discretizations, generated by an adaptive algorithm introduced in (Anna Dzougoutov et al. Adaptive Monte Carlo algorithms for stopped diffusion. In Multiscale methods in science and engineering, volume 44 of Lect. Notes Comput. Sci. Eng., pages 59–88. Springer, Berlin, 2005; Kyoung-Sook Moon et al. Stoch. Anal. Appl. 23(3):511–558, 2005; Kyoung-Sook Moon et al. An adaptive algorithm for ordinary, stochastic and partial differential equations. In Recent advances in adaptive computation, volume 383 of Contemp. Math., pages 325–343. Amer. Math. Soc., Providence, RI, 2005). This form of the adaptive algorithm generates stochastic, path-dependent time steps and is based on a posteriori error expansions first developed in (Anders Szepessy et al. Comm. Pure Appl. Math. 54(10):1169–1214, 2001). Our numerical results for a stopped diffusion problem exhibit savings in the computational cost to achieve an accuracy of O(TOL): from O(TOL^-3), using a single-level version of the adaptive algorithm, to O((TOL^-1 log(TOL))^2).
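
    For orientation, the following is a minimal sketch of the uniform-timestep multilevel forward Euler construction that the adaptive method above generalizes, written in Python for a geometric Brownian motion; the payoff, parameter values and per-level sample counts are illustrative assumptions, and the paper's adaptive, path-dependent time stepping is not reproduced.

        import numpy as np

        def mlmc_level(level, n_paths, payoff, T=1.0, s0=1.0, mu=0.05, sigma=0.2):
            """Correction term E[P_l - P_{l-1}] from coupled forward Euler paths
            that share the same Brownian increments (the control variate idea)."""
            rng = np.random.default_rng(level)
            nf = 2 ** level                 # fine-grid steps on this level
            dt = T / nf
            sf = np.full(n_paths, s0)       # fine path
            sc = np.full(n_paths, s0)       # coarse path (half the steps)
            for i in range(nf):
                dw = rng.normal(0.0, np.sqrt(dt), n_paths)
                sf += mu * sf * dt + sigma * sf * dw
                if i % 2 == 0:
                    dw_saved = dw           # first half of the coarse increment
                else:                       # coarse path advances every two steps
                    sc += mu * sc * (2 * dt) + sigma * sc * (dw_saved + dw)
            if level == 0:
                return payoff(sf).mean()    # base level: plain Monte Carlo
            return (payoff(sf) - payoff(sc)).mean()

        # Telescoping sum E[P_L] = sum of level corrections, fewer paths per level
        payoff = lambda s: np.maximum(s - 1.0, 0.0)   # European call, strike 1
        estimate = sum(mlmc_level(l, 40000 >> l, payoff) for l in range(6))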

  14. Geometrical splitting in Monte Carlo

    International Nuclear Information System (INIS)

    Dubi, A.; Elperin, T.; Dudziak, D.J.

    1982-01-01

    A statistical model is presented by which a direct statistical approach yielded an analytic expression for the second moment, the variance ratio, and the benefit function in a model of an n-surface-splitting Monte Carlo game. In addition to the insight into the dependence of the second moment on the splitting parameters, the main importance of the expressions developed lies in their potential to become a basis for in-code optimization of splitting through a general algorithm. Refs

  15. Extending canonical Monte Carlo methods

    International Nuclear Information System (INIS)

    Velazquez, L; Curilef, S

    2010-01-01

    In this paper, we discuss the implications of a recently obtained equilibrium fluctuation-dissipation relation for the extension of the available Monte Carlo methods, on the basis of the consideration of the Gibbs canonical ensemble, to account for the existence of an anomalous regime with negative heat capacities C < 0 (with α ≈ 0.2 for the particular case of the 2D ten-state Potts model)

  16. Non statistical Monte-Carlo

    International Nuclear Information System (INIS)

    Mercier, B.

    1985-04-01

    We have shown that the transport equation can be solved with particles, as in the Monte Carlo method, but without random numbers. In the Monte Carlo method, particles are created from the source and are followed from collision to collision until either they are absorbed or they leave the spatial domain. In our method, particles are created from the original source with a variable weight taking into account both collision and absorption. These particles are followed until they leave the spatial domain, and we use them to determine a first collision source. Another set of particles is then created from this first collision source and tracked to determine a second collision source, and so on. This process introduces an approximation which does not exist in the Monte Carlo method. However, we have analyzed the effect of this approximation and shown that it can be limited. Our method is deterministic and gives reproducible results. Furthermore, when extra accuracy is needed in some region, it is easier to get more particles to go there. It has the same kinds of applications: problems where streaming is dominant, rather than collision-dominated problems
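
    As a toy illustration of this collision-source iteration, consider a uniform one-dimensional rod in which particles can travel only left or right; the geometry, the cell-level attenuation and all names below are simplifying assumptions made for the sketch, not the scheme of the paper.

        import numpy as np

        def next_collision_source(q, sigma_t, c, dx):
            """One deterministic sweep: each cell emits two weighted particles
            (one per direction) whose weights decay exponentially as they cross
            cells; the scattered fraction c of every collision feeds the next
            collision source. No random numbers are used anywhere."""
            n, tau = len(q), sigma_t * dx
            q_next = np.zeros(n)
            for i in range(n):
                for step in (+1, -1):
                    w, j = 0.5 * q[i], i           # half the emission each way
                    while 0 <= j < n and w > 1e-12:
                        collided = w * (1.0 - np.exp(-tau))
                        q_next[j] += c * collided  # re-emitted at the next stage
                        w -= collided              # survivors stream onward
                        j += step
            return q_next

        # Accumulate the collision series for a unit source in the middle cell
        q = np.zeros(50); q[25] = 1.0
        total = q.copy()
        for _ in range(30):    # source -> first, second, ... collision sources
            q = next_collision_source(q, sigma_t=1.0, c=0.5, dx=0.2)
            total += q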

  17. BREM5 electroweak Monte Carlo

    International Nuclear Information System (INIS)

    Kennedy, D.C. II.

    1987-01-01

    This is an update on the progress of the BREMMUS Monte Carlo simulator, particularly in its current incarnation, BREM5. The present report is intended only as a follow-up to the Mark II/Granlibakken proceedings, and those proceedings should be consulted for a complete description of the capabilities and goals of the BREMMUS program. The new BREM5 program improves on the previous version of BREMMUS, BREM2, in a number of important ways. In BREM2, the internal loop (oblique) corrections were not treated in consistent fashion, a deficiency that led to renormalization scheme-dependence; i.e., physical results, such as cross sections, were dependent on the method used to eliminate infinities from the theory. Of course, this problem cannot be tolerated in a Monte Carlo designed for experimental use. BREM5 incorporates a new way of treating the oblique corrections, as explained in the Granlibakken proceedings, that guarantees renormalization scheme-independence and dramatically simplifies the organization and calculation of radiative corrections. This technique is to be presented in full detail in a forthcoming paper. BREM5 is, at this point, the only Monte Carlo to contain the entire set of one-loop corrections to electroweak four-fermion processes and renormalization scheme-independence. 3 figures

  18. 40 CFR 1054.501 - How do I run a valid emission test?

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 32 2010-07-01 2010-07-01 false How do I run a valid emission test... Procedures § 1054.501 How do I run a valid emission test? (a) Applicability. This subpart is addressed to you... provisions of 40 CFR 1065.405 describes how to prepare an engine for testing. However, you may consider...

  19. Run Clever - No difference in risk of injury when comparing progression in running volume and running intensity in recreational runners

    DEFF Research Database (Denmark)

    Ramskov, Daniel; Rasmussen, Sten; Sørensen, Henrik

    2018-01-01

    Background/aim: The Run Clever trial investigated if there was a difference in injury occurrence across two running schedules, focusing on progression in volume of running intensity (Sch-I) or in total running volume (Sch-V). It was hypothesised that 15% more runners with a focus on progression in volume of running intensity would sustain an injury compared with runners with a focus on progression in total running volume. Methods: Healthy recreational runners were included and randomly allocated to Sch-I or Sch-V. In the first eight weeks of the 24-week follow-up, all participants (n=839) followed ... participants received real-time, individualised feedback on running intensity and running volume. The primary outcome was running-related injury (RRI). Results: After preconditioning a total of 80 runners sustained an RRI (Sch-I n=36/Sch-V n=44). The cumulative incidence proportion (CIP) in Sch-V (reference ...

  20. New Monte Carlo approach to the adjoint Boltzmann equation

    International Nuclear Information System (INIS)

    De Matteis, A.; Simonini, R.

    1978-01-01

    A class of stochastic models for the Monte Carlo integration of the adjoint neutron transport equation is described. Some current general methods are brought within this class, thus preparing the ground for subsequent comparisons. Monte Carlo integration of the adjoint Boltzmann equation can be seen as a simulation of the transport of mathematical particles with reaction kernels not normalized to unity. This last feature is a source of difficulty: it can influence the variance of the result negatively and also often leads to the preparation of special "libraries" consisting of tables of normalization factors as functions of energy, presently used by several methods. These are the two main points that are discussed and that are taken into account to devise a nonmultigroup method of solution for a certain class of problems. Reactions considered in detail are radiative capture, elastic scattering, discrete levels and continuum inelastic scattering, for which the need for tables has been almost completely eliminated. The basic policy pursued to avoid a source of statistical fluctuations is to try to make the statistical weight of the traveling particle dependent only on its starting and current energies, at least in simple cases. The effectiveness of the sampling schemes proposed is supported by numerical comparison with other more general adjoint Monte Carlo methods. Computation of the neutron flux at a point by means of an adjoint formulation is the problem taken as a test for numerical experiments. Very good results have been obtained in the difficult case of resonant cross sections

  1. Monte Carlo applications to core-following of the National Research Universal reactor (NRU)

    International Nuclear Information System (INIS)

    Nguyen, T.S.; Wang, X.; Leung, T.

    2014-01-01

    Reactor code TRIAD, relying on a two-group neutron diffusion model, is currently used for core-following of NRU - to track reactor assembly locations and burnups. The Monte Carlo (MCNP or SERPENT) full-reactor models of NRU can be used to provide the core power distribution for calculating fuel burnups, with WIMS-AECL providing fuel depletion calculations. The MCNP/WIMS core-following results were in good agreement with the measured data, within the expected biases. The Monte Carlo methods, still very time-consuming, need to be able to run faster before they can replace TRIAD for timely support of NRU operations. (author)

  2. LHCb silicon detectors: the Run 1 to Run 2 transition and first experience of Run 2

    CERN Document Server

    Rinnert, Kurt

    2015-01-01

    LHCb is a dedicated experiment to study New Physics in the decays of heavy hadrons at the Large Hadron Collider (LHC) at CERN. The detector includes a high precision tracking system consisting of a silicon-strip vertex detector (VELO) surrounding the pp interaction region, a large-area silicon-strip detector located upstream of a dipole magnet (TT), and three stations of silicon-strip detectors (IT) and straw drift tubes placed downstream (OT). The operational transition of the silicon detectors VELO, TT and IT from LHC Run 1 to Run 2 and first Run 2 experiences will be presented. During the long shutdown of the LHC the silicon detectors have been maintained in a safe state and operated regularly to validate changes in the control infrastructure, new operational procedures, updates to the alarm systems and monitoring software. In addition, there have been some infrastructure related challenges due to maintenance performed in the vicinity of the silicon detectors that will be discussed. The LHCb silicon dete...

  3. Monte Carlo simulations of plutonium gamma-ray spectra

    International Nuclear Information System (INIS)

    Koenig, Z.M.; Carlson, J.B.; Wang, Tzu-Fang; Ruhter, W.D.

    1993-01-01

    Monte Carlo calculations were investigated as a means of simulating the gamma-ray spectra of Pu. These simulated spectra will be used to develop and evaluate gamma-ray analysis techniques for various nondestructive measurements. Simulated spectra of calculational standards can be used for code intercomparisons, to understand systematic biases, and to estimate minimum detection levels of existing and proposed nondestructive analysis instruments. The capability to simulate gamma-ray spectra from HPGe detectors could significantly reduce the cost of preparing large numbers of real reference materials. MCNP was used for the Monte Carlo transport of the photons. Results from the MCNP calculations were folded in with a detector response function to produce a realistic spectrum. Plutonium spectrum peaks were produced with Lorentzian shapes for the x-rays and with Gaussian distributions. The MGA code determined the Pu isotopes and specific power from this calculated spectrum, and the result was compared to a similar analysis of a measured spectrum

  4. Study of TXRF experimental system by Monte Carlo simulation

    International Nuclear Information System (INIS)

    Costa, Ana Cristina M.; Leitao, Roberta G.; Lopes, Ricardo T.; Anjos, Marcelino J.; Conti, Claudio C.

    2011-01-01

    The Total-Reflection X-ray Fluorescence (TXRF) technique offers unique possibilities for studying the concentrations of a wide range of trace elements in various types of samples. The TXRF technique is widely used to study trace elements in biological, medical and environmental samples due to its multielemental character as well as the simplicity of the sample preparation and quantification methods used. In general the TXRF experimental setup is not simple and may require substantial experimental effort. On the other hand, in recent years, portable TXRF experimental systems have been developed. This has motivated us to develop our own portable TXRF system. In this work we present a first step towards optimizing a TXRF experimental setup using Monte Carlo simulation with the MCNP code. The results show that the Monte Carlo simulation method can be used to investigate the development of a TXRF experimental system before its assembly. (author)

  5. Barefoot running: does it prevent injuries?

    Science.gov (United States)

    Murphy, Kelly; Curry, Emily J; Matzkin, Elizabeth G

    2013-11-01

    Endurance running has evolved over the course of millions of years and it is now one of the most popular sports today. However, the risk of stress injury in distance runners is high because of the repetitive ground impact forces exerted. These injuries are not only detrimental to the runner, but also place a burden on the medical community. Preventative measures are essential to decrease the risk of injury within the sport. Common running injuries include patellofemoral pain syndrome, tibial stress fractures, plantar fasciitis, and Achilles tendonitis. Barefoot running, as opposed to shod running (with shoes), has recently received significant attention in both the media and the market place for the potential to promote the healing process, increase performance, and decrease injury rates. However, there is controversy over the use of barefoot running to decrease the overall risk of injury secondary to individual differences in lower extremity alignment, gait patterns, and running biomechanics. While barefoot running may benefit certain types of individuals, differences in running stance and individual biomechanics may actually increase injury risk when transitioning to barefoot running. The purpose of this article is to review the currently available clinical evidence on barefoot running and its effectiveness for preventing injury in the runner. Based on a review of current literature, barefoot running is not a substantiated preventative running measure to reduce injury rates in runners. However, barefoot running utility should be assessed on an athlete-specific basis to determine whether barefoot running will be beneficial.

  6. HTML 5 up and running

    CERN Document Server

    Pilgrim, Mark

    2010-01-01

    If you don't know about the new features available in HTML5, now's the time to find out. This book provides practical information about how and why the latest version of this markup language will significantly change the way you develop for the Web. HTML5 is still evolving, yet browsers such as Safari, Mozilla, Opera, and Chrome already support many of its features -- and mobile browsers are even farther ahead. HTML5: Up & Running carefully guides you through the important changes in this version with lots of hands-on examples, including markup, graphics, and screenshots. You'll learn how to...

  7. Inequality in the long run.

    Science.gov (United States)

    Piketty, Thomas; Saez, Emmanuel

    2014-05-23

    This Review presents basic facts regarding the long-run evolution of income and wealth inequality in Europe and the United States. Income and wealth inequality was very high a century ago, particularly in Europe, but dropped dramatically in the first half of the 20th century. Income inequality has surged back in the United States since the 1970s so that the United States is much more unequal than Europe today. We discuss possible interpretations and lessons for the future. Copyright © 2014, American Association for the Advancement of Science.

  8. Electroweak processes at Run 2

    CERN Document Server

    Spalla, Margherita; Sestini, Lorenzo

    2016-01-01

    We present a summary of the studies of the electroweak sector of the Standard Model at the LHC after the first year of data taking of Run 2, focusing on possible results to be achieved with the analysis of the full 2015 and 2016 data. We discuss the measurements of W and Z boson production, with particular attention to the precision determination of basic Standard Model parameters, and the study of multi-boson interactions through the analysis of boson-boson final states. This work is the result of the collaboration between scientists from the ATLAS, CMS and LHCb experiments.

  9. Running gratings in photoconductive materials

    DEFF Research Database (Denmark)

    Kukhtarev, N. V.; Kukhtareva, T.; Lyuksyutov, S. F.

    2005-01-01

    Starting from the three-dimensional version of a standard photorefractive model (STPM), we obtain a reduced, compact set of equations for an electric field based on the assumption of a quasi-steady-state fast recombination. The equations are suitable for evaluating the current induced by running gratings in the small-contrast approximation and are also applicable to the description of space-charge wave domains. We discuss spatial domain and subharmonic beam formation in bismuth silicon oxide (BSO) crystals in the framework of the small-contrast approximation of STPM. The experimental results...

  10. Google Wave Up and Running

    CERN Document Server

    Ferrate, Andres

    2010-01-01

    Catch Google Wave, the revolutionary Internet protocol and web service that lets you communicate and collaborate in real time. With this book, you'll understand how Google Wave integrates email, instant messaging (IM), wiki, and social networking functionality into a powerful and extensible platform. You'll also learn how to use its features, customize its functions, and build sophisticated extensions with Google Wave's open APIs and network protocol. Written for everyone -- from non-techies to ninja coders -- Google Wave: Up and Running provides a complete tour of this complex platform. You'...

  11. Modular Control of Treadmill vs Overground Running

    Science.gov (United States)

    Farina, Dario; Kersting, Uwe Gustav

    2016-01-01

    Motorized treadmills have been widely used in locomotion studies, although a debate remains concerning the extrapolation of results obtained from treadmill experiments to overground locomotion. Slight differences between treadmill (TRD) and overground running (OVG) kinematics and muscle activity have previously been reported. However, little is known about differences in the modular control of muscle activation in these two conditions. Therefore, we aimed at investigating differences between motor modules extracted from TRD and OVG by factorization of multi-muscle electromyographic (EMG) signals. Twelve healthy men ran on a treadmill and overground at their preferred speed while we recorded tibial acceleration and surface EMG from 11 ipsilateral lower limb muscles. We extracted motor modules representing relative weightings of synergistic muscle activations by non-negative matrix factorization from 20 consecutive gait cycles. Four motor modules were sufficient to accurately reconstruct the EMG signals in both TRD and OVG (average reconstruction quality = 92±3%). Furthermore, a good reconstruction quality (80±7%) was obtained also when muscle weightings of one condition (either OVG or TRD) were used to reconstruct the EMG data from the other condition. The peak amplitudes of activation signals showed a similar timing (pattern) across conditions. The magnitude of peak activation for the module related to initial contact was significantly greater for OVG, whereas peak activation for modules related to leg swing and preparation to landing were greater for TRD. We conclude that TRD and OVG share similar muscle weightings throughout motion. In addition, modular control for TRD and OVG is achieved with minimal temporal adjustments, which were dependent on the phase of the running cycle. PMID:27064978
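
    As a rough sketch of the factorization step described above, the module extraction can be reproduced with an off-the-shelf non-negative matrix factorization; the random matrix below merely stands in for rectified, filtered EMG envelopes, and variance accounted for (VAF) is one common reconstruction-quality metric (the study's exact preprocessing and metric are not reproduced here).

        import numpy as np
        from sklearn.decomposition import NMF

        # Placeholder for rectified, low-pass-filtered EMG envelopes:
        # 11 muscles x time samples, non-negative by construction.
        rng = np.random.default_rng(0)
        emg = np.abs(rng.normal(size=(11, 2000)))

        model = NMF(n_components=4, init="nndsvd", max_iter=500)
        W = model.fit_transform(emg)    # 11 x 4 muscle weightings (motor modules)
        H = model.components_           # 4 x time activation signals

        # Reconstruction quality as variance accounted for (VAF)
        resid = emg - W @ H
        vaf = 1.0 - (resid ** 2).sum() / (emg ** 2).sum()
        print(f"VAF = {vaf:.1%}")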

  12. The PS locomotive runs again

    CERN Multimedia

    2001-01-01

    Over forty years ago, the PS train entered service to steer the magnets of the accelerator into place... ... a service that was resumed last Tuesday. Left to right: Raymond Brown (CERN), Claude Tholomier (D.B.S.), Marcel Genolin (CERN), Gérard Saumade (D.B.S.), Ingo Ruehl (CERN), Olivier Carlier (D.B.S.), Patrick Poisot (D.B.S.), Christian Recour (D.B.S.). It is more than ten years since people at CERN heard the rumbling of the old PS train's steel wheels. Last Tuesday, the locomotive came back into service to be tested. It is nothing like the monstrous steel engines still running on conventional railways - just a small electric battery-driven vehicle employed in installing the magnets for the PS accelerator more than 40 years ago. To do so, it used the tracks that run round the accelerator. In fact, it is the grandfather of the LEP monorail. After the PS was commissioned in 1959, the little train was used more and more rarely. This is because magnets never break down, or hardly ever! In fact, the loc...

  13. Effect of Minimalist Footwear on Running Efficiency

    Science.gov (United States)

    Gillinov, Stephen M.; Laux, Sara; Kuivila, Thomas; Hass, Daniel; Joy, Susan M.

    2015-01-01

    Background: Although minimalist footwear is increasingly popular among runners, claims that minimalist footwear enhances running biomechanics and efficiency are controversial. Hypothesis: Minimalist and barefoot conditions improve running efficiency when compared with traditional running shoes. Study Design: Randomized crossover trial. Level of Evidence: Level 3. Methods: Fifteen experienced runners each completed three 90-second running trials on a treadmill, each trial performed in a different type of footwear: traditional running shoes with a heavily cushioned heel, minimalist running shoes with minimal heel cushioning, and barefoot (socked). High-speed photography was used to determine foot strike, ground contact time, knee angle, and stride cadence with each footwear type. Results: Runners had more rearfoot strikes in traditional shoes (87%) compared with minimalist shoes (67%) and socked (40%) (P = 0.03). Ground contact time was longest in traditional shoes (265.9 ± 10.9 ms) when compared with minimalist shoes (253.4 ± 11.2 ms) and socked (250.6 ± 16.2 ms) (P = 0.005). There was no difference between groups with respect to knee angle (P = 0.37) or stride cadence (P = 0.20). When comparing running socked to running with minimalist running shoes, there were no differences in measures of running efficiency. Conclusion: When compared with running in traditional, cushioned shoes, both barefoot (socked) running and minimalist running shoes produce greater running efficiency in some experienced runners, with a greater tendency toward a midfoot or forefoot strike and a shorter ground contact time. Minimalist shoes closely approximate socked running in the 4 measurements performed. Clinical Relevance: With regard to running efficiency and biomechanics, in some runners, barefoot (socked) and minimalist footwear are preferable to traditional running shoes. PMID:26131304

  14. Monte Carlo Particle Lists: MCPL

    DEFF Research Database (Denmark)

    Kittelmann, Thomas; Klinkby, Esben Bryndt; Bergbäck Knudsen, Erik

    2017-01-01

    A binary format with lists of particle state information, for interchanging particles between various Monte Carlo simulation applications, is presented. Portable C code for file manipulation is made available to the scientific community, along with converters and plugins for several popular simulation packages. Program summary: Program Title: MCPL. Program Files doi: http://dx.doi.org/10.17632/cby92vsv5g.1 Licensing provisions: CC0 for core MCPL, see LICENSE file for details. Programming language: C and C++ External routines/libraries: Geant4, MCNP, McStas, McXtrace Nature of problem: Saving...

  15. Structural change and forecasting long-run energy prices

    International Nuclear Information System (INIS)

    Bernard, J.T.; Khalaf, L.

    2004-01-01

    Fluctuating energy prices have a significant impact on the economies of industrialized nations. A recent study has shown a strong non-linear relationship between changes in oil prices and growth in gross domestic product (GDP). In order to forecast the behaviour of energy prices, a complete model must take into account domestic and international supply and demand conditions, market regulations, technological advances and geopolitics. In 1999, Pindyck suggested that for long-term forecasting, a simple model should be adopted where prices grow in real terms and at a fixed rate. This paper tests the statistical significance of Pindyck's suggested class of econometric equations that model the behaviour of long-run real energy prices. The models assume mean-reverting prices with continuous and random changes in their level and trend. They are estimated using Kalman filtering. The authors used simulation-based procedures to address the issue of non-standard test statistics and nuisance parameters. Results were reported for a standard Monte Carlo test and a maximized Monte Carlo test, and showed statistically significant instabilities for coal and natural gas prices, but not for crude oil prices. Various models were differentiated using out-of-sample forecasting exercises. 25 refs., 3 tabs

  16. Comparing Effects of Feedstock and Run Conditions on Pyrolysis Products Produced at Pilot-Scale

    Energy Technology Data Exchange (ETDEWEB)

    Dunning, Timothy C [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Gaston, Katherine R [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Wilcox, Esther [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2018-01-19

    Fast pyrolysis is a promising pathway for mass production of liquid transportable biofuels. The Thermochemical Process Development Unit (TCPDU) pilot plant at NREL is conducting research to support the Bioenergy Technologies Office's 2017 goal of a $3 per gallon biofuel. In preparation for down-selection of feedstock and run conditions, four different feedstocks were run at three different run conditions. The products produced were characterized extensively. Hot pyrolysis vapors and light gases were analyzed on a slip stream, and oil and char samples were characterized post-run.

  17. Monte Carlo techniques in radiation therapy

    CERN Document Server

    Verhaegen, Frank

    2013-01-01

    Modern cancer treatment relies on Monte Carlo simulations to help radiotherapists and clinical physicists better understand and compute radiation dose from imaging devices as well as exploit four-dimensional imaging data. With Monte Carlo-based treatment planning tools now available from commercial vendors, a complete transition to Monte Carlo-based dose calculation methods in radiotherapy could likely take place in the next decade. Monte Carlo Techniques in Radiation Therapy explores the use of Monte Carlo methods for modeling various features of internal and external radiation sources, including light ion beams. The book, the first of its kind, addresses applications of the Monte Carlo particle transport simulation technique in radiation therapy, mainly focusing on external beam radiotherapy and brachytherapy. It presents the mathematical and technical aspects of the methods in particle transport simulations. The book also discusses the modeling of medical linacs and other irradiation devices; issues specific...

  18. Running Parallel Discrete Event Simulators on Sierra

    Energy Technology Data Exchange (ETDEWEB)

    Barnes, P. D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Jefferson, D. R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-12-03

    In this proposal we consider porting the ROSS/Charm++ simulator and the discrete event models that run under its control so that they run on the Sierra architecture and make efficient use of the Volta GPUs.

  19. Mean field simulation for Monte Carlo integration

    CERN Document Server

    Del Moral, Pierre

    2013-01-01

    In the last three decades, there has been a dramatic increase in the use of interacting particle methods as a powerful tool in real-world applications of Monte Carlo simulation in computational physics, population biology, computer sciences, and statistical machine learning. Ideally suited to parallel and distributed computation, these advanced particle algorithms include nonlinear interacting jump diffusions; quantum, diffusion, and resampled Monte Carlo methods; Feynman-Kac particle models; genetic and evolutionary algorithms; sequential Monte Carlo methods; adaptive and interacting Marko...

  20. IMPLEMENTASI METODE MARKOV CHAIN MONTE CARLO DALAM PENENTUAN HARGA KONTRAK BERJANGKA KOMODITAS

    Directory of Open Access Journals (Sweden)

    PUTU AMANDA SETIAWANI

    2015-06-01

    The aim of the research is to implement the Markov Chain Monte Carlo (MCMC) simulation method to price futures contracts on cocoa commodities. The results show that MCMC is more flexible than the standard Monte Carlo (SMC) simulation method because the MCMC method uses a hit-and-run sampler algorithm to generate proposal moves that are subsequently accepted or rejected with a probability that depends on the target distribution we want to achieve. This research shows that the MCMC method is suitable for simulating the model of cocoa commodity price movement. The result of this research is a simulation of futures contract prices for the next three months and the futures contract price that must be paid at the time the contract expires. Pricing futures contracts using the MCMC method produces a cheaper contract price than standard Monte Carlo simulation.
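
    The hit-and-run move mentioned in the abstract can be sketched in a few lines of Python: propose along a uniformly random direction and accept with the Metropolis probability. The Gaussian target below is a placeholder, not the cocoa price model of the paper.

        import numpy as np

        rng = np.random.default_rng(7)

        def hit_and_run_step(x, log_target, scale=1.0):
            """One hit-and-run move: pick a uniform random direction, propose a
            Gaussian signed distance along it, and accept or reject with the
            Metropolis probability (the line proposal is symmetric)."""
            d = rng.normal(size=x.shape)
            d /= np.linalg.norm(d)                 # uniform direction on the sphere
            proposal = x + rng.normal(0.0, scale) * d
            if np.log(rng.uniform()) < log_target(proposal) - log_target(x):
                return proposal
            return x

        # Example: sample a 2-D standard normal as a stand-in target
        log_target = lambda x: -0.5 * float(x @ x)
        x, chain = np.zeros(2), []
        for _ in range(5000):
            x = hit_and_run_step(x, log_target)
            chain.append(x)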

  1. Computational efficiency using the CYBER-205 computer for the PACER Monte Carlo Program

    International Nuclear Information System (INIS)

    Candelore, N.R.; Maher, C.M.; Gast, R.C.

    1985-09-01

    The use of the large memory of the CYBER-205 and its vector data handling logic produced speedups over scalar code ranging from a factor of 7 for unit cell calculations with relatively few compositions to a factor of 5 for problems having more detailed geometry and materials. By vectorizing the neutron tracking in PACER (the collision analysis remained in scalar code), an asymptotic value of 200 neutrons/cpu-second was achieved for a batch size of 10,000 neutrons. The complete vectorization of the Monte Carlo method as performed by Brown resulted in even higher speedups in neutron processing rates over the use of scalar code. Large speedups in neutron processing rates are beneficial not only to achieve more accurate results for the neutronics calculations which are routinely done using Monte Carlo, but also to extend the use of the Monte Carlo method to applications that were previously considered impractical because of large running times

  2. Comparison of ONETRAN calculations of electron beam dose profiles with Monte Carlo and experiment

    International Nuclear Information System (INIS)

    Garth, J.C.; Woolf, S.

    1987-01-01

    Electron beam dose profiles have been calculated using a multigroup, discrete ordinates solution of the Spencer-Lewis electron transport equation. This was accomplished by introducing electron transport cross-sections into the ONETRAN code in a simple manner. The authors' purpose is to "benchmark" this electron transport model and to demonstrate its accuracy and capabilities over the energy range from 30 keV to 20 MeV. Many of their results are compared with the extensive measurements and TIGER Monte Carlo data. In general the ONETRAN results are smoother, agree with TIGER within the statistical error of the Monte Carlo histograms, and require about one tenth the running time of Monte Carlo

  3. NRMC - A GPU code for N-Reverse Monte Carlo modeling of fluids in confined media

    Science.gov (United States)

    Sánchez-Gil, Vicente; Noya, Eva G.; Lomba, Enrique

    2017-08-01

    NRMC is a parallel code for performing N-Reverse Monte Carlo modeling of fluids in confined media [V. Sánchez-Gil, E.G. Noya, E. Lomba, J. Chem. Phys. 140 (2014) 024504]. This method is an extension of the usual Reverse Monte Carlo method to obtain structural models of confined fluids compatible with experimental diffraction patterns, specifically designed to overcome the problem of slow diffusion that can appear under conditions of tight confinement. Most of the computational time in N-Reverse Monte Carlo modeling is spent in the evaluation of the structure factor for each trial configuration, a calculation that can be easily parallelized. Implementation of the structure factor evaluation in NVIDIA® CUDA so that the code can be run on GPUs leads to a speed up of up to two orders of magnitude.

  4. Electron transport in radiotherapy using local-to-global Monte Carlo

    International Nuclear Information System (INIS)

    Svatos, M.M.; Chandler, W.P.; Siantar, C.L.H.; Rathkopf, J.A.; Ballinger, C.T.

    1994-09-01

    Local-to-Global (L-G) Monte Carlo methods are a way to make three-dimensional electron transport both fast and accurate relative to other Monte Carlo methods. This is achieved by breaking the simulation into two stages: a local calculation done over small geometries having the size and shape of the "steps" to be taken through the mesh; and a global calculation which relies on a stepping code that samples the stored results of the local calculation. The increase in speed results from taking fewer steps in the global calculation than required by ordinary Monte Carlo codes and by speeding up the calculation per step. The potential for accuracy comes from the ability to use long runs of detailed codes to compile probability distribution functions (PDFs) in the local calculation. Specific examples of successful Local-to-Global algorithms are given

  5. ATLAS inner detector: the Run 1 to Run 2 transition, and first experience from Run 2

    CERN Document Server

    Dobos, Daniel; The ATLAS collaboration

    2015-01-01

    The ATLAS experiment is equipped with a tracking system, the Inner Detector, built using different technologies, silicon planar sensors (pixel and micro-strip) and gaseous drift tubes, all embedded in a 2T solenoidal magnetic field. For LHC Run 2, the system has been upgraded: taking advantage of the long shutdown, the Pixel Detector was extracted from the experiment and brought to the surface, to equip it with new service quarter panels, to repair modules and to ease installation of the Insertable B-Layer (IBL), a fourth layer of pixel detectors, installed in May 2014 between the existing Pixel Detector and a new smaller-radius beam-pipe, at a radius of 3.3 cm from the beam axis. To cope with the high radiation and pixel occupancy due to the proximity to the interaction point and the increase in luminosity that the LHC will face in Run 2, a new read-out chip in 130 nm CMOS technology and two different silicon pixel sensor technologies (planar and 3D) have been developed. SCT and TRT systems consolidation was also carri...

  6. Adding run history to CLIPS

    Science.gov (United States)

    Tuttle, Sharon M.; Eick, Christoph F.

    1991-01-01

    To debug a C Language Integrated Production System (CLIPS) program, certain 'historical' information about a run is needed. It would be convenient for system builders to have the capability to request such information. We will discuss how historical Rete networks can be used for answering questions that help a system builder detect the cause of an error in a CLIPS program. Moreover, the cost of maintaining a historical Rete network is compared with that for a classical Rete network. We will demonstrate that the cost for assertions is only slightly higher for a historical Rete network. The cost for handling retraction could be significantly higher; however, we will show that by using special data structures that rely on hashing, it is also possible to implement retractions efficiently.

  7. Monte Carlo simulations of neutron scattering instruments

    International Nuclear Information System (INIS)

    Aestrand, Per-Olof; Copenhagen Univ.; Lefmann, K.; Nielsen, K.

    2001-01-01

    A Monte Carlo simulation is an important computational tool used in many areas of science and engineering. The use of Monte Carlo techniques for simulating neutron scattering instruments is discussed. The basic ideas, techniques and approximations are presented. Since the construction of a neutron scattering instrument is very expensive, Monte Carlo software used for the design of instruments has to be validated and tested extensively. The McStas software was designed with these aspects in mind and some of the basic principles of the McStas software will be discussed. Finally, some future prospects are discussed for using Monte Carlo simulations in optimizing neutron scattering experiments. (R.P.)

  8. Monte Carlo surface flux tallies

    International Nuclear Information System (INIS)

    Favorite, Jeffrey A.

    2010-01-01

    Particle fluxes on surfaces are difficult to calculate with Monte Carlo codes because the score requires a division by the surface-crossing angle cosine, and grazing angles lead to inaccuracies. We revisit the standard practice of dividing by half of a cosine 'cutoff' for particles whose surface-crossing cosines are below the cutoff. The theory behind this approximation is sound, but the application of the theory to all possible situations does not account for two implicit assumptions: (1) the grazing band must be symmetric about 0, and (2) a single linear expansion for the angular flux must be applied in the entire grazing band. These assumptions are violated in common circumstances; for example, for separate in-going and out-going flux tallies on internal surfaces, and for out-going flux tallies on external surfaces. In some situations, dividing by two-thirds of the cosine cutoff is more appropriate. If users were able to control both the cosine cutoff and the substitute value, they could use these parameters to make accurate surface flux tallies. The procedure is demonstrated in a test problem in which Monte Carlo surface fluxes in cosine bins are converted to angular fluxes and compared with the results of a discrete ordinates calculation.
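
    In code, the practice being revisited amounts to substituting a fixed value for the cosine of grazing crossings; a minimal sketch in Python (the function name and defaults are illustrative):

        def surface_flux_score(mu, cutoff=0.1, substitute=None):
            """Score one surface crossing in a surface flux tally.

            mu is the cosine between the particle direction and the surface
            normal. Ordinary crossings score 1/|mu|; grazing crossings
            (|mu| < cutoff) score 1/substitute instead. The standard practice
            is substitute = cutoff/2; the two-thirds variant discussed above
            corresponds to substitute = 2*cutoff/3.
            """
            if substitute is None:
                substitute = cutoff / 2.0
            amu = abs(mu)
            return 1.0 / amu if amu >= cutoff else 1.0 / substitute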

  9. Robotic Bipedal Running : Increasing disturbance rejection

    NARCIS (Netherlands)

    Karssen, J.G.D.

    2013-01-01

    The goal of the research presented in this thesis is to increase the understanding of the human running gait. The understanding of the human running gait is essential for the development of devices, such as prostheses and orthoses, that enable disabled people to run or that enable able people to...

  10. Barefoot running survey: Evidence from the field

    Directory of Open Access Journals (Sweden)

    David Hryvniak

    2014-06-01

    Conclusion: Prior studies have found that barefoot running often changes biomechanics compared to shod running with a hypothesized relationship of decreased injuries. This paper reports the result of a survey of 509 runners. The results suggest that a large percentage of this sample of runners experienced benefits or no serious harm from transitioning to barefoot or minimal shoe running.

  11. Understanding the T2 traffic in CMS during Run-1

    Science.gov (United States)

    T, Wildish

    2015-12-01

    In the run-up to Run-1 CMS was operating its facilities according to the MONARC model, where data-transfers were strictly hierarchical in nature. Direct transfers between Tier-2 nodes were excluded, being perceived as operationally intensive and risky in an era where the network was expected to be a major source of errors. By the end of Run-1 wide-area networks were more capable and stable than originally anticipated. The original data-placement model was relaxed, and traffic was allowed between Tier-2 nodes. Tier-2 to Tier-2 traffic in 2012 already exceeded the amount of Tier-2 to Tier-1 traffic, so it clearly has the potential to become important in the future. Moreover, while Tier-2 to Tier-1 traffic is mostly upload of Monte Carlo data, the Tier-2 to Tier-2 traffic represents data moved in direct response to requests from the physics analysis community. As such, problems or delays there are more likely to have a direct impact on the user community. Tier-2 to Tier-2 traffic may also traverse parts of the WAN that are at the 'edge' of our network, with limited network capacity or reliability compared to, say, the Tier-0 to Tier-1 traffic, which goes over the LHCOPN network. CMS is looking to exploit technologies that allow us to interact with the network fabric so that it can manage our traffic better for us; this we hope to achieve before the end of Run-2. Tier-2 to Tier-2 traffic would be the most interesting use-case for such traffic management, precisely because it is close to the users' analysis and far from the 'core' network infrastructure. As such, a better understanding of our Tier-2 to Tier-2 traffic is important. Knowing the characteristics of our data-flows can help us place our data more intelligently. Knowing how widely the data moves can help us anticipate the requirements for network capacity, and inform the dynamic data placement algorithms we expect to have in place for Run-2. This paper presents an analysis of the CMS Tier-2 traffic during Run 1.

  12. On the use of stochastic approximation Monte Carlo for Monte Carlo integration

    KAUST Repository

    Liang, Faming

    2009-01-01

    The stochastic approximation Monte Carlo (SAMC) algorithm has recently been proposed as a dynamic optimization algorithm in the literature. In this paper, we show in theory that the samples generated by SAMC can be used for Monte Carlo integration

  13. Implementation of a Monte Carlo simulation environment for fully 3D PET on a high-performance parallel platform

    CERN Document Server

    Zaidi, H; Morel, Christian

    1998-01-01

    This paper describes the implementation of the Eidolon Monte Carlo program designed to simulate fully three-dimensional (3D) cylindrical positron tomographs on a MIMD parallel architecture. The original code was written in Objective-C and developed under the NeXTSTEP development environment. The different steps involved in porting the software onto a parallel architecture based on PowerPC 604 processors running under AIX 4.1 are presented. Basic aspects and strategies of running Monte Carlo calculations on parallel computers are described. A linear decrease of the computing time was achieved with the number of computing nodes. The improved time performance resulting from parallelisation of the Monte Carlo calculations makes it an attractive tool for modelling photon transport in 3D positron tomography. The parallelisation paradigm used in this work is independent of the chosen parallel architecture

  14. Simultaneous Monte Carlo zero-variance estimates of several correlated means

    International Nuclear Information System (INIS)

    Booth, T.E.

    1997-08-01

    Zero variance procedures have been in existence since the dawn of Monte Carlo. Previous works all treat the problem of zero variance solutions for a single tally. One often wants to get low variance solutions to more than one tally. When the sets of random walks needed for two tallies are similar, it is more efficient to do zero variance biasing for both tallies in the same Monte Carlo run, instead of two separate runs. The theory presented here correlates the random walks of particles by the similarity of their tallies. Particles with dissimilar tallies rapidly become uncorrelated whereas particles with similar tallies will stay correlated through most of their random walk. The theory herein should allow practitioners to make efficient use of zero-variance biasing procedures in practical problems

  15. Estimating Model Probabilities using Thermodynamic Markov Chain Monte Carlo Methods

    Science.gov (United States)

    Ye, M.; Liu, P.; Beerli, P.; Lu, D.; Hill, M. C.

    2014-12-01

    Markov chain Monte Carlo (MCMC) methods are widely used to evaluate model probability for quantifying model uncertainty. In a general procedure, MCMC simulations are first conducted for each individual model, and the MCMC parameter samples are then used to approximate the marginal likelihood of the model by calculating the geometric mean of the joint likelihood of the model and its parameters. It has been found that this geometric-mean method suffers from a low convergence rate. A simple test case shows that even millions of MCMC samples are insufficient to yield an accurate estimate of the marginal likelihood. To resolve this problem, a thermodynamic method is used, with multiple MCMC runs at different values of a heating coefficient between zero and one. When the heating coefficient is zero, the MCMC run is equivalent to a random walk MC in the prior parameter space; when the heating coefficient is one, the MCMC run is the conventional one. For a simple case with an analytical form of the marginal likelihood, the thermodynamic method yields a more accurate estimate than the geometric-mean method. This is also demonstrated for a groundwater modeling case considering four alternative models postulated based on different conceptualizations of a confining layer. This groundwater example shows that model probabilities estimated using the thermodynamic method are more reasonable than those obtained using the geometric method. The thermodynamic method is general, and can be used for a wide range of environmental problems for model uncertainty quantification.
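
    A compact sketch of the thermodynamic estimate: run one tempered MCMC chain per value of the heating coefficient (beta), average the log-likelihood within each chain, and integrate over beta; the function name and quadrature rule are assumptions made for illustration.

        import numpy as np

        def log_marginal_likelihood(betas, loglike_draws):
            """Path-sampling / power-posterior estimate:
                log Z = integral over beta in [0, 1] of E_beta[log L] d(beta)
            betas:         increasing heating coefficients (0 = prior, 1 = posterior)
            loglike_draws: loglike_draws[i] holds log-likelihood values sampled
                           from the power posterior p(theta | y, beta_i)"""
            expectations = [np.mean(d) for d in loglike_draws]
            return np.trapz(expectations, betas)   # trapezoidal rule over the ladder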

  16. The research program of the Liquid Scintillation Detector (LSD) in the Mont Blanc Laboratory

    Science.gov (United States)

    Dadykin, V. L.; Yakushev, V. F.; Korchagin, P. V.; Korchagin, V. B.; Malgin, A. S.; Ryassny, F. G.; Ryazhskaya, O. G.; Talochkin, V. P.; Zatsepin, G. T.; Badino, G.

    1985-01-01

    A massive (90 tons) liquid scintillation detector (LSD) has been running since October 1984 in the Mont Blanc Laboratory at a depth of 5,200 hg/sq cm of standard rock. The research program of the experiment covers a variety of topics in particle physics and astrophysics. The performance of the detector and the main fields of research are presented, and the preliminary results are discussed.

  17. A quick and easy improvement of Monte Carlo codes for simulation

    Science.gov (United States)

    Lebrere, A.; Talhi, R.; Tripathy, M.; Pyée, M.

    The simulation of trials of independent random variables of a given distribution is a critical element of running Monte Carlo codes. This is usually performed using pseudo-random number generators (in most cases linear congruential ones). We present here an alternative way to generate sequences with given statistical properties. These sequences are purely deterministic, are given by closed formulae, and can in some cases give better results than classical generators.
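
    The paper's closed formulae are not quoted in the abstract; as one familiar instance of a deterministic sequence given by a closed formula, a Weyl (additive-recurrence) sequence can replace pseudo-random draws in simple integration tasks:

        import math

        def weyl_sequence(n, alpha=math.sqrt(2.0) - 1.0):
            """u_k = frac(k * alpha), k = 1..n. For irrational alpha the points
            equidistribute on [0, 1) (Weyl), with no random numbers involved."""
            return [(k * alpha) % 1.0 for k in range(1, n + 1)]

        # Deterministic estimate of the integral of x^2 over [0, 1] (exact: 1/3)
        pts = weyl_sequence(100000)
        print(sum(x * x for x in pts) / len(pts))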

  18. Mathematical analysis of running performance and world running records.

    Science.gov (United States)

    Péronnet, F; Thibault, G

    1989-07-01

    The objective of this study was to develop an empirical model relating human running performance to some characteristics of metabolic energy-yielding processes using A, the capacity of anaerobic metabolism (J/kg); MAP, the maximal aerobic power (W/kg); and E, the reduction in peak aerobic power with the natural logarithm of race duration T, when T > T_MAP = 420 s. Accordingly, the model developed describes the average power output P_T (W/kg) sustained over any T as P_T = (S/T)(1 - e^(-T/k2)) + (1/T) ∫_0^T [BMR + B(1 - e^(-t/k1))] dt, where S = A and B = MAP - BMR (basal metabolic rate) when T < T_MAP; and S = A + [A f ln(T/T_MAP)] and B = (MAP - BMR) + [E ln(T/T_MAP)] when T > T_MAP; k1 = 30 s and k2 = 20 s are time constants describing the kinetics of aerobic and anaerobic metabolism, respectively, at the beginning of exercise; f is a constant describing the reduction in the amount of energy provided by anaerobic metabolism with increasing T; and t is the time from the onset of the race. This model accurately estimates actual power outputs sustained over a wide range of events; e.g., the average absolute error between actual and estimated T for men's 1987 world records from 60 m to the marathon is 0.73%. In addition, satisfactory estimates of the metabolic characteristics of world-class male runners were made as follows: A = 1,658 J/kg; MAP = 83.5 ml O2.kg-1.min-1; 83.5% MAP sustained over the marathon distance. Application of the model to analysis of the evolution of A, MAP, and E, and of the progression of men's and women's world records over the years, is presented.
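
    A direct transcription of the model, with the time integral evaluated in closed form; the parameter values in the example call are illustrative placeholders (only A = 1658 J/kg is quoted in the abstract), not the fitted values of the paper.

        import math

        def average_power(T, A, MAP, BMR, f, E, k1=30.0, k2=20.0, T_MAP=420.0):
            """Average power P_T (W/kg) sustained over a race of duration T (s),
            following the model quoted above; for T > T_MAP both the anaerobic
            capacity and the peak aerobic power decrease with ln(T/T_MAP)."""
            if T > T_MAP:
                S = A + A * f * math.log(T / T_MAP)
                B = (MAP - BMR) + E * math.log(T / T_MAP)
            else:
                S, B = A, MAP - BMR
            anaerobic = (S / T) * (1.0 - math.exp(-T / k2))
            # (1/T) * integral_0^T [BMR + B(1 - e^(-t/k1))] dt in closed form:
            aerobic = BMR + B * (1.0 - (k1 / T) * (1.0 - math.exp(-T / k1)))
            return anaerobic + aerobic

        # Illustrative call; MAP here is expressed in W/kg, not ml O2/kg/min
        print(average_power(T=600.0, A=1658.0, MAP=28.0, BMR=1.2, f=-0.1, E=-1.5))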

  19. General Monte Carlo code MONK

    International Nuclear Information System (INIS)

    Moore, J.G.

    1974-01-01

    The Monte Carlo code MONK is a general program written to provide a high degree of flexibility to the user. MONK is distinguished by its detailed representation of nuclear data in point form, i.e., the cross-section is tabulated at specific energies instead of the more usual group representation. The nuclear data are unadjusted in the point form, but recently the code has been modified to accept adjusted group data as used in fast and thermal reactor applications. The various geometrical handling capabilities and importance sampling techniques are described. In addition to the nuclear data aspects, the following features are also described: geometrical handling routines, tracking cycles, neutron source and output facilities. 12 references. (U.S.)

  20. Advanced Computational Methods for Monte Carlo Calculations

    Energy Technology Data Exchange (ETDEWEB)

    Brown, Forrest B. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2018-01-12

    This course is intended for graduate students who already have a basic understanding of Monte Carlo methods. It focuses on advanced topics that may be needed for thesis research, for developing new state-of-the-art methods, or for working with modern production Monte Carlo codes.

  1. Nested Sampling with Constrained Hamiltonian Monte Carlo

    OpenAIRE

    Betancourt, M. J.

    2010-01-01

    Nested sampling is a powerful approach to Bayesian inference ultimately limited by the computationally demanding task of sampling from a heavily constrained probability distribution. An effective algorithm in its own right, Hamiltonian Monte Carlo is readily adapted to efficiently sample from any smooth, constrained distribution. Utilizing this constrained Hamiltonian Monte Carlo, I introduce a general implementation of the nested sampling algorithm.

  2. Progression in Running Intensity or Running Volume and the Development of Specific Injuries in Recreational Runners

    DEFF Research Database (Denmark)

    Ramskov, Daniel; Rasmussen, Sten; Sørensen, Henrik

    2018-01-01

    ...training. Participants were randomized to one of two running schedules: Schedule Intensity (Sch-I) or Schedule Volume (Sch-V). Sch-I progressed the amount of high intensity running (≥88% VO2max) each week. Sch-V progressed total weekly running volume. A global positioning system watch or smartphone collected data on running...

  3. LHCf completes its first run

    CERN Multimedia

    CERN Bulletin

    2010-01-01

    LHCf, one of the three smaller experiments at the LHC, has completed its first run. The detectors were removed last week and the analysis of data is continuing. The first results will be ready by the end of the year.   One of the two LHCf detectors during the removal operations inside the LHC tunnel. LHCf is made up of two independent detectors located in the tunnel 140 m either side of the ATLAS collision point. The experiment studies the secondary particles created during the head-on collisions in the LHC because they are similar to those created in a cosmic ray shower produced when a cosmic particle hits the Earth's atmosphere. The focus of the experiment is to compare the various shower models used to estimate the primary energy of ultra-high-energy cosmic rays. The energy of proton-proton collisions at the LHC will be equivalent to a cosmic ray of 10^17 eV hitting the atmosphere, very close to the highest energies observed in the sky. "We have now completed the fir...

  4. Daytime Running Lights. Public Consultation

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2009-12-15

    The Road Safety Authority is considering the policy options available to promote the use of Daytime Running Lights (DRL), including the possibility of mandating the use of DRL on all vehicles. An EC Directive would make DRL mandatory for new vehicles from 2011 onwards and by 2024 it is predicted that due to the natural replacement of the national fleet, almost all vehicles would be equipped with DRL. The RSA is inviting views on introducing DRL measures earlier, whereby all road vehicles would be required to use either dipped head lights during hours of daylight or dedicated DRL from next year onwards. The use of DRL has been found to enhance the visibility of vehicles, thereby increasing road safety by reducing the number and severity of collisions. This paper explores the benefits of DRL and the implications for all road users including pedestrians, cyclists and motorcyclists. In order to ensure a comprehensive consideration of all the issues, the Road Safety Authority is seeking the views and advice of interested parties.

  5. SRNA-2K5, Proton Transport Using 3-D by Monte Carlo Techniques

    International Nuclear Information System (INIS)

    Ilic, Radovan D.

    2005-01-01

    1 - Description of program or function: SRNA-2K5 performs Monte Carlo transport simulation of protons in a 3D source and 3D geometry of arbitrary materials. The proton transport is based on a condensed-history model and on a model of compound-nucleus decays created in nonelastic nuclear interactions by proton absorption. 2 - Methods: The SRNA-2K5 package was developed for time-independent simulation of proton transport by Monte Carlo techniques for numerical experiments in complex geometry, using PENGEOM from PENELOPE, with different material compositions and an arbitrary spectrum of protons generated from the 3D source. The package was developed for 3D proton dose distributions in proton therapy and dosimetry, and is based on the theory of multiple scattering. The compound-nucleus decay was simulated by our model and the Russian MSDM model using ICRU 49 and ICRU 63 data. If the proton trajectory is divided into a great number of steps, the proton's passage can be simulated according to Berger's condensed random walk model. The conditions on the angular distribution and the fluctuation of energy loss determine the step length. The physical picture of these processes is described by the stopping power, Moliere's angular distribution, Vavilov's distribution with Sulek's correction over all electron orbits, and Chadwick's cross sections for nonelastic nuclear interactions, obtained with his GNASH code. According to this physical picture of the proton's passage, and with the probabilities of proton transitions from one stage to the next prepared by the SRNADAT program, the simulation of proton transport in all SRNA codes runs according to the usual Monte Carlo scheme: (i) a proton with energy, position and solid angle randomly chosen from the prepared spectrum is emitted from the source; (ii) the proton loses the average energy on the step; (iii) on that step, the proton experiences a great number of collisions, and its direction of movement changes by an angle randomly chosen from the angular distribution; (iv) a random fluctuation is added to the average energy loss; (v...
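
    Reduced to a depth coordinate and a single accumulated deflection angle, the scheme (i)-(v) can be sketched as follows; the three sampling callables stand in for the transition probabilities prepared by SRNADAT, and every interface and number here is an assumption made for illustration.

        import numpy as np

        rng = np.random.default_rng(42)

        def condensed_history(e0, stopping_power, sample_deflection,
                              sample_straggling, step=0.05, e_cut=1.0):
            """One proton history following steps (i)-(v) above, simplified to
            1D-depth geometry. The callables return the mean stopping power
            (MeV/cm), a Moliere-like deflection angle, and a Vavilov-like
            energy-loss fluctuation for a given energy and step length."""
            energy, depth, theta = e0, 0.0, 0.0           # (i) emitted along the axis
            while energy > e_cut:
                de = stopping_power(energy) * step        # (ii) mean loss on the step
                theta += sample_deflection(energy, step)  # (iii) accumulated deflection
                energy -= de + sample_straggling(energy, step)  # (iv) random fluctuation
                depth += step * np.cos(theta)             # (v) advance, then repeat
            return depth, energy

        # Toy stand-ins for the tabulated data (not physical values):
        depth, e = condensed_history(
            e0=150.0,
            stopping_power=lambda e: 5.0 + 400.0 / e,
            sample_deflection=lambda e, s: rng.normal(0.0, 0.01 * np.sqrt(s)),
            sample_straggling=lambda e, s: rng.normal(0.0, 0.05 * np.sqrt(s)),
        )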

  6. Monte Carlo Treatment Planning for Advanced Radiotherapy

    DEFF Research Database (Denmark)

    Cronholm, Rickard

    This Ph.D. project describes the development of a workflow for Monte Carlo Treatment Planning for clinical radiotherapy plans. The workflow may be utilized to perform an independent dose verification of treatment plans. Modern radiotherapy treatment delivery is often conducted by dynamically...... modulating the intensity of the field during the irradiation. The workflow described has the potential to fully model the dynamic delivery, including gantry rotation during irradiation, of modern radiotherapy. Three cornerstones of Monte Carlo Treatment Planning are identified: building, commissioning...... and validation of a Monte Carlo model of a medical linear accelerator (i), converting a CT scan of a patient to a Monte Carlo compliant phantom (ii) and translating the treatment plan parameters (including beam energy, angles of incidence, collimator settings etc.) to a Monte Carlo input file (iii). A protocol...
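    Cornerstone (ii), converting a CT scan into a Monte Carlo compliant phantom, amounts to mapping each voxel's Hounsfield units to a material label and a mass density. The sketch below uses invented segmentation thresholds and a toy linear density ramp; clinical workflows use scanner-specific calibration curves.

```python
import numpy as np

# Illustrative HU-to-material segmentation thresholds; real workflows
# use a scanner-specific stoichiometric calibration.
MATERIALS = [
    (-1000, -950, "Air"),
    (-950,  -120, "Lung"),
    (-120,   100, "SoftTissue"),
    (100,   3000, "Bone"),
]

def hu_to_density(hu):
    """Toy piecewise-linear mass density (g/cm^3) from Hounsfield units."""
    return np.clip(1.0 + hu / 1000.0, 0.001, 3.0)

def hu_to_material(hu):
    for lo, hi, name in MATERIALS:
        if lo <= hu < hi:
            return name
    return "SoftTissue"

# A 3-voxel 'CT scan' converted to a Monte Carlo phantom description.
ct = np.array([-980, 40, 700])
phantom = [(hu_to_material(h), float(hu_to_density(h))) for h in ct]
print(phantom)  # [('Air', 0.02), ('SoftTissue', 1.04), ('Bone', 1.7)]
```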

  7. The MC21 Monte Carlo Transport Code

    International Nuclear Information System (INIS)

    Sutton TM; Donovan TJ; Trumbull TH; Dobreff PS; Caro E; Griesheimer DP; Tyburski LJ; Carpenter DC; Joo H

    2007-01-01

    MC21 is a new Monte Carlo neutron and photon transport code currently under joint development at the Knolls Atomic Power Laboratory and the Bettis Atomic Power Laboratory. MC21 is the Monte Carlo transport kernel of the broader Common Monte Carlo Design Tool (CMCDT), which is also currently under development. The vision for CMCDT is to provide an automated, computer-aided modeling and post-processing environment integrated with a Monte Carlo solver that is optimized for reactor analysis. CMCDT represents a strategy to push the Monte Carlo method beyond its traditional role as a benchmarking tool or "tool of last resort" and into a dominant design role. This paper describes various aspects of the code, including the neutron physics and nuclear data treatments, the geometry representation, and the tally and depletion capabilities

  8. Monte Carlo simulation in nuclear medicine

    International Nuclear Information System (INIS)

    Morel, Ch.

    2007-01-01

    The Monte Carlo method allows for simulating random processes by using series of pseudo-random numbers. It became an important tool in nuclear medicine to assist in the design of new medical imaging devices, optimise their use and analyse their data. Presently, the sophistication of the simulation tools allows the introduction of Monte Carlo predictions in data correction and image reconstruction processes. The ability to simulate time-dependent processes opens up new horizons for Monte Carlo simulation in nuclear medicine. In the near future, these developments will allow imaging and dosimetry issues to be tackled simultaneously and, soon, case-specific Monte Carlo simulations may become part of the nuclear medicine diagnostic process. This paper describes some Monte Carlo method basics and the sampling methods that were developed for it. It gives a referenced list of different simulation software used in nuclear medicine and enumerates some of their present and prospective applications. (author)
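    Of the sampling methods the paper surveys, inverse-transform sampling is the most basic: a uniform pseudo-random number is pushed through the inverse cumulative distribution function. The example below draws photon free path lengths from an exponential attenuation law, a generic textbook case not tied to any particular package listed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_free_path(mu, n):
    """Inverse-transform sampling of photon free path lengths.

    For an exponential attenuation law p(x) = mu * exp(-mu * x), the CDF is
    F(x) = 1 - exp(-mu * x), so x = -ln(1 - u) / mu with u ~ Uniform(0, 1).
    """
    u = rng.random(n)
    return -np.log(1.0 - u) / mu

paths = sample_free_path(mu=0.2, n=100_000)   # mu in cm^-1
print(f"mean free path: {paths.mean():.3f} cm (theory: {1 / 0.2} cm)")
```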

  9. A midway forward-adjoint coupling method for neutron and photon Monte Carlo transport

    International Nuclear Information System (INIS)

    Serov, I.V.; John, T.M.; Hoogenboom, J.E.

    1999-01-01

    The midway Monte Carlo method for calculating detector responses combines a forward and an adjoint Monte Carlo calculation. In both calculations, particle scores are registered at a surface to be chosen by the user somewhere between the source and detector domains. The theory of the midway response determination is developed within the framework of transport theory for external sources and for criticality theory. The theory is also developed for photons, which are generated at inelastic scattering or capture of neutrons. In either the forward or the adjoint calculation a so-called black absorber technique can be applied; i.e., particles need not be followed after passing the midway surface. The midway Monte Carlo method is implemented in the general-purpose MCNP Monte Carlo code. The midway Monte Carlo method is demonstrated to be very efficient in problems with deep penetration, small source and detector domains, and complicated streaming paths. All the problems considered pose difficult variance reduction challenges. Calculations were performed using existing variance reduction methods of normal MCNP runs and using the midway method. The performed comparative analyses show that the midway method appears to be much more efficient than the standard techniques in an overwhelming majority of cases and can be recommended for use in many difficult variance reduction problems of neutral particle transport
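    The coupling idea can be made concrete on a toy 1-D slab: forward histories are scored where they cross the midway surface, each crossing is weighted by an adjoint importance taken from the detector side, and the black absorber technique terminates histories at the surface. In this sketch the "adjoint" is an analytic exponential so the estimate can be checked against the exact answer; a real midway calculation obtains it from a second Monte Carlo run.

```python
import numpy as np

rng = np.random.default_rng(1)
MU = 0.5          # total interaction coefficient (cm^-1), toy value
X_MID = 5.0       # midway surface between source (x=0) and detector (x=10)
X_DET = 10.0

def adjoint_importance(x):
    # Toy adjoint solution: probability that a particle at x (moving +x)
    # reaches the detector without interacting.
    return np.exp(-MU * (X_DET - x))

def midway_estimate(n):
    """Forward histories scored at the midway surface, weighted by the
    adjoint importance: the coupling idea behind the midway method."""
    response = 0.0
    for _ in range(n):
        # Forward particle from the source: does it reach the midway surface?
        flight = rng.exponential(1.0 / MU)
        if flight >= X_MID:                     # crossed the midway surface
            response += adjoint_importance(X_MID)
        # 'Black absorber': the history is terminated at the midway surface.
    return response / n

print(f"midway estimate : {midway_estimate(200_000):.5f}")
print(f"analytic answer : {np.exp(-MU * X_DET):.5f}")
```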

  10. Effects of a concurrent strength and endurance training on running performance and running economy in recreational marathon runners.

    Science.gov (United States)

    Ferrauti, Alexander; Bergermann, Matthias; Fernandez-Fernandez, Jaime

    2010-10-01

    The purpose of this study was to investigate the effects of a concurrent strength and endurance training program on the running performance and running economy of middle-aged runners during their marathon preparation. Twenty-two (8 women and 14 men) recreational runners (mean ± SD: age 40.0 ± 11.7 years; body mass index 22.6 ± 2.1 kg·m⁻²) were separated into 2 groups (n = 11 each; combined endurance running and strength training program [ES]: 9 men, 2 women; endurance running only [E]: 7 men, 4 women). Both completed an 8-week intervention period that consisted of either endurance training (E: 276 ± 108 minutes of running per week) or a combined endurance and strength training program (ES: 240 ± 121 minutes of running plus 2 strength training sessions per week [120 minutes]). Strength training was focused on trunk (strength endurance program) and leg muscles (high-intensity program). Before and after the intervention, subjects completed an incremental treadmill run and maximal isometric strength tests. The initial values for VO2peak (ES: 52.0 ± 6.1 vs. E: 51.1 ± 7.5 ml·kg⁻¹·min⁻¹) and anaerobic threshold (ES: 3.5 ± 0.4 vs. E: 3.4 ± 0.5 m·s⁻¹) were identical in both groups. A significant time × intervention effect was found for maximal isometric force of knee extension (ES: from 4.6 ± 1.4 to 6.2 ± 1.0 N·kg⁻¹, p < 0.05). Running economy remained unchanged at marathon running velocities (2.4 and 2.8 m·s⁻¹) and submaximal blood lactate thresholds (2.0, 3.0, and 4.0 mmol·L⁻¹). Stride length and stride frequency also remained unchanged. The results suggest no benefit of an 8-week concurrent strength training program for the running economy and coordination of recreational marathon runners, despite a clear improvement in leg strength, possibly because of an insufficient sample size or the short intervention period.

  11. Impact Accelerations of Barefoot and Shod Running.

    Science.gov (United States)

    Thompson, M; Seegmiller, J; McGowan, C P

    2016-05-01

    During the ground contact phase of running, the body's mass is rapidly decelerated resulting in forces that propagate through the musculoskeletal system. The repetitive attenuation of these impact forces is thought to contribute to overuse injuries. Modern running shoes are designed to reduce impact forces, with the goal to minimize running related overuse injuries. Additionally, the fore/mid foot strike pattern that is adopted by most individuals when running barefoot may reduce impact force transmission. The aim of the present study was to compare the effects of the barefoot running form (fore/mid foot strike & decreased stride length) and running shoes on running kinetics and impact accelerations. 10 healthy, physically active, heel strike runners ran in 3 conditions: shod, barefoot and barefoot while heel striking, during which 3-dimensional motion analysis, ground reaction force and accelerometer data were collected. Shod running was associated with increased ground reaction force and impact peak magnitudes, but decreased impact accelerations, suggesting that the midsole of running shoes helps to attenuate impact forces. Barefoot running exhibited a similar decrease in impact accelerations, as well as decreased impact peak magnitude, which appears to be due to a decrease in stride length and/or a more plantarflexed position at ground contact. © Georg Thieme Verlag KG Stuttgart · New York.

  12. Quality assurance for the ALICE Monte Carlo procedure

    CERN Document Server

    Ajaz, M; Hristov, Peter; Revol, Jean Pierre

    2009-01-01

    We implement the already existing macro $ALICE_ROOT/STEER/CheckESD.C, which is run after reconstruction to compute the physics efficiency, as a task that will run in a PROOF framework such as CAF. The task was implemented in a C++ class called AliAnalysisTaskCheckESD, which inherits from the AliAnalysisTaskSE base class. The function of AliAnalysisTaskCheckESD is to compute the ratio of the number of reconstructed particles to the number of particles generated by the Monte Carlo generator. The class AliAnalysisTaskCheckESD was successfully implemented. It was used during the production for first physics and made it possible to discover several problems (missing tracks in the MUON arm reconstruction, low efficiency in the PHOS detector, etc.). The code is committed to the SVN repository and will become a standard tool for quality assurance.
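    The core computation of such a task, the per-species ratio of reconstructed to generated particles, is shown below as a generic sketch; it deliberately avoids the AliRoot/AliAnalysisTaskSE API and uses hypothetical particle lists.

```python
from collections import Counter

# Hypothetical event content: particle species produced by the Monte
# Carlo generator and the subset that survived reconstruction.
generated = ["pi+", "pi+", "K-", "mu+", "pi+", "mu+", "K-", "pi+"]
reconstructed = ["pi+", "K-", "mu+", "pi+", "pi+"]

gen_counts = Counter(generated)
rec_counts = Counter(reconstructed)

# Efficiency = N(reconstructed) / N(generated), per species.
for species, n_gen in sorted(gen_counts.items()):
    n_rec = rec_counts.get(species, 0)
    print(f"{species:>4}: reconstructed {n_rec}/{n_gen} "
          f"-> efficiency {n_rec / n_gen:.2f}")
```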

  13. Monte Carlo modeling of the Fastscan whole body counter response

    International Nuclear Information System (INIS)

    Graham, H.R.; Waller, E.J.

    2015-01-01

    Monte Carlo N-Particle (MCNP) was used to make a model of the Fastscan whole body counter for the purpose of calibration. Two models were made: one for the Pickering Nuclear Site and one for the Darlington Nuclear Site. Once these models were benchmarked and found to be in good agreement, simulations were run to study the effect that different-sized phantoms had on the detected response; the shielding effect of torso fat was found to be non-negligible. Simulations of a source positioned externally on the anterior or posterior of a person were also conducted, to determine a ratio that could be used to establish whether a source is placed externally or internally. (author)

  14. Monte Carlo sampling strategies for lattice gauge calculations

    International Nuclear Information System (INIS)

    Guralnik, G.; Zemach, C.; Warnock, T.

    1985-01-01

    We have sought to optimize the elements of the Monte Carlo processes for thermalizing and decorrelating sequences of lattice gauge configurations and, for this purpose, to develop computational and theoretical diagnostics to compare alternative techniques. These have been applied to speed up the generation of random matrices, to compare heat bath and Metropolis stepping methods, and to study autocorrelations of sequences in terms of the classical moment problem. The efficient use of statistically correlated lattice data is an optimization problem depending on the relation between the computer time needed to generate lattice sequences of sufficiently small correlation and the time needed to analyze them. We can solve this problem with the aid of a representation of autocorrelation data for various step lags as moments of positive definite distributions, using methods known for the moment problem to put bounds on statistical variances, in place of estimating the variances by too-lengthy computer runs
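    The trade-off described hinges on how strongly successive configurations are correlated. A minimal estimator of the autocorrelation function and the integrated autocorrelation time, tested on an AR(1) series with a known answer, is sketched below (a generic diagnostic, not the authors' moment-problem machinery):

```python
import numpy as np

def autocorrelation(x, max_lag):
    """Normalized autocorrelation of a Monte Carlo time series."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    var = np.dot(x, x) / len(x)
    return np.array([np.dot(x[:len(x) - k], x[k:]) / (len(x) - k) / var
                     for k in range(max_lag + 1)])

def integrated_autocorr_time(x, max_lag=100):
    # tau_int = 1/2 + sum_k rho(k); roughly, 2*tau_int sweeps separate
    # effectively independent configurations.
    rho = autocorrelation(x, max_lag)
    return 0.5 + np.sum(rho[1:])

# AR(1) test series with a known integrated autocorrelation time.
rng = np.random.default_rng(3)
phi, n = 0.9, 200_000
series = np.empty(n)
series[0] = 0.0
for t in range(1, n):
    series[t] = phi * series[t - 1] + rng.normal()

tau = integrated_autocorr_time(series)
print(f"tau_int ~ {tau:.1f} (theory for AR(1): {0.5 + phi / (1 - phi):.1f})")
```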

  15. The neutron instrument Monte Carlo library MCLIB: Recent developments

    International Nuclear Information System (INIS)

    Seeger, P.A.; Daemen, L.L.; Hjelm, R.P. Jr.; Thelliez, T.G.

    1998-01-01

    A brief review is given of the developments made since the ICANS-XIII meeting in the neutron instrument design codes using the Monte Carlo library MCLIB. Much of the effort has been to assure that the library and the executing code MC RUN connect efficiently with the World Wide Web application MC-WEB as part of the Los Alamos Neutron Instrument Simulation Package (NISP). Since one of the most important features of MCLIB is its open structure and capability to incorporate any possible neutron transport or scattering algorithm, this document describes the current procedure that would be used by an outside user to add a feature to MCLIB. Details of the calling sequence of the core subroutine OPERATE are discussed, and questions of style are considered and additional guidelines given. Suggestions for standardization are solicited, as well as code for new algorithms

  16. Quantum Monte Carlo: Faster, More Reliable, And More Accurate

    Science.gov (United States)

    Anderson, Amos Gerald

    2010-06-01

    combination of Generalized Valence Bond wavefunctions, improved correlation functions, and stabilized weighting techniques for calculations run on graphics cards, represents a new way for using Quantum Monte Carlo to study arbitrarily sized molecules.

  17. LHC Report: Run 1 – the final flurry

    CERN Multimedia

    Mike Lamont for the LHC team

    2013-01-01

    The proton-lead run ended early on the morning of Sunday, 10 February. The run can be considered an unqualified success and a testament to the painstaking preparation by the ion team. It was followed by a few short days of proton-proton collisions at intermediate energy, after which the final physics beams of what is now being called Run 1 (2009 – 2013) were dumped at 07:24 on Thursday, 14 February.   The five weeks of operations originally scheduled for 2013 had two main objectives: the delivery of 30 inverse nanobarns with proton-lead collisions; and around 5 inverse picobarns of proton-proton collisions at a beam energy of 1.38 TeV. Both of these objectives were met. As described in previous reports, the proton-lead run has gone remarkably well for a completely novel operational mode. However, there were some issues following the switch of beam direction on Friday, 1 February. In this exercise the ions become the clockwise beam and the experiments received lead-proton instead of ...

  18. The ATLAS Trigger system upgrade and performance in Run 2

    CERN Document Server

    Shaw, Savanna Marie; The ATLAS collaboration

    2017-01-01

    The ATLAS trigger has been used very successfully for online event selection during the first part of LHC Run 2 in 2015/16 at a centre-of-mass energy of 13 TeV. The trigger system is composed of a hardware Level-1 trigger and a software-based high-level trigger; it reduces the event rate from the bunch-crossing rate of 40 MHz to an average recording rate of about 1 kHz. The excellent performance of the ATLAS trigger has been vital for the ATLAS physics programme of Run 2, selecting interesting collision events for a wide variety of physics signatures with high efficiency. The trigger selection capabilities of ATLAS during Run 2 have been significantly improved compared to Run 1, in order to cope with the higher event rates and pile-up that result from the almost doubled centre-of-mass collision energy and the increase in the instantaneous luminosity of the LHC. In order to prepare for the anticipated further luminosity increase of the LHC in 2017/18, improving the trigger performance remain...

  19. [Physiological differences between cycling and running].

    Science.gov (United States)

    Millet, Grégoire

    2009-08-05

    This review compares the differences in systemic responses (VO2max, anaerobic threshold, heart rate and economy) and in underlying mechanisms of adaptation (ventilatory and hemodynamic and neuromuscular responses) between cycling and running. VO2max is specific to the exercise modality. Overall, there is more physiological training transfer from running to cycling than vice-versa. Several other physiological differences between cycling and running are discussed: HR is different between the two activities both for maximal and sub-maximal intensities. The delta efficiency is higher in running. Ventilation is more impaired in cycling than running due to mechanical constraints. Central fatigue and decrease in maximal strength are more important after prolonged exercise in running than in cycling.

  20. Design of ProjectRun21

    DEFF Research Database (Denmark)

    Damsted, Camma; Parner, Erik Thorlund; Sørensen, Henrik

    2017-01-01

    BACKGROUND: Participation in half-marathons has been steeply increasing during the past decade. In line with this, a vast number of half-marathon running schedules have surfaced. Unfortunately, the injury incidence proportion for half-marathoners has been found to exceed 30% during 1-year follow-up. The majority of running-related injuries are suggested to develop as overuse injuries, where injury occurs if the cumulative training load over one or more training sessions exceeds the runners' load capacity for adaptive tissue repair. Owing to an increase of load capacity along with adaptive running...... the association between running experience or running pace and the risk of running-related injury. METHODS: Healthy runners between 18 and 65 years using a Global Positioning System (GPS) watch will be invited to participate in this 14-week prospective cohort study. Runners will be allowed to self-select one...

  1. Should the Air Force Teach Running Technique

    Science.gov (United States)

    2012-02-15

    barefoot running, and gait training techniques. Current research indicates efficiencies in running with a forefoot or midfoot-strike gait, and a ... recent retrospective study showed a lower injury rate in forefoot-strike runners as compared with heel-strike runners. However, there are no ... "barefoot-like" fashion and allows a forefoot or midfoot-strike gait, as opposed to the heel-strike gait style often seen with traditional running

  2. Running-in as an Engineering Optimization

    OpenAIRE

    Jamari, Jamari

    2007-01-01

    Running-in is a process which can be found in daily life. This phenomenon occurs after the start of the contact between fresh solid surfaces, resulting in changes in the surface topography, friction and wear. Before the contacting engineering solid surfaces reach a steady-state operating condition, this running-in enhances the contact performance. Running-in is very complex and is a vast problem area. Many variables occur in the running-in process, physically, mechanically or chemically. T...

  3. Run 2 ATLAS Trigger and Detector Performance

    CERN Document Server

    Solovyanov, Oleg; The ATLAS collaboration

    2018-01-01

    The second LHC run started in June 2015 with a proton-proton centre-of-mass collision energy of 13 TeV. During 2016 and 2017, the LHC delivered an unprecedented amount of luminosity under ever more challenging conditions in terms of peak luminosity, pile-up and trigger rates. In this talk, the LHC running conditions and the improvements made to the ATLAS experiment in the course of Run 2 will be discussed, and the latest ATLAS detector and trigger performance results from Run 2 will be presented.

  4. How to run ions in the future?

    International Nuclear Information System (INIS)

    Küchler, D; Manglunki, D; Scrivens, R

    2014-01-01

    In the light of different running scenarios, potential source improvements will be discussed (e.g. one month every year versus two months every other year, and the impact of the different running options [e.g. an extended ion run] on the source). As the oven refills cause most of the downtime, the oven design and refilling strategies will be presented. A test stand for off-line developments will be taken into account. The implications of extended runs for the necessary manpower will also be discussed

  5. Will ALICE run in the HL-LHC era?

    International Nuclear Information System (INIS)

    Wessels, J.P.

    2012-01-01

    We will present the perspectives for ion running in the HL-LHC era. In particular, ALICE is preparing a significant upgrade of its rate capabilities and is further extending its particle identification potential. This paves the way for heavy-ion physics at the unprecedented luminosities expected in the HL-LHC era with the heaviest ions. Here, we outline a scenario in which ALICE will be taking data at a luminosity of L > 6×10²⁷ cm⁻² s⁻¹ for Pb-Pb, with the aim of collecting at least 10 nb⁻¹. The potential interest of data-taking during high-luminosity proton runs for ATLAS and CMS will also be commented on. (author)

  6. Monte Carlo approaches to light nuclei

    International Nuclear Information System (INIS)

    Carlson, J.

    1990-01-01

    Significant progress has been made recently in the application of Monte Carlo methods to the study of light nuclei. We review new Green's function Monte Carlo results for the alpha particle, Variational Monte Carlo studies of ¹⁶O, and methods for low-energy scattering and transitions. Through these calculations, a coherent picture of the structure and electromagnetic properties of light nuclei has arisen. In particular, we examine the effect of the three-nucleon interaction and the importance of exchange currents in a variety of experimentally measured properties, including form factors and capture cross sections. 29 refs., 7 figs

  7. Monte Carlo approaches to light nuclei

    Energy Technology Data Exchange (ETDEWEB)

    Carlson, J.

    1990-01-01

    Significant progress has been made recently in the application of Monte Carlo methods to the study of light nuclei. We review new Green's function Monte Carlo results for the alpha particle, Variational Monte Carlo studies of ¹⁶O, and methods for low-energy scattering and transitions. Through these calculations, a coherent picture of the structure and electromagnetic properties of light nuclei has arisen. In particular, we examine the effect of the three-nucleon interaction and the importance of exchange currents in a variety of experimentally measured properties, including form factors and capture cross sections. 29 refs., 7 figs.

  8. Monte carlo simulation for soot dynamics

    KAUST Repository

    Zhou, Kun

    2012-01-01

    A new Monte Carlo method termed Comb-like frame Monte Carlo is developed to simulate soot dynamics. A detailed stochastic error analysis is provided. Comb-like frame Monte Carlo is coupled with the gas phase solver Chemkin II to simulate soot formation in a 1-D premixed burner-stabilized flame. The simulated soot number density, volume fraction, and particle size distribution all agree well with the measurements available in the literature. The origin of the bimodal particle size distribution is revealed with quantitative proof.

  9. Monte Carlo Codes Invited Session

    International Nuclear Information System (INIS)

    Trama, J.C.; Malvagi, F.; Brown, F.

    2013-01-01

    This document lists 22 Monte Carlo codes used in radiation transport applications throughout the world. For each code the names of the organization and country and/or place are given. We have the following computer codes. 1) ARCHER, USA, RPI; 2) COG11, USA, LLNL; 3) DIANE, France, CEA/DAM Bruyeres; 4) FLUKA, Italy and CERN, INFN and CERN; 5) GEANT4, International GEANT4 collaboration; 6) KENO and MONACO (SCALE), USA, ORNL; 7) MC21, USA, KAPL and Bettis; 8) MCATK, USA, LANL; 9) MCCARD, South Korea, Seoul National University; 10) MCNP6, USA, LANL; 11) MCU, Russia, Kurchatov Institute; 12) MONK and MCBEND, United Kingdom, AMEC; 13) MORET5, France, IRSN Fontenay-aux-Roses; 14) MVP2, Japan, JAEA; 15) OPENMC, USA, MIT; 16) PENELOPE, Spain, Barcelona University; 17) PHITS, Japan, JAEA; 18) PRIZMA, Russia, VNIITF; 19) RMC, China, Tsinghua University; 20) SERPENT, Finland, VTT; 21) SUPERMONTECARLO, China, CAS INEST FDS Team Hefei; and 22) TRIPOLI-4, France, CEA Saclay

  10. Advanced computers and Monte Carlo

    International Nuclear Information System (INIS)

    Jordan, T.L.

    1979-01-01

    High-performance parallelism that is currently available is synchronous in nature. It is manifested in such architectures as the Burroughs ILLIAC-IV, CDC STAR-100, TI ASC, CRI CRAY-1, ICL DAP, and many special-purpose array processors designed for signal processing. This form of parallelism has apparently not been of significant value to many important Monte Carlo calculations. Nevertheless, there is much asynchronous parallelism in many of these calculations. A model of a production code that requires up to 20 hours per problem on a CDC 7600 is studied for suitability on some asynchronous architectures that are on the drawing board. The code is described, and some of its properties and resource requirements are identified for comparison with the corresponding properties and resources of some asynchronous multiprocessor architectures. Arguments are made for programmer aids and special syntax to identify and support important asynchronous parallelism. 2 figures, 5 tables

  11. Adaptive Markov Chain Monte Carlo

    KAUST Repository

    Jadoon, Khan

    2016-08-08

    A substantial interpretation of electromagnetic induction (EMI) measurements requires quantifying the optimal model parameters and uncertainty of a nonlinear inverse problem. For this purpose, an adaptive Bayesian Markov chain Monte Carlo (MCMC) algorithm is used to assess multi-orientation and multi-offset EMI measurements in an agricultural field with non-saline and saline soil. In the MCMC simulations, the posterior distribution was computed using Bayes' rule. An electromagnetic forward model based on the full solution of Maxwell's equations was used to simulate the apparent electrical conductivity measured with the configurations of the EMI instrument, the CMD mini-Explorer. The model parameters and uncertainty for the three-layered earth model are investigated using synthetic data. Our results show that in the scenario of non-saline soil, the layer-thickness parameters are not as well estimated as the layer electrical conductivities, because layer thickness exhibits a low sensitivity to the EMI measurements and is hence difficult to resolve. Application of the proposed MCMC-based inversion to the field measurements in a drip irrigation system demonstrates that the model parameters can be well estimated for the saline soil as compared to the non-saline soil, and provides useful insight about parameter uncertainty for the assessment of the model outputs.
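    The "adaptive" ingredient of such a sampler is that the proposal distribution is tuned from the chain's own history. A miniature adaptive Metropolis sampler on a toy one-dimensional posterior is sketched below; the EMI forward model based on Maxwell's equations is far beyond a few lines, so a Gaussian stands in for the posterior.

```python
import numpy as np

rng = np.random.default_rng(7)

def log_posterior(theta):
    # Toy stand-in for the EMI inverse problem: a Gaussian posterior.
    return -0.5 * ((theta - 2.0) / 0.5) ** 2

def adaptive_metropolis(n_steps=20_000, adapt_every=200):
    theta, logp = 0.0, log_posterior(0.0)
    scale = 1.0
    chain, accepted = [], 0
    for step in range(1, n_steps + 1):
        prop = theta + scale * rng.normal()
        logp_prop = log_posterior(prop)
        if np.log(rng.random()) < logp_prop - logp:   # Metropolis rule
            theta, logp = prop, logp_prop
            accepted += 1
        chain.append(theta)
        # Adaptation: nudge the proposal scale toward ~40% acceptance.
        if step % adapt_every == 0:
            scale *= np.exp(accepted / step - 0.4)
    return np.array(chain)

chain = adaptive_metropolis()
burn = chain[5_000:]
print(f"posterior mean {burn.mean():.2f} +/- {burn.std():.2f}")
```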

  12. CMacIonize: Monte Carlo photoionisation and moving-mesh radiation hydrodynamics

    Science.gov (United States)

    Vandenbroucke, Bert; Wood, Kenneth

    2018-02-01

    CMacIonize simulates the self-consistent evolution of HII regions surrounding young O and B stars, or other sources of ionizing radiation. The code combines a Monte Carlo photoionization algorithm that uses a complex mix of hydrogen, helium and several coolants in order to self-consistently solve for the ionization and temperature balance at any given time, with a standard first order hydrodynamics scheme. The code can be run as a post-processing tool to get the line emission from an existing simulation snapshot, but can also be used to run full radiation hydrodynamical simulations. Both the radiation transfer and the hydrodynamics are implemented in a general way that is independent of the grid structure that is used to discretize the system, allowing it to be run both as a standard fixed grid code and also as a moving-mesh code.

  13. Equations describing contamination of run of mine coal with dirt in the Upper Silesian Coalfield

    Energy Technology Data Exchange (ETDEWEB)

    Winiewski, J J

    1977-12-01

    Statistical analysis proved that contamination with dirt of run of mine coal from seams in the series 200 to 600 of the Upper Silesian Coalfield depends on the average ash content of a given raw coal. A regression equation is deduced for coarse and fine sizes of each coal. These equations can be used to predict the degree of contamination of run of mine coal to an accuracy sufficient for coal preparation purposes.
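    Fitting a regression of this kind is straightforward; the sketch below fits a linear contamination-versus-ash model to entirely invented observations, since the paper's actual coefficients are not reproduced here.

```python
import numpy as np

# Hypothetical (mean ash content %, dirt contamination %) observations;
# the actual Upper Silesian coefficients are in the paper, not here.
ash = np.array([8.0, 12.0, 16.0, 20.0, 24.0, 28.0])
dirt = np.array([3.1, 5.0, 7.2, 8.9, 11.2, 12.8])

slope, intercept = np.polyfit(ash, dirt, 1)
print(f"dirt ~ {slope:.2f} * ash + {intercept:+.2f}")
# Predict the contamination of a run-of-mine coal with 18% average ash:
print(f"predicted contamination at 18% ash: {slope * 18 + intercept:.1f}%")
```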

  14. Responding for sucrose and wheel-running reinforcement: effect of pre-running.

    Science.gov (United States)

    Belke, Terry W

    2006-01-10

    Six male albino Wistar rats were placed in running wheels and exposed to a fixed interval 30-s schedule that produced either a drop of 15% sucrose solution or the opportunity to run for 15s as reinforcing consequences for lever pressing. Each reinforcer type was signaled by a different stimulus. To assess the effect of pre-running, animals were allowed to run for 1h prior to a session of responding for sucrose and running. Results showed that, after pre-running, response rates in the later segments of the 30-s schedule decreased in the presence of a wheel-running stimulus and increased in the presence of a sucrose stimulus. Wheel-running rates were not affected. Analysis of mean post-reinforcement pauses (PRP) broken down by transitions between successive reinforcers revealed that pre-running lengthened pausing in the presence of the stimulus signaling wheel running and shortened pauses in the presence of the stimulus signaling sucrose. No effect was observed on local response rates. Changes in pausing in the presence of stimuli signaling the two reinforcers were consistent with a decrease in the reinforcing efficacy of wheel running and an increase in the reinforcing efficacy of sucrose. Pre-running decreased motivation to respond for running, but increased motivation to work for food.

  15. The Effect of Training in Minimalist Running Shoes on Running Economy.

    Science.gov (United States)

    Ridge, Sarah T; Standifird, Tyler; Rivera, Jessica; Johnson, A Wayne; Mitchell, Ulrike; Hunter, Iain

    2015-09-01

    The purpose of this study was to examine the effect of minimalist running shoes on oxygen uptake during running before and after a 10-week transition from traditional to minimalist running shoes. Twenty-five recreational runners (no previous experience in minimalist running shoes) participated in submaximal VO2 testing at a self-selected pace while wearing traditional and minimalist running shoes. Ten of the 25 runners gradually transitioned to minimalist running shoes over 10 weeks (experimental group), while the other 15 maintained their typical training regimen (control group). All participants repeated submaximal VO2 testing at the end of 10 weeks. Testing included a 3-minute warm-up, 3 minutes of running in the first pair of shoes, and 3 minutes of running in the second pair of shoes. Shoe order was randomized. Average oxygen uptake was calculated during the last minute of running in each condition. The average change from pre- to post-training for the control group during testing in traditional and minimalist shoes was an improvement of 3.1 ± 15.2% and 2.8 ± 16.2%, respectively. The average change from pre- to post-training for the experimental group during testing in traditional and minimalist shoes was an improvement of 8.4 ± 7.2% and 10.4 ± 6.9%, respectively. Data were analyzed using a 2-way repeated measures ANOVA. There were no significant interaction effects, but the overall improvement in running economy across time (6.15%) was significant (p = 0.015). Running in minimalist running shoes improves running economy in experienced, traditionally shod runners, but not significantly more than when running in traditional running shoes. Improvement in running economy in both groups, regardless of shoe type, may have been due to compliance with training over the 10-week study period and/or familiarity with testing procedures. Key points: Running in minimalist footwear did not result in a change in running economy compared to running in traditional footwear

  16. Middle cerebral artery blood velocity during running

    NARCIS (Netherlands)

    Lyngeraa, T. S.; Pedersen, L. M.; Mantoni, T.; Belhage, B.; Rasmussen, L. S.; van Lieshout, J. J.; Pott, F. C.

    2013-01-01

    Running induces characteristic fluctuations in blood pressure (BP) of unknown consequence for organ blood flow. We hypothesized that running-induced BP oscillations are transferred to the cerebral vasculature. In 15 healthy volunteers, transcranial Doppler-determined middle cerebral artery (MCA)

  17. EnergyPlus Run Time Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Hong, Tianzhen; Buhl, Fred; Haves, Philip

    2008-09-20

    EnergyPlus is a new generation building performance simulation program offering many new modeling capabilities and more accurate performance calculations integrating building components in sub-hourly time steps. However, EnergyPlus runs much slower than the current generation simulation programs. This has become a major barrier to its widespread adoption by the industry. This paper analyzed EnergyPlus run time from comprehensive perspectives to identify key issues and challenges of speeding up EnergyPlus: studying the historical trends of EnergyPlus run time based on the advancement of computers and code improvements to EnergyPlus, comparing EnergyPlus with DOE-2 to understand and quantify the run time differences, identifying key simulation settings and model features that have significant impacts on run time, and performing code profiling to identify which EnergyPlus subroutines consume the most amount of run time. This paper provides recommendations to improve EnergyPlus run time from the modeler's perspective and adequate computing platforms. Suggestions of software code and architecture changes to improve EnergyPlus run time based on the code profiling results are also discussed.

  18. Running with technology: Where are we heading?

    DEFF Research Database (Denmark)

    Jensen, Mads Møller; Mueller, Florian 'Floyd'

    2014-01-01

    technique-related information in run-training interfaces. From that finding, this paper presents three questions to be addressed by designers of future run-training interfaces. We believe that addressing these questions will support creation of expedient interfaces that improve runners’ technique...

  19. The Second Student-Run Homeless Shelter

    Science.gov (United States)

    Seider, Scott C.

    2012-01-01

    From 1983-2011, the Harvard Square Homeless Shelter (HSHS) in Cambridge, Massachusetts, was the only student-run homeless shelter in the United States. However, college students at Villanova, Temple, Drexel, the University of Pennsylvania, and Swarthmore drew upon the HSHS model to open their own student-run homeless shelter in Philadelphia,…

  20. Performance evaluation and financial market runs

    NARCIS (Netherlands)

    Wagner, W.B.

    2013-01-01

    This paper develops a model in which performance evaluation causes runs by fund managers and results in asset fire sales. Performance evaluation nonetheless is efficient as it disciplines managers. Optimal performance evaluation combines absolute and relative components in order to make runs less

  1. Impact of Running Away on Girls' Pregnancy

    Science.gov (United States)

    Thrane, Lisa E.; Chen, Xiaojin

    2012-01-01

    This study assessed the impact of running away on pregnancy in the subsequent year among U.S. adolescents. We also investigated interactions between running away and sexual assault, romance, and school disengagement. Pregnancy among females between 11 and 17 years (n = 6100) was examined utilizing the Longitudinal Study of Adolescent Health (Add…

  2. Teaching Bank Runs with Classroom Experiments

    Science.gov (United States)

    Balkenborg, Dieter; Kaplan, Todd; Miller, Timothy

    2011-01-01

    Once relegated to cinema or history lectures, bank runs have become a modern phenomenon that captures the interest of students. In this article, the authors explain a simple classroom experiment based on the Diamond-Dybvig model (1983) to demonstrate how a bank run--a seemingly irrational event--can occur rationally. They then present possible…

  3. Training errors and running related injuries

    DEFF Research Database (Denmark)

    Nielsen, Rasmus Østergaard; Buist, Ida; Sørensen, Henrik

    2012-01-01

    The purpose of this systematic review was to examine the link between training characteristics (volume, duration, frequency, and intensity) and running related injuries.

  4. Minimum Wage Effects in the Longer Run

    Science.gov (United States)

    Neumark, David; Nizalova, Olena

    2007-01-01

    Exposure to minimum wages at young ages could lead to adverse longer-run effects via decreased labor market experience and tenure, and diminished education and training, while beneficial longer-run effects could arise if minimum wages increase skill acquisition. Evidence suggests that as individuals reach their late 20s, they earn less the longer…

  5. Long Run Relationship Between Agricultural Production And ...

    African Journals Online (AJOL)

    The study sought to estimate the impact of agricultural production on the long run economic growth in Nigeria using the Vector Error Correction Methodology. The result shows that long run relationship exists between agricultural production and economic growth in Nigeria. Among the variables in the model, crop production ...

  6. 11th International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing

    CERN Document Server

    Nuyens, Dirk

    2016-01-01

    This book presents the refereed proceedings of the Eleventh International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing that was held at the University of Leuven (Belgium) in April 2014. These biennial conferences are major events for Monte Carlo and quasi-Monte Carlo researchers. The proceedings include articles based on invited lectures as well as carefully selected contributed papers on all theoretical aspects and applications of Monte Carlo and quasi-Monte Carlo methods. Offering information on the latest developments in these very active areas, this book is an excellent reference resource for theoreticians and practitioners interested in solving high-dimensional computational problems, arising, in particular, in finance, statistics and computer graphics.

  7. Monte Carlo simulations for generic granite repository studies

    Energy Technology Data Exchange (ETDEWEB)

    Chu, Shaoping [Los Alamos National Laboratory; Lee, Joon H [SNL; Wang, Yifeng [SNL

    2010-12-08

    In a collaborative study between Los Alamos National Laboratory (LANL) and Sandia National Laboratories (SNL) for the DOE-NE Office of Fuel Cycle Technologies Used Fuel Disposition (UFD) Campaign project, we have conducted preliminary system-level analyses to support the development of a long-term strategy for geologic disposal of high-level radioactive waste. A general modeling framework consisting of a near-field and a far-field submodel for a granite GDSE was developed. A representative far-field transport model for a generic granite repository was merged with an integrated systems (GoldSim) near-field model. Integrated Monte Carlo model runs with the combined near- and far-field transport models were performed, and the parameter sensitivities were evaluated for the combined system. In addition, a subset of radionuclides that are potentially important to repository performance were identified and evaluated for a series of model runs. The analyses were conducted with different waste inventory scenarios. Analyses were also conducted for different repository radionuclide release scenarios. While the results to date are for a generic granite repository, the work establishes the method to be used in the future to provide guidance on the development of a strategy for long-term disposal of high-level radioactive waste in a granite repository.

  8. A PC version of the Monte Carlo criticality code OMEGA

    International Nuclear Information System (INIS)

    Seifert, E.

    1996-05-01

    A description of the PC version of the Monte Carlo criticality code OMEGA is given. The report contains a general description of the code together with a detailed input description. Furthermore, some examples are given illustrating the generation of an input file. The main field of application is the calculation of the criticality of arrangements of fissionable material. Geometrically complicated arrangements that often appear inside and outside a reactor, e.g. in a fuel storage or transport container, can be handled essentially without geometrical approximations. For example, the real geometry of assemblies containing hexagonal or square lattice structures can be described in full detail. Moreover, the code can be used for special investigations in the field of reactor physics and neutron transport. Many years of practical experience and comparison with reference cases have shown that the code, together with the built-in data libraries, gives reliable results. OMEGA is completely independent of other widely used criticality codes (KENO, MCNP, etc.) as regards both programming and the data base. It is good practice to run difficult criticality safety problems with different independent codes in order to mutually verify the results. In this way, OMEGA can be used as a redundant code within the family of criticality codes. An advantage of OMEGA is the short calculation time: a typical criticality safety application takes only a few minutes on a Pentium PC. Therefore, the influence of parameter variations can simply be investigated by running many variants of a problem. (orig.)

  9. Monte Carlo dose calculation algorithm on a distributed system

    International Nuclear Information System (INIS)

    Chauvie, Stephane; Dominoni, Matteo; Marini, Piergiorgio; Stasi, Michele; Pia, Maria Grazia; Scielzo, Giuseppe

    2003-01-01

    The main goal of modern radiotherapy, such as 3D conformal radiotherapy and intensity-modulated radiotherapy, is to deliver a high dose to the target volume while sparing the surrounding healthy tissue. The accuracy of dose calculation in a treatment planning system is therefore a critical issue. Among the many algorithms developed over the last years, those based on Monte Carlo have proven to be very promising in terms of accuracy. The most severe obstacle to application in clinical practice is the long time necessary for calculations. We have studied a high-performance network of personal computers as a realistic alternative to high-cost dedicated parallel hardware, to be used routinely as an instrument for the evaluation of treatment plans. We set up a Beowulf cluster, configured with 4 nodes connected with a low-cost network, and installed the MC code Geant4 to describe our irradiation facility. The MC code, once parallelised, was run on the Beowulf cluster. The first run of the full simulation showed that the time required for calculation decreased linearly as the number of distributed processes increased. The good scalability trend allows both statistically significant accuracy and good time performance. The scalability of the Beowulf cluster system offers a new instrument for dose calculation that could be applied in clinical practice. This would be a good support, particularly for highly challenging prescriptions that need good calculation accuracy in zones of high dose gradients and great inhomogeneities
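    The linear speed-up reported is characteristic of Monte Carlo dose calculation, because particle histories are independent and can be farmed out with almost no communication. Below is a minimal sketch of the same idea on a single machine, using Python's multiprocessing in place of a Beowulf cluster and a toy absorption tally in place of Geant4.

```python
import multiprocessing as mp
import numpy as np

def histories(args):
    """One worker: simulate n independent particle histories and tally
    a toy 'dose' (here: the count of particles absorbed within 1 cm)."""
    n, seed = args
    rng = np.random.default_rng(seed)
    depths = rng.exponential(scale=0.7, size=n)   # toy free path lengths
    return np.count_nonzero(depths < 1.0)

if __name__ == "__main__":
    n_total, n_workers = 4_000_000, 4
    jobs = [(n_total // n_workers, seed) for seed in range(n_workers)]
    # Histories are embarrassingly parallel: each worker gets its own seed,
    # and only the small tallies are communicated back.
    with mp.Pool(n_workers) as pool:
        tallies = pool.map(histories, jobs)
    print(f"absorbed fraction: {sum(tallies) / n_total:.4f}")
```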

  10. Scaling up ATLAS Database Release Technology for the LHC Long Run

    International Nuclear Information System (INIS)

    Borodin, M; Nevski, P; Vaniachine, A

    2011-01-01

    To overcome scalability limitations in database access on the Grid, ATLAS introduced the Database Release technology, replicating databases in files. For years, Database Release technology assured scalable database access for Monte Carlo production on the Grid. Since the previous CHEP, Database Release technology has been used successfully in ATLAS data reprocessing on the Grid. A frozen Conditions DB snapshot guarantees reproducibility and transactional consistency, isolating Grid data processing tasks from continuous conditions updates at the 'live' Oracle server. Database Release technology fully satisfies the requirements of ATLAS data reprocessing and Monte Carlo production. We parallelized the Database Release build workflow to avoid a linear dependency of the build time on the length of the LHC data-taking period. In recent data reprocessing campaigns the build time was reduced by an order of magnitude thanks to the proven master-worker architecture used in Google MapReduce. We describe further Database Release optimizations scaling up the technology for the LHC long run.

  11. Quantum Monte Carlo approaches for correlated systems

    CERN Document Server

    Becca, Federico

    2017-01-01

    Over the past several decades, computational approaches to studying strongly-interacting systems have become increasingly varied and sophisticated. This book provides a comprehensive introduction to state-of-the-art quantum Monte Carlo techniques relevant for applications in correlated systems. It provides a clear overview of variational wave functions and features a detailed presentation of stochastic sampling, including Markov chains and Langevin dynamics, which is developed into a discussion of Monte Carlo methods. The variational technique is described, from foundations to a detailed description of its algorithms. Further topics discussed include optimisation techniques, real-time dynamics and projection methods, including Green's function, reptation and auxiliary-field Monte Carlo, from basic definitions to advanced algorithms for efficient codes, and the book concludes with recent developments on the continuum space. Quantum Monte Carlo Approaches for Correlated Systems provides an extensive reference ...

  12. Monte Carlo simulations for plasma physics

    International Nuclear Information System (INIS)

    Okamoto, M.; Murakami, S.; Nakajima, N.; Wang, W.X.

    2000-07-01

    Plasma behaviours are very complicated and their analysis is generally difficult. However, when collisional processes play an important role in the plasma behaviour, the Monte Carlo method is often employed as a useful tool. For example, in neutral beam injection heating (NBI heating), electron or ion cyclotron heating, and alpha heating, Coulomb collisions slow down highly energetic particles and pitch-angle scatter them. These processes are often studied by the Monte Carlo technique, and good agreement can be obtained with experimental results. Recently, the Monte Carlo method has been developed to study fast particle transport associated with heating and the generation of the radial electric field. It is further applied to investigating neoclassical transport in plasmas with steep gradients of density and temperature, which is beyond the scope of conventional neoclassical theory. In this report, we briefly summarize the research done by the present authors utilizing the Monte Carlo method. (author)

  13. Frontiers of quantum Monte Carlo workshop: preface

    International Nuclear Information System (INIS)

    Gubernatis, J.E.

    1985-01-01

    The introductory remarks, table of contents, and list of attendees are presented from the proceedings of the conference, Frontiers of Quantum Monte Carlo, which appeared in the Journal of Statistical Physics

  14. To Monte Carlo in Spite of Accidents / Aare Arula

    Index Scriptorium Estoniae

    Arula, Aare

    2007-01-01

    See also Tehnika dlja Vsehh no. 3, pp. 26-27. Karl Siitan and his crew, who started from Tallinn on 26 January 1937 for the Monte Carlo rally, were in for adventures that almost cost them their lives.

  15. Monte Carlo code development in Los Alamos

    International Nuclear Information System (INIS)

    Carter, L.L.; Cashwell, E.D.; Everett, C.J.; Forest, C.A.; Schrandt, R.G.; Taylor, W.M.; Thompson, W.L.; Turner, G.D.

    1974-01-01

    The present status of Monte Carlo code development at Los Alamos Scientific Laboratory is discussed. A brief summary is given of several of the most important neutron, photon, and electron transport codes. 17 references. (U.S.)

  16. Experience with the Monte Carlo Method

    Energy Technology Data Exchange (ETDEWEB)

    Hussein, E M.A. [Department of Mechanical Engineering University of New Brunswick, Fredericton, N.B., (Canada)

    2007-06-15

    Monte Carlo simulation of radiation transport provides a powerful research and design tool that resembles in many aspects laboratory experiments. Moreover, Monte Carlo simulations can provide an insight not attainable in the laboratory. However, the Monte Carlo method has its limitations, which if not taken into account can result in misleading conclusions. This paper will present the experience of this author, over almost three decades, in the use of the Monte Carlo method for a variety of applications. Examples will be shown on how the method was used to explore new ideas, as a parametric study and design optimization tool, and to analyze experimental data. The consequences of not accounting in detail for detector response and the scattering of radiation by surrounding structures are two of the examples that will be presented to demonstrate the pitfall of condensed.

  17. Experience with the Monte Carlo Method

    International Nuclear Information System (INIS)

    Hussein, E.M.A.

    2007-01-01

    Monte Carlo simulation of radiation transport provides a powerful research and design tool that resembles in many aspects laboratory experiments. Moreover, Monte Carlo simulations can provide an insight not attainable in the laboratory. However, the Monte Carlo method has its limitations, which if not taken into account can result in misleading conclusions. This paper will present the experience of this author, over almost three decades, in the use of the Monte Carlo method for a variety of applications. Examples will be shown on how the method was used to explore new ideas, as a parametric study and design optimization tool, and to analyze experimental data. The consequences of not accounting in detail for detector response and the scattering of radiation by surrounding structures are two of the examples that will be presented to demonstrate the pitfall of condensed

  18. Monte Carlo Transport for Electron Thermal Transport

    Science.gov (United States)

    Chenhall, Jeffrey; Cao, Duc; Moses, Gregory

    2015-11-01

    The iSNB (implicit Schurtz-Nicolai-Busquet) multigroup electron thermal transport method of Cao et al. is adapted into a Monte Carlo transport method in order to better model the effects of non-local behavior. The end goal is a hybrid transport-diffusion method that combines Monte Carlo transport with discrete diffusion Monte Carlo (DDMC). The hybrid method will combine the efficiency of a diffusion method in short mean free path regions with the accuracy of a transport method in long mean free path regions. The Monte Carlo nature of the approach allows the algorithm to be massively parallelized. Work to date on the method will be presented. This work was supported by Sandia National Laboratory - Albuquerque and the University of Rochester Laboratory for Laser Energetics.

  19. A continuation multilevel Monte Carlo algorithm

    KAUST Repository

    Collier, Nathan; Haji Ali, Abdul Lateef; Nobile, Fabio; von Schwerin, Erik; Tempone, Raul

    2014-01-01

    We propose a novel Continuation Multi Level Monte Carlo (CMLMC) algorithm for weak approximation of stochastic models. The CMLMC algorithm solves the given approximation problem for a sequence of decreasing tolerances, ending when the required error
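    For context, the multilevel idea underneath CMLMC writes the fine-level expectation as a telescoping sum of level differences and spends most samples on the cheap coarse levels. The sketch below implements a plain MLMC estimator for a toy integrand with fixed per-level sample counts; the continuation logic over decreasing tolerances, which is the paper's contribution, is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(11)

def level_estimator(level, n_samples):
    """Samples of P_l - P_{l-1} for a toy quantity: a midpoint-rule
    approximation of E[f(U)] with 2**level subintervals, f(u) = exp(u)."""
    u = rng.random(n_samples)
    def P(l):
        h = 0.5 ** l
        mid = (np.floor(u / h) + 0.5) * h     # midpoint of u's subinterval
        return np.exp(mid)
    # Same u for both levels: the coupling that makes the variance decay.
    return P(level) - (P(level - 1) if level > 0 else 0.0)

def mlmc(max_level=6, n0=100_000):
    # Telescoping sum: E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}].
    total = 0.0
    for l in range(max_level + 1):
        n_l = max(n0 // 4 ** l, 100)          # geometric decay of samples
        total += level_estimator(l, n_l).mean()
    return total

print(f"MLMC estimate: {mlmc():.4f}  (exact E[e^U] = {np.e - 1:.4f})")
```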

  20. Film of the Year - the Animated Film "Mont Blanc" / Verni Leivak

    Index Scriptorium Estoniae

    Leivak, Verni, 1966-

    2002-01-01

    The Estonian Film Journalists' Union awarded the title of best film of 2001 to Priit Tender's animated film "Mont Blanc" (Eesti Joonisfilm, 2001). The piece also covers film critics' preferences among the films shown in cinemas and on television in 2001.

  1. Simulation and the Monte Carlo method

    CERN Document Server

    Rubinstein, Reuven Y

    2016-01-01

    Simulation and the Monte Carlo Method, Third Edition reflects the latest developments in the field and presents a fully updated and comprehensive account of the major topics that have emerged in Monte Carlo simulation since the publication of the classic First Edition more than a quarter of a century ago. While maintaining its accessible and intuitive approach, this revised edition features a wealth of up-to-date information that facilitates a deeper understanding of problem solving across a wide array of subject areas, such as engineering, statistics, computer science, mathematics, and the physical and life sciences. The book begins with a modernized introduction that addresses the basic concepts of probability, Markov processes, and convex optimization. Subsequent chapters discuss the dramatic changes that have occurred in the field of the Monte Carlo method, with coverage of many modern topics including: Markov Chain Monte Carlo, variance reduction techniques such as the transform likelihood ratio...

  2. Hybrid Monte Carlo methods in computational finance

    NARCIS (Netherlands)

    Leitao Rodriguez, A.

    2017-01-01

    Monte Carlo methods are highly appreciated and intensively employed in computational finance in the context of financial derivatives valuation or risk management. The method offers valuable advantages like flexibility, easy interpretation and straightforward implementation. Furthermore, the

  3. Orthopaedic Perspective on Barefoot and Minimalist Running.

    Science.gov (United States)

    Roth, Jonathan; Neumann, Julie; Tao, Matthew

    2016-03-01

    In recent years, there has been a movement toward barefoot and minimalist running. Advocates assert that a lack of cushion and support promotes a forefoot or midfoot strike rather than a rearfoot strike, decreasing the impact transient and stress on the hip and knee. Although the change in gait is theorized to decrease injury risk, this concept has not yet been fully elucidated. However, research has shown diminished symptoms of chronic exertional compartment syndrome and anterior knee pain after a transition to minimalist running. Skeptics are concerned that, because of the effects of the natural environment and the lack of a standardized transition program, barefoot running could lead to additional, unforeseen injuries. Studies have shown that, with the transition to minimalist running, there is increased stress on the foot and ankle and risk of repetitive stress injuries. Nonetheless, despite the large gap of evidence-based knowledge on minimalist running, the potential benefits warrant further research and consideration.

  4. Running injuries - changing trends and demographics.

    Science.gov (United States)

    Fields, Karl B

    2011-01-01

    Running injuries are common. Recently the demographic has changed, in that most runners in road races are older and injuries now include those more common in master runners. In particular, Achilles/calf injuries, iliotibial band injury, meniscus injury, and muscle injuries to the hamstrings and quadriceps represent higher percentages of the overall injury mix in recent epidemiologic studies compared with earlier ones. Evidence suggests that running mileage and previous injury are important predictors of running injury. Evidence-based research now helps guide the treatment of iliotibial band, patellofemoral syndrome, and Achilles tendinopathy. The use of topical nitroglycerin in tendinopathy and orthotics for the treatment of patellofemoral syndrome has moderate to strong evidence. Thus, more current knowledge about the changing demographics of runners and the application of research to guide treatment and, eventually, prevent running injury offers hope that clinicians can help reduce the high morbidity associated with long-distance running.

  5. Excessive Progression in Weekly Running Distance and Risk of Running-related Injuries

    DEFF Research Database (Denmark)

    Nielsen, R.O.; Parner, Erik Thorlund; Nohr, Ellen Aagaard

    2014-01-01

    Study Design An explorative, 1-year prospective cohort study. Objective To examine whether an association between a sudden change in weekly running distance and running-related injury varies according to injury type. Background It is widely accepted that a sudden increase in running distance...... is strongly related to injury in runners. But the scientific knowledge supporting this assumption is limited. Methods A volunteer sample of 874 healthy novice runners who started a self-structured running regimen were provided a global-positioning-system watch. After each running session during the study...... period, participants were categorized into 1 of the following exposure groups, based on the progression of their weekly running distance: less than 10% or regression, 10% to 30%, or more than 30%. The primary outcome was running-related injury. Results A total of 202 runners sustained a running...

  6. LCG Monte-Carlo Data Base

    CERN Document Server

    Bartalini, P.; Kryukov, A.; Selyuzhenkov, Ilya V.; Sherstnev, A.; Vologdin, A.

    2004-01-01

We present the Monte-Carlo events Data Base (MCDB) project and its development plans. MCDB facilitates communication between authors of Monte-Carlo generators and experimental users. It also provides convenient book-keeping and easy access to generator-level samples. The first release of MCDB is now operational for the CMS collaboration. In this paper we review the main ideas behind MCDB and discuss future plans to develop this Data Base further within the CERN LCG framework.

  7. Multilevel Monte Carlo in Approximate Bayesian Computation

    KAUST Repository

    Jasra, Ajay

    2017-02-13

In the following article we consider approximate Bayesian computation (ABC) inference. We introduce a method for numerically approximating ABC posteriors using multilevel Monte Carlo (MLMC). A sequential Monte Carlo version of the approach is developed, and it is shown, under some assumptions, that for a given level of mean square error this method for ABC has a lower cost than i.i.d. sampling from the most accurate ABC approximation. Several numerical examples are given.
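
    The multilevel machinery itself is beyond a short example, but the plain ABC rejection sampler that MLMC-ABC accelerates can be sketched in a few lines. The Gaussian model, uniform prior and tolerance below are assumptions chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
observed = rng.normal(loc=2.0, scale=1.0, size=50)   # "data" with unknown mean

def abc_rejection(data, n_samples=2000, eps=0.1):
    """Plain ABC rejection: keep prior draws whose simulated summary is close to the data's."""
    accepted = []
    s_obs = data.mean()                               # summary statistic
    while len(accepted) < n_samples:
        theta = rng.uniform(-5.0, 5.0)                # draw from the prior
        sim = rng.normal(theta, 1.0, size=data.size)  # simulate data under theta
        if abs(sim.mean() - s_obs) < eps:             # accept if summaries match within eps
            accepted.append(theta)
    return np.array(accepted)

post = abc_rejection(observed)
print(f"ABC posterior mean ~ {post.mean():.2f}")
```

    Shrinking eps makes the approximation more accurate but the acceptance rate lower; MLMC exploits a hierarchy of such tolerances to cut the overall cost.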

  8. Monte Carlo method applied to medical physics

    International Nuclear Information System (INIS)

    Oliveira, C.; Goncalves, I.F.; Chaves, A.; Lopes, M.C.; Teixeira, N.; Matos, B.; Goncalves, I.C.; Ramalho, A.; Salgado, J.

    2000-01-01

The main application of the Monte Carlo method to medical physics is dose calculation. This paper shows some results of two dose calculation studies and two other different applications: optimisation of a neutron field for Boron Neutron Capture Therapy and optimisation of a filter for a beam tube for several purposes. The time required for Monte Carlo calculations - the main barrier to their intensive use - is being overcome by faster and cheaper computers. (author)

  9. The First 24 Years of Reverse Monte Carlo Modelling, Budapest, Hungary, 20-22 September 2012

    Science.gov (United States)

    Keen, David A.; Pusztai, László

    2013-11-01

    -ray scattering and modeling studies (L Hawelek, A Brodka, J C Dore, V Honkimaki and A Burian)
    Local structure correlations in plastic cyclohexane—a reverse Monte Carlo study (Nicholas P Funnell, Martin T Dove, Andrew L Goodwin, Simon Parsons and Matthew G Tucker)
    Neutron powder diffraction and molecular dynamics study of superionic SrBr2 (S Hull, S T Norberg, S G Eriksson and C E Mohn)
    Atomic order and cluster energetics of a 17 wt% Si-based glass versus the liquid phase (G S E Antipas, L Temleitner, K Karalis, L Pusztai and A Xenidis)
    Total scattering analysis of cation coordination and vacancy pair distribution in Yb substituted δ-Bi2O3 (G S E Antipas, L Temleitner, K Karalis, L Pusztai and A Xenidis)
    Modification of the sampling algorithm for reverse Monte Carlo modeling with an insufficient data set (Satoshi Sato and Kenji Maruyama)
    The origin of diffuse scattering in crystalline carbon tetraiodide (L Temleitner and L Pusztai)
    Silver environment and covalent network rearrangement in GeS3-Ag glasses (L Rátkai, I Kaban, T Wágner, J Kolár, S Valková, Iva Voleská, B Beuneu and P Jóvári)
    Reverse Monte Carlo study of spherical sample under non-periodic boundary conditions: the structure of Ru nanoparticles based on x-ray diffraction data (Orsolya Gereben and Valeri Petkov)
    Total neutron scattering investigation of the structure of a cobalt gallium oxide spinel prepared by solvothermal oxidation of gallium metal (Helen Y Playford, Alex C Hannon, Matthew G Tucker, Martin R Lees and Richard I Walton)
    The structure of water in solutions containing di- and trivalent cations by empirical potential structure refinement (Daniel T Bowron and Sofia Díaz Moreno)
    The proton conducting electrolyte BaTi0.5In0.5O2.75: determination of the deuteron site and its local environment (Stefan T Norberg, Seikh M H Rahman, Stephen Hull, Christopher S Knee and Sten G Eriksson)
    Acidic properties of aqueous phosphoric acid solutions: a microscopic view (I Harsányi, L Pusztai, P Jóvári and B Beuneu)
    Comparison of the atomic level

  10. TH-E-18A-01: Developments in Monte Carlo Methods for Medical Imaging

    Energy Technology Data Exchange (ETDEWEB)

    Badal, A [U.S. Food and Drug Administration (CDRH/OSEL), Silver Spring, MD (United States); Zbijewski, W [Johns Hopkins University, Baltimore, MD (United States); Bolch, W [University of Florida, Gainesville, FL (United States); Sechopoulos, I [Emory University, Atlanta, GA (United States)

    2014-06-15

    virtual generation of medical images and accurate estimation of radiation dose and other imaging parameters. For this, detailed computational phantoms of the patient anatomy must be utilized and implemented within the radiation transport code. Computational phantoms presently come in one of three format types, and in one of four morphometric categories. Format types include stylized (mathematical equation-based), voxel (segmented CT/MR images), and hybrid (NURBS and polygon mesh surfaces). Morphometric categories include reference (small library of phantoms by age at 50th height/weight percentile), patient-dependent (larger library of phantoms at various combinations of height/weight percentiles), patient-sculpted (phantoms altered to match the patient's unique outer body contour), and finally, patient-specific (an exact representation of the patient with respect to both body contour and internal anatomy). The existence and availability of these phantoms represents a very important advance for the simulation of realistic medical imaging applications using Monte Carlo methods. New Monte Carlo simulation codes need to be thoroughly validated before they can be used to perform novel research. Ideally, the validation process would involve comparison of results with those of an experimental measurement, but accurate replication of experimental conditions can be very challenging. It is very common to validate new Monte Carlo simulations by replicating previously published simulation results of similar experiments. This process, however, is commonly problematic due to the lack of sufficient information in the published reports of previous work so as to be able to replicate the simulation in detail. To aid in this process, the AAPM Task Group 195 prepared a report in which six different imaging research experiments commonly performed using Monte Carlo simulations are described and their results provided. The simulation conditions of all six cases are provided in full detail

  11. TH-E-18A-01: Developments in Monte Carlo Methods for Medical Imaging

    International Nuclear Information System (INIS)

    Badal, A; Zbijewski, W; Bolch, W; Sechopoulos, I

    2014-01-01

    generation of medical images and accurate estimation of radiation dose and other imaging parameters. For this, detailed computational phantoms of the patient anatomy must be utilized and implemented within the radiation transport code. Computational phantoms presently come in one of three format types, and in one of four morphometric categories. Format types include stylized (mathematical equation-based), voxel (segmented CT/MR images), and hybrid (NURBS and polygon mesh surfaces). Morphometric categories include reference (small library of phantoms by age at 50th height/weight percentile), patient-dependent (larger library of phantoms at various combinations of height/weight percentiles), patient-sculpted (phantoms altered to match the patient's unique outer body contour), and finally, patient-specific (an exact representation of the patient with respect to both body contour and internal anatomy). The existence and availability of these phantoms represents a very important advance for the simulation of realistic medical imaging applications using Monte Carlo methods. New Monte Carlo simulation codes need to be thoroughly validated before they can be used to perform novel research. Ideally, the validation process would involve comparison of results with those of an experimental measurement, but accurate replication of experimental conditions can be very challenging. It is very common to validate new Monte Carlo simulations by replicating previously published simulation results of similar experiments. This process, however, is commonly problematic due to the lack of sufficient information in the published reports of previous work so as to be able to replicate the simulation in detail. To aid in this process, the AAPM Task Group 195 prepared a report in which six different imaging research experiments commonly performed using Monte Carlo simulations are described and their results provided. The simulation conditions of all six cases are provided in full detail, with all

  12. Rocker shoe, minimalist shoe, and standard running shoe : A comparison of running economy

    NARCIS (Netherlands)

    Sobhani, Sobhan; Bredeweg, Steven; Dekker, Rienk; Kluitenberg, Bas; van den Heuvel, Edwin; Hijmans, Juha; Postema, Klaas

    Objectives: Running with rocker shoes is believed to prevent lower limb injuries. However, it is not clear how running in these shoes affects the energy expenditure. The purpose of this study was, therefore, to assess the effects of rocker shoes on running economy in comparison with standard and

  13. MCNP: a general Monte Carlo code for neutron and photon transport. Version 3A. Revision 2

    International Nuclear Information System (INIS)

    Briesmeister, J.F.

    1986-09-01

    This manual is a practical guide for the use of our general-purpose Monte Carlo code MCNP. The first chapter is a primer for the novice user. The second chapter describes the mathematics, data, physics, and Monte Carlo simulation found in MCNP. This discussion is not meant to be exhaustive - details of the particular techniques and of the Monte Carlo method itself will have to be found elsewhere. The third chapter shows the user how to prepare input for the code. The fourth chapter contains several examples, and the fifth chapter explains the output. The appendices show how to use MCNP on particular computer systems at the Los Alamos National Laboratory and also give details about some of the code internals that those who wish to modify the code may find useful. 57 refs

  14. Statistical errors in Monte Carlo estimates of systematic errors

    Science.gov (United States)

    Roe, Byron P.

    2007-01-01

For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to a MC run. In the multisim method, each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For the simple models presented here, the multisim method was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case the unisim method was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k². The specific terms unisim and multisim were coined by Peter Meyers and Steve Brice, respectively, for the MiniBooNE experiment. However, the concepts have been developed over time and have been in general use for some time.
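
    A toy version of the two procedures, under the same linear-response assumption the note itself makes. The observable, the number of systematics and their sizes are invented for illustration; in the linear regime the two estimates of the total systematic error agree.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sys = 4
sigmas = np.array([0.5, 0.3, 0.8, 0.2])   # assumed 1-sigma sizes of each systematic

def observable(shifts):
    """Toy 'MC run': predicted count in one data bin as a function of systematic shifts."""
    return 100.0 + shifts.sum()            # linear response, as the note assumes

nominal = observable(np.zeros(n_sys))

# Unisim: one run per parameter, each shifted by +1 sigma; add deviations in quadrature.
unisim_dev = np.array([observable(np.eye(n_sys)[i] * sigmas[i]) - nominal
                       for i in range(n_sys)])
unisim_err = np.sqrt((unisim_dev**2).sum())

# Multisim: every run varies all parameters at once; the spread of the runs is the total error.
multisim_runs = np.array([observable(rng.normal(0.0, sigmas)) for _ in range(1000)])
multisim_err = multisim_runs.std(ddof=1)

print(f"unisim: {unisim_err:.3f}  multisim: {multisim_err:.3f}")
```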

  15. Statistical errors in Monte Carlo estimates of systematic errors

    Energy Technology Data Exchange (ETDEWEB)

    Roe, Byron P. [Department of Physics, University of Michigan, Ann Arbor, MI 48109 (United States)]. E-mail: byronroe@umich.edu

    2007-01-01

For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to a MC run. In the multisim method, each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For the simple models presented here, the multisim method was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case the unisim method was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k².

  16. Statistical errors in Monte Carlo estimates of systematic errors

    International Nuclear Information System (INIS)

    Roe, Byron P.

    2007-01-01

For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to a MC run. In the multisim method, each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For the simple models presented here, the multisim method was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case the unisim method was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k².

  17. Running Economy from a Muscle Energetics Perspective

    Directory of Open Access Journals (Sweden)

    Jared R. Fletcher

    2017-06-01

The economy of running has traditionally been quantified from the mass-specific oxygen uptake; however, because fuel substrate usage varies with exercise intensity, it is more accurate to express running economy in units of metabolic energy. Fundamentally, an understanding of the major factors that influence the energy cost of running (Erun) can be obtained with this approach. Erun is determined by the energy needed for skeletal muscle contraction. Here, we approach the study of Erun from that perspective. The amount of energy needed for skeletal muscle contraction is dependent on the force, duration, shortening, shortening velocity, and length of the muscle. These factors therefore dictate the energy cost of running. It is understood that some determinants of the energy cost of running are not trainable: environmental factors, surface characteristics, and certain anthropometric features. Other factors affecting Erun are altered by training: other anthropometric features, muscle and tendon properties, and running mechanics. Here, the key features that dictate the energy cost during distance running are reviewed in the context of skeletal muscle energetics.

  18. The effect of footwear on running performance and running economy in distance runners.

    Science.gov (United States)

    Fuller, Joel T; Bellenger, Clint R; Thewlis, Dominic; Tsiros, Margarita D; Buckley, Jonathan D

    2015-03-01

The effect of footwear on running economy has been investigated in numerous studies. However, no systematic review and meta-analysis has synthesised the available literature, and the effect of footwear on running performance is not known. The aim of this systematic review and meta-analysis was to investigate the effect of footwear on running performance and running economy in distance runners, by reviewing controlled trials that compare different footwear conditions or compare footwear with barefoot. The Web of Science, Scopus, MEDLINE, CENTRAL (Cochrane Central Register of Controlled Trials), EMBASE, AMED (Allied and Complementary Medicine), CINAHL and SPORTDiscus databases were searched from inception up until April 2014. Included articles reported on controlled trials that examined the effects of footwear or footwear characteristics (including shoe mass, cushioning, motion control, longitudinal bending stiffness, midsole viscoelasticity, drop height and comfort) on running performance or running economy and were published in a peer-reviewed journal. Of the 1,044 records retrieved, 19 studies were included in the systematic review and 14 studies were included in the meta-analysis. No studies were identified that reported effects on running performance. Individual studies reported significant but trivial beneficial effects on running economy for comfortable and stiff-soled shoes, a beneficial effect on running economy for cushioned shoes (standardised mean difference, SMD = 0.37), and a beneficial effect on running economy for training in minimalist shoes (SMD = 0.79), as well as beneficial effects on running economy for light shoes and barefoot running compared with heavy shoes. Certain models of footwear and footwear characteristics can improve running economy. Future research in footwear performance should include measures of running performance.

  19. The PDF4LHC report on PDFs and LHC data : Results from Run I and preparation for Run II

    NARCIS (Netherlands)

    Rojo, Juan; Accardi, Alberto; Ball, Richard D.; Cooper-Sarkar, Amanda; Roeck, Albert de; Farry, Stephen; Ferrando, James; Forte, Stefano; Gao, Jun; Harland-Lang, Lucian; Huston, Joey; Glazov, Alexander; Gouzevitch, Maxime; Gwenlan, Claire; Lipka, Katerina; Lisovyi, Mykhailo; Mangano, Michelangelo L.; Nadolsky, Pavel; Perrozzi, Luca; Placakyte, Ringaile; Radescu, Voica; Salam, Gavin P.; Thorne, Robert S.

    2015-01-01

    The accurate determination of the Parton Distribution Functions (PDFs) of the proton is an essential ingredient of the Large Hadron Collider (LHC) program. PDF uncertainties impact a wide range of processes, from Higgs boson characterisation and precision Standard Model measurements to New Physics

  20. Middle cerebral artery blood velocity during running

    DEFF Research Database (Denmark)

    Lyngeraa, Tobias; Pedersen, Lars Møller; Mantoni, T

    2013-01-01

    for eight subjects, respectively, were excluded from analysis because of insufficient signal quality. Running increased mean arterial pressure and mean MCA velocity and induced rhythmic oscillations in BP and in MCA velocity corresponding to the difference between step rate and heart rate (HR) frequencies....... During running, rhythmic oscillations in arterial BP induced by interference between HR and step frequency impact on cerebral blood velocity. For the exercise as a whole, average MCA velocity becomes elevated. These results suggest that running not only induces an increase in regional cerebral blood flow...

  1. CMB constraints on running non-Gaussianity

    OpenAIRE

    Oppizzi, Filippo; Liguori, Michele; Renzi, Alessandro; Arroja, Frederico; Bartolo, Nicola

    2017-01-01

We develop a complete set of tools for CMB forecasting, simulation and estimation of primordial running bispectra, arising from a variety of curvaton and single-field (DBI) models of inflation. We validate our pipeline using mock CMB running non-Gaussianity realizations and test it on real data by obtaining experimental constraints on the $f_{\rm NL}$ running spectral index, $n_{\rm NG}$, using WMAP 9-year data. Our final bounds (68% C.L.) read $-0.3 < n_{\rm NG}

  2. Running Injuries During Adolescence and Childhood.

    Science.gov (United States)

    Krabak, Brian J; Snitily, Brian; Milani, Carlo J E

    2016-02-01

The popularity of running among young athletes has significantly increased over the past few decades. As the number of children who participate in running increases, so do the potential number of injuries to this group. Proper care of these athletes includes a thorough understanding of the unique physiology of the skeletally immature athlete and common injuries in this age group. Treatment should focus on athlete education, modification of training schedule, and correction of biomechanical deficits contributing to injury. Early identification and correction of these factors will allow a safe return to running sports.

  3. Electricity prices and fuel costs. Long-run relations and short-run dynamics

    International Nuclear Information System (INIS)

    Mohammadi, Hassan

    2009-01-01

The paper examines the long-run relation and short-run dynamics between electricity prices and three fossil fuel prices - coal, natural gas and crude oil - using annual data for the U.S. for 1960-2007. The results suggest (1) a stable long-run relation between real prices for electricity and coal; (2) bi-directional long-run causality between coal and electricity prices; (3) insignificant long-run relations between electricity and crude oil and/or natural gas prices; and (4) no evidence of asymmetries in the adjustment of electricity prices to deviations from equilibrium. A number of implications are addressed. (author)
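
    As a rough illustration of this kind of long-run analysis (not the author's actual procedure, which the abstract does not specify), a two-step Engle-Granger cointegration check can be sketched as follows. The data are synthetic, and the resulting Dickey-Fuller statistic must be compared against Engle-Granger critical values rather than standard t tables.

```python
import numpy as np

def engle_granger(y, x):
    """Two-step Engle-Granger check: regress y on x, then test the residuals for mean reversion."""
    # Step 1: long-run (cointegrating) regression y_t = a + b*x_t + u_t
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    # Step 2: Dickey-Fuller regression du_t = rho * u_{t-1} + e_t on the residuals
    du, lag = np.diff(resid), resid[:-1]
    rho = (lag @ du) / (lag @ lag)
    se = np.sqrt(((du - rho * lag) ** 2).mean() / (lag @ lag))
    return beta, rho / se          # strongly negative t-statistic suggests cointegration

# toy cointegrated pair: x is a random walk, y tracks it with stationary noise
rng = np.random.default_rng(2)
x = np.cumsum(rng.standard_normal(500))
y = 1.5 * x + rng.standard_normal(500)
beta, t_stat = engle_granger(y, x)
print(f"slope ~ {beta[1]:.2f}, DF t-statistic {t_stat:.2f}")
```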

  4. Frequentist and Bayesian Orbital Parameter Estimation from Radial Velocity Data Using RVLIN, BOOTTRAN, and RUN DMC

    Science.gov (United States)

    Nelson, Benjamin Earl; Wright, Jason Thomas; Wang, Sharon

    2015-08-01

    For this hack session, we will present three tools used in analyses of radial velocity exoplanet systems. RVLIN is a set of IDL routines used to quickly fit an arbitrary number of Keplerian curves to radial velocity data to find adequate parameter point estimates. BOOTTRAN is an IDL-based extension of RVLIN to provide orbital parameter uncertainties using bootstrap based on a Keplerian model. RUN DMC is a highly parallelized Markov chain Monte Carlo algorithm that employs an n-body model, primarily used for dynamically complex or poorly constrained exoplanet systems. We will compare the performance of these tools and their applications to various exoplanet systems.
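
    BOOTTRAN's bootstrap idea can be illustrated with a deliberately simplified circular-orbit model. The sinusoidal radial-velocity curve, the noise level, the parameter values and the use of scipy's curve_fit below are all assumptions for the sketch, not BOOTTRAN's actual implementation (which fits full Keplerians).

```python
import numpy as np
from scipy.optimize import curve_fit

def rv_model(t, k, period, phase, gamma):
    """Circular-orbit radial-velocity curve: a sinusoid with semi-amplitude k."""
    return k * np.sin(2 * np.pi * t / period + phase) + gamma

rng = np.random.default_rng(3)
t = np.sort(rng.uniform(0, 300, 40))                   # observation epochs (days)
true = (55.0, 61.0, 0.4, -3.0)                         # K [m/s], P [d], phase, offset
rv = rv_model(t, *true) + rng.normal(0, 3.0, t.size)   # synthetic data, 3 m/s noise

best, _ = curve_fit(rv_model, t, rv, p0=(50.0, 60.0, 0.0, 0.0))

# Residual bootstrap: refit on resampled residuals, take the spread as the uncertainty
resid = rv - rv_model(t, *best)
boot = []
for _ in range(500):
    sample = rv_model(t, *best) + rng.choice(resid, size=resid.size, replace=True)
    try:
        fit, _ = curve_fit(rv_model, t, sample, p0=best)
        boot.append(fit)
    except RuntimeError:                               # skip rare non-convergent refits
        pass
print("bootstrap 1-sigma:", np.array(boot).std(axis=0))
```

    An MCMC tool like RUN DMC instead samples the posterior directly, which matters when the likelihood surface is multimodal or the orbit is poorly constrained.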

  5. Report on the Oak Ridge workshop on Monte Carlo codes for relativistic heavy-ion collisions

    International Nuclear Information System (INIS)

    Awes, T.C.; Sorensen, S.P.

    1988-01-01

    In order to make detailed predictions for the case of purely hadronic matter, several Monte Carlo codes have been developed to describe relativistic nucleus-nucleus collisions. Although these various models build upon models of hadron-hadron interactions and have been fitted to reproduce hadron-hadron collision data, they have rather different pictures of the underlying hadron collision process and of subsequent particle production. Until now, the different Monte Carlo codes have, in general, been compared to different sets of experimental data, according to which results were readily available to the model builder or which Monte Carlo code was readily available to an experimental group. As a result, it has been difficult to draw firm conclusions about whether the observed deviations between experiments and calculations were due to deficiencies in the particular model, experimental discrepancies, or interesting effects beyond a simple superposition of nucleon-nucleon collisions. For this reason, it was decided that it would be productive to have a structured confrontation between the available experimental data and the many models of high-energy nuclear collisions in a manner in which it could be ensured that the computer codes were run correctly and the experimental acceptances were properly taken into account. With this purpose in mind, a Workshop on Monte Carlo Codes for Relativistic Heavy-Ion Collisions was organized at the Joint Institute for Heavy Ion Research at Oak Ridge National Laboratory from September 12--23, 1988. This paper reviews this workshop. 11 refs., 6 figs

  6. Common running musculoskeletal injuries among recreational half ...

    African Journals Online (AJOL)

probing the prevalence and nature of running musculoskeletal injuries in the 12 months preceding ... or agony, and which prevented them from physical activity for ... injuries to professional football players: Developing the UEFA model.

  7. TEK twisted gradient flow running coupling

    CERN Document Server

    Pérez, Margarita García; Keegan, Liam; Okawa, Masanori

    2014-01-01

    We measure the running of the twisted gradient flow coupling in the Twisted Eguchi-Kawai (TEK) model, the SU(N) gauge theory on a single site lattice with twisted boundary conditions in the large N limit.

  8. Monte Carlo verification of polymer gel dosimetry applied to radionuclide therapy: a phantom study

    International Nuclear Information System (INIS)

    Gear, J I; Partridge, M; Flux, G D; Charles-Edwards, E

    2011-01-01

    This study evaluates the dosimetric performance of the polymer gel dosimeter 'Methacrylic and Ascorbic acid in Gelatin, initiated by Copper' and its suitability for quality assurance and analysis of I-131-targeted radionuclide therapy dosimetry. Four batches of gel were manufactured in-house and sets of calibration vials and phantoms were created containing different concentrations of I-131-doped gel. Multiple dose measurements were made up to 700 h post preparation and compared to equivalent Monte Carlo simulations. In addition to uniformly filled phantoms the cross-dose distribution from a hot insert to a surrounding phantom was measured. In this example comparisons were made with both Monte Carlo and a clinical scintigraphic dosimetry method. Dose-response curves generated from the calibration data followed a sigmoid function. The gels appeared to be stable over many weeks of internal irradiation with a delay in gel response observed at 29 h post preparation. This was attributed to chemical inhibitors and slow reaction rates of long-chain radical species. For this reason, phantom measurements were only made after 190 h of irradiation. For uniformly filled phantoms of I-131 the accuracy of dose measurements agreed to within 10% when compared to Monte Carlo simulations. A radial cross-dose distribution measured using the gel dosimeter compared well to that calculated with Monte Carlo. Small inhomogeneities were observed in the dosimeter attributed to non-uniform mixing of monomer during preparation. However, they were not detrimental to this study where the quantitative accuracy and spatial resolution of polymer gel dosimetry were far superior to that calculated using scintigraphy. The difference between Monte Carlo and gel measurements was of the order of a few cGy, whilst with the scintigraphic method differences of up to 8 Gy were observed. A manipulation technique is also presented which allows 3D scintigraphic dosimetry measurements to be compared to polymer

  9. Run-2 Supersymmetry searches in ATLAS

    CERN Document Server

    Soffer, Abner; The ATLAS collaboration

    2016-01-01

Despite the absence of experimental evidence, weak-scale supersymmetry remains one of the best motivated and studied Standard Model extensions. With the large increase in collision energy in LHC Run-2 (from 8 TeV to 13 TeV), the sensitivity to heavy, strongly produced SUSY particles (squarks and gluinos) increases tremendously. This talk presents recent ATLAS Run-2 searches for such particles in final states including jets, missing transverse momentum, and possibly light leptons.

  10. Running heavy-quark masses in DIS

    International Nuclear Information System (INIS)

    Alekhin, S.; Moch, S.

    2011-07-01

    We report on determinations of the running mass for charm quarks from deep-inelastic scattering reactions. The method provides complementary information on this fundamental parameter from hadronic processes with space-like kinematics. The obtained values are consistent with but systematically lower than the world average as published by the PDG. We also address the consequences of the running mass scheme for heavy-quark parton distributions in global fits to deep-inelastic scattering data. (orig.)

  11. The meaning of running away for girls.

    Science.gov (United States)

    Peled, Einat; Cohavi, Ayelet

    2009-10-01

    The aim of this qualitative research was to understand how runaway girls perceive the processes involved in leaving home and the meaning they attribute to it. Findings are based on in-depth interviews with 10 Israeli girls aged 13-17 with a history of running away from home. The meaning of running away as it emerged from the girls' descriptions of their lives prior to leaving home was that of survival - both psychological and physical. The girls' stories centered on their evolving experiences of alienation, loneliness and detachment, and the failure of significant relationships at home and outside of home to provide them with the support they needed. These experiences laid the ground for the "final moments" before leaving, when a feeling of "no alternative," a hope for a better future, and various particular triggers led the girls to the decision to leave home. Participants' insights about the dynamics leading to running-away center on the meaning of family relationships, particularly those with the mother, as constituting the girl's psychological home. The girls seemed to perceive running away as an inevitability, rather than a choice, and even portrayed the running away as "living suicide." Yet, their stories clearly demonstrate their ability to cope and the possession of strengths and skills that enabled them to survive in extremely difficult home situations. The findings of this research highlight the importance of improving services for reaching out and supporting girls who are on the verge of running away from home. Such services should be tailored to the needs of girls who experience extreme but often silenced distress at home, and should facilitate alternative solutions to the girls' plight other than running away. An understanding of the dynamics leading to running away from the girls' perspective has the potential to improve the efficacy of services provided by contributing to the creation of a caring, empowering, understanding and trustful professional

  12. Personnel Preparation.

    Science.gov (United States)

    Fair, George, Ed.; Stodden, Robert, Ed.

    1981-01-01

    Three articles comprise a section on personnel preparation in vocational education. Articles deal with two inservice programs in career/vocational education for the handicapped and a project to train paraprofessionals to assist special educators in vocational education. (CL)

  13. Application of monte-carlo method in definition of key categories of most radioactive polluted soil

    Energy Technology Data Exchange (ETDEWEB)

Mahmudov, H M; Valibeyova, G; Jafarov, Y D; Musaeva, Sh Z [Institute of Radiation Problems, Azerbaijan National Academy of Sciences, Baku (Azerbaijan)]; and others

    2006-10-15

Full text: The principle of Monte Carlo analysis consists of drawing random values of the exposure dose rate coefficients and of the activity data within the bounds of their individual frequency distributions. Monte Carlo analysis is useful for performing a sensitivity analysis of the measured exposure dose rate, in order to identify the major factors causing uncertainty in the reports. Such estimates can be valuable for defining the key categories of radioactively polluted soil and for setting priorities in allocating resources to improve the reports. The relative uncertainty of the polluted-soil categories determined by Monte Carlo analysis can be applied where there is a significant divergence between the mean value and a confidence limit, when the resources available for preparing estimates must be directed to the most significant categories of sources. Using uncertainty in the reports also allows a threshold value to be set for a key category of sources, if necessary, to reflect the 90 per cent uncertainty exactly. According to radiation safety norms, a background radiation level exceeding 33 µR/hour is considered dangerous. Using the Monte Carlo method, the most dangerous sites, and sites frequently used for disposal and utilization, were selected from the analyzed samples of polluted soil.

  14. Application of monte-carlo method in definition of key categories of most radioactive polluted soil

    International Nuclear Information System (INIS)

    Mahmudov, H.M; Valibeyova, G.; Jafarov, Y.D; Musaeva, Sh.Z

    2006-01-01

Full text: The principle of Monte Carlo analysis consists of drawing random values of the exposure dose rate coefficients and of the activity data within the bounds of their individual frequency distributions. Monte Carlo analysis is useful for performing a sensitivity analysis of the measured exposure dose rate, in order to identify the major factors causing uncertainty in the reports. Such estimates can be valuable for defining the key categories of radioactively polluted soil and for setting priorities in allocating resources to improve the reports. The relative uncertainty of the polluted-soil categories determined by Monte Carlo analysis can be applied where there is a significant divergence between the mean value and a confidence limit, when the resources available for preparing estimates must be directed to the most significant categories of sources. Using uncertainty in the reports also allows a threshold value to be set for a key category of sources, if necessary, to reflect the 90 per cent uncertainty exactly. According to radiation safety norms, a background radiation level exceeding 33 µR/hour is considered dangerous. Using the Monte Carlo method, the most dangerous sites, and sites frequently used for disposal and utilization, were selected from the analyzed samples of polluted soil.

  15. Overview of the MCU Monte Carlo software package

    International Nuclear Information System (INIS)

    Kalugin, M.A.; Oleynik, D.S.; Shkarovsky, D.A.

    2013-01-01

MCU (Monte Carlo Universal) is a project on the development and practical use of a universal computer code for simulating particle transport (neutrons, photons, electrons, positrons) in three-dimensional systems by means of the Monte Carlo method. This paper provides information on the current state of the project. The developed libraries of constants are briefly described, and the capabilities of the MCU-5 package modules and the executable codes compiled from them are characterized. Examples of important reactor physics problems solved with the code are presented. It is shown that the MCU constructor tool can assemble a full-scale 3D model from templates describing single components, using a simple and intuitive graphical user interface. The templates are prepared by a skilled user and stored in the constructor's template library; an ordinary user works with the graphical user interface and does not deal with MCU input data directly. At present there are template libraries for several types of reactors.

  16. [Osteoarthritis from long-distance running?].

    Science.gov (United States)

    Hohmann, E; Wörtler, K; Imhoff, A

    2005-06-01

Long-distance running has become a fashionable recreational activity. This study investigated the effects of the external impact loading on bone and cartilage introduced by performing a marathon race. Seven beginners were compared to six experienced recreational long-distance runners and two professional athletes. All participants underwent magnetic resonance imaging of the hip and knee before and after a marathon run. Coronal T1-weighted and STIR sequences were used. The pre-run MRI served as a baseline investigation and monitored the training effect. All athletes demonstrated normal findings in the pre-run scan. All but one athlete in the beginner group demonstrated joint effusions after the race. The experienced and professional runners failed to demonstrate pathology in the post-run scans. Recreational and professional long-distance runners tolerate high impact forces well. Beginners demonstrate significant changes on the post-run scans; whether those findings are a result of inadequate training (mileage and duration) warrants further study. We conclude that adequate endurance training results in adaptation mechanisms that allow the athlete to compensate for the stresses introduced by long-distance running and does not predispose to the onset of osteoarthritis. Significant malalignment of the lower extremity may, however, cause increased focal loading of joint and cartilage.

  17. Running With an Elastic Lower Limb Exoskeleton.

    Science.gov (United States)

    Cherry, Michael S; Kota, Sridhar; Young, Aaron; Ferris, Daniel P

    2016-06-01

Although there have been many lower limb robotic exoskeletons that have been tested for human walking, few devices have been tested for assisting running. It is possible that a pseudo-passive elastic exoskeleton could benefit human running without the addition of electrical motors due to the spring-like behavior of the human leg. We developed an elastic lower limb exoskeleton that added stiffness in parallel with the entire lower limb. Six healthy, young subjects ran on a treadmill at 2.3 m/s with and without the exoskeleton. Although the exoskeleton was designed to provide ~50% of normal leg stiffness during running, it only provided 24% of leg stiffness during testing. The difference in added leg stiffness was primarily due to soft tissue compression and harness compliance decreasing exoskeleton displacement during stance. As a result, the exoskeleton only supported about 7% of the peak vertical ground reaction force. There was a significant increase in metabolic cost when running with the exoskeleton compared with running without the exoskeleton (ANOVA). The primary challenges for elastic exoskeletons for human running are human-machine interface compliance and the extra lower limb inertia from the exoskeleton.

  18. Metadata aided run selection at ATLAS

    International Nuclear Information System (INIS)

    Buckingham, R M; Gallas, E J; Tseng, J C-L; Viegas, F; Vinek, E

    2011-01-01

    Management of the large volume of data collected by any large scale scientific experiment requires the collection of coherent metadata quantities, which can be used by reconstruction or analysis programs and/or user interfaces, to pinpoint collections of data needed for specific purposes. In the ATLAS experiment at the LHC, we have collected metadata from systems storing non-event-wise data (Conditions) into a relational database. The Conditions metadata (COMA) database tables not only contain conditions known at the time of event recording, but also allow for the addition of conditions data collected as a result of later analysis of the data (such as improved measurements of beam conditions or assessments of data quality). A new web based interface called 'runBrowser' makes these Conditions Metadata available as a Run based selection service. runBrowser, based on PHP and JavaScript, uses jQuery to present selection criteria and report results. It not only facilitates data selection by conditions attributes, but also gives the user information at each stage about the relationship between the conditions chosen and the remaining conditions criteria available. When a set of COMA selections are complete, runBrowser produces a human readable report as well as an XML file in a standardized ATLAS format. This XML can be saved for later use or refinement in a future runBrowser session, shared with physics/detector groups, or used as input to ELSSI (event level Metadata browser) or other ATLAS run or event processing services.

  19. Successful vectorization - reactor physics Monte Carlo code

    International Nuclear Information System (INIS)

    Martin, W.R.

    1989-01-01

Most particle transport Monte Carlo codes in use today are based on the "history-based" algorithm, wherein one particle history at a time is simulated. Unfortunately, the "history-based" approach (present in all Monte Carlo codes until recent years) is inherently scalar and cannot be vectorized. In particular, the history-based algorithm cannot take advantage of vector architectures, which characterize the largest and fastest computers at the current time, vector supercomputers such as the Cray X/MP or IBM 3090/600. However, substantial progress has been made in recent years in developing and implementing a vectorized Monte Carlo algorithm. This algorithm follows portions of many particle histories at the same time and forms the basis for all successful vectorized Monte Carlo codes that are in use today. This paper describes the basic vectorized algorithm along with descriptions of several variations that have been developed by different researchers for specific applications. These applications have been mainly in the areas of neutron transport in nuclear reactor and shielding analysis and photon transport in fusion plasmas. The relative merits of the various approach schemes will be discussed and the present status of known vectorization efforts will be summarized along with available timing results, including results from the successful vectorization of 3-D general geometry, continuous energy Monte Carlo. (orig.)
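
    A minimal sketch of the event-based idea behind vectorized Monte Carlo: advance a whole batch of histories per step, instead of one history at a time. The slab geometry and the forward-only "scattering" are deliberate toy assumptions; the point is that every operation acts on an array of live particles at once.

```python
import numpy as np

def slab_transmission(n, mu_total, mu_abs, thickness, seed=0):
    """Event-based (vectorized) Monte Carlo: advance a whole batch of histories per step."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n)                    # current depth of every particle in the slab
    alive = np.ones(n, dtype=bool)
    transmitted = 0
    while alive.any():
        idx = np.flatnonzero(alive)
        # one free-flight sample for every live history at once, not one at a time
        x[idx] += rng.exponential(1.0 / mu_total, idx.size)
        escaped = x[idx] > thickness
        transmitted += escaped.sum()
        alive[idx[escaped]] = False
        inside = idx[~escaped]         # collision: absorbed or (forward-only) scattered
        absorbed = rng.random(inside.size) < mu_abs / mu_total
        alive[inside[absorbed]] = False
    return transmitted / n

# with forward-only scattering the analytic answer is exp(-mu_abs * thickness) ~ 0.368
print(slab_transmission(100_000, mu_total=1.0, mu_abs=0.5, thickness=2.0))
```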

  20. Evolution of ATLAS conditions data and its management for LHC Run-2

    CERN Document Server

    Boehler, Michael; Formica, Andrea; Gallas, Elizabeth; Radescu, Voica

    2015-01-01

    The ATLAS detector at the LHC consists of several sub-detector systems. Both data taking and Monte Carlo (MC) simulation rely on an accurate description of the detector conditions from every subsystem, such as calibration constants, different scenarios of pile-up and noise conditions, size and position of the beam spot, etc. In order to guarantee database availability for critical online applications during data-taking, two database systems, one for online access and another one for all other database access have been implemented. The long shutdown period has provided the opportunity to review and improve the Run-1 system: revise workflows, include new and innovative monitoring and maintenance tools and implement a new database instance for Run-2 conditions data. The detector conditions are organized by tag identification strings and managed independently from the different sub-detector experts. The individual tags are then collected and associated into a global conditions tag, assuring synchronization of var...

  1. Solution preparation

    International Nuclear Information System (INIS)

    Seitz, M.G.

    1982-01-01

    Reviewed in this statement are methods of preparing solutions to be used in laboratory experiments to examine technical issues related to the safe disposal of nuclear waste from power generation. Each approach currently used to prepare solutions has advantages and any one approach may be preferred over the others in particular situations, depending upon the goals of the experimental program. These advantages are highlighted herein for three approaches to solution preparation that are currently used most in studies of nuclear waste disposal. Discussion of the disadvantages of each approach is presented to help a user select a preparation method for his particular studies. Also presented in this statement are general observations regarding solution preparation. These observations are used as examples of the types of concerns that need to be addressed regarding solution preparation. As shown by these examples, prior to experimentation or chemical analyses, laboratory techniques based on scientific knowledge of solutions can be applied to solutions, often resulting in great improvement in the usefulness of results

  2. GRS' research on clay rock in the Mont Terri underground laboratory

    Energy Technology Data Exchange (ETDEWEB)

    Wieczorek, Klaus; Czaikowski, Oliver [Gesellschaft fuer Anlagen- und Reaktorsicherheit gGmbH, Braunschweig (Germany)

    2016-07-15

    For constructing a nuclear waste repository and for ensuring the safety requirements are met over very long time periods, thorough knowledge about the safety-relevant processes occurring in the coupled system of waste containers, engineered barriers, and the host rock is indispensable. For respectively targeted research work, the Mont Terri rock laboratory is a unique facility where repository research is performed in a clay rock environment. It is run by 16 international partners, and a great variety of questions are investigated. Some of the work which GRS as one of the Mont Terri partners is involved in is presented in this article. The focus is on thermal, hydraulic and mechanical behaviour of host rock and/or engineered barriers.

  3. Milagro Version 2 An Implicit Monte Carlo Code for Thermal Radiative Transfer: Capabilities, Development, and Usage

    Energy Technology Data Exchange (ETDEWEB)

    T.J. Urbatsch; T.M. Evans

    2006-02-15

We have released Version 2 of Milagro, an object-oriented, C++ code that performs radiative transfer using Fleck and Cummings' Implicit Monte Carlo method. Milagro, a part of the Jayenne program, is a stand-alone driver code used as a methods research vehicle and to verify its underlying classes. These underlying classes are used to construct Implicit Monte Carlo packages for external customers. Milagro-2 represents a design overhaul that allows better parallelism and extensibility. New features in Milagro-2 include verified momentum deposition, restart capability, graphics capability, exact energy conservation, and improved load balancing and parallel efficiency. A users' guide also describes how to configure, make, and run Milagro-2.

  4. Track 4: basic nuclear science variance reduction for Monte Carlo criticality simulations. 6. Variational Variance Reduction for Monte Carlo Criticality Calculations

    International Nuclear Information System (INIS)

    Densmore, Jeffery D.; Larsen, Edward W.

    2001-01-01

We present results from a class of criticality calculations. These problems consist of alternating arrays of fuel and moderator regions, each region being 3.0 cm thick. Forward Monte Carlo calculations were run with (a) traditional Monte Carlo using a track-length estimate of k and survival biasing (SB); (b) the new VVR method without the linear spatial term (VVR1); (c) the new VVR method without the linear spatial term, but with SB (VVR1/SB); (d) the new VVR method with the linear spatial term (VVR2); and (e) the new VVR method with the linear spatial term and with SB (VVR2/SB). The traditional Monte Carlo calculation was performed with SB since this resulted in a higher figure of merit (FOM) than using analog Monte Carlo. We performed the adjoint calculation using a finite-difference diffusion code with a fine-mesh size of Δx = 0.1 cm. The time required to perform the deterministic adjoint calculation was much less than the time required for the Monte Carlo calculation and evaluation of the variational functional, and is not included in the FOM. For each problem, the new VVR method outperforms the traditional Monte Carlo method, and the VVR method with the linear spatial term performs slightly better. For the largest problem, the two VVR methods without survival biasing outperformed the traditional Monte Carlo method by a factor of 36. We note that the use of SB decreases the efficiency of the VVR method. This decrease in FOM is due to the extra cost per history of the VVR method and the longer history length incurred by using SB. However, the new VVR method still outperforms the traditional Monte Carlo calculation even when (non-optimally) used with SB. In conclusion, we have developed a new VVR method for Monte Carlo criticality calculations. This method employs (a) a variational functional that is more accurate than the standard direct functional, (b) a representation of the deterministically obtained adjoint flux that is especially accurate for optically thick problems with
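
    The figure of merit used throughout such comparisons is FOM = 1/(R² T), where R is the relative error of the tally and T the computing time; a variance reduction scheme only pays off if it lowers R faster than it raises T. A trivial helper makes the bookkeeping explicit (the tally numbers below are hypothetical, not from the paper).

```python
import numpy as np

def figure_of_merit(sample_mean, sample_err, cpu_seconds):
    """FOM = 1 / (R^2 * T): halving the relative error at fixed time quadruples the FOM."""
    rel_err = sample_err / sample_mean
    return 1.0 / (rel_err**2 * cpu_seconds)

# toy comparison: a variance-reduced run vs an analog run of the same problem
print(figure_of_merit(1.002, 0.004, 120.0))   # hypothetical variance-reduced tally
print(figure_of_merit(0.998, 0.020, 100.0))   # hypothetical analog tally
```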

  5. Study of Gamma spectra by Monte Carlo simulation

    International Nuclear Information System (INIS)

    Cantaragiu, A.; Gheorghies, A.; Borcia, C.

    2008-01-01

The purpose of this paper is to obtain gamma-ray spectra from a scintillation detector by applying the Monte Carlo statistical simulation method, using the EGS4 program. The Monte Carlo algorithm implies that the physical system is described by a probability density function from which random numbers can be generated, and the result is taken as an average of the numbers observed. The EGS4 program allows the simulation of the following physical processes: the photoelectric effect, the Compton effect, electron-positron pair production and Rayleigh scattering. The gamma rays recorded by the detector are converted into electrical pulses, and the gamma-ray spectra are acquired and processed by means of the Nomad Plus portable spectrometer connected to a computer. 137Cs and 60Co are used as gamma-ray sources; their spectra are recorded and used to study the interaction of gamma radiation with the scintillation detector. The parameters varied during the acquisition of the gamma-ray spectra are the distance between source and detector and the measuring time. Due to the statistical processes in the detector, each peak looks like a Gaussian distribution. The gamma-ray energies are identified from the peaks of the experimental spectra, which provide the position, width and area of each peak. By means of the EGS4 program, a simulation is run using these parameters and an 'ideal' spectrum is obtained: a spectrum which is not influenced by the statistical processes taking place inside the detector. The convolution of the spectra is then performed with a normalised Gaussian function. There is a close match between the experimental results and those simulated with the EGS4 program, because the interactions occurring during the simulation have a statistical behaviour close to the real one. (authors)
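
    The convolution step described at the end of the abstract can be sketched as follows. The constant detector resolution (a real scintillator's FWHM varies with energy) and the single idealised 137Cs photopeak are simplifying assumptions for the example.

```python
import numpy as np

def broaden(energies, ideal_counts, fwhm):
    """Convolve an 'ideal' simulated spectrum with a normalised Gaussian detector response."""
    sigma = fwhm / 2.3548                       # FWHM -> standard deviation
    out = np.zeros_like(ideal_counts, dtype=float)
    for e0, c in zip(energies, ideal_counts):
        if c == 0:
            continue
        g = np.exp(-0.5 * ((energies - e0) / sigma) ** 2)
        out += c * g / g.sum()                  # spread each bin's counts, conserving area
    return out

e = np.linspace(0, 800, 801)                    # energy axis in keV, 1 keV bins
ideal = np.zeros_like(e)
ideal[662] = 1e4                                # idealised 137Cs photopeak at 662 keV
measured_like = broaden(e, ideal, fwhm=50.0)
print(f"broadened peak height: {measured_like.max():.0f} counts")
```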

  6. Monte Carlo dosimetry for synchrotron stereotactic radiotherapy of brain tumours

    International Nuclear Information System (INIS)

    Boudou, Caroline; Balosso, Jacques; Esteve, Francois; Elleaume, Helene

    2005-01-01

A radiation dose enhancement can be obtained in brain tumours after infusion of an iodinated contrast agent and irradiation with kilovoltage x-rays in tomography mode. The aim of this study was to assess dosimetric properties of the synchrotron stereotactic radiotherapy technique applied to humans (SSR) in preparation for clinical trials. We designed an interface for dose computation based on a Monte Carlo code (MCNPX). A patient head was constructed from computed tomography (CT) data and a tumour volume was modelled. Dose distributions were calculated in the SSR configuration for various beam energies and iodine contents in the target volume. From the calculations, it appears that the iodine-filled target (10 mg ml⁻¹) can be efficiently irradiated by a monochromatic beam with energy ranging from 50 to 85 keV. This paper demonstrates the feasibility of stereotactic radiotherapy for treating deep-seated brain tumours with monoenergetic x-rays from a synchrotron.

  7. Monte Carlo strategies in scientific computing

    CERN Document Server

    Liu, Jun S

    2008-01-01

This paperback edition is a reprint of the 2001 Springer edition. This book provides a self-contained and up-to-date treatment of the Monte Carlo method and develops a common framework under which various Monte Carlo techniques can be "standardized" and compared. Given the interdisciplinary nature of the topics and a moderate prerequisite for the reader, this book should be of interest to a broad audience of quantitative researchers such as computational biologists, computer scientists, econometricians, engineers, probabilists, and statisticians. It can also be used as the textbook for a graduate-level course on Monte Carlo methods. Many problems discussed in the later chapters can be potential thesis topics for masters' or PhD students in statistics or computer science departments. Jun Liu is Professor of Statistics at Harvard University, with a courtesy Professor appointment at the Harvard Biostatistics Department. Professor Liu was the recipient of the 2002 COPSS Presidents' Award, the most prestigious one for sta...

  8. Random Numbers and Monte Carlo Methods

    Science.gov (United States)

    Scherer, Philipp O. J.

    Many-body problems often involve the calculation of integrals of very high dimension which cannot be treated by standard methods. For the calculation of thermodynamic averages Monte Carlo methods are very useful which sample the integration volume at randomly chosen points. After summarizing some basic statistics, we discuss algorithms for the generation of pseudo-random numbers with given probability distribution which are essential for all Monte Carlo methods. We show how the efficiency of Monte Carlo integration can be improved by sampling preferentially the important configurations. Finally the famous Metropolis algorithm is applied to classical many-particle systems. Computer experiments visualize the central limit theorem and apply the Metropolis method to the traveling salesman problem.
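
    A compact illustration of the importance-sampling point made above (the integrand and sampling densities are chosen for the example): both estimators below target the same integral, but drawing points from a density proportional to the integrand's dominant factor removes most of the variance.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100_000

# Target: the integral of exp(-x) * cos(x)^2 over [0, inf), which equals 0.6
f = lambda x: np.exp(-x) * np.cos(x) ** 2

# Plain MC on a truncated uniform range (the tail beyond 10 contributes ~exp(-10))
x_uni = rng.uniform(0.0, 10.0, n)
plain = 10.0 * f(x_uni).mean()                   # interval width times average of f

# Importance sampling: draw x ~ exp(-x), the dominant factor of the integrand
x_imp = rng.exponential(1.0, n)
importance = (f(x_imp) / np.exp(-x_imp)).mean()  # average of f/p, i.e. of cos(x)^2

print(plain, importance)                         # same answer, far smaller variance
```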

  9. Off-diagonal expansion quantum Monte Carlo.

    Science.gov (United States)

    Albash, Tameem; Wagenbreth, Gene; Hen, Itay

    2017-12-01

    We propose a Monte Carlo algorithm designed to simulate quantum as well as classical systems at equilibrium, bridging the algorithmic gap between quantum and classical thermal simulation algorithms. The method is based on a decomposition of the quantum partition function that can be viewed as a series expansion about its classical part. We argue that the algorithm not only provides a theoretical advancement in the field of quantum Monte Carlo simulations, but is optimally suited to tackle quantum many-body systems that exhibit a range of behaviors from "fully quantum" to "fully classical," in contrast to many existing methods. We demonstrate the advantages, sometimes by orders of magnitude, of the technique by comparing it against existing state-of-the-art schemes such as path integral quantum Monte Carlo and stochastic series expansion. We also illustrate how our method allows for the unification of quantum and classical thermal parallel tempering techniques into a single algorithm and discuss its practical significance.

  10. Reflections on early Monte Carlo calculations

    International Nuclear Information System (INIS)

    Spanier, J.

    1992-01-01

    Monte Carlo methods for solving various particle transport problems developed in parallel with the evolution of increasingly sophisticated computer programs implementing diffusion theory and low-order moments calculations. In these early years, Monte Carlo calculations and high-order approximations to the transport equation were seen as too expensive to use routinely for nuclear design but served as invaluable aids and supplements to design with less expensive tools. The earliest Monte Carlo programs were quite literal; i.e., neutron and other particle random walk histories were simulated by sampling from the probability laws inherent in the physical system without distortion. Use of such analogue sampling schemes resulted in a good deal of time being spent in examining the possibility of lowering the statistical uncertainties in the sample estimates by replacing simple, and intuitively obvious, random variables by those with identical means but lower variances

  11. Monte Carlo simulation of Markov unreliability models

    International Nuclear Information System (INIS)

    Lewis, E.E.; Boehm, F.

    1984-01-01

    A Monte Carlo method is formulated for the evaluation of the unreliability of complex systems with known component failure and repair rates. The formulation is in terms of a Markov process, allowing dependences between components to be modeled and computational efficiencies to be achieved in the Monte Carlo simulation. Two variance reduction techniques, forced transition and failure biasing, are employed to increase the computational efficiency of the random walk procedure. For an example problem these result in improved computational efficiency by more than three orders of magnitude over analog Monte Carlo. The method is generalized to treat problems with distributed failure and repair rate data, and a batching technique is introduced and shown to result in substantial increases in computational efficiency for an example problem. A method for separating the variance due to the data uncertainty from that due to the finite number of random walks is presented. (orig.)
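
    A minimal analog sketch of such a simulation, assuming a hypothetical system of two repairable components in parallel with invented failure and repair rates; the forced-transition and failure-biasing schemes described above would replace the plain exponential sampling below with biased, weighted sampling.

      import math
      import random

      def unreliability(lam, mu, t_max, n_hist=100_000):
          # Analog MC estimate of the probability that two parallel repairable
          # components (failure rate lam, repair rate mu) are ever both failed
          # before t_max.
          failures = 0
          for _ in range(n_hist):
              t, up = 0.0, [True, True]
              while t < t_max:
                  # Competing exponential clocks: working components fail, failed ones repair.
                  rates = [lam if u else mu for u in up]
                  total = sum(rates)
                  t += -math.log(1.0 - random.random()) / total
                  if t >= t_max:
                      break
                  # Pick the transitioning component proportionally to its rate.
                  i = 0 if random.random() < rates[0] / total else 1
                  up[i] = not up[i]
                  if not any(up):  # both components down: system failure
                      failures += 1
                      break
          return failures / n_hist

      print(unreliability(lam=1e-3, mu=1e-1, t_max=1000.0))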

  12. Shell model the Monte Carlo way

    International Nuclear Information System (INIS)

    Ormand, W.E.

    1995-01-01

    The formalism for the auxiliary-field Monte Carlo approach to the nuclear shell model is presented. The method is based on a linearization of the two-body part of the Hamiltonian in an imaginary-time propagator using the Hubbard-Stratonovich transformation. The foundation of the method, as applied to the nuclear many-body problem, is discussed. Topics presented in detail include: (1) the density-density formulation of the method, (2) computation of the overlaps, (3) the sign of the Monte Carlo weight function, (4) techniques for performing Monte Carlo sampling, and (5) the reconstruction of response functions from an imaginary-time auto-correlation function using MaxEnt techniques. Results obtained using schematic interactions, which have no sign problem, are presented to demonstrate the feasibility of the method, while an extrapolation method for realistic Hamiltonians is presented. In addition, applications at finite temperature are outlined

  13. Shell model the Monte Carlo way

    Energy Technology Data Exchange (ETDEWEB)

    Ormand, W.E.

    1995-03-01

    The formalism for the auxiliary-field Monte Carlo approach to the nuclear shell model is presented. The method is based on a linearization of the two-body part of the Hamiltonian in an imaginary-time propagator using the Hubbard-Stratonovich transformation. The foundation of the method, as applied to the nuclear many-body problem, is discussed. Topics presented in detail include: (1) the density-density formulation of the method, (2) computation of the overlaps, (3) the sign of the Monte Carlo weight function, (4) techniques for performing Monte Carlo sampling, and (5) the reconstruction of response functions from an imaginary-time auto-correlation function using MaxEnt techniques. Results obtained using schematic interactions, which have no sign problem, are presented to demonstrate the feasibility of the method, while an extrapolation method for realistic Hamiltonians is presented. In addition, applications at finite temperature are outlined.

  14. SPQR: a Monte Carlo reactor kinetics code

    International Nuclear Information System (INIS)

    Cramer, S.N.; Dodds, H.L.

    1980-02-01

    The SPQR Monte Carlo code has been developed to analyze fast reactor core accident problems where conventional methods are considered inadequate. The code is based on the adiabatic approximation of the quasi-static method. This initial version contains no automatic material motion or feedback. An existing Monte Carlo code is used to calculate the shape functions and the integral quantities needed in the kinetics module. Several sample problems have been devised and analyzed. Due to the large statistical uncertainty associated with the calculation of reactivity in accident simulations, the results, especially at later times, differ greatly from deterministic methods. It was also found that in large uncoupled systems, the Monte Carlo method has difficulty in handling asymmetric perturbations

  15. Modelling of an industrial environment, part 1.: Monte Carlo simulations of photon transport

    International Nuclear Information System (INIS)

    Kis, Z.; Eged, K.; Meckbach, R.; Voigt, G.

    2002-01-01

    After a nuclear accident releasing radioactive material into the environment, external exposures may contribute significantly to the radiation exposure of the population (UNSCEAR 1988, 2000). For urban populations the external gamma exposure from radionuclides deposited on the surfaces of the urban-industrial environment yields the dominant contribution to the total dose to the public (Kelly 1987; Jacob and Meckbach 1990). The radiation field is naturally influenced by the environment around the sources. For calculations of the shielding effect of the structures in complex and realistic urban environments, Monte Carlo methods have turned out to be useful tools (Jacob and Meckbach 1987; Meckbach et al. 1988). Using these methods a complex environment can be set up in which the photon transport can be solved in a reliable way. The accuracy of the methods is in principle limited only by the knowledge of the atomic cross sections and by the computational time. Several papers using Monte Carlo results for calculating doses from external gamma exposures have been published (Jacob and Meckbach 1987, 1990; Meckbach et al. 1988; Rochedo et al. 1996). In these papers the Monte Carlo simulations were run in urban environments and for different photon energies. An industrial environment can be defined as an area where productive and/or commercial activity is carried out; a factory or a supermarket is a good example. Industrial environments can differ considerably from urban ones in the types, structures and dimensions of their buildings. These variations will affect the radiation field of such an environment. Hence there is a need to run new Monte Carlo simulations designed specifically for industrial environments

  16. Current and future applications of Monte Carlo

    International Nuclear Information System (INIS)

    Zaidi, H.

    2003-01-01

    Full text: The use of radionuclides in medicine has a long history and encompasses a large area of applications including diagnosis and radiation treatment of cancer patients using either external or radionuclide radiotherapy. The 'Monte Carlo method' describes a very broad area of science, in which many processes, physical systems, and phenomena are simulated by statistical methods employing random numbers. The general idea of Monte Carlo analysis is to create a model which is as similar as possible to the real physical system of interest, and to create interactions within that system based on known probabilities of occurrence, with random sampling of the probability density functions (pdfs). As the number of individual events (called 'histories') is increased, the quality of the reported average behavior of the system improves, meaning that the statistical uncertainty decreases. The use of the Monte Carlo method to simulate radiation transport has become the most accurate means of predicting absorbed dose distributions and other quantities of interest in the radiation treatment of cancer patients using either external or radionuclide radiotherapy. The same trend has occurred for the estimation of the absorbed dose in diagnostic procedures using radionuclides, as well as for the assessment of image quality and quantitative accuracy of radionuclide imaging. As a consequence of this generalized use, many questions are being raised, primarily about the need and potential of Monte Carlo techniques, but also about how accurate they really are, what it would take to apply them clinically, and what it would take to make them widely available to the nuclear medicine community at large. Many of these questions will be answered when Monte Carlo techniques are implemented and used for more routine calculations and for in-depth investigations. In this paper, the conceptual role of the Monte Carlo method is briefly introduced and followed by a survey of its different applications in diagnostic and therapeutic...

  17. Continuous-time quantum Monte Carlo impurity solvers

    Science.gov (United States)

    Gull, Emanuel; Werner, Philipp; Fuchs, Sebastian; Surer, Brigitte; Pruschke, Thomas; Troyer, Matthias

    2011-04-01

    representations of quantum dots and molecular conductors and play an increasingly important role in the theory of "correlated electron" materials as auxiliary problems whose solution gives the "dynamical mean field" approximation to the self-energy and local correlation functions. Solution method: Quantum impurity models require a method of solution which provides access to both high and low energy scales and is effective for wide classes of physically realistic models. The continuous-time quantum Monte Carlo algorithms for which we present implementations here meet this challenge. Continuous-time quantum impurity methods are based on partition function expansions of quantum impurity models that are stochastically sampled to all orders using diagrammatic quantum Monte Carlo techniques. For a review of quantum impurity models and their applications and of continuous-time quantum Monte Carlo methods for impurity models we refer the reader to [2]. Additional comments: Use of dmft requires citation of this paper. Use of any ALPS program requires citation of the ALPS [1] paper. Running time: 60 s-8 h per iteration.

  18. OGRE, Monte-Carlo System for Gamma Transport Problems

    International Nuclear Information System (INIS)

    1984-01-01

    1 - Nature of physical problem solved: The OGRE programme system was designed to calculate, by Monte Carlo methods, any quantity related to gamma-ray transport. The system is represented by two examples - OGRE-P1 and OGRE-G. The OGRE-P1 programme is a simple prototype which calculates dose rate on one side of a slab due to a plane source on the other side. The OGRE-G programme, a prototype of a programme utilizing a general-geometry routine, calculates dose rate at arbitrary points. A very general source description in OGRE-G may be employed by reading a tape prepared by the user. 2 - Method of solution: Case histories of gamma rays in the prescribed geometry are generated and analyzed to produce averages of any desired quantity which, in the case of the prototypes, are gamma-ray dose rates. The system is designed to achieve generality by ease of modification. No importance sampling is built into the prototypes, a very general geometry subroutine permits the treatment of complicated geometries. This is essentially the same routine used in the O5R neutron transport system. Boundaries may be either planes or quadratic surfaces, arbitrarily oriented and intersecting in arbitrary fashion. Cross section data is prepared by the auxiliary master cross section programme XSECT which may be used to originate, update, or edit the master cross section tape. The master cross section tape is utilized in the OGRE programmes to produce detailed tables of macroscopic cross sections which are used during the Monte Carlo calculations. 3 - Restrictions on the complexity of the problem: Maximum cross-section array information may be estimated by a given formula for a specific problem. The number of regions must be less than or equal to 50

  19. The ATLAS Tau Trigger Performance during LHC Run 1 and Prospects for Run 2

    CERN Document Server

    Mitani, T; The ATLAS collaboration

    2016-01-01

    The ATLAS tau trigger is designed to select hadronic decays of tau leptons. The tau lepton plays an important role in Standard Model (SM) physics, such as in Higgs boson decays. It is also important in beyond-the-SM (BSM) scenarios, such as supersymmetry and exotic particles, as taus are often produced preferentially in these models. During the 2010-2012 LHC run (Run 1), the tau trigger operated successfully, leading to several rewarding results such as evidence for $H\\rightarrow \\tau\\tau$. From the 2015 LHC run (Run 2), the LHC will be upgraded and overlapping interactions per bunch crossing (pile-up) are expected to increase by a factor of two. It will be challenging to control trigger rates while keeping interesting physics events. This paper summarizes the tau trigger performance in Run 1 and its prospects for Run 2.

  20. Monte Carlo simulation applied to alpha spectrometry

    International Nuclear Information System (INIS)

    Baccouche, S.; Gharbi, F.; Trabelsi, A.

    2007-01-01

    Alpha particle spectrometry is a widely used analytical method, in particular when dealing with pure alpha-emitting radionuclides. Monte Carlo simulation is an adequate tool to investigate the influence of various phenomena on this analytical method. We performed an investigation of those phenomena using the CERN simulation code GEANT. The results concerning the geometrical detection efficiency in different measurement geometries agree with analytical calculations. This work confirms that Monte Carlo simulation of the solid angle of detection is a very useful tool for determining the detection efficiency with very good accuracy.
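
    A sketch of the geometrical-efficiency check described above, assuming a hypothetical point source on the axis of a circular detector; the Monte Carlo estimate is compared with the closed-form solid-angle fraction.

      import math
      import random

      def geometric_efficiency(distance, radius, n=200_000):
          # Fraction of isotropically emitted particles that hit a circular
          # detector of the given radius, centred on the source axis.
          hits = 0
          for _ in range(n):
              cos_t = 2.0 * random.random() - 1.0  # isotropic: cos(theta) uniform
              if cos_t <= 0.0:
                  continue  # emitted away from the detector plane
              sin_t = math.sqrt(1.0 - cos_t * cos_t)
              if distance * sin_t / cos_t <= radius:  # crossing point inside the disc
                  hits += 1
          return hits / n

      d, r = 5.0, 2.0
      print(geometric_efficiency(d, r))
      print(0.5 * (1.0 - d / math.sqrt(d * d + r * r)))  # analytical solid angle / 4*pi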

  1. Simplified monte carlo simulation for Beijing spectrometer

    International Nuclear Information System (INIS)

    Wang Taijie; Wang Shuqin; Yan Wuguang; Huang Yinzhi; Huang Deqiang; Lang Pengfei

    1986-01-01

    A Monte Carlo method based on parametrizing the performance of the detectors and transforming the values of kinematical variables into ''measured'' ones by means of smearing has been used to program a Monte Carlo simulation of the performance of the Beijing Spectrometer (BES), written in FORTRAN and named BESMC. It can be used to investigate the multiplicity, the particle types, and the four-momentum distributions of the final states of electron-positron collisions, as well as the response of the BES to these final states. It thus provides a means to examine whether the overall design of the BES is reasonable and to decide the physics topics of the BES

  2. Self-learning Monte Carlo (dynamical biasing)

    International Nuclear Information System (INIS)

    Matthes, W.

    1981-01-01

    In many applications the histories of a normal Monte Carlo game rarely reach the target region. An approximate knowledge of the importance (with respect to the target) may be used to guide the particles more frequently into the target region. A Monte Carlo method is presented in which each history contributes to update the importance field such that eventually most target histories are sampled. It is a self-learning method in the sense that the procedure itself: (a) learns which histories are important (reach the target) and increases their probability; (b) reduces the probabilities of unimportant histories; (c) concentrates gradually on the more important target histories. (U.K.)

  3. Burnup calculations using Monte Carlo method

    International Nuclear Information System (INIS)

    Ghosh, Biplab; Degweker, S.B.

    2009-01-01

    In recent years, interest in burnup calculations using Monte Carlo methods has gained momentum. Previous burnup codes have used multigroup transport theory based calculations followed by diffusion theory based core calculations for the neutronic portion of the codes. The transport theory methods invariably make approximations with regard to the treatment of the energy and angle variables involved in scattering, besides approximations related to geometry simplification. Cell homogenisation to produce diffusion theory parameters adds to these approximations. Moreover, while diffusion theory works for most reactors, it does not produce accurate results in systems that have strong gradients, strong absorbers or large voids. Also, diffusion theory codes are geometry limited (rectangular, hexagonal, cylindrical, and spherical coordinates). Monte Carlo methods are ideal for solving very heterogeneous reactors and/or lattices/assemblies in which considerable burnable poisons are used. The key feature of this approach is that Monte Carlo methods permit essentially 'exact' modeling of all geometrical detail, without resort to energy and spatial homogenization of neutron cross sections. The Monte Carlo method would also be better suited for Accelerator Driven Systems (ADS), which can have strong gradients due to the external source and a sub-critical assembly. To meet the demand for an accurate burnup code, we have developed a Monte Carlo burnup calculation code system in which a Monte Carlo neutron transport code is coupled with a versatile code (McBurn) for calculating the buildup and decay of nuclides in nuclear materials. McBurn has been developed from scratch by the authors. In this article we discuss our effort in developing the continuous energy Monte Carlo burnup code, McBurn. McBurn is intended for entire reactor cores as well as for unit cells and assemblies. Generally, McBurn can do burnup of any geometrical system which can be handled by the underlying Monte Carlo transport code
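
    As a toy illustration of the buildup-and-decay bookkeeping that a code like McBurn couples to the transport solution (the data and names below are invented, and a forward-Euler step stands in for the production-grade solvers), consider a two-nuclide chain under a fixed one-group flux:

      # Hypothetical one-group data: nuclide A captures to B, B decays.
      SIGMA_A = 1e-24    # capture cross section of A (cm^2)
      LAMBDA_B = 1e-6    # decay constant of B (1/s)
      FLUX = 1e14        # one-group flux from the transport step (n/cm^2/s)

      def burnup_step(n_a, n_b, dt, substeps=1000):
          # Forward-Euler integration of dN_A/dt = -sigma*phi*N_A and
          # dN_B/dt = +sigma*phi*N_A - lambda*N_B over one depletion step.
          h = dt / substeps
          for _ in range(substeps):
              r = SIGMA_A * FLUX * n_a
              n_a += h * (-r)
              n_b += h * (r - LAMBDA_B * n_b)
          return n_a, n_b

      print(burnup_step(1e22, 0.0, dt=30 * 24 * 3600.0))  # one month at constant flux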

  4. Improvements for Monte Carlo burnup calculation

    Energy Technology Data Exchange (ETDEWEB)

    Shenglong, Q.; Dong, Y.; Danrong, S.; Wei, L., E-mail: qiangshenglong@tsinghua.org.cn, E-mail: d.yao@npic.ac.cn, E-mail: songdr@npic.ac.cn, E-mail: luwei@npic.ac.cn [Nuclear Power Inst. of China, Cheng Du, Si Chuan (China)

    2015-07-01

    Monte Carlo burnup calculation is a development trend in reactor physics, and much work remains to be done for engineering applications. Based on the Monte Carlo burnup code MOI, non-fuel burnup calculation methods and critical search suggestions are presented in this paper. For non-fuel burnup, a mixed burnup mode will improve the accuracy and efficiency of the burnup calculation. For the critical search of the control rod position, a new method called ABN, based on the ABA method used by MC21, is proposed for the first time in this paper. (author)

  5. A keff calculation method by Monte Carlo

    International Nuclear Information System (INIS)

    Shen, H; Wang, K.

    2008-01-01

    The effective multiplication factor (k_eff) is defined as the ratio between the numbers of neutrons in successive generations, a definition adopted by most Monte Carlo codes (e.g. MCNP). Alternatively, it can be thought of as the ratio of the neutron generation rate to the sum of the leakage rate and the absorption rate, which should exclude the effect of neutron reactions such as (n,2n) and (n,3n). This article discusses a Monte Carlo method for k_eff calculation based on the second definition. A new code has been developed and the results are presented. (author)
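
    A sketch of the second definition in a bare one-group slab with invented cross sections: every history ends in either leakage or absorption, and k_eff is tallied as production over (leakage + absorption).

      import math
      import random

      # Hypothetical one-group data for a bare slab of half-thickness H (cm).
      SIG_TOT, SIG_ABS, SIG_FIS, NU, H = 0.30, 0.10, 0.05, 2.5, 20.0

      def keff_estimate(n_hist=200_000):
          production = leakage = absorption = 0.0
          for _ in range(n_hist):
              x, mu = 0.0, 2.0 * random.random() - 1.0  # born at the centre, isotropic
              while True:
                  x += mu * (-math.log(1.0 - random.random()) / SIG_TOT)
                  if abs(x) > H:
                      leakage += 1.0
                      break
                  if random.random() < SIG_ABS / SIG_TOT:  # absorbed (possibly fission)
                      absorption += 1.0
                      if random.random() < SIG_FIS / SIG_ABS:
                          production += NU
                      break
                  mu = 2.0 * random.random() - 1.0  # isotropic scatter
          return production / (leakage + absorption)

      print(keff_estimate())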

  6. Monte Carlo electron/photon transport

    International Nuclear Information System (INIS)

    Mack, J.M.; Morel, J.E.; Hughes, H.G.

    1985-01-01

    A review of nonplasma coupled electron/photon transport using the Monte Carlo method is presented. Remarks are mainly restricted to linearized formalisms at electron energies from 1 keV to 1000 MeV. Applications involving pulse-height estimation, transport in external magnetic fields, and optical Cerenkov production are discussed to underscore the importance of this branch of computational physics. Advances in electron multigroup cross-section generation are reported, and their impact on future code development is assessed. Progress toward the transformation of MCNP into a generalized neutral/charged-particle Monte Carlo code is described. 48 refs

  7. Monte Carlo simulation of neutron scattering instruments

    International Nuclear Information System (INIS)

    Seeger, P.A.

    1995-01-01

    A library of Monte Carlo subroutines has been developed for the purpose of design of neutron scattering instruments. Using small-angle scattering as an example, the philosophy and structure of the library are described and the programs are used to compare instruments at continuous wave (CW) and long-pulse spallation source (LPSS) neutron facilities. The Monte Carlo results give a count-rate gain of a factor between 2 and 4 using time-of-flight analysis. This is comparable to scaling arguments based on the ratio of wavelength bandwidth to resolution width

  8. Monte Carlo applications to radiation shielding problems

    International Nuclear Information System (INIS)

    Subbaiah, K.V.

    2009-01-01

    Monte Carlo methods are a class of computational algorithms that rely on repeated random sampling of physical and mathematical systems to compute their results. However, the basic concepts of MC are both simple and straightforward and can be learned by using a personal computer. Monte Carlo methods require large amounts of random numbers, and it was their use that spurred the development of pseudorandom number generators, which were far quicker to use than the tables of random numbers which had previously been used for statistical sampling. In Monte Carlo simulation of radiation transport, the history (track) of a particle is viewed as a random sequence of free flights that end with an interaction event where the particle changes its direction of movement, loses energy and, occasionally, produces secondary particles. The Monte Carlo simulation of a given experimental arrangement (e.g., an electron beam coming from an accelerator and impinging on a water phantom) consists of the numerical generation of random histories. To simulate these histories we need an interaction model, i.e., a set of differential cross sections (DCS) for the relevant interaction mechanisms. The DCSs determine the probability distribution functions (pdf) of the random variables that characterize a track: 1) the free path between successive interaction events, 2) the type of interaction taking place and 3) the energy loss and angular deflection in a particular event (and the initial state of emitted secondary particles, if any). Once these pdfs are known, random histories can be generated by using appropriate sampling methods. If the number of generated histories is large enough, quantitative information on the transport process may be obtained by simply averaging over the simulated histories. The Monte Carlo method yields the same information as the solution of the Boltzmann transport equation, with the same interaction model, but is easier to implement. In particular, the simulation of radiation...
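
    The free-flight/interaction kernel described above fits in a few lines of Python; the one-group cross sections are invented for illustration and the geometry is reduced to a single depth coordinate.

      import math
      import random

      # Hypothetical one-group macroscopic cross sections (1/cm).
      SIGMA = {"scatter": 0.3, "absorb": 0.1}
      SIGMA_TOT = sum(SIGMA.values())

      def free_path():
          # Distance to the next collision: exponential with mean 1/SIGMA_TOT.
          return -math.log(1.0 - random.random()) / SIGMA_TOT

      def interaction():
          # Choose the event type with probability proportional to its cross section.
          xi = random.random() * SIGMA_TOT
          for kind, sigma in SIGMA.items():
              xi -= sigma
              if xi < 0.0:
                  break
          return kind

      # One history in an infinite medium: walk until absorbed.
      depth, n_collisions = 0.0, 0
      while True:
          depth += free_path()  # a real code would also track the flight direction
          n_collisions += 1
          if interaction() == "absorb":
              break
      print(n_collisions, depth)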

  9. Simulation of transport equations with Monte Carlo

    International Nuclear Information System (INIS)

    Matthes, W.

    1975-09-01

    The main purpose of the report is to explain the relation between the transport equation and the Monte Carlo game used for its solution. The introduction of artificial particles carrying a weight provides one with high flexibility in constructing many different games for the solution of the same equation. This flexibility opens a way to construct a Monte Carlo game for the solution of the adjoint transport equation. Emphasis is laid mostly on giving a clear understanding of what to do and not on the details of how to do a specific game

  10. Monte Carlo dose distributions for radiosurgery

    International Nuclear Information System (INIS)

    Perucha, M.; Leal, A.; Rincon, M.; Carrasco, E.

    2001-01-01

    The precision of radiosurgery treatment planning systems is limited by the approximations of their algorithms and by their dosimetric input data. This fact is especially important in small fields. However, the Monte Carlo method is an accurate alternative, as it considers every aspect of particle transport. In this work an acoustic neurinoma is studied by comparing the dose distributions of a planning system and of Monte Carlo. Relative shifts have been measured and, furthermore, dose-volume histograms have been calculated for the target and adjacent organs at risk. (orig.)

  11. Not Just Running: Coping with and Managing Everyday Life through Road-Running

    OpenAIRE

    Cook, Simon

    2014-01-01

    From the external form, running looks like running. Yet this alikeness masks a hugely divergent practice consisting of different movements, meanings and experiences. In this paper I wish to shed light upon some of these different ‘ways of running’ and in turn identify a range of the sometimes surprising, sometimes significant and sometimes banal benefits that road-running can gift its practitioners beyond simply exercise and physical fitness. Drawing on an innovative mapping and ethnographic ...

  12. Fast sequential Monte Carlo methods for counting and optimization

    CERN Document Server

    Rubinstein, Reuven Y; Vaisman, Radislav

    2013-01-01

    A comprehensive account of the theory and application of Monte Carlo methods Based on years of research in efficient Monte Carlo methods for estimation of rare-event probabilities, counting problems, and combinatorial optimization, Fast Sequential Monte Carlo Methods for Counting and Optimization is a complete illustration of fast sequential Monte Carlo techniques. The book provides an accessible overview of current work in the field of Monte Carlo methods, specifically sequential Monte Carlo techniques, for solving abstract counting and optimization problems. Written by authorities in the

  13. Application of Monte Carlo method in determination of secondary characteristic X radiation in XFA

    International Nuclear Information System (INIS)

    Roubicek, P.

    1982-01-01

    Secondary characteristic radiation is excited by primary radiation from the X-ray tube and by the secondary radiation of other elements, so that excitations of several orders result. The Monte Carlo method was used to consider all these possibilities, and the resulting flux of characteristic radiation was simulated for samples of silicate raw materials. A comparison of the results of these computations with experiments makes it possible to determine the effect of sample preparation on the characteristic radiation flux. (M.D.)

  14. Students' Gender Stereotypes about Running in Schools

    Science.gov (United States)

    Xiang, Ping; McBride, Ron E.; Lin, Shuqiong; Gao, Zan; Francis, Xueying

    2018-01-01

    Two hundred forty-six students (132 boys, 114 girls) were tracked from fifth to eighth grades, and changes in gender stereotypes about running as a male sport, running performance, interest in running, and intention for future running participation were assessed. Results revealed that neither sex held gender stereotypes about running as a male…

  15. ALICE HLT Run 2 performance overview.

    Science.gov (United States)

    Krzewicki, Mikolaj; Lindenstruth, Volker; ALICE Collaboration

    2017-10-01

    For the LHC Run 2 the ALICE HLT architecture was consolidated to comply with the upgraded ALICE detector readout technology. The software framework was optimized and extended to cope with the increased data load. Online calibration of the TPC using online tracking capabilities of the ALICE HLT was deployed. Offline calibration code was adapted to run both online and offline and the HLT framework was extended to support that. The performance of this schema is important for Run 3 related developments. An additional data transport approach was developed using the ZeroMQ library, forming at the same time a test bed for the new data flow model of the O2 system, where further development of this concept is ongoing. This messaging technology was used to implement the calibration feedback loop augmenting the existing, graph oriented HLT transport framework. Utilising the online reconstruction of many detectors, a new asynchronous monitoring scheme was developed to allow real-time monitoring of the physics performance of the ALICE detector, on top of the new messaging scheme for both internal and external communication. Spare computing resources comprising the production and development clusters are run as a tier-2 GRID site using an OpenStack-based setup. The development cluster is running continuously, the production cluster contributes resources opportunistically during periods of LHC inactivity.

  16. The Run-2 ATLAS Trigger System

    International Nuclear Information System (INIS)

    Martínez, A Ruiz

    2016-01-01

    The ATLAS trigger successfully collected collision data during the first run of the LHC between 2009-2013 at different centre-of-mass energies between 900 GeV and 8 TeV. The trigger system consists of a hardware Level-1 trigger and a software-based high level trigger (HLT) that reduce the event rate from the design bunch-crossing rate of 40 MHz to an average recording rate of a few hundred Hz. In Run-2, the LHC will operate at centre-of-mass energies of 13 and 14 TeV and higher luminosity, resulting in up to five times higher rates of processes of interest. A brief review will be given of the ATLAS trigger system upgrades that were implemented between Run-1 and Run-2, allowing the system to cope with the increased trigger rates while maintaining or even improving the efficiency to select physics processes of interest. This includes changes to the Level-1 calorimeter and muon trigger systems, the introduction of a new Level-1 topological trigger module and the merging of the previously two-level HLT system into a single event processing farm. A few examples will be shown, such as the impressive performance improvements in the HLT trigger algorithms used to identify leptons, hadrons and global event quantities like missing transverse energy. Finally, the status of the commissioning of the trigger system and its performance during the 2015 run will be presented. (paper)

  17. Exercise economy in skiing and running

    Directory of Open Access Journals (Sweden)

    Thomas Losnegard

    2014-01-01

    Full Text Available Substantial inter-individual variations in exercise economy exist even in highly trained endurance athletes. The variation is believed to be determined partly by intrinsic factors. Therefore, in the present study, we compared exercise economy in V2-skating, double poling and uphill running. Ten highly trained male cross-country skiers (23 ± 3 years, 180 ± 6 cm, 75 ± 8 kg, VO2peak running: 76.3 ± 5.6 mL·kg⁻¹·min⁻¹) participated in the study. Exercise economy and VO2peak during treadmill running, ski skating (V2 technique) and double poling were compared based on correlation analysis, with subsequent criteria for interpreting the magnitude of correlation (r). There was a very large correlation in exercise economy between V2-skating and double poling (r = 0.81) and a large correlation between V2-skating and running (r = 0.53) and between double poling and running (r = 0.58). There were trivial to moderate correlations between exercise economy and VO2peak (r = 0.00-0.23), cycle rate (r = 0.03-0.46), body mass (r = -0.09-0.46) and body height (r = 0.11-0.36). In conclusion, the inter-individual variation in exercise economy could only moderately be explained by differences in VO2peak, body mass and body height, and we therefore suggest that other intrinsic factors contribute to the variation in exercise economy between highly trained subjects.

  18. The CMS trigger in Run 2

    CERN Document Server

    Tosi, Mia

    2018-01-01

    During its second period of operation (Run 2), which started in 2015, the LHC will reach a peak instantaneous luminosity of approximately 2$\\times 10^{34}$~cm$^{-2}s^{-1}$ with an average pile-up of about 55, far larger than the design value. Under these conditions, the online event selection is a very challenging task. In CMS, it is realised by a two-level trigger system: the Level-1 (L1) Trigger, implemented in custom-designed electronics, and the High Level Trigger (HLT), a streamlined version of the offline reconstruction software running on a computer farm. In order to face this challenge, the L1 trigger has undergone a major upgrade compared to Run 1, whereby all electronic boards of the system have been replaced, allowing more sophisticated algorithms to be run online. Its last stage, the global trigger, is now able to perform complex selections and to compute high-level quantities, like invariant masses. Likewise, the algorithms that run in the HLT went through big improvements; in particular, new ap...

  19. Chaotic inflation with curvaton induced running

    DEFF Research Database (Denmark)

    Sloth, Martin Snoager

    2014-01-01

    While dust contamination now appears as a likely explanation of the apparent tension between the recent BICEP2 data and the Planck data, we will here explore the consequences of a large running in the spectral index, as suggested by the BICEP2 collaboration, as an alternative explanation of the apparent tension, one which would however be in conflict with the prediction of the simplest model of chaotic inflation. The large field chaotic model is sensitive to UV physics, and the nontrivial running of the spectral index suggested by the BICEP2 collaboration could therefore, if true, be telling us something about physics in the UV. We also consider the possibility that the running could be due to some other less UV sensitive degree of freedom. As an example, we ask if it is possible that the curvature perturbation spectrum has a contribution from a curvaton, which makes up for the large running in the spectrum. We find that this effect could mask...

  20. Habitual Minimalist Shod Running Biomechanics and the Acute Response to Running Barefoot.

    Science.gov (United States)

    Tam, Nicholas; Darragh, Ian A J; Divekar, Nikhil V; Lamberts, Robert P

    2017-09-01

    The aim of the study was to determine whether habitual minimalist shoe runners present with purported favorable running biomechanics that reduce running injury risk, such as a lower initial loading rate. Eighteen minimalist and 16 traditionally cushioned shod runners were assessed when running both in their preferred training shoe and barefoot. Ankle and knee joint kinetics and kinematics, initial rate of loading, and footstrike angle were measured. Sagittal ankle and knee joint stiffness were also calculated. Results of a two-factor ANOVA showed no group difference in initial rate of loading when participants were running either shod or barefoot; however, the initial loading rate increased for both groups when running barefoot (p=0.008). Differences in footstrike angle were observed between groups when running shod, but not when barefoot (minimalist: 8.71±8.99 vs. traditional: 17.32±11.48 degrees, p=0.002). Lower ankle joint stiffness was found in both groups when running barefoot (p=0.025). These findings illustrate that risk factors for injury potentially differ between the two groups. Shoe construction differences do change mechanical demands; however, once habituated to the demands of a given shoe condition, certain acute favorable or unfavorable responses may be moderated. The purported benefit of minimalist running shoes in mimicking habitual barefoot running is questioned, and the risk of injury may not be attenuated. © Georg Thieme Verlag KG Stuttgart · New York.

  1. Neural network-based run-to-run controller using exposure and resist thickness adjustment

    Science.gov (United States)

    Geary, Shane; Barry, Ronan

    2003-06-01

    This paper describes the development of a run-to-run control algorithm using a feedforward neural network trained with the backpropagation method. The algorithm is used to predict the critical dimension of the next lot using previous lot information. It is compared to a common prediction algorithm, the exponentially weighted moving average (EWMA), and is shown to give superior prediction performance in simulations. The manufacturing implementation of the final neural network showed significantly improved process capability compared to the case where no run-to-run control was utilised.
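
    For reference, the EWMA baseline that the neural controller is compared against can be sketched in a few lines; the smoothing weight and the critical-dimension history below are hypothetical.

      def ewma_predict(measurements, weight=0.3):
          # Each new lot measurement pulls the prediction toward itself
          # by the given weight; older history decays geometrically.
          prediction = measurements[0]
          for m in measurements[1:]:
              prediction = weight * m + (1.0 - weight) * prediction
          return prediction

      # Hypothetical critical-dimension history (nm); predict the next lot.
      cd_history = [180.2, 179.8, 180.5, 181.0, 180.7]
      print(ewma_predict(cd_history))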

  2. The running pattern and its importance in running long-distance gears

    Directory of Open Access Journals (Sweden)

    Jarosław Hoffman

    2017-07-01

    Full Text Available The running pattern is individual to each runner, regardless of distance. We can characterize it as the sum of the runner's data (age, height, training time, etc.) and the parameters of his run. Building a proper technique should focus first and foremost on movement coordination and the runner's strength. In training the correct running step we can use tools similar to those used when working on deep (proprioceptive) sensation. The aim of this paper was to define what we can call a running pattern, what its influence is in long-distance running, and the relationship between training technique and the running pattern. The importance of a running pattern in long-distance racing is immense: the more it is disturbed and departs from the norm, the greater the harm its repetition over a long run will cause to the body. Selecting training exercises that shape technique is therefore very important and affects the running pattern significantly.

  3. Transport of mass goods on the top run and bottom run of belt conveyors

    Energy Technology Data Exchange (ETDEWEB)

    Zimmermann, D

    1977-06-01

    For combined coal winning from the collieries 'General Blumenthal' and 'Ewald Fortsetzung' a large belt conveyor plant was taken into operation which is able to transport 1360 tons/h in the top run and 300 tons/h of dirt in the bottom run. The different types of coal are transported separately in intermittent operation with the aid of bunker systems connected to the front and rear of the belt conveyor. Persons can be transported in the top run as well as in the bottom run.

  4. Evolution of ATLAS conditions data and its management for LHC Run-2

    International Nuclear Information System (INIS)

    Böhler, Michael; Borodin, Mikhail; Formica, Andrea; Gallas, Elizabeth; Radescu, Voica

    2015-01-01

    The ATLAS detector at the LHC consists of several sub-detector systems. Both data taking and Monte Carlo (MC) simulation rely on an accurate description of the detector conditions from every subsystem, such as calibration constants, different scenarios of pile-up and noise conditions, size and position of the beam spot, etc. In order to guarantee database availability for critical online applications during data-taking, two database systems, one for online access and another for all other database access, have been implemented. The long shutdown period has provided the opportunity to review and improve the Run-1 system: revise workflows, include new and innovative monitoring and maintenance tools and implement a new database instance for Run-2 conditions data. The detector conditions are organized by tag identification strings and managed independently by the different sub-detector experts. The individual tags are then collected and associated into a global conditions tag, assuring synchronization of the various sub-detector improvements. Furthermore, a new concept was introduced to maintain conditions over all different data run periods in a single tag, by using Interval of Validity (IOV) dependent detector conditions for the MC database as well. This allows on-the-fly preservation of past conditions for data and MC and assures their sustainability with software evolution. This paper presents an overview of the commissioning of the new database instance, improved tools and workflows, and summarizes the actions taken during the Run-2 commissioning phase in the beginning of 2015. (paper)

  5. Specialized Monte Carlo codes versus general-purpose Monte Carlo codes

    International Nuclear Information System (INIS)

    Moskvin, Vadim; DesRosiers, Colleen; Papiez, Lech; Lu, Xiaoyi

    2002-01-01

    The possibilities of Monte Carlo modeling for dose calculations and treatment optimization are quite limited in radiation oncology applications. The main reason is that the Monte Carlo technique for dose calculations is time consuming, while treatment planning may require hundreds of possible cases of dose simulations to be evaluated for dose optimization. The second reason is that general-purpose codes widely used in practice require an experienced user to customize them for calculations. This paper discusses the concept of Monte Carlo code design that can avoid the main problems that are preventing widespread use of this simulation technique in medical physics. (authors)

  6. On the use of stochastic approximation Monte Carlo for Monte Carlo integration

    KAUST Repository

    Liang, Faming

    2009-03-01

    The stochastic approximation Monte Carlo (SAMC) algorithm has recently been proposed as a dynamic optimization algorithm in the literature. In this paper, we show in theory that the samples generated by SAMC can be used for Monte Carlo integration via a dynamically weighted estimator by calling some results from the literature of nonhomogeneous Markov chains. Our numerical results indicate that SAMC can yield significant savings over conventional Monte Carlo algorithms, such as the Metropolis-Hastings algorithm, for the problems for which the energy landscape is rugged. © 2008 Elsevier B.V. All rights reserved.
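
    As a point of reference, the conventional Metropolis-Hastings estimator mentioned above can be sketched as follows; the unnormalised double-well target, step size and sample count are invented for illustration.

      import math
      import random

      def log_density(x):
          # Unnormalised double-well density exp(-(x^2 - 1)^2), modes near +/-1.
          return -(x * x - 1.0) ** 2

      def metropolis(n_samples, step=1.0):
          x, samples = 0.0, []
          for _ in range(n_samples):
              proposal = x + random.gauss(0.0, step)
              diff = log_density(proposal) - log_density(x)
              if diff >= 0.0 or random.random() < math.exp(diff):
                  x = proposal
              samples.append(x)
          return samples

      samples = metropolis(200_000)
      print(sum(s * s for s in samples) / len(samples))  # MC estimate of E[x^2]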

  7. RefDB: The Reference Database for CMS Monte Carlo Production

    CERN Document Server

    Lefébure, V

    2003-01-01

    RefDB is the CMS Monte Carlo Reference Database. It is used for recording and managing all details of physics simulation, reconstruction and analysis requests, for coordinating task assignments to world-wide distributed Regional Centers, Grid-enabled or not, and trace their progress rate. RefDB is also the central database that the workflow-planner contacts in order to get task instructions. It is automatically and asynchronously updated with book-keeping run summaries. Finally it is the end-user interface to data catalogues.

  8. Testing Lorentz Invariance Emergence in the Ising Model using Monte Carlo simulations

    CERN Document Server

    Dias Astros, Maria Isabel

    2017-01-01

    In the context of Lorentz invariance as an emergent phenomenon at low energy scales, relevant to the study of quantum gravity, a system composed of two interacting 3D Ising models (one with an anisotropy in one direction) was proposed. Two Monte Carlo simulations were run: one for the 2D Ising model and one for the target model. In both cases the observables (energy, magnetization, heat capacity and magnetic susceptibility) were computed for different lattice sizes, and a Binder cumulant was introduced in order to estimate the critical temperature of the systems. Moreover, the correlation function was calculated for the 2D Ising model.
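
    A minimal sketch of the 2D Ising reference simulation, assuming a small lattice and the standard Metropolis update; lattice size, temperature and sweep count are arbitrary, and no equilibration period is discarded.

      import math
      import random

      def ising_metropolis(L=16, T=2.27, sweeps=2000):
          # Metropolis sweeps of a 2D Ising model with periodic boundaries;
          # returns per-sweep |magnetisation| for the Binder cumulant.
          spins = [[random.choice((-1, 1)) for _ in range(L)] for _ in range(L)]
          mags = []
          for _ in range(sweeps):
              for _ in range(L * L):
                  i, j = random.randrange(L), random.randrange(L)
                  nb = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
                        + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
                  dE = 2.0 * spins[i][j] * nb
                  if dE <= 0.0 or random.random() < math.exp(-dE / T):
                      spins[i][j] *= -1
              mags.append(abs(sum(sum(row) for row in spins)) / (L * L))
          return mags

      mags = ising_metropolis()
      m2 = sum(m ** 2 for m in mags) / len(mags)
      m4 = sum(m ** 4 for m in mags) / len(mags)
      print("Binder cumulant:", 1.0 - m4 / (3.0 * m2 * m2))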

  9. Cumulative percent energy deposition of photon beam incident on different targets, simulated by Monte Carlo

    International Nuclear Information System (INIS)

    Kandic, A.; Jevremovic, T.; Boreli, F.

    1989-01-01

    Monte Carlo simulation (without secondary radiation) of the standard photon interactions (Compton scattering, photoelectric absorption and pair production) for complex slab geometries is used in the numerical code ACCA. A typical ACCA run will yield: (a) the transmission of primary photon radiation differential in energy, (b) the spectrum of energy deposited in the target as a function of position and (c) the cumulative percent energy deposition as a function of position. The cumulative percent energy deposition of a monoenergetic photon beam incident on a simple and a complex tissue slab and on an Fe slab is presented in this paper. (author). 5 refs.; 2 figs

  10. Monte-Carlo Method Python Library for dose distribution Calculation in Brachytherapy

    Energy Technology Data Exchange (ETDEWEB)

    Randriantsizafy, R D; Ramanandraibe, M J [Madagascar Institut National des Sciences et Techniques Nucleaires, Antananarivo (Madagascar); Raboanary, R [Institut of astro and High-Energy Physics Madagascar, University of Antananarivo, Antananarivo (Madagascar)

    2007-07-01

    The Cs-137 brachytherapy treatment has been performed in Madagascar since 2005. The treatment-time calculation for a prescribed dose is made manually. A Monte-Carlo Method Python library written at the Madagascar INSTN is experimentally used to calculate the dose distribution on the tumour and around it. A first validation of the code was done by comparing the library curves with the Nucletron company curves. To reduce the duration of the calculation, a grid of PCs was set up with a listener patch run on each PC. The library will be used to model the dose distribution in the CT scan patient picture for individual and more accurate treatment-time calculation for a prescribed dose.

  11. Monte-Carlo Method Python Library for dose distribution Calculation in Brachytherapy

    International Nuclear Information System (INIS)

    Randriantsizafy, R.D.; Ramanandraibe, M.J.; Raboanary, R.

    2007-01-01

    The Cs-137 brachytherapy treatment has been performed in Madagascar since 2005. The treatment-time calculation for a prescribed dose is made manually. A Monte-Carlo Method Python library written at the Madagascar INSTN is experimentally used to calculate the dose distribution on the tumour and around it. A first validation of the code was done by comparing the library curves with the Nucletron company curves. To reduce the duration of the calculation, a grid of PCs was set up with a listener patch run on each PC. The library will be used to model the dose distribution in the CT scan patient picture for individual and more accurate treatment-time calculation for a prescribed dose.
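
    A crude first-collision toy model (not the INSTN library) conveys the idea: photons leave an isotropic point source in water, travel an exponential distance, and deposit their energy at the first interaction site. The attenuation coefficient below is an approximate value for 662 keV photons in water, and scatter transport is ignored.

      import math
      import random

      MU = 0.086  # approximate total attenuation coefficient of water at 662 keV (1/cm)

      def radial_dose_profile(n_photons=500_000, n_shells=20, r_max=10.0):
          tally = [0.0] * n_shells
          dr = r_max / n_shells
          for _ in range(n_photons):
              r = -math.log(1.0 - random.random()) / MU  # distance to first collision
              if r < r_max:
                  tally[int(r / dr)] += 1.0
          # Normalise by shell volume for a relative dose per unit volume.
          for k in range(n_shells):
              vol = 4.0 / 3.0 * math.pi * ((dr * (k + 1)) ** 3 - (dr * k) ** 3)
              tally[k] /= vol * n_photons
          return tally

      for k, d in enumerate(radial_dose_profile()):
          print(k, f"{d:.3e}")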

  12. Applying graphics processor units to Monte Carlo dose calculation in radiation therapy

    Directory of Open Access Journals (Sweden)

    Bakhtiari M

    2010-01-01

    Full Text Available We investigate the potential of using a graphics processor unit (GPU) for Monte-Carlo (MC)-based radiation dose calculations. The percent depth dose (PDD) of photons in a medium with known absorption and scattering coefficients is computed using an MC simulation running on both a standard CPU and a GPU. We demonstrate that the GPU's capability for massive parallel processing provides a significant acceleration of the MC calculation, and offers a significant advantage for distributed stochastic simulations on a single computer. Harnessing this potential of GPUs will help in the early adoption of MC for routine planning in a clinical environment.

  13. Optimal Spatial Subdivision method for improving geometry navigation performance in Monte Carlo particle transport simulation

    International Nuclear Information System (INIS)

    Chen, Zhenping; Song, Jing; Zheng, Huaqing; Wu, Bin; Hu, Liqin

    2015-01-01

    Highlights: • The subdivision combines the advantages of both uniform and non-uniform schemes. • The grid models were proved to be more efficient than traditional CSG models. • Monte Carlo simulation performance was enhanced by Optimal Spatial Subdivision. • Efficiency gains were obtained for realistic whole reactor core models. - Abstract: Geometry navigation is one of the key aspects dominating Monte Carlo particle transport simulation performance for large-scale whole-reactor models. In such cases, spatial subdivision is an easily established, high-potential method to improve run-time performance. In this study, a dedicated method, named Optimal Spatial Subdivision, is proposed for generating numerically optimal spatial grid models, which are demonstrated to be more efficient for geometry navigation than traditional Constructive Solid Geometry (CSG) models. The method uses a recursive subdivision algorithm to subdivide a CSG model into non-overlapping grids, which are labeled as totally or partially occupied, or not occupied at all, by CSG objects. The most important point is that, at each stage of subdivision, a quality factor based on a cost estimation function is derived to evaluate the qualities of the subdivision schemes. Only the scheme with the optimal quality factor is chosen as the final subdivision strategy for generating the grid model. Eventually, the model built with the optimal quality factor will be efficient for Monte Carlo particle transport simulation. The method has been implemented and integrated into the Super Monte Carlo program SuperMC developed by FDS Team. Testing cases were used to highlight the performance gains that could be achieved. Results showed that Monte Carlo simulation runtime could be reduced significantly when using the new method, even as cases reached whole reactor core model sizes

  14. Development of Monte Carlo-based pebble bed reactor fuel management code

    International Nuclear Information System (INIS)

    Setiadipura, Topan; Obara, Toru

    2014-01-01

    Highlights: • A new Monte Carlo-based fuel management code for OTTO cycle pebble bed reactors was developed. • The double heterogeneity was modeled using a statistical method in the MVP-BURN code. • The code can perform analysis of the equilibrium and non-equilibrium phases. • Code-to-code comparisons for the Once-Through-Then-Out case were investigated. • The ability of the code to accommodate the void cavity was confirmed. - Abstract: A fuel management code for pebble bed reactors (PBRs) based on the Monte Carlo method has been developed in this study. The code, named Monte Carlo burnup analysis code for PBR (MCPBR), enables a simulation of the Once-Through-Then-Out (OTTO) cycle of a PBR from the running-in phase to the equilibrium condition. In MCPBR, a burnup calculation based on a continuous-energy Monte Carlo code, MVP-BURN, is coupled with an additional utility code to be able to simulate the OTTO cycle of a PBR. MCPBR has several advantages in modeling PBRs, namely its Monte Carlo neutron transport modeling, its capability of explicitly modeling the double heterogeneity of the PBR core, and its ability to model different axial fuel speeds in the PBR core. Analysis at the equilibrium condition of a simplified PBR was used as the validation test of MCPBR. The calculation results of the code were compared with the results of diffusion-based PBR fuel management codes, namely the VSOP and PEBBED codes. Using the JENDL-4.0 nuclide library, MCPBR gave a 4.15% and 3.32% lower k_eff value compared to VSOP and PEBBED, respectively, while using JENDL-3.3, MCPBR gave a 2.22% and 3.11% higher k_eff value compared to VSOP and PEBBED, respectively. The ability of MCPBR to analyze neutron transport in the top void of the PBR core and its effects was also confirmed

  15. Is running associated with degenerative joint disease?

    International Nuclear Information System (INIS)

    Panush, R.S.; Schmidt, C.; Caldwell, J.R.

    1986-01-01

    Little information is available regarding the long-term effects, if any, of running on the musculoskeletal system. The authors compared the prevalence of degenerative joint disease among 17 male runners with that among 18 male nonrunners. Running subjects (53% marathoners) ran a mean of 44.8 km (28 miles)/wk for 12 years. Pain and swelling of the hips, knees, ankles and feet and other musculoskeletal complaints among runners were comparable with those among nonrunners. Radiologic examinations (for osteophytes, cartilage thickness, and grade of degeneration) were also without notable differences between groups. The authors did not find an increased prevalence of osteoarthritis among the runners. These observations suggest that long-duration, high-mileage running need not be associated with premature degenerative joint disease in the lower extremities

  16. Jefferson Lab Data Acquisition Run Control System

    International Nuclear Information System (INIS)

    Vardan Gyurjyan; Carl Timmer; David Abbott; William Heyes; Edward Jastrzembski; David Lawrence; Elliott Wolin

    2004-01-01

    A general overview of the Jefferson Lab data acquisition run control system is presented. This run control system is designed to operate the configuration, control, and monitoring of all Jefferson Lab experiments. It controls data-taking activities by coordinating the operation of DAQ sub-systems, online software components and third-party software such as external slow control systems. The main, unique feature which sets this system apart from conventional systems is its incorporation of intelligent agent concepts. Intelligent agents are autonomous programs which interact with each other through certain protocols on a peer-to-peer level. In this case, the protocols and standards used come from the domain-independent Foundation for Intelligent Physical Agents (FIPA), and the implementation used is the Java Agent Development Framework (JADE). A lightweight, XML/RDF-based language was developed to standardize the description of the run control system for configuration purposes

  17. Instrumental Variables in the Long Run

    DEFF Research Database (Denmark)

    Casey, Gregory; Klemp, Marc Patrick Brag

    2017-01-01

    In the study of long-run economic growth, it is common to use historical or geographical variables as instruments for contemporary endogenous regressors. We study the interpretation of these conventional instrumental variable (IV) regressions in a general, yet simple, framework. Our aim is to estimate the long-run causal effect of changes in the endogenous explanatory variable. We find that conventional IV regressions generally cannot recover this parameter of interest. To estimate this parameter, therefore, we develop an augmented IV estimator that combines the conventional regression... This has quantitative implications for the field of long-run economic growth. We also use our framework to examine related empirical techniques. We find that two prominent regression methodologies - using gravity-based instruments for trade and including ancestry-adjusted variables in linear regression models - have...

  18. Non-analogue Monte Carlo method, application to neutron simulation; Methode de Monte Carlo non analogue, application a la simulation des neutrons

    Energy Technology Data Exchange (ETDEWEB)

    Morillon, B.

    1996-12-31

    With most of the traditional and contemporary techniques, it is still impossible to solve the transport equation if one takes into account a fully detailed geometry and studies precisely the interactions between particles and matter. Only the Monte Carlo method offers such a possibility. However, with significant attenuation, the natural (analogue) simulation remains inefficient: it becomes necessary to use biasing techniques, for which the solution of the adjoint transport equation is essential. The Monte Carlo code Tripoli has been using such techniques successfully for a long time with different approximate adjoint solutions: these methods require the user to determine certain parameters. If these parameters are not optimal or nearly optimal, the biased simulations may yield small figures of merit. This paper presents a description of the most important biasing techniques of the Monte Carlo code Tripoli; we then show how to calculate the importance function for general geometry in multigroup cases. We present a completely automatic biasing technique where the parameters of the biased simulation are deduced from the solution of the adjoint transport equation calculated by collision probabilities. In this study we estimate the importance function through the collision probabilities method and evaluate its possibilities by means of a Monte Carlo calculation. We compare different biased simulations with the importance function calculated by collision probabilities for one-group and multigroup problems. We have run simulations with the new biasing method for one-group transport problems with isotropic shocks and for multigroup problems with anisotropic shocks. The results show that for one-group, homogeneous-geometry transport problems the method is nearly optimal without splitting and Russian roulette, but for multigroup, heterogeneous X-Y geometry problems the figures of merit are higher if splitting and Russian roulette are added.
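
    The splitting and Russian roulette games mentioned at the end can be expressed as a single weight-window rule; the thresholds below are arbitrary, and the expected total weight is preserved, which is what keeps the biased game unbiased.

      import random

      def apply_weight_window(weight, w_low=0.25, w_high=4.0, survival=1.0):
          # Returns the list of particle weights to continue tracking.
          if weight > w_high:
              n = int(weight / w_high) + 1  # split into n copies
              return [weight / n] * n       # total weight preserved exactly
          if weight < w_low:
              # Russian roulette: survive with probability weight/survival,
              # carrying weight 'survival', so the expected weight is preserved.
              if random.random() < weight / survival:
                  return [survival]
              return []  # killed
          return [weight]

      print(apply_weight_window(10.0))  # splitting
      print(apply_weight_window(0.05))  # usually [], occasionally [1.0]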

  19. The NLstart2run study: running-related injuries in novice runners

    NARCIS (Netherlands)

    Kluitenberg, Bas

    2015-01-01

    Running is a popular sport worldwide, often practised for its positive health effects. There is, however, a downside: runners are frequently plagued by injuries, a problem that novice runners in particular run into. This thesis describes the NLstart2run study, a study...

  20. Parallel processing Monte Carlo radiation transport codes

    International Nuclear Information System (INIS)

    McKinney, G.W.

    1994-01-01

    Issues related to distributed-memory multiprocessing as applied to Monte Carlo radiation transport are discussed. Measurements of communication overhead are presented for the radiation transport code MCNP, which employs the communication software package PVM, and average efficiency curves are provided for a homogeneous virtual machine.
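
    The coarse-grained master/worker decomposition measured in this work predates today's tooling; purely as a hypothetical illustration of the idea (Python with multiprocessing, not the MCNP/PVM implementation), each worker runs independent histories and returns a single partial tally, so communication overhead is one message per worker.

      import multiprocessing as mp
      import random

      def worker(n_histories, seed):
          """Simulate n_histories independent histories; return a partial tally."""
          rng = random.Random(seed)
          tally = 0.0
          for _ in range(n_histories):
              tally += rng.expovariate(1.0)   # stand-in for one particle history
          return tally

      if __name__ == "__main__":
          n_proc, n_total = 4, 1_000_000
          jobs = [(n_total // n_proc, seed) for seed in range(n_proc)]
          with mp.Pool(n_proc) as pool:
              parts = pool.starmap(worker, jobs)
          print(sum(parts) / n_total)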

  1. Monte Carlo determination of heteroepitaxial misfit structures

    DEFF Research Database (Denmark)

    Baker, J.; Lindgård, Per-Anker

    1996-01-01

    We use Monte Carlo simulations to determine the structure of KBr overlayers on a NaCl(001) substrate, a system with large (17%) heteroepitaxial misfit. The equilibrium relaxation structure is determined for films of 2-6 ML, for which extensive helium-atom scattering data exist for comparison...

  2. The Monte Carlo method applied to dose calculation

    International Nuclear Information System (INIS)

    Peixoto, J.E.

    1988-01-01

    The Monte Carlo method is presented for the calculation of absorbed dose. The trajectory of each photon is traced by simulating successive interactions between the photon and the material that constitutes the human-body phantom. The energy deposited per photon in each interaction with a phantom organ or tissue is also calculated. (C.G.C.) [pt
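
    A minimal, hypothetical Python sketch of the photon-history idea: a single homogeneous medium with made-up attenuation data. A production dose code additionally tracks geometry, energy-dependent cross-sections and secondary particles.

      import math, random

      MU = 0.2          # total attenuation coefficient (1/cm), illustrative
      P_ABSORB = 0.3    # assumed probability an interaction is absorption

      def photon_history(energy=0.5, rng=random.Random()):
          """Follow one photon; return the energy (MeV) it deposits before
          being absorbed or degraded below a cut-off."""
          deposited = 0.0
          while energy > 0.01:
              # Free path sampled from the exponential attenuation law
              # (position tracking is omitted in this sketch).
              path = -math.log(rng.random()) / MU
              if rng.random() < P_ABSORB:
                  deposited += energy      # photoelectric: all energy locally
                  break
              loss = energy * rng.uniform(0.1, 0.5)   # crude scattering loss
              deposited += loss
              energy -= loss
          return deposited

      mean_dose = sum(photon_history() for _ in range(10000)) / 10000
      print(mean_dose)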

  3. Monte Carlo code for neutron radiography

    International Nuclear Information System (INIS)

    Milczarek, Jacek J.; Trzcinski, Andrzej; El-Ghany El Abd, Abd; Czachor, Andrzej

    2005-01-01

    The concise Monte Carlo code, MSX, for simulation of neutron radiography images of non-uniform objects is presented. The possibility of modeling the images of objects with continuous spatial distribution of specific isotopes is included. The code can be used for assessment of the scattered neutron component in neutron radiograms

  4. Monte Carlo code for neutron radiography

    Energy Technology Data Exchange (ETDEWEB)

    Milczarek, Jacek J. [Institute of Atomic Energy, Swierk, 05-400 Otwock (Poland)]. E-mail: jjmilcz@cyf.gov.pl; Trzcinski, Andrzej [Institute for Nuclear Studies, Swierk, 05-400 Otwock (Poland); El-Ghany El Abd, Abd [Institute of Atomic Energy, Swierk, 05-400 Otwock (Poland); Nuclear Research Center, PC 13759, Cairo (Egypt); Czachor, Andrzej [Institute of Atomic Energy, Swierk, 05-400 Otwock (Poland)

    2005-04-21

    The concise Monte Carlo code, MSX, for simulation of neutron radiography images of non-uniform objects is presented. The possibility of modeling the images of objects with continuous spatial distribution of specific isotopes is included. The code can be used for assessment of the scattered neutron component in neutron radiograms.

  5. Monte Carlo method in neutron activation analysis

    International Nuclear Information System (INIS)

    Majerle, M.; Krasa, A.; Svoboda, O.; Wagner, V.; Adam, J.; Peetermans, S.; Slama, O.; Stegajlov, V.I.; Tsupko-Sitnikov, V.M.

    2009-01-01

    Neutron activation detectors are a useful tool for neutron flux measurements in spallation experiments. The study of the usefulness and accuracy of this method in similar experiments was performed with the help of the Monte Carlo codes MCNPX and FLUKA.

  6. Atomistic Monte Carlo simulation of lipid membranes

    DEFF Research Database (Denmark)

    Wüstner, Daniel; Sklenar, Heinz

    2014-01-01

    Biological membranes are complex assemblies of many different molecules whose analysis demands a variety of experimental and computational approaches. In this article, we explain challenges and advantages of atomistic Monte Carlo (MC) simulation of lipid membranes. We provide an introduction... of local-move MC methods in combination with molecular dynamics simulations, for example, for studying multi-component lipid membranes containing cholesterol...

  7. Monte Carlo methods beyond detailed balance

    NARCIS (Netherlands)

    Schram, Raoul D.; Barkema, Gerard T.|info:eu-repo/dai/nl/101275080

    2015-01-01

    Monte Carlo algorithms are nearly always based on the concept of detailed balance and ergodicity. In this paper we focus on algorithms that do not satisfy detailed balance. We introduce a general method for designing non-detailed balance algorithms, starting from a conventional algorithm satisfying

  8. Monte Carlo studies of ZEPLIN III

    CERN Document Server

    Dawson, J; Davidge, D C R; Gillespie, J R; Howard, A S; Jones, W G; Joshi, M; Lebedenko, V N; Sumner, T J; Quenby, J J

    2002-01-01

    A Monte Carlo simulation of a two-phase xenon dark matter detector, ZEPLIN III, has been achieved. Results from the analysis of a simulated data set are presented, showing primary and secondary signal distributions from low energy gamma ray events.

  9. Biases in Monte Carlo eigenvalue calculations

    Energy Technology Data Exchange (ETDEWEB)

    Gelbard, E.M.

    1992-12-01

    The Monte Carlo method has been used for many years to analyze the neutronics of nuclear reactors. In fact, as the power of computers has increased the importance of Monte Carlo in neutronics has also increased, until today this method plays a central role in reactor analysis and design. Monte Carlo is used in neutronics for two somewhat different purposes, i.e., (a) to compute the distribution of neutrons in a given medium when the neutron source-density is specified, and (b) to compute the neutron distribution in a self-sustaining chain reaction, in which case the source is determined as the eigenvector of a certain linear operator. In (b), then, the source is not given, but must be computed. In the first case (the "fixed-source" case) the Monte Carlo calculation is unbiased. That is to say that, if the calculation is repeated ("replicated") over and over, with independent random number sequences for each replica, then averages over all replicas will approach the correct neutron distribution as the number of replicas goes to infinity. Unfortunately, the computation is not unbiased in the second case, which we discuss here.
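
    To make the eigenvalue case concrete, here is a toy Python sketch of Monte Carlo source (power) iteration with made-up one-dimensional physics; it is not a reactor model, but it shows how each generation's source is built from the previous one, so successive generations are correlated - the origin of the bias discussed above.

      import random

      rng = random.Random(1)

      def next_generation(sites, k):
          """Transport one fission generation (toy physics): each source
          site produces fission neutrons with expected number nu(x) / k."""
          new = []
          for x in sites:
              nu = 2.5 if x < 0.5 else 2.0   # made-up position-dependent yield
              mean = nu / k
              n = int(mean) + (rng.random() < mean - int(mean))
              new.extend(min(max(x + rng.gauss(0.0, 0.1), 0.0), 1.0)
                         for _ in range(n))
          return new

      sites, k = [rng.random() for _ in range(1000)], 1.0
      for _ in range(50):
          new = next_generation(sites, k)
          k *= len(new) / len(sites)   # generation estimate of k_eff
          sites = new                  # new source depends on the old one
      print(round(k, 3))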

  10. Biases in Monte Carlo eigenvalue calculations

    Energy Technology Data Exchange (ETDEWEB)

    Gelbard, E.M.

    1992-01-01

    The Monte Carlo method has been used for many years to analyze the neutronics of nuclear reactors. In fact, as the power of computers has increased the importance of Monte Carlo in neutronics has also increased, until today this method plays a central role in reactor analysis and design. Monte Carlo is used in neutronics for two somewhat different purposes, i.e., (a) to compute the distribution of neutrons in a given medium when the neutron source-density is specified, and (b) to compute the neutron distribution in a self-sustaining chain reaction, in which case the source is determined as the eigenvector of a certain linear operator. In (b), then, the source is not given, but must be computed. In the first case (the "fixed-source" case) the Monte Carlo calculation is unbiased. That is to say that, if the calculation is repeated ("replicated") over and over, with independent random number sequences for each replica, then averages over all replicas will approach the correct neutron distribution as the number of replicas goes to infinity. Unfortunately, the computation is not unbiased in the second case, which we discuss here.

  11. Dynamic bounds coupled with Monte Carlo simulations

    Energy Technology Data Exchange (ETDEWEB)

    Rajabalinejad, M., E-mail: M.Rajabalinejad@tudelft.n [Faculty of Civil Engineering, Delft University of Technology, Delft (Netherlands); Meester, L.E. [Delft Institute of Applied Mathematics, Delft University of Technology, Delft (Netherlands); Gelder, P.H.A.J.M. van; Vrijling, J.K. [Faculty of Civil Engineering, Delft University of Technology, Delft (Netherlands)

    2011-02-15

    For the reliability analysis of engineering structures a variety of methods is known, of which Monte Carlo (MC) simulation is widely considered to be among the most robust and most generally applicable. To reduce the simulation cost of the MC method, variance reduction methods are applied. This paper describes a method to reduce the simulation cost even further, while retaining the accuracy of Monte Carlo, by taking into account widely present monotonicity. For models exhibiting monotonic (decreasing or increasing) behavior, dynamic bounds (DB) are defined, which in a coupled Monte Carlo simulation are updated dynamically, resulting in a failure probability estimate as well as strict (non-probabilistic) upper and lower bounds. Accurate results are obtained at a much lower cost than an equivalent ordinary Monte Carlo simulation. In a two-dimensional and a four-dimensional numerical example, the cost reduction factors are 130 and 9, respectively, where the relative error is smaller than 5%. At higher accuracy levels, this factor increases, though this effect is expected to be smaller with increasing dimension. To show the application of the DB method to real-world problems, it is applied to a complex finite element model of a flood wall in New Orleans.
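
    A rough Python sketch of the dynamic-bounds idea under the stated monotonicity assumption (with a toy limit-state function of our own choosing, not the paper's model): once one sample is known to fail, every component-wise "worse" sample can be classified without another call to the expensive model, and dually for safe samples.

      import random

      def limit_state(u):
          # Expensive model stub; monotonically decreasing in every input,
          # with failure defined as limit_state(u) <= 0.
          return 8.0 - sum(u)

      def classify(u, fails, safes):
          """Use monotonicity before calling the model."""
          if any(all(f[i] <= u[i] for i in range(len(u))) for f in fails):
              return True            # dominated by a known failure point
          if any(all(u[i] <= s[i] for i in range(len(u))) for s in safes):
              return False           # dominated by a known safe point
          failed = limit_state(u) <= 0
          (fails if failed else safes).append(u)
          return failed

      rng = random.Random(0)
      fails, safes, n_fail, N = [], [], 0, 5000
      for _ in range(N):
          u = [rng.uniform(0.0, 3.0) for _ in range(4)]
          n_fail += classify(u, fails, safes)
      print(n_fail / N)              # failure probability estimate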

  12. Dynamic bounds coupled with Monte Carlo simulations

    NARCIS (Netherlands)

    Rajabali Nejad, Mohammadreza; Meester, L.E.; van Gelder, P.H.A.J.M.; Vrijling, J.K.

    2011-01-01

    For the reliability analysis of engineering structures a variety of methods is known, of which Monte Carlo (MC) simulation is widely considered to be among the most robust and most generally applicable. To reduce simulation cost of the MC method, variance reduction methods are applied. This paper

  13. Design and analysis of Monte Carlo experiments

    NARCIS (Netherlands)

    Kleijnen, Jack P.C.; Gentle, J.E.; Haerdle, W.; Mori, Y.

    2012-01-01

    By definition, computer simulation or Monte Carlo models are not solved by mathematical analysis (such as differential calculus), but are used for numerical experimentation. The goal of these experiments is to answer questions about the real world; i.e., the experimenters may use their models to

  14. Some problems on Monte Carlo method development

    International Nuclear Information System (INIS)

    Pei Lucheng

    1992-01-01

    This is a short paper on some problems of Monte Carlo method development. The content consists of deep-penetration problems, unbounded estimate problems, limitations of Metropolis' method, the dependency problem in Metropolis' method, random error interference problems and random equations, and intellectualisation and vectorization problems of general software

  15. Monte Carlo simulations in theoretical physics

    International Nuclear Information System (INIS)

    Billoire, A.

    1991-01-01

    After a presentation of the principle of the Monte Carlo method, the method is applied first to the calculation of critical exponents in the three-dimensional Ising model, and secondly to discrete quantum chromodynamics, with computation times given as a function of computer power. 28 refs., 4 tabs

  16. Monte Carlo method for random surfaces

    International Nuclear Information System (INIS)

    Berg, B.

    1985-01-01

    Previously two of the authors proposed a Monte Carlo method for sampling statistical ensembles of random walks and surfaces with a Boltzmann probabilistic weight. In the present paper we work out the details for several models of random surfaces, defined on d-dimensional hypercubic lattices. (orig.)

  17. Variance Reduction Techniques in Monte Carlo Methods

    NARCIS (Netherlands)

    Kleijnen, Jack P.C.; Ridder, A.A.N.; Rubinstein, R.Y.

    2010-01-01

    Monte Carlo methods are simulation algorithms to estimate a numerical quantity in a statistical model of a real system. These algorithms are executed by computer programs. Variance reduction techniques (VRT) are needed, even though computer speed has been increasing dramatically, ever since the

  18. Coded aperture optimization using Monte Carlo simulations

    International Nuclear Information System (INIS)

    Martineau, A.; Rocchisani, J.M.; Moretti, J.L.

    2010-01-01

    Coded apertures using Uniformly Redundant Arrays (URA) have been unsuccessfully evaluated for two-dimensional and three-dimensional imaging in Nuclear Medicine. The images reconstructed from coded projections contain artifacts and suffer from poor spatial resolution in the longitudinal direction. We introduce a Maximum-Likelihood Expectation-Maximization (MLEM) algorithm for three-dimensional coded aperture imaging which uses a projection matrix calculated by Monte Carlo simulations. The aim of the algorithm is to reduce artifacts and improve the three-dimensional spatial resolution in the reconstructed images. Firstly, we present the validation of GATE (Geant4 Application for Emission Tomography) for Monte Carlo simulations of a coded mask installed on a clinical gamma camera. The coded mask modelling was validated by comparison between experimental and simulated data in terms of energy spectra, sensitivity and spatial resolution. In the second part of the study, we use the validated model to calculate the projection matrix with Monte Carlo simulations. A three-dimensional thyroid phantom study was performed to compare the performance of the three-dimensional MLEM reconstruction with conventional correlation method. The results indicate that the artifacts are reduced and three-dimensional spatial resolution is improved with the Monte Carlo-based MLEM reconstruction.
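
    The MLEM update at the heart of this kind of reconstruction has a compact multiplicative form; below is a small numpy sketch with a random matrix standing in for the Monte Carlo-computed projection matrix (illustrative data, not the GATE-validated model).

      import numpy as np

      rng = np.random.default_rng(0)
      n_det, n_pix = 256, 64
      A = rng.random((n_det, n_pix))       # stand-in for the MC projection matrix
      x_true = rng.random(n_pix)
      y = rng.poisson(A @ x_true)          # noisy coded-aperture projections

      x = np.ones(n_pix)                   # flat initial estimate
      sens = A.sum(axis=0)                 # sensitivity: sum of A over detectors
      for _ in range(100):
          # Classic MLEM multiplicative update: x <- x * A^T(y / Ax) / sens
          x *= (A.T @ (y / (A @ x))) / sens
      print(x[:5])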

  19. Biases in Monte Carlo eigenvalue calculations

    International Nuclear Information System (INIS)

    Gelbard, E.M.

    1992-01-01

    The Monte Carlo method has been used for many years to analyze the neutronics of nuclear reactors. In fact, as the power of computers has increased the importance of Monte Carlo in neutronics has also increased, until today this method plays a central role in reactor analysis and design. Monte Carlo is used in neutronics for two somewhat different purposes, i.e., (a) to compute the distribution of neutrons in a given medium when the neutron source-density is specified, and (b) to compute the neutron distribution in a self-sustaining chain reaction, in which case the source is determined as the eigenvector of a certain linear operator. In (b), then, the source is not given, but must be computed. In the first case (the "fixed-source" case) the Monte Carlo calculation is unbiased. That is to say that, if the calculation is repeated ("replicated") over and over, with independent random number sequences for each replica, then averages over all replicas will approach the correct neutron distribution as the number of replicas goes to infinity. Unfortunately, the computation is not unbiased in the second case, which we discuss here.

  20. Monte Carlo studies of uranium calorimetry

    International Nuclear Information System (INIS)

    Brau, J.; Hargis, H.J.; Gabriel, T.A.; Bishop, B.L.

    1985-01-01

    Detailed Monte Carlo calculations of uranium calorimetry are presented which reveal a significant difference in the responses of liquid argon and plastic scintillator in uranium calorimeters. Due to saturation effects, neutrons from the uranium are found to contribute only weakly to the liquid argon signal. Electromagnetic sampling inefficiencies are significant and contribute substantially to compensation in both systems. 17 references

  1. Sample preparation

    International Nuclear Information System (INIS)

    Anon.

    1992-01-01

    Sample preparation prior to HPLC analysis is certainly one of the most important steps to consider in trace or ultratrace analysis. For many years scientists have tried to simplify the sample preparation process. It is rarely possible to inject a neat liquid sample, or a sample whose preparation is no more complex than dissolution in a given solvent. The latter process alone can remove insoluble materials, which is especially helpful with samples in complex matrices if other interactions do not affect extraction. Here, it is very likely that a large number of components will not dissolve and are therefore eliminated by a simple filtration process. In most cases, the process of sample preparation is not as simple as dissolution of the component of interest. At times, enrichment is necessary; that is, the component of interest is present in a very large volume or mass of material and needs to be concentrated in some manner so that a small volume of the concentrated or enriched sample can be injected into the HPLC. 88 refs

  2. Abort Gap Cleaning for LHC Run 2

    Energy Technology Data Exchange (ETDEWEB)

    Uythoven, Jan [CERN; Boccardi, Andrea [CERN; Bravin, Enrico [CERN; Goddard, Brennan [CERN; Hemelsoet, Georges-Henry [CERN; Höfle, Wolfgang [CERN; Jacquet, Delphine [CERN; Kain, Verena [CERN; Mazzoni, Stefano [CERN; Meddahi, Malika [CERN; Valuch, Daniel [CERN; Gianfelice-Wendt, Eliana [Fermilab

    2014-07-01

    To minimize the beam losses at the moment of an LHC beam dump, the 3 μs long abort gap should contain as few particles as possible. Its population can be minimised by abort gap cleaning using the LHC transverse damper system. The LHC Run 1 experience is briefly recalled; changes foreseen for the LHC Run 2 are presented. They include improvements in the observation of the abort gap population and the mechanism to decide if cleaning is required, changes to the hardware of the transverse dampers to reduce the detrimental effect on the luminosity lifetime and proposed changes to the applied cleaning algorithms.

  3. Luminosity Measurements at LHCb for Run II

    CERN Multimedia

    Coombs, George

    2018-01-01

    A precise measurement of the luminosity is a necessary component of many physics analyses, especially cross-section measurements. At LHCb two different direct measurement methods are used to determine the luminosity: the “van der Meer scan” (VDM) and the “Beam Gas Imaging” (BGI) methods. A combined result from these two methods gave a precision of less than 2% for Run I and efforts are ongoing to provide a similar result for Run II. Fixed target luminosity is determined with an indirect method based on the single electron scattering cross-section.

  4. Abort Gap Cleaning for LHC Run 2

    CERN Document Server

    Uythoven, J; Bravin, E; Goddard, B; Hemelsoet, GH; Höfle, W; Jacquet, D; Kain, V; Mazzoni, S; Meddahi, M; Valuch, D

    2015-01-01

    To minimise the beam losses at the moment of an LHC beam dump, the 3 μs long abort gap should contain as few particles as possible. Its population can be minimised by abort gap cleaning using the LHC transverse damper system. The LHC Run 1 experience is briefly recalled; changes foreseen for the LHC Run 2 are presented. They include improvements in the observation of the abort gap population and the mechanism to decide if cleaning is required, changes to the hardware of the transverse dampers to reduce the detrimental effect on the luminosity lifetime and proposed changes to the applied cleaning algorithms.

  5. Running-mass inflation model and WMAP

    International Nuclear Information System (INIS)

    Covi, Laura; Lyth, David H.; Melchiorri, Alessandro; Odman, Carolina J.

    2004-01-01

    We consider the observational constraints on the running-mass inflationary model, and, in particular, on the scale dependence of the spectral index, from the new cosmic microwave background (CMB) anisotropy measurements performed by WMAP and from new clustering data from the SLOAN survey. We find that the data strongly constrain a significant positive scale dependence of n, and we translate the analysis into bounds on the physical parameters of the inflaton potential. Looking deeper into specific types of interaction (gauge and Yukawa) we find that the parameter space is significantly constrained by the new data, but that the running-mass model remains viable.

  6. Causal Analysis of Railway Running Delays

    DEFF Research Database (Denmark)

    Cerreto, Fabrizio; Nielsen, Otto Anker; Harrod, Steven

    Operating delays and network propagation are inherent characteristics of railway operations. These are traditionally reduced by provision of time supplements or “slack” in railway timetables and operating plans. Supplement allocation policies must trade off reliability in the service commitments...... Denmark (the Danish infrastructure manager). The statistical analysis of the data identifies the minimum running times and the scheduled running time supplements and investigates the evolution of train delays along given train paths. An improved allocation of time supplements would result in smaller...

  7. Application of Monte-Carlo method in definition of key categories of most radioactive polluted soil

    International Nuclear Information System (INIS)

    Mahmudov, H.M.; Valibeyova, G.; Jafarov, Y.D.; Musaeva, Sh.Z.

    2006-01-01

    Full text: The principle of analysis by the Monte Carlo method consists in choosing random values of the exposure dose-rate coefficients of radiation and of the activity data within the boundaries of their individual frequency-distribution densities over the corresponding exposure dose rates. This procedure is repeated many times on a computer, and the results of each round of calculations build up the overall frequency-distribution density of exposure dose rates. The analysis using the Monte Carlo method can be carried out at the level of categories of radiation-polluted soil. The Monte Carlo analysis is useful for performing a sensitivity analysis of the measured exposure dose rate in order to identify the major factors causing uncertainty in the reports. Such concepts can be valuable for defining key categories of radiation-polluted soil and for establishing priorities in the use of resources to improve the reporting. The relative uncertainty of radiation-polluted soil categories determined with the help of the Monte Carlo analysis can, where available, be applied using the larger divergence between the average value and a confidence limit in cases where the borders of the confidence interval are asymmetric. It is important to determine key categories of radiation-polluted soil in order to establish priorities in the use of the resources available for reporting and to prepare estimates for the most significant categories of sources. The notion of "uncertainty" in reports also allows a threshold value to be set for a key category of sources, if necessary, for exact reflection of 90 percent uncertainty in the reports. According to radiation safety norms, a radiation background level exceeding 33 μR/hour is considered dangerous. Using the Monte Carlo method, the most dangerous sites and the sites most frequently subjected to disposal and utilization were chosen from the analyzed samples of

  8. Application of Monte-Carlo method in definition of key categories of most radioactive polluted soil

    Energy Technology Data Exchange (ETDEWEB)

    Mahmudov, H M; Valibeyova, G; Jafarov, Y D; Musaeva, Sh Z [Institute of Radiation Problems, Azerbaijan National Academy of Sciences, Baku (Azerbaijan)

    2006-11-15

    Full text: The principle of analysis by the Monte Carlo method consists in choosing random values of the exposure dose-rate coefficients of radiation and of the activity data within the boundaries of their individual frequency-distribution densities over the corresponding exposure dose rates. This procedure is repeated many times on a computer, and the results of each round of calculations build up the overall frequency-distribution density of exposure dose rates. The analysis using the Monte Carlo method can be carried out at the level of categories of radiation-polluted soil. The Monte Carlo analysis is useful for performing a sensitivity analysis of the measured exposure dose rate in order to identify the major factors causing uncertainty in the reports. Such concepts can be valuable for defining key categories of radiation-polluted soil and for establishing priorities in the use of resources to improve the reporting. The relative uncertainty of radiation-polluted soil categories determined with the help of the Monte Carlo analysis can, where available, be applied using the larger divergence between the average value and a confidence limit in cases where the borders of the confidence interval are asymmetric. It is important to determine key categories of radiation-polluted soil in order to establish priorities in the use of the resources available for reporting and to prepare estimates for the most significant categories of sources. The notion of "uncertainty" in reports also allows a threshold value to be set for a key category of sources, if necessary, for exact reflection of 90 percent uncertainty in the reports. According to radiation safety norms, a radiation background level exceeding 33 μR/hour is considered dangerous. Using the Monte Carlo method, the most dangerous sites and the sites most frequently subjected to disposal and utilization were chosen from the analyzed samples of

  9. The design of the run Clever randomized trial

    DEFF Research Database (Denmark)

    Ramskov, Daniel; Nielsen, Rasmus Oestergaard; Sørensen, Henrik

    2016-01-01

    BACKGROUND: Injury incidence and prevalence in running populations have been investigated and documented in several studies. However, knowledge about injury etiology and prevention is needed. Training errors in running are modifiable risk factors and people engaged in recreational running need... evidence-based running schedules to minimize the risk of injury. The existing literature on running volume and running intensity and the development of injuries shows conflicting results. This may be related to previously applied study designs, the methods used to quantify the performed running... and the statistical analysis of the collected data. The aim of the Run Clever trial is to investigate if a focus on running intensity compared with a focus on running volume in a running schedule influences the overall injury risk differently. METHODS/DESIGN: The Run Clever trial is a randomized trial with a 24-week...

  10. Uncertainty analysis in Monte Carlo criticality computations

    International Nuclear Information System (INIS)

    Qi Ao

    2011-01-01

    Highlights: ► Two types of uncertainty methods for k_eff Monte Carlo computations are examined. ► The sampling method places the fewest restrictions on the perturbation but demands the most computing resources. ► The analytical method is limited to small perturbations of material properties. ► Practicality relies on efficiency, multiparameter applicability and data availability. - Abstract: Uncertainty analysis is imperative for nuclear criticality risk assessments when using Monte Carlo neutron transport methods to predict the effective neutron multiplication factor (k_eff) for fissionable material systems. For the validation of Monte Carlo codes for criticality computations against benchmark experiments, code accuracy and precision are measured by both the computational bias and the uncertainty in the bias. The uncertainty in the bias accounts for known or quantified experimental, computational and model uncertainties. For the application of Monte Carlo codes to criticality analysis of fissionable material systems, an administrative margin of subcriticality must be imposed to provide additional assurance of subcriticality for any unknown or unquantified uncertainties. Because of the substantial impact of the administrative margin of subcriticality on the economics and safety of nuclear fuel cycle operations, recently increasing interest in reducing this margin makes uncertainty analysis in criticality safety computations more risk-significant. This paper provides an overview of the two most popular k_eff uncertainty analysis methods for Monte Carlo criticality computations: (1) sampling-based methods, and (2) analytical methods. Examples are given to demonstrate their usage in k_eff uncertainty analysis due to uncertainties in both neutronic and non-neutronic parameters of fissionable material systems.
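
    A toy numpy sketch of the sampling-based approach: parameters of an assumed (purely illustrative) data model are drawn from their distributions, a stand-in k_eff model is evaluated for each sample, and the spread of the results estimates the k_eff uncertainty.

      import numpy as np

      rng = np.random.default_rng(42)

      def keff_model(nu_sigma_f, sigma_a):
          # Stand-in for a full Monte Carlo criticality run: infinite-medium
          # k = (nu Sigma_f) / Sigma_a.
          return nu_sigma_f / sigma_a

      n = 1000
      nu_sigma_f = rng.normal(0.0850, 0.0850 * 0.020, n)   # assumed 2.0% uncertainty
      sigma_a    = rng.normal(0.0800, 0.0800 * 0.015, n)   # assumed 1.5% uncertainty

      k = keff_model(nu_sigma_f, sigma_a)
      print(f"k_eff = {k.mean():.4f} +/- {k.std(ddof=1):.4f}")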

  11. Short-run and long-run elasticities of import demand for crude oil in Turkey

    International Nuclear Information System (INIS)

    Altinay, Galip

    2007-01-01

    The aim of this study is to attempt to estimate the short-run and the long-run elasticities of demand for crude oil in Turkey by the recent autoregressive distributed lag (ARDL) bounds testing approach to cointegration. As a developing country, Turkey meets its growing demand for oil principally by foreign suppliers. Thus, the study focuses on modelling the demand for imported crude oil using annual data covering the period 1980-2005. The bounds test results reveal that a long-run cointegration relationship exists between the crude oil import and the explanatory variables: nominal price and income, but not in the model that includes real price in domestic currency. The long-run parameters are estimated through a long-run static solution of the estimated ARDL model, and then the short-run dynamics are estimated by the error correction model. The estimated models pass the diagnostic tests successfully. The findings reveal that the income and price elasticities of import demand for crude oil are inelastic both in the short run and in the long run
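
    As a rough illustration of the cointegration and error-correction machinery referred to here - an Engle-Granger two-step on synthetic data, not the paper's ARDL bounds procedure:

      import numpy as np

      rng = np.random.default_rng(0)
      T = 200
      income = np.cumsum(rng.normal(0.02, 0.1, T))       # non-stationary driver
      imports = 1.5 * income + rng.normal(0.0, 0.05, T)  # cointegrated with income

      # Step 1: levels regression; its residual is the error-correction term.
      X = np.column_stack([np.ones(T), income])
      beta = np.linalg.lstsq(X, imports, rcond=None)[0]
      ect = imports - X @ beta

      # Step 2: short-run regression of differences plus the lagged ECT.
      dy, dx = np.diff(imports), np.diff(income)
      Z = np.column_stack([np.ones(T - 1), dx, ect[:-1]])
      gamma = np.linalg.lstsq(Z, dy, rcond=None)[0]
      print("long-run elasticity estimate:", round(beta[1], 3))
      print("speed of adjustment:", round(gamma[2], 3))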

  12. Short-Run and Long-Run Elasticities of Diesel Demand in Korea

    Directory of Open Access Journals (Sweden)

    Seung-Hoon Yoo

    2012-11-01

    Full Text Available This paper investigates the demand function for diesel in Korea covering the period 1986–2011. The short-run and long-run elasticities of diesel demand with respect to price and income are empirically examined using a co-integration and error-correction model. The short-run and long-run price elasticities are estimated to be −0.357 and −0.547, respectively. The short-run and long-run income elasticities are computed to be 1.589 and 1.478, respectively. Thus, diesel demand is relatively inelastic to price change and elastic to income change in both the short-run and long-run. Therefore, a demand-side management through raising the price of diesel will be ineffective and tightening the regulation of using diesel more efficiently appears to be more effective in Korea. The demand for diesel is expected to continuously increase as the economy grows.

  13. Change in running kinematics after cycling are related to alterations in running economy in triathletes.

    Science.gov (United States)

    Bonacci, Jason; Green, Daniel; Saunders, Philo U; Blanch, Peter; Franettovich, Melinda; Chapman, Andrew R; Vicenzino, Bill

    2010-07-01

    Emerging evidence suggests that cycling may influence neuromuscular control during subsequent running but the relationship between altered neuromuscular control and run performance in triathletes is not well understood. The aim of this study was to determine if a 45 min high-intensity cycle influences lower limb movement and muscle recruitment during running and whether changes in limb movement or muscle recruitment are associated with changes in running economy (RE) after cycling. RE, muscle activity (surface electromyography) and limb movement (sagittal plane kinematics) were compared between a control run (no preceding cycle) and a run performed after a 45 min high-intensity cycle in 15 moderately trained triathletes. Muscle recruitment and kinematics during running after cycling were altered in 7 of 15 (46%) triathletes. Changes in kinematics at the knee and ankle were significantly associated with the change in VO(2) after cycling (p < 0.05). These results suggest that cycling alters muscle recruitment in some triathletes and that changes in kinematics, especially at the ankle, are closely related to alterations in running economy after cycling. Copyright 2010 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.

  14. Comparison of fractions of inactive modules between Run1 and Run2

    CERN Document Server

    Motohashi, Kazuki; The ATLAS collaboration

    2015-01-01

    Fraction of inactive modules for each component of the ATLAS pixel detector at the end of Run 1 and the beginning of Run 2. A similar plot which uses a result of functionality tests during LS1 can be found in ATL-INDET-SLIDE-2014-388.

  15. Weekly running volume and risk of running-related injuries among marathon runners

    DEFF Research Database (Denmark)

    Rasmussen, Christina Haugaard; Nielsen, R.O.; Juul, Martin Serup

    2013-01-01

    The purpose of this study was to investigate if the risk of injury declines with increasing weekly running volume before a marathon race.......The purpose of this study was to investigate if the risk of injury declines with increasing weekly running volume before a marathon race....

  16. Weekly running volume and risk of running-related injuries among marathon runners

    DEFF Research Database (Denmark)

    Rasmussen, Christina Haugaard; Nielsen, Rasmus Østergaard; Juul, Martin Serup

    2013-01-01

    PURPOSE/BACKGROUND: The purpose of this study was to investigate if the risk of injury declines with increasing weekly running volume before a marathon race.......PURPOSE/BACKGROUND: The purpose of this study was to investigate if the risk of injury declines with increasing weekly running volume before a marathon race....

  17. Towards scalable parallelism in Monte Carlo particle transport codes using remote memory access

    Energy Technology Data Exchange (ETDEWEB)

    Romano, Paul K [Los Alamos National Laboratory; Brown, Forrest B [Los Alamos National Laboratory; Forget, Benoit [MIT

    2010-01-01

    One forthcoming challenge in the area of high-performance computing is having the ability to run large-scale problems while coping with less memory per compute node. In this work, we investigate a novel data decomposition method that would allow Monte Carlo transport calculations to be performed on systems with limited memory per compute node. In this method, each compute node remotely retrieves a small set of geometry and cross-section data as needed and remotely accumulates local tallies when crossing the boundary of the local spatial domain. Initial results demonstrate that while the method does allow large problems to be run in a memory-limited environment, achieving scalability may be difficult due to inefficiencies in the current implementation of RMA operations.

  18. Towards scalable parallelism in Monte Carlo particle transport codes using remote memory access

    International Nuclear Information System (INIS)

    Romano, Paul K.; Forget, Benoit; Brown, Forrest

    2010-01-01

    One forthcoming challenge in the area of high-performance computing is having the ability to run large-scale problems while coping with less memory per compute node. In this work, we investigate a novel data decomposition method that would allow Monte Carlo transport calculations to be performed on systems with limited memory per compute node. In this method, each compute node remotely retrieves a small set of geometry and cross-section data as needed and remotely accumulates local tallies when crossing the boundary of the local spatial domain. Initial results demonstrate that while the method does allow large problems to be run in a memory-limited environment, achieving scalability may be difficult due to inefficiencies in the current implementation of RMA operations. (author)
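
    A hypothetical mpi4py sketch of the remote-tally idea (our illustration, not the authors' implementation): each rank owns the tally bins of its spatial domain, and other ranks accumulate scores into them with one-sided (RMA) operations, with no matching receive on the owning rank.

      # Run with, e.g.: mpiexec -n 4 python rma_tally.py
      from mpi4py import MPI
      import numpy as np

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      n_local = 100                           # tally bins owned by this rank
      local = np.zeros(n_local)
      win = MPI.Win.Create(local, disp_unit=local.itemsize, comm=comm)

      # A particle crossing into a neighbour's spatial domain scores into
      # that rank's tally array remotely.
      owner, bin_ix = (rank + 1) % size, 7
      score = np.array([1.0])
      win.Lock(owner)
      win.Accumulate(score, owner, target=[bin_ix, 1, MPI.DOUBLE], op=MPI.SUM)
      win.Unlock(owner)

      comm.Barrier()
      print(rank, local[bin_ix])              # each rank received one score
      win.Free()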

  19. Pore-scale uncertainty quantification with multilevel Monte Carlo

    KAUST Repository

    Icardi, Matteo; Hoel, Haakon; Long, Quan; Tempone, Raul

    2014-01-01

    . Since there are no generic ways to parametrize the randomness in the porescale structures, Monte Carlo techniques are the most accessible to compute statistics. We propose a multilevel Monte Carlo (MLMC) technique to reduce the computational cost
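
    For readers unfamiliar with MLMC, a self-contained toy sketch of the telescoping estimator, with made-up level corrections standing in for the pore-scale solver (illustrative sample counts, not an optimized allocation):

      import numpy as np

      rng = np.random.default_rng(1)

      def level_samples(level, n):
          # Toy stand-in for P_l - P_{l-1}: mean-zero corrections whose
          # variance decays with level; level 0 returns the coarse estimate.
          if level == 0:
              return 1.0 + rng.normal(0.0, 0.5, n)
          return rng.normal(0.0, 2.0 ** -level, n)

      # Telescoping MLMC estimator: many cheap coarse samples, few fine ones.
      n_per_level = [100000, 20000, 4000, 800]
      estimate = sum(level_samples(l, n).mean()
                     for l, n in enumerate(n_per_level))
      print(estimate)   # close to the true value 1.0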

  20. Prospect on general software of Monte Carlo method

    International Nuclear Information System (INIS)

    Pei Lucheng

    1992-01-01

    This is a short paper on the prospect of Monte Carlo general software. The content consists of cluster sampling method, zero variance technique, self-improved method, and vectorized Monte Carlo method

  1. Bayesian phylogeny analysis via stochastic approximation Monte Carlo

    KAUST Repository

    Cheon, Sooyoung; Liang, Faming

    2009-01-01

    in simulating from the posterior distribution of phylogenetic trees, rendering the inference ineffective. In this paper, we apply an advanced Monte Carlo algorithm, the stochastic approximation Monte Carlo algorithm, to Bayesian phylogeny analysis. Our method

  2. Uncertainties in models of tropospheric ozone based on Monte Carlo analysis: Tropospheric ozone burdens, atmospheric lifetimes and surface distributions

    Science.gov (United States)

    Derwent, Richard G.; Parrish, David D.; Galbally, Ian E.; Stevenson, David S.; Doherty, Ruth M.; Naik, Vaishali; Young, Paul J.

    2018-05-01

    Recognising that global tropospheric ozone models have many uncertain input parameters, an attempt has been made to employ Monte Carlo sampling to quantify the uncertainties in model output that arise from global tropospheric ozone precursor emissions and from ozone production and destruction in a global Lagrangian chemistry-transport model. Ninety-eight quasi-randomly sampled Monte Carlo model runs were completed and the uncertainties were quantified in tropospheric burdens and lifetimes of ozone, carbon monoxide and methane, together with the surface distribution and seasonal cycle in ozone. The results have shown a satisfactory degree of convergence and provide a first estimate of the likely uncertainties in tropospheric ozone model outputs. There are likely to be diminishing returns in carrying out many more Monte Carlo runs in order to refine further these outputs. Uncertainties due to model formulation were separately addressed using the results from 14 Atmospheric Chemistry Coupled Climate Model Intercomparison Project (ACCMIP) chemistry-climate models. The 95% confidence ranges surrounding the ACCMIP model burdens and lifetimes for ozone, carbon monoxide and methane were somewhat smaller than for the Monte Carlo estimates. This reflected the situation where the ACCMIP models used harmonised emissions data and differed only in their meteorological data and model formulations whereas a conscious effort was made to describe the uncertainties in the ozone precursor emissions and in the kinetic and photochemical data in the Monte Carlo runs. Attention was focussed on the model predictions of the ozone seasonal cycles at three marine boundary layer stations: Mace Head, Ireland, Trinidad Head, California and Cape Grim, Tasmania. Despite comprehensively addressing the uncertainties due to global emissions and ozone sources and sinks, none of the Monte Carlo runs were able to generate seasonal cycles that matched the observations at all three MBL stations. Although

  3. Running and Osteoarthritis: Does Recreational or Competitive Running Increase the Risk?

    Science.gov (United States)

    2017-06-01

    Exercise, like running, is good for overall health and, specifically, our hearts, lungs, muscles, bones, and brains. However, some people are concerned about the impact of running on longterm joint health. Does running lead to higher rates of arthritis in knees and hips? While many researchers find that running protects bone health, others are concerned that this exercise poses a high risk for age-related changes to hips and knees. A study published in the June 2017 issue of JOSPT suggests that the difference in these outcomes depends on the frequency and intensity of running. J Orthop Sports Phys Ther 2017;47(6):391. doi:10.2519/jospt.2017.0505.

  4. Split-phase motor running as capacitor starts motor and as capacitor run motor

    Directory of Open Access Journals (Sweden)

    Yahaya Asizehi ENESI

    2016-07-01

    Full Text Available In this paper, the input parameters of a single-phase split-phase induction motor are taken to investigate and study the output performance characteristics of capacitor start and capacitor run induction motors. The values of these input parameters are used in the design characteristics of the capacitor run and capacitor start motor, with each motor connected to a rated or standard capacitor in series with the auxiliary (starting) winding for normal operating conditions. The magnitude of capacitance that will develop maximum torque in the capacitor start motor and capacitor run motor is investigated and determined by simulation. Each of these capacitors is connected to the auxiliary winding of the split-phase motor, thereby transforming it into a capacitor start or capacitor run motor. The starting current and starting torque of the split-phase motor (SPM), capacitor run motor (CRM) and capacitor start motor (CSM) are compared for their suitability in operational performance and applications.

  5. Applications of Monte Carlo method in Medical Physics

    International Nuclear Information System (INIS)

    Diez Rios, A.; Labajos, M.

    1989-01-01

    The basic ideas of Monte Carlo techniques are presented. Random numbers and their generation by congruential methods, which underlie Monte Carlo calculations, are shown. Monte Carlo techniques to solve integrals are discussed. The evaluation of a simple one-dimensional integral with a known answer, by means of two different Monte Carlo approaches, is discussed. The basic principles of simulating photon histories on a computer and reducing variance, and the current applications in Medical Physics, are commented on. (Author)
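
    The two classic approaches for such an integral are hit-or-miss and sample-mean Monte Carlo; here is a short sketch evaluating the known integral of x^2 over [0, 1] (exactly 1/3) both ways - our example, not necessarily the one used in the paper.

      import random

      rng = random.Random(0)
      f = lambda x: x * x          # integrand on [0, 1]; exact answer is 1/3
      N = 100_000

      # Hit-or-miss: fraction of uniform points in the unit square under f.
      hits = 0
      for _ in range(N):
          x, y = rng.random(), rng.random()
          hits += y < f(x)
      print("hit-or-miss:", hits / N)

      # Sample-mean: average f over uniform points (lower variance here).
      print("sample-mean:", sum(f(rng.random()) for _ in range(N)) / N)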

  6. Long-Run Neutrality and Superneutrality in an ARIMA Framework.

    OpenAIRE

    Fisher, Mark E; Seater, John J

    1993-01-01

    The authors formalize long-run neutrality and long-run superneutrality in the context of a bivariate ARIMA model; show how the restrictions implied by long-run neutrality and long-run superneutrality depend on the orders of integration of the variables; apply their analysis to previous work, showing how that work is related to long-run neutrality and long-run superneutrality; and provide some new evidence on long-run neutrality and long-run superneutrality. Copyright 1993 by American Economic...

  7. Habituation contributes to the decline in wheel running within wheel-running reinforcement periods.

    Science.gov (United States)

    Belke, Terry W; McLaughlin, Ryan J

    2005-02-28

    Habituation appears to play a role in the decline in wheel running within an interval. Aoyama and McSweeney [Aoyama, K., McSweeney, F.K., 2001. Habituation contributes to within-session changes in free wheel running. J. Exp. Anal. Behav. 76, 289-302] showed that when a novel stimulus was presented during a 30-min interval, wheel-running rates following the stimulus increased to levels approximating those earlier in the interval. The present study sought to assess the role of habituation in the decline in running that occurs over a briefer interval. In two experiments, rats responded on fixed-interval 30-s schedules for the opportunity to run for 45 s. Forty reinforcers were completed in each session. In the first experiment, the brake and chamber lights were repeatedly activated and inactivated after 25 s of a reinforcement interval had elapsed to assess the effect on running within the remaining 20 s. Presentations of the brake/light stimulus occurred during nine randomly determined reinforcement intervals in a session. In the second experiment, a 110 dB tone was emitted after 25 s of the reinforcement interval. In both experiments, presentation of the stimulus produced an immediate decline in running that dissipated over sessions. No increase in running following the stimulus was observed in the first experiment until the stimulus-induced decline dissipated. In the second experiment, increases in running were observed following the tone in the first session as well as when data were averaged over several sessions. In general, the results concur with the assertion that habituation plays a role in the decline in wheel running that occurs within both long and short intervals. (c) 2004 Elsevier B.V. All rights reserved.

  8. Healthy Living Initiative: Running/Walking Club

    Science.gov (United States)

    Stylianou, Michalis; Kulinna, Pamela Hodges; Kloeppel, Tiffany

    2014-01-01

    This study was grounded in the public health literature and the call for schools to serve as physical activity intervention sites. Its purpose was twofold: (a) to examine the daily distance covered by students in a before-school running/walking club throughout 1 school year and (b) to gain insights on the teachers' perspectives of the club.

  9. The QCD Running Coupling and its Measurement

    CERN Document Server

    Altarelli, Guido

    2013-01-01

    In this lecture, after recalling the basic definitions and facts about the running coupling in QCD, I present a critical discussion of the methods for measuring $\alpha_s$ and select those that appear to me as the most reliably precise.

  10. Daytime running lights : its safety evidence revisited.

    NARCIS (Netherlands)

    Koornstra, M.J.

    1993-01-01

    Retrospective in-depth accident studies from several countries confirm that human perception errors are the main causal factor in road accidents. The share of accident types which are relevant for the effect of daytime running lights (DRL), such as overtaking and crossing accidents, in the total of

  11. 105-KE Basin Pilot Run design plan

    International Nuclear Information System (INIS)

    Sherrell, D.L.

    1994-01-01

    This document identifies all design deliverables and procedures applicable to the 105-KE Basin Pilot Run. It also establishes a general design strategy, defines interface control requirements, and covers planning for mechanical, electrical, instrument/control system, and equipment installation design

  12. The Run-2 ATLAS Trigger System

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00222798; The ATLAS collaboration

    2016-01-01

    The ATLAS trigger successfully collected collision data during the first run of the LHC between 2009-2013 at different centre-of-mass energies between 900 GeV and 8 TeV. The trigger system consists of a hardware Level-1 and a software-based high level trigger (HLT) that reduces the event rate from the design bunch-crossing rate of 40 MHz to an average recording rate of a few hundred Hz. In Run-2, the LHC will operate at centre-of-mass energies of 13 and 14 TeV and higher luminosity, resulting in roughly five times higher trigger rates. A brief review of the ATLAS trigger system upgrades that were implemented between Run-1 and Run-2, allowing to cope with the increased trigger rates while maintaining or even improving the efficiency to select physics processes of interest, will be given. This includes changes to the Level-1 calorimeter and muon trigger systems, the introduction of a new Level-1 topological trigger module and the merging of the previously two-level HLT system into a single event filter farm. A ...

  13. Collagen gene interactions and endurance running performance

    African Journals Online (AJOL)

    to complete any of the individual components (3.8 km swim, 180 km bike or 42.2 km run) of the 226 km event. The major ... may affect normal collagen fibrillogenesis and alter the mechanical properties of ... using a XP Thermal Cycler (Block model XP-G, BIOER Technology Co., Japan). ... New insights into the function of.

  14. Jet physics at CDF Run II

    Energy Technology Data Exchange (ETDEWEB)

    Safonov, A.; /UC, Davis

    2004-12-01

    The latest results on jet physics at CDF are presented and discussed. Particular attention is paid to studies of the inclusive jet cross section using 177 pb⁻¹ of Run II data. Also discussed is a study of gluon and quark jet fragmentation.

  15. EMBL rescue package keeps bioinformatics centre running

    CERN Multimedia

    Abott, A

    1999-01-01

    The threat to the EBI arising from the EC refusal to fund its running costs seems to have been temporarily lifted. At a meeting in EMBL, Heidelberg, delegates agreed in principle to make up the shortfall of 5 million euros. A final decision will be taken at a special meeting of the EMBL council in March (1 page).

  16. Measuring the running top-quark mass

    International Nuclear Information System (INIS)

    Langenfeld, Ulrich; Uwer, Peter

    2010-06-01

    In this contribution we discuss conceptual issues of current mass measurements performed at the Tevatron. In addition we propose an alternative method which is theoretically much cleaner and to a large extent free from the problems encountered in current measurements. In detail we discuss the direct determination of the top-quark's running mass from the cross section measurements performed at the Tevatron. (orig.)

  17. Individualism, innovation, and long-run growth.

    Science.gov (United States)

    Gorodnichenko, Yuriy; Roland, Gerard

    2011-12-27

    Countries having a more individualist culture have enjoyed higher long-run growth than countries with a more collectivist culture. Individualist culture attaches social status rewards to personal achievements and thus, provides not only monetary incentives for innovation but also social status rewards, leading to higher rates of innovation and economic growth.

  18. Estimating Stair Running Performance Using Inertial Sensors

    Directory of Open Access Journals (Sweden)

    Lauro V. Ojeda

    2017-11-01

    Full Text Available Stair running, both ascending and descending, is a challenging aerobic exercise that many athletes, recreational runners, and soldiers perform during training. Studying biomechanics of stair running over multiple steps has been limited by the practical challenges presented while using optical-based motion tracking systems. We propose using foot-mounted inertial measurement units (IMUs as a solution as they enable unrestricted motion capture in any environment and without need for external references. In particular, this paper presents methods for estimating foot velocity and trajectory during stair running using foot-mounted IMUs. Computational methods leverage the stationary periods occurring during the stance phase and known stair geometry to estimate foot orientation and trajectory, ultimately used to calculate stride metrics. These calculations, applied to human participant stair running data, reveal performance trends through timing, trajectory, energy, and force stride metrics. We present the results of our analysis of experimental data collected on eleven subjects. Overall, we determine that for either ascending or descending, the stance time is the strongest predictor of speed as shown by its high correlation with stride time.
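
    A highly simplified, one-dimensional Python sketch of the stance-phase trick described above (a zero-velocity update, with synthetic data standing in for the IMU): velocity is obtained by integrating acceleration and is reset whenever the foot is detected as stationary, which curbs integration drift.

      import numpy as np

      def foot_velocity(acc, stationary, dt=0.005):
          # Integrate acceleration to velocity, resetting to zero whenever
          # the stance detector flags the foot as stationary (the ZUPT step).
          v = np.zeros_like(acc)
          for k in range(1, len(acc)):
              v[k] = 0.0 if stationary[k] else v[k - 1] + acc[k] * dt
          return v

      # Synthetic stride: 0.2 s of swing at 10 m/s^2, then 0.2 s of stance.
      dt = 0.005
      acc = np.tile(np.r_[np.full(40, 10.0), np.zeros(40)], 3)
      stationary = np.tile(np.r_[np.zeros(40, bool), np.ones(40, bool)], 3)
      v = foot_velocity(acc, stationary, dt)
      stride_length = v[:80].sum() * dt     # displacement over one stride
      print(stride_length)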

  19. Numerical Modelling of Wave Run-Up

    DEFF Research Database (Denmark)

    Ramirez, Jorge Robert Rodriguez; Frigaard, Peter; Andersen, Thomas Lykke

    2011-01-01

    Wave loads are important in problems related to offshore structures, such as wave run-up and slamming. The computation of such wave problems is carried out by CFD models. This paper presents one model, NS3, which solves the 3D Navier-Stokes equations and uses the Volume of Fluid (VOF) method to treat the free surface.

  20. Daytime running lights : costs or benefits?

    NARCIS (Netherlands)

    Brouwer, R.F.T.; Janssen, W.H.; Theeuwes, J.; Alferdinck, J.W.A.M.; Duistermaat, M.

    2006-01-01

    The present study deals with the possibility that road users in the vicinity of a vehicle with daytime running lights (DRL) would suffer from decreased conspicuity because of the presence of that vehicle. In an experiment the primary effects of DRL on the conspicuity of other road users were

  1. Running coupling constants of the Luttinger liquid

    International Nuclear Information System (INIS)

    Boose, D.; Jacquot, J.L.; Polonyi, J.

    2005-01-01

    We compute the one-loop expressions of two running coupling constants of the Luttinger model. The obtained expressions have a nontrivial momentum dependence with Landau poles. The reason for the discrepancy between our results and those of other studies, which find that the scaling laws are trivial, is explained

  2. Wave run-up on sandbag slopes

    Directory of Open Access Journals (Sweden)

    Thamnoon Rasmeemasmuang

    2014-03-01

    Full Text Available On occasions, sandbag revetments are temporarily applied to armour sandy beaches against erosion. Nevertheless, an empirical formula to determine the wave run-up height on sandbag slopes has not been available heretofore. In this study a wave run-up formula which considers the roughness of slope surfaces is proposed for the case of sandbag slopes. A series of laboratory experiments on the wave run-up on smooth slopes and sandbag slopes were conducted in a regular-wave flume, leading to the finding of empirical parameters for the formula. The proposed empirical formula is applicable to wave steepness ranging from 0.01 to 0.14 and to the thickness of placed sandbags relative to the wave height ranging from 0.17 to 3.0. The study shows that the wave run-up height computed by the formula for the sandbag slopes is 26-40% lower than that computed by the formula for the smooth slopes.

  3. The CDF Run II disk inventory manager

    International Nuclear Information System (INIS)

    Hubbard, Paul; Lammel, Stephan

    2001-01-01

    The Collider Detector at Fermilab (CDF) experiment records and analyses proton-antiproton interactions at a center-of-mass energy of 2 TeV. Run II of the Fermilab Tevatron started in April of this year. The duration of the run is expected to be over two years. One of the main data handling strategies of CDF for Run II is to hide all tape access from the user and to facilitate sharing of data and thus disk space. A disk inventory manager was designed and developed over the past years to keep track of the data on disk, to coordinate user access to the data, and to stage data back from tape to disk as needed. The CDF Run II disk inventory manager consists of a server process, user and administrator command-line interfaces, and a library with the routines of the client API. Data are managed in filesets, which are groups of one or more files. The system keeps track of user access to the filesets and attempts to keep frequently accessed data on disk. Data that are not on disk are automatically staged back from tape as needed. For CDF the main staging method is based on the mt-tools package, as tapes are written according to the ANSI standard.

  4. Common Running Overuse Injuries and Prevention

    Directory of Open Access Journals (Sweden)

    Žiga Kozinc

    2017-09-01

    Full Text Available Runners are particularly prone to developing overuse injuries. The most common running-related injuries include medial tibial stress syndrome, Achilles tendinopathy, plantar fasciitis, patellar tendinopathy, iliotibial band syndrome, tibial stress fractures, and patellofemoral pain syndrome. Two of the most significant risk factors appear to be injury history and weekly distance. Several trials have successfully identified biomechanical risk factors for specific injuries, with increased ground reaction forces, excessive foot pronation, hip internal rotation and hip adduction during stance phase being mentioned most often. However, evidence on interventions for lowering injury risk is limited, especially regarding exercise-based interventions. Biofeedback training for lowering ground reaction forces is one of the few methods proven to be effective. It seems that the best way to approach running injury prevention is through individualized treatment. Each athlete should be assessed separately and scanned for risk factors, which should be then addressed with specific exercises. This review provides an overview of most common running-related injuries, with a particular focus on risk factors, and emphasizes the problems encountered in preventing running-related injuries.

  5. The running athlete: Roentgenograms and remedies

    International Nuclear Information System (INIS)

    Pavlov, H.; Torg, J.S.

    1986-01-01

    The authors have put together an atlas of radiographs of almost every conceivable running injury to the foot, ankle, leg, knee, femur, groin, and spine. Text material is limited to legends that describe the figures, and the remedies listed are brief. The text indicates conservative versus surgical treatment and, in some instances, recommends a surgical procedure.

  6. The D0 run II trigger system

    International Nuclear Information System (INIS)

    Schwienhorst, Reinhard; Michigan State U.

    2004-01-01

    The D0 detector at the Fermilab Tevatron was upgraded for Run II. This upgrade included improvements to the trigger system to handle the increased Tevatron luminosity and the higher bunch-crossing rate compared to Run I. The D0 Run II trigger is a highly flexible system for selecting events to be written to tape from an initial interaction rate of about 2.5 MHz. This is done in a three-tier pipelined, buffered system. The first tier (level 1) processes fast detector pick-off signals in a hardware/firmware-based system to reduce the event rate to about 1.5 kHz. The second tier (level 2) uses information from level 1 and forms simple physics objects to reduce the rate to about 850 Hz. The third tier (level 3) uses full detector readout and event reconstruction on a filter farm to reduce the rate to 20-30 Hz. The D0 trigger menu contains a wide variety of triggers. While the emphasis is on triggering on generic lepton and jet final states, there are also trigger terms for specific final-state signatures. In this document we describe the D0 trigger system as it was implemented and is currently operating in Run II.
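
    The rate-reducing cascade described above (about 2.5 MHz at the level-1 input, down to 20-30 Hz written to tape) can be sketched as a chain of increasingly expensive filters. The sketch below is illustrative only: the event fields, cut values, and resulting rejection factors are invented and are not D0's trigger terms.

        import random

        def level1(event):
            # fast hardware-style cut on detector pick-off signals (illustrative)
            return event["et_sum"] > 20.0

        def level2(event):
            # simple physics objects built from level-1 information
            return event["n_objects"] >= 1 and event["et_sum"] > 25.0

        def level3(event):
            # full reconstruction on a filter farm (stand-in: a tighter cut)
            return event["et_sum"] > 40.0

        def run_trigger(events):
            """Pipe events through the three tiers, counting survivors of each."""
            accepted = {"L1": 0, "L2": 0, "L3": 0}
            for ev in events:
                if not level1(ev):
                    continue
                accepted["L1"] += 1
                if not level2(ev):
                    continue
                accepted["L2"] += 1
                if level3(ev):
                    accepted["L3"] += 1
            return accepted

        events = [{"et_sum": random.expovariate(1 / 15.0),
                   "n_objects": random.randint(0, 3)} for _ in range(100_000)]
        print(run_trigger(events))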

  7. Run-2 ATLAS Trigger and Detector Performance

    CERN Document Server

    Winklmeier, Frank; The ATLAS collaboration

    2016-01-01

    The 2nd LHC run started in June 2015 with a pp centre-of-mass collision energy of 13 TeV, and ATLAS has taken first data at this new energy. In this talk the improvements made to the ATLAS experiment during the two-year shutdown of 2013/2014 will be discussed, and first detector and trigger performance results from Run-2 will be shown. In general, the reconstruction algorithms for tracks, e/gamma, muons, taus, jets and flavour tagging have been improved for Run-2. The new reconstruction algorithms and their performance measured using the data taken in 2015 at sqrt(s)=13 TeV will be discussed. Reconstruction efficiency, isolation performance, transverse momentum resolution and momentum scales are measured in various regions of the detector and in momentum intervals enlarged with respect to those measured in Run-1. This presentation will also give an overview of the upgrades to the ATLAS trigger system that have been implemented during the LHC shutdown in order to deal with the increased trigger rates (fact...

  8. A punched-card library of neutron cross-sections and its use in the mechanized preparation of group cross-sections for use in Monte Carlo, Carlson S{sub n} and other multi-group neutronics calculations on high-speed computers

    Energy Technology Data Exchange (ETDEWEB)

    Parker, K [Atomic Weapons Research Establishment, Aldermaston (United Kingdom)

    1962-03-15

    The AWRE punched-card library of neutron cross-sections is described together with associated IBM-7090 programmes which process this data to give group-averaged cross-sections for use in Monte Carlo, Carlson S{sub n} and other multi-group neutronics calculations. The methods developed to deal with both isotropic and anisotropic elastic scattering are described. These include the multi-group transport approximation and the full treatment of anisotropic scattering using the Legendre polynomial moments of the scattering transfer matrix. The principles of group-constant formation are considered and illustrated by describing systems of group constants suitable for fast-reactor calculations. Practical problems such as the empirical adjustment of group constants to reproduce integral results and the collapsing of a many-group set of constants to give a few-group set are discussed. (author)
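
    Collapsing a many-group set of constants to a few-group set, as mentioned above, is conventionally done by weighting the fine-group cross-sections with a representative flux spectrum. The sketch below shows that standard flux-weighted collapse; the numbers are purely illustrative and are not taken from the AWRE library.

        import numpy as np

        def collapse(sigma_fine, flux_fine, group_edges):
            """Flux-weighted collapse of a fine-group cross-section set.

            sigma_fine  -- cross section in each fine group (barns)
            flux_fine   -- weighting flux in each fine group
            group_edges -- indices delimiting the coarse groups
            """
            sigma_coarse = []
            for lo, hi in zip(group_edges[:-1], group_edges[1:]):
                phi = flux_fine[lo:hi]
                sigma_coarse.append(np.dot(sigma_fine[lo:hi], phi) / phi.sum())
            return np.array(sigma_coarse)

        # Collapse a 10-group set to 2 groups (numbers are purely illustrative)
        sigma = np.array([4.2, 4.0, 3.8, 3.5, 3.1, 2.8, 2.6, 2.5, 2.4, 2.3])
        flux  = np.array([0.2, 0.5, 1.0, 1.5, 2.0, 2.0, 1.5, 1.0, 0.5, 0.2])
        print(collapse(sigma, flux, group_edges=[0, 5, 10]))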

  9. Improving computational efficiency of Monte Carlo simulations with variance reduction

    International Nuclear Information System (INIS)

    Turner, A.; Davis, A.

    2013-01-01

    CCFE perform Monte Carlo transport simulations on large and complex tokamak models such as ITER. Such simulations are challenging since streaming and deep-penetration effects are equally important. In order to make such simulations tractable, both variance reduction (VR) techniques and parallel computing are used. It has been found that the application of VR techniques in such models significantly reduces the efficiency of parallel computation due to 'long histories'. VR in MCNP can be accomplished using energy-dependent weight windows. The weight window represents an 'average behaviour' of particles, and large deviations in the arriving weight of a particle give rise to extreme amounts of splitting being performed, and hence a long history. When running on parallel clusters, a long history can have a detrimental effect on the parallel efficiency: if one process is computing the long history, the other CPUs complete their batch of histories and sit idle. Furthermore, some long histories have been found to be effectively intractable. To combat this effect, CCFE has developed an adaptation of MCNP which dynamically adjusts the weight window where a large weight deviation is encountered. The method effectively 'de-optimises' the weight window, reducing the VR performance, but this is offset by a significant increase in parallel efficiency. Testing with a simple geometry has shown that the method does not bias the result. This 'long history method' has enabled CCFE to significantly improve the performance of MCNP calculations for ITER on parallel clusters, and will be beneficial for any geometry combining streaming and deep-penetration effects. (authors)
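
    A weight-window check at its simplest splits particles that arrive above the window and plays Russian roulette on those below it; the long-history problem arises when the arriving weight exceeds the window by orders of magnitude and the split multiplicity explodes. The sketch below shows a simplified version of such a check, with a cap that relaxes the window for extreme weights in the spirit of the adaptation described above; it is not MCNP's actual algorithm.

        import random

        def apply_weight_window(weight, w_low, w_high, max_ratio=100.0):
            """Split or roulette a particle at a weight-window boundary.

            If the arriving weight exceeds the window by more than max_ratio,
            the window is relaxed ('de-optimised') to cap the splitting burden,
            mimicking the long-history mitigation described above.
            Returns the list of weights of the surviving copies.
            """
            if weight > w_high:
                if weight / w_high > max_ratio:
                    w_high = weight / max_ratio          # relax the window locally
                n_split = min(int(weight / w_high) + 1, int(max_ratio))
                return [weight / n_split] * n_split      # conserve total weight
            if weight < w_low:
                survival = weight / w_low                # Russian roulette
                return [w_low] if random.random() < survival else []
            return [weight]

        print(apply_weight_window(5000.0, w_low=0.5, w_high=2.0))  # capped splitting
        print(apply_weight_window(0.1, w_low=0.5, w_high=2.0))     # roulette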

  10. Alternative Implementations of the Monte Carlo Power Method

    International Nuclear Information System (INIS)

    Blomquist, R.N.; Gelbard, E.M.

    2002-01-01

    We compare nominal efficiencies, i.e., variances in power shapes for equal running time, of different versions of the Monte Carlo (MC) eigenvalue computation. The two main methods considered here are 'conventional' MC and the superhistory method. Within each of these major methods, different variants are available for the main steps of the basic MC algorithm. Thus, for example, different treatments of the fission process may vary in the extent to which they follow, in analog fashion, the details of real-world fission, or they may vary in details of the methods by which they choose next-generation source sites. In general the same options are available in both the superhistory method and conventional MC, but there seems to have been little examination of the special properties of the two major methods and their minor variants. We find, first, that the superhistory method is just as efficient as conventional MC and, second, that use of different variants of the basic algorithms may, in special cases, have a surprisingly large effect on MC computational efficiency.
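
    Conventional MC eigenvalue iteration proceeds generation by generation: transport a batch of source particles, bank the fission sites they produce, estimate k as the ratio of produced to started neutrons, and renormalise the bank for the next generation (the superhistory method instead follows each particle's descendants for several generations between renormalisations). The sketch below is a toy version of the conventional scheme on a fictitious one-dimensional model; all physics parameters are invented.

        import random

        def run_generation(sources, p_fission=0.4, nu=2.5, p_leak=0.15):
            """Track one generation; return the next source bank and a k estimate."""
            next_sources = []
            for x in sources:
                x += random.gauss(0.0, 1.0)               # crude transport step
                if random.random() < p_leak:
                    continue                              # leaked: history ends
                if random.random() < p_fission:
                    n = int(nu) + (random.random() < nu - int(nu))
                    next_sources.extend([x] * n)          # fission: bank new sites
            k_gen = len(next_sources) / len(sources)
            return next_sources, k_gen

        random.seed(1)
        sources = [0.0] * 5000
        for gen in range(20):
            sources, k = run_generation(sources)
            # renormalise the bank so the population stays near 5000 per generation
            sources = random.choices(sources, k=5000)
            if gen >= 5:                                  # skip inactive generations
                print(f"gen {gen:2d}  k = {k:.3f}")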

  11. Monte Carlo simulation of the turbulent transport of airborne contaminants

    International Nuclear Information System (INIS)

    Watson, C.W.; Barr, S.

    1975-09-01

    A generalized, three-dimensional Monte Carlo model and computer code (SPOOR) are described for simulating atmospheric transport and dispersal of small pollutant clouds. A cloud is represented by a large number of particles that we track by statistically sampling simulated wind and turbulence fields. These fields are based on generalized wind data for large-scale flow and turbulent energy spectra for the micro- and mesoscales. The large-scale field can be input from a climatological data base, or by means of real-time analyses, or from a separate, subjectively defined data base. We introduce the micro- and mesoscale wind fluctuations through a power spectral density, to include effects from a broad spectrum of turbulent-energy scales. The role of turbulence is simulated in both meander and dispersal. Complex flow fields and time-dependent diffusion rates are accounted for naturally, and shear effects are simulated automatically in the ensemble of particle trajectories. An important adjunct has been the development of computer-graphics displays. These include two- and three-dimensional (perspective) snapshots and color motion pictures of particle ensembles, plus running displays of differential and integral cloud characteristics. The model's versatility makes it a valuable atmospheric research tool that we can adapt easily into broader, multicomponent systems-analysis codes. Removal, transformation, dry or wet deposition, and resuspension of contaminant particles can be readily included.
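
    At its core, the particle-tracking approach described above advances an ensemble of particles through a mean wind field plus statistically sampled turbulent fluctuations. The sketch below is a heavily simplified version with Gaussian, uncorrelated fluctuations; SPOOR itself draws the micro- and mesoscale fluctuations from a power spectral density, which this sketch does not attempt.

        import random

        def disperse(n_particles=1000, n_steps=100, dt=1.0,
                     u_mean=(5.0, 0.0), sigma_turb=0.8):
            """Track a puff of particles in a mean wind with Gaussian turbulence."""
            particles = [(0.0, 0.0)] * n_particles
            for _ in range(n_steps):
                particles = [
                    (x + (u_mean[0] + random.gauss(0, sigma_turb)) * dt,
                     y + (u_mean[1] + random.gauss(0, sigma_turb)) * dt)
                    for x, y in particles
                ]
            return particles

        cloud = disperse()
        cx = sum(x for x, _ in cloud) / len(cloud)
        sy = (sum(y * y for _, y in cloud) / len(cloud)) ** 0.5
        print(f"cloud centre x = {cx:.0f} m, crosswind spread = {sy:.1f} m")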

  12. Recursive Monte Carlo method for deep-penetration problems

    International Nuclear Information System (INIS)

    Goldstein, M.; Greenspan, E.

    1980-01-01

    The Recursive Monte Carlo (RMC) method developed for estimating importance function distributions in deep-penetration problems is described. Unique features of the method are illustrated, including the ability to infer the importance function distribution pertaining to many detectors from, essentially, a single MC run, and the ability to use the history tape created for a representative region to calculate the importance function in identical regions. The RMC method is applied to the solution of two realistic deep-penetration problems: a concrete shield problem and a Tokamak major penetration problem. It is found that the RMC method can provide the importance function distributions, required for importance sampling, with accuracy that is suitable for an efficient solution of the deep-penetration problems considered. The use of the RMC method improved the solution efficiency of these two problems by one to three orders of magnitude. 8 figures, 4 tables
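
    The importance functions that RMC estimates are what drive biasing schemes such as importance sampling. As a minimal illustration of why such biasing matters in deep penetration, the sketch below estimates the transmission of particles through a thick, purely absorbing slab, comparing an analog estimate with one whose free paths are stretched toward the detector and reweighted. This is a textbook importance-sampling example, not the RMC method itself.

        import math
        import random

        def transmission(thickness, sigma, n, bias=1.0):
            """Estimate the transmission exp(-sigma*thickness) through a purely
            absorbing slab. bias < 1 stretches the sampled free paths toward
            deep penetration; the statistical weight keeps the estimate unbiased."""
            lam = bias * sigma                         # biased interaction rate
            total = 0.0
            for _ in range(n):
                s = random.expovariate(lam)            # sampled free path
                if s > thickness:                      # escaped through the slab
                    total += (sigma / lam) * math.exp(-(sigma - lam) * s)
            return total / n

        random.seed(2)
        print(f"exact  {math.exp(-10.0):.2e}")
        print(f"analog {transmission(10.0, 1.0, 200_000):.2e}")
        print(f"biased {transmission(10.0, 1.0, 200_000, bias=0.3):.2e}")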

  13. KINETIC CONSEQUENCES OF CONSTRAINING RUNNING BEHAVIOR

    Directory of Open Access Journals (Sweden)

    John A. Mercer

    2005-06-01

    Full Text Available It is known that impact forces increase with running velocity as well as when stride length increases. Since stride length naturally changes with changes in submaximal running velocity, it was not clear which factor, running velocity or stride length, played the critical role in determining impact characteristics. The aim of the study was to investigate whether or not stride length influences the relationship between running velocity and impact characteristics. Eight volunteers (mass = 72.4 ± 8.9 kg; height = 1.7 ± 0.1 m; age = 25 ± 3.4 years) completed two running conditions: preferred stride length (PSL) and stride length constrained at 2.5 m (SL2.5). During each condition, participants ran at a variety of speeds with the intent that the range of speeds would be similar between conditions. During PSL, participants were given no instructions regarding stride length. During SL2.5, participants were required to strike targets placed on the floor that resulted in a stride length of 2.5 m. Ground reaction forces were recorded (1080 Hz), as were leg and head accelerations (uni-axial accelerometers). Impact force and impact attenuation (calculated as the ratio of head and leg impact accelerations) were recorded for each running trial. Scatter plots were generated plotting each parameter against running velocity. Lines of best fit were calculated, with the slopes recorded for analysis. The slopes were compared between conditions using paired t-tests. Data from two subjects were dropped from analysis since their velocity ranges were not similar between conditions, leaving six subjects in the analysis. The slope of the impact force vs. velocity relationship was different between conditions (PSL: 0.178 ± 0.16 BW/m·s-1; SL2.5: -0.003 ± 0.14 BW/m·s-1; p < 0.05). The slope of the impact attenuation vs. velocity relationship was different between conditions (PSL: 5.12 ± 2.88 %/m·s-1; SL2.5: 1.39 ± 1.51 %/m·s-1; p < 0.05). Stride length was an important factor
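
    The analysis pipeline in this record (a line of best fit per subject and condition, then paired t-tests on the slopes) is straightforward to sketch. The data below are synthetic stand-ins generated to resemble the reported pattern, not the study's measurements.

        import numpy as np
        from scipy import stats

        def condition_slope(velocities, impact_forces):
            """Slope of the line of best fit for one subject in one condition."""
            slope, _intercept, _r, _p, _se = stats.linregress(velocities, impact_forces)
            return slope

        rng = np.random.default_rng(0)
        n_subjects = 6
        slopes_psl, slopes_sl25 = [], []
        for _ in range(n_subjects):
            v = np.linspace(2.5, 4.5, 8)            # running velocities (m/s)
            # synthetic impact forces: rising with velocity at PSL, flat at fixed SL
            slopes_psl.append(condition_slope(v, 1.8 + 0.18 * v + rng.normal(0, 0.05, v.size)))
            slopes_sl25.append(condition_slope(v, 2.3 + 0.00 * v + rng.normal(0, 0.05, v.size)))

        t, p = stats.ttest_rel(slopes_psl, slopes_sl25)   # paired t-test
        print(f"mean slope PSL {np.mean(slopes_psl):.3f}, "
              f"SL2.5 {np.mean(slopes_sl25):.3f}, p = {p:.4f}")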

  14. Peak Running Intensity of International Rugby: Implications for Training Prescription.

    Science.gov (United States)

    Delaney, Jace A; Thornton, Heidi R; Pryor, John F; Stewart, Andrew M; Dascombe, Ben J; Duthie, Grant M

    2017-09-01

    To quantify the duration- and position-specific peak running intensities of international rugby union for the prescription and monitoring of specific training methodologies. Global positioning systems (GPS) were used to assess the activity profile of 67 elite-level rugby union players from 2 nations across 33 international matches. A moving-average approach was used to identify the peak relative distance (m/min), average acceleration/deceleration (AveAcc; m/s2), and average metabolic power (Pmet) for a range of durations (1-10 min). Differences between positions and durations were described using a magnitude-based network. Peak running intensity increased as the length of the moving average decreased. There were likely small to moderate increases in relative distance and AveAcc for outside backs, halfbacks, and loose forwards compared with the tight 5 group across all moving-average durations (effect size [ES] = 0.27-1.00). Pmet demands were at least likely greater for outside backs and halfbacks than for the tight 5 (ES = 0.86-0.99). Halfbacks demonstrated the greatest relative distance and Pmet outputs but were similar to outside backs and loose forwards in AveAcc demands. The current study has presented a framework to describe the peak running intensities achieved during international rugby competition by position, which are considerably higher than previously reported whole-period averages. These data provide further knowledge of the peak activity profiles of international rugby competition, and this information can be used to assist coaches and practitioners in adequately preparing athletes for the most demanding periods of play.
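
    The moving-average approach amounts to sliding a fixed-length window over the per-sample GPS distance trace and taking the maximum of the windowed mean. A minimal sketch follows, with synthetic distance data standing in for GPS output.

        import numpy as np

        def peak_intensity(distances, window_s, sample_rate_hz=1):
            """Peak relative distance (m/min) from per-sample GPS distances,
            using a moving average of the given window length."""
            window = int(window_s * sample_rate_hz)
            kernel = np.ones(window) / window
            per_sample = np.convolve(distances, kernel, mode="valid")  # m per sample
            return per_sample.max() * 60 * sample_rate_hz              # convert to m/min

        rng = np.random.default_rng(3)
        distances = rng.gamma(shape=2.0, scale=0.9, size=80 * 60)      # 80 min at 1 Hz
        for minutes in (1, 2, 5, 10):
            print(f"{minutes:2d}-min peak: {peak_intensity(distances, minutes * 60):.0f} m/min")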

  15. Monte Carlo computation in the applied research of nuclear technology

    International Nuclear Information System (INIS)

    Xu Shuyan; Liu Baojie; Li Qin

    2007-01-01

    This article briefly introduces Monte Carlo methods and their properties. It surveys Monte Carlo methods with emphasis on their applications in several domains of nuclear technology. Monte Carlo simulation methods and several commonly used software packages that implement them are also introduced. The proposed methods are demonstrated by a real example. (authors)
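
    The record stays at survey level; the core idea it covers can be stated in a few lines of code. The sketch below is the textbook Monte Carlo estimate of a one-dimensional integral, with its statistical uncertainty, and is independent of the article's own example.

        import math
        import random

        def mc_integrate(f, n):
            """Plain Monte Carlo estimate of the integral of f over [0, 1],
            with its one-sigma statistical uncertainty."""
            samples = [f(random.random()) for _ in range(n)]
            mean = sum(samples) / n
            var = sum((s - mean) ** 2 for s in samples) / (n - 1)
            return mean, math.sqrt(var / n)

        random.seed(0)
        estimate, sigma = mc_integrate(math.exp, 100_000)   # true value: e - 1
        print(f"{estimate:.4f} ± {sigma:.4f} (exact {math.e - 1:.4f})")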

  16. LHC Report: Tests of new LHC running modes

    CERN Document Server

    Verena Kain for the LHC team

    2012-01-01

    On 13 September, the LHC collided lead ions with protons for the first time. This outstanding achievement was key preparation for the planned 2013 operation in this mode. Outside of two special physics runs, the LHC has continued productive proton-proton luminosity operation.   Celebrating proton-ion collisions. The first week of September added another 1 fb-1 of integrated luminosity to ATLAS’s and CMS’s proton-proton data set. It was a week of good and steady production mixed with the usual collection of minor equipment faults. The peak performance was slightly degraded at the start of the week but thanks to the work of the teams in the LHC injectors the beam brightness – and thus the LHC peak performance – were restored to previous levels by the weekend. The LHC then switched to new running modes and spectacularly proved its potential as a multi-purpose machine. This is due in large part to the LHC equipment and controls, which have been designed wi...

  17. The efficacy of downhill running as a method to enhance running economy in trained distance runners.

    Science.gov (United States)

    Shaw, Andrew J; Ingham, Stephen A; Folland, Jonathan P

    2018-06-01

    Running downhill, in comparison to running on the flat, appears to involve an exaggerated stretch-shortening cycle (SSC) due to greater impact loads and higher vertical velocity on landing, whilst also incurring a lower metabolic cost. Therefore, downhill running could facilitate higher volumes of training at higher speeds whilst performing an exaggerated SSC, potentially inducing favourable adaptations in running mechanics and running economy (RE). This investigation assessed the efficacy of a supplementary 8-week programme of downhill running as a means of enhancing RE in well-trained distance runners. Nineteen athletes completed supplementary downhill (-5% gradient; n = 10) or flat (n = 9) run training twice a week for 8 weeks within their habitual training. Participants trained at a standardised intensity based on the velocity of lactate turnpoint (vLTP), with training volume increased incrementally between weeks. Changes in the energy cost of running (EC) and vLTP were assessed on both flat and downhill gradients, in addition to maximal oxygen uptake (V̇O2max). No changes in EC were observed during flat running following downhill (1.22 ± 0.09 vs 1.20 ± 0.07 kcal·kg-1·km-1, P = .41) or flat run training (1.21 ± 0.13 vs 1.19 ± 0.12 kcal·kg-1·km-1). Moreover, no changes in EC during downhill running were observed in either condition (P > .23). vLTP increased following both downhill (16.5 ± 0.7 vs 16.9 ± 0.6 km·h-1, P = .05) and flat run training (16.9 ± 0.7 vs 17.2 ± 1.0 km·h-1, P = .05), though no differences in responses were observed between groups (P = .53). Therefore, a short programme of supplementary downhill run training does not appear to enhance RE in already well-trained individuals.

  18. Influence of treadmill acceleration on actual walk-to-run transition.

    Science.gov (United States)

    Van Caekenberghe, I; Segers, V; De Smet, K; Aerts, P; De Clercq, D

    2010-01-01

    When accelerating continuously, humans spontaneously change from a walking to a running pattern by means of a walk-to-run transition (WRT). Results of previous studies indicate that when higher treadmill accelerations are imposed, higher WRT-speeds can be expected. By studying the kinematics of the WRT at different accelerations, the underlying mechanisms can be unravelled. Nineteen young, healthy female subjects performed walk-to-run transitions on a constantly accelerating treadmill (0.1, 0.2 and 0.5 m·s-2). A higher acceleration induced a higher WRT-speed by affecting both the preparation for the transition and the actual transition step. Increasing the acceleration caused a higher WRT-speed as a result of a greater step length during the transition step, which was mainly a consequence of a prolonged airborne phase. Besides this effect on the transition step, the direct preparation phase of the transition (i.e. the last walking step before transition) appeared to fulfil specific constraints required to execute the transition regardless of the acceleration imposed. This highlights an important role for this step in the debate regarding possible determinants of the WRT. In addition, spatiotemporal and kinematic data confirmed that the WRT remains a discontinuous change of gait pattern at all accelerations imposed. It is concluded that the walk-to-run transition is a discontinuous switch from walking to running which depends on the magnitude of treadmill belt acceleration. Copyright 2009 Elsevier B.V. All rights reserved.

  19. Estimating statistical uncertainty of Monte Carlo efficiency-gain in the context of a correlated sampling Monte Carlo code for brachytherapy treatment planning with non-normal dose distribution.

    Science.gov (United States)

    Mukhopadhyay, Nitai D; Sampson, Andrew J; Deniz, Daniel; Alm Carlsson, Gudrun; Williamson, Jeffrey; Malusek, Alexandr

    2012-01-01

    Correlated sampling Monte Carlo methods can shorten computing times in brachytherapy treatment planning. Monte Carlo efficiency is typically estimated via the efficiency gain, defined as the reduction in computing time by correlated sampling relative to conventional Monte Carlo methods when equal statistical uncertainties have been achieved. The determination of the efficiency-gain uncertainty arising from random effects, however, is not a straightforward task, especially when the error distribution is non-normal. The purpose of this study is to evaluate the applicability of the F distribution and standardized uncertainty propagation methods (widely used in metrology to estimate the uncertainty of physical measurements) for predicting confidence intervals about efficiency-gain estimates derived from single Monte Carlo runs using fixed-collision correlated sampling in a simplified brachytherapy geometry. A bootstrap-based algorithm was used to simulate the probability distribution of the efficiency-gain estimates, and the shortest 95% confidence interval was estimated from this distribution. It was found that the corresponding relative uncertainty was as large as 37% for this particular problem. The uncertainty propagation framework predicted confidence intervals reasonably well; however, its main disadvantage was that uncertainties of input quantities had to be calculated in a separate run via a Monte Carlo method. The F distribution noticeably underestimated the confidence interval. These discrepancies were influenced by several photons with large statistical weights, which made extremely large contributions to the scored absorbed-dose difference. The mechanism of acquiring high statistical weights in the fixed-collision correlated sampling method was explained and a mitigation strategy was proposed. Copyright © 2011 Elsevier Ltd. All rights reserved.
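
    A bootstrap confidence interval of the kind used in the study can be sketched in a few lines: resample the per-run data, recompute the gain for each resample, and read the interval off the resulting distribution. The sketch below uses a simple percentile interval rather than the shortest interval, and it defines the gain as a ratio of mean computing times over synthetic data; both are simplifications made for illustration.

        import numpy as np

        def bootstrap_gain_ci(t_conventional, t_correlated, n_boot=10_000, level=0.95):
            """Bootstrap confidence interval for an efficiency gain estimated as
            the ratio of mean computing times (an illustrative definition)."""
            rng = np.random.default_rng(4)
            gains = np.empty(n_boot)
            for i in range(n_boot):
                a = rng.choice(t_conventional, size=t_conventional.size, replace=True)
                b = rng.choice(t_correlated, size=t_correlated.size, replace=True)
                gains[i] = a.mean() / b.mean()
            lo, hi = np.quantile(gains, [(1 - level) / 2, (1 + level) / 2])
            return gains.mean(), (lo, hi)

        rng = np.random.default_rng(5)
        # synthetic per-run computing times; the heavy tail mimics occasional
        # high-weight photons inflating the correlated-sampling variance
        t_conv = rng.normal(100.0, 5.0, size=30)
        t_corr = 10.0 + rng.lognormal(mean=1.0, sigma=0.8, size=30)
        gain, (lo, hi) = bootstrap_gain_ci(t_conv, t_corr)
        print(f"efficiency gain = {gain:.1f}, 95% CI [{lo:.1f}, {hi:.1f}]")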

  20. KENO IV: an improved Monte Carlo criticality program

    International Nuclear Information System (INIS)

    Petrie, L.M.; Cross, N.F.

    1975-11-01

    KENO IV is a multigroup Monte Carlo criticality program written for the IBM 360 computers. It executes rapidly and is flexibly dimensioned, so the allowed size of a problem (the number of energy groups, number of geometry cards, etc.) is limited only by the total data storage required. The input data, with the exception of cross sections, fission spectra and albedos, may be entered in free form. The geometry input is quite simple to prepare, and complicated three-dimensional systems can often be described with a minimum of effort. The results calculated by KENO IV include k-effective, lifetime and generation time, energy-dependent leakages and absorptions, energy- and region-dependent fluxes, and region-dependent fission densities. Criticality searches can be made on unit dimensions or on the number of units in an array. A summary of the theory utilized by KENO IV, a section describing the logical program flow, a compilation of the error messages printed by the code, and a comprehensive data guide for preparing input to the code are presented. 14 references