WorldWideScience

Sample records for carlo code system

  1. Morse Monte Carlo Radiation Transport Code System

    Energy Technology Data Exchange (ETDEWEB)

    Emmett, M.B.

    1975-02-01

    The report contains sections describing the MORSE and PICTURE codes, input descriptions, sample problems, derivations of the physical equations and explanations of the various error messages. The MORSE code is a multipurpose neutron and gamma-ray transport Monte Carlo code. Time dependence for both shielding and criticality problems is provided. General three-dimensional geometry may be used with an albedo option available at any material surface. The PICTURE code provides aid in preparing correct input data for the combinatorial geometry package CG. It provides a printed view of arbitrary two-dimensional slices through the geometry. By inspecting these pictures one may determine whether the geometry specified by the input cards is indeed the desired geometry. 23 refs. (WRF)

  2. MORSE Monte Carlo code

    Energy Technology Data Exchange (ETDEWEB)

    Cramer, S.N.

    1984-01-01

    The MORSE code is a large general-use multigroup Monte Carlo code system. Although no claims can be made regarding its superiority in either theoretical details or Monte Carlo techniques, MORSE has been, since its inception at ORNL in the late 1960s, the most widely used Monte Carlo radiation transport code. The principal reason for this popularity is that MORSE is relatively easy to use, independent of any installation or distribution center, and it can be easily customized to fit almost any specific need. Features of the MORSE code are described.

  3. A Parallel Monte Carlo Code for Simulating Collisional N-body Systems

    CERN Document Server

    Pattabiraman, Bharath; Liao, Wei-Keng; Choudhary, Alok; Kalogera, Vassiliki; Memik, Gokhan; Rasio, Frederic A

    2012-01-01

    We present a new parallel code for computing the dynamical evolution of collisional N-body systems with up to N~10^7 particles. Our code is based on the Hénon Monte Carlo method for solving the Fokker-Planck equation, and makes assumptions of spherical symmetry and dynamical equilibrium. The principal algorithmic developments involve optimizing data structures, and the introduction of a parallel random number generation scheme, as well as a parallel sorting algorithm, required to find nearest neighbors for interactions and to compute the gravitational potential. The new algorithms we introduce along with our choice of decomposition scheme minimize communication costs and ensure optimal distribution of data and workload among the processing units. The implementation uses the Message Passing Interface (MPI) library for communication, which makes it portable to many different supercomputing architectures. We validate the code by calculating the evolution of clusters with initial Plummer distribution functi...
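
    The parallel random number generation scheme mentioned above is not detailed in this record; the sketch below only illustrates the general requirement it addresses, namely that each worker needs a statistically independent stream. It uses NumPy's SeedSequence spawning as a stand-in, which is an assumption and not the authors' MPI-based implementation.

      import numpy as np

      def make_worker_rngs(n_workers, root_seed=12345):
          # Spawn statistically independent child sequences, one per worker/rank.
          root = np.random.SeedSequence(root_seed)
          return [np.random.default_rng(s) for s in root.spawn(n_workers)]

      rngs = make_worker_rngs(4)
      # Each worker draws its own samples, e.g. velocity perturbations in a relaxation step.
      print([rng.standard_normal(3) for rng in rngs])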

  4. Implementation of the probability table method in a continuous-energy Monte Carlo code system

    Energy Technology Data Exchange (ETDEWEB)

    Sutton, T.M.; Brown, F.B. [Lockheed Martin Corp., Schenectady, NY (United States)]

    1998-10-01

    RACER is a particle-transport Monte Carlo code that utilizes a continuous-energy treatment for neutrons and neutron cross section data. Until recently, neutron cross sections in the unresolved resonance range (URR) have been treated in RACER using smooth, dilute-average representations. This paper describes how RACER has been modified to use probability tables to treat cross sections in the URR, and the computer codes that have been developed to compute the tables from the unresolved resonance parameters contained in ENDF/B data files. A companion paper presents results of Monte Carlo calculations that demonstrate the effect of the use of probability tables versus the use of dilute-average cross sections for the URR. The next section provides a brief review of the probability table method as implemented in the RACER system. The production of the probability tables for use by RACER takes place in two steps. The first step is the generation of probability tables from the nuclear parameters contained in the ENDF/B data files. This step, and the code written to perform it, are described in Section 3. The tables produced are at energy points determined by the ENDF/B parameters and/or accuracy considerations. The tables actually used in the RACER calculations are obtained in the second step from those produced in the first. These tables are generated at energy points specific to the RACER calculation. Section 4 describes this step and the code written to implement it, as well as modifications made to RACER to enable it to use the tables. Finally, some results and conclusions are presented in Section 5.
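
    As a minimal illustration of the probability-table idea (not the RACER implementation), the sketch below samples an unresolved-resonance-range cross section from a single table of band probabilities instead of using one smooth dilute-average value. The band probabilities and cross sections are invented placeholders.

      import numpy as np

      band_prob = np.array([0.30, 0.40, 0.20, 0.10])   # band probabilities (sum to 1)
      band_xs   = np.array([ 8.0, 12.0, 25.0, 60.0])   # hypothetical total cross sections (barns)

      def sample_xs(rng):
          # Select a cross-section band according to the cumulative table probabilities.
          band = np.searchsorted(np.cumsum(band_prob), rng.random())
          return band_xs[band]

      rng = np.random.default_rng(0)
      samples = np.array([sample_xs(rng) for _ in range(100_000)])
      print(samples.mean(), (band_prob * band_xs).sum())   # sampled mean vs. table-averaged mean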

  5. Space applications of the MITS electron-photon Monte Carlo transport code system

    Energy Technology Data Exchange (ETDEWEB)

    Kensek, R.P.; Lorence, L.J.; Halbleib, J.A. [Sandia National Labs., Albuquerque, NM (United States); Morel, J.E. [Los Alamos National Lab., NM (United States)

    1996-07-01

    The MITS multigroup/continuous-energy electron-photon Monte Carlo transport code system has matured to the point that it is capable of addressing more realistic three-dimensional adjoint applications. It is first employed to efficiently predict point doses as a function of source energy for simple three-dimensional experimental geometries exposed to simulated uniform isotropic planar sources of monoenergetic electrons up to 4.0 MeV. Results are in very good agreement with experimental data. It is then used to efficiently simulate dose to a detector in a subsystem of a GPS satellite due to its natural electron environment, employing a relatively complex model of the satellite. The capability for survivability analysis of space systems is demonstrated, and results are obtained with and without variance reduction.

  6. Full modelling of the MOSAIC animal PET system based on the GATE Monte Carlo simulation code

    Science.gov (United States)

    Merheb, C.; Petegnief, Y.; Talbot, J. N.

    2007-02-01

    Positron emission tomography (PET) systems dedicated to animal imaging are now widely used for biological studies. The scanner performance strongly depends on the design and the characteristics of the system. Many parameters must be optimized like the dimensions and type of crystals, geometry and field-of-view (FOV), sampling, electronics, lightguide, shielding, etc. Monte Carlo modelling is a powerful tool to study the effect of each of these parameters on the basis of realistic simulated data. Performance assessment in terms of spatial resolution, count rates, scatter fraction and sensitivity is an important prerequisite before the model can be used instead of real data for a reliable description of the system response function or for optimization of reconstruction algorithms. The aim of this study is to model the performance of the Philips Mosaic™ animal PET system using a comprehensive PET simulation code in order to understand and describe the origin of important factors that influence image quality. We use GATE, a Monte Carlo simulation toolkit for a realistic description of the ring PET model, the detectors, shielding, cap, electronic processing and dead times. We incorporate new features to adjust signal processing to the Anger logic underlying the Mosaic™ system. Special attention was paid to dead time and energy spectra descriptions. Sorting of simulated events in a list mode format similar to the system outputs was developed to compare experimental and simulated sensitivity and scatter fractions for different energy thresholds using various models of phantoms describing rat and mouse geometries. Count rates were compared for both cylindrical homogeneous phantoms. Simulated spatial resolution was fitted to experimental data for 18F point sources at different locations within the FOV with an analytical blurring function for electronic processing effects. Simulated and measured sensitivities differed by less than 3%, while scatter fractions agreed

  7. A user's manual for MASH 1.0: A Monte Carlo Adjoint Shielding Code System

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, J.O. [ed.]

    1992-03-01

    The Monte Carlo Adjoint Shielding Code System, MASH, calculates neutron and gamma-ray environments and radiation protection factors for armored military vehicles, structures, trenches, and other shielding configurations by coupling a forward discrete ordinates air-over-ground transport calculation with an adjoint Monte Carlo treatment of the shielding geometry. Efficiency and optimum use of computer time are emphasized. The code system includes the GRTUNCL and DORT codes for air-over-ground transport calculations, the MORSE code with the GIFT5 combinatorial geometry package for adjoint shielding calculations, and several peripheral codes that perform the required data preparations, transformations, and coupling functions. MASH is the successor to the Vehicle Code System (VCS) initially developed at Oak Ridge National Laboratory (ORNL). The discrete ordinates calculation determines the fluence on a coupling surface surrounding the shielding geometry due to an external neutron/gamma-ray source. The Monte Carlo calculation determines the effectiveness of the fluence at that surface in causing a response in a detector within the shielding geometry, i.e., the "dose importance" of the coupling surface fluence. A coupling code folds the fluence together with the dose importance, giving the desired dose response. The coupling code can determine the dose response as a function of the shielding geometry orientation relative to the source, distance from the source, and energy response of the detector. This user's manual includes a short description of each code, the input required to execute the code along with some helpful input data notes, and a representative sample problem (input data and selected output edits) for each code.
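
    The folding step described above can be pictured as a discrete sum of the coupling-surface fluence weighted by the adjoint dose importance over energy groups and surface bins. The sketch below is only an illustration with made-up array shapes and random values, not the MASH coupling code or its data structures.

      import numpy as np

      n_groups, n_bins = 5, 8                                             # energy groups x surface bins
      fluence    = np.random.default_rng(1).random((n_groups, n_bins))    # forward (DORT-like) result
      importance = np.random.default_rng(2).random((n_groups, n_bins))    # adjoint (MORSE-like) result

      dose = np.sum(fluence * importance)   # discrete form of the folding integral
      print(f"dose response = {dose:.4f}")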

  8. A user's manual for MASH 1.0: A Monte Carlo Adjoint Shielding Code System

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, J.O. (ed.)

    1992-03-01

    The Monte Carlo Adjoint Shielding Code System, MASH, calculates neutron and gamma-ray environments and radiation protection factors for armored military vehicles, structures, trenches, and other shielding configurations by coupling a forward discrete ordinates air-over-ground transport calculation with an adjoint Monte Carlo treatment of the shielding geometry. Efficiency and optimum use of computer time are emphasized. The code system includes the GRTUNCL and DORT codes for air-over-ground transport calculations, the MORSE code with the GIFT5 combinatorial geometry package for adjoint shielding calculations, and several peripheral codes that perform the required data preparations, transformations, and coupling functions. MASH is the successor to the Vehicle Code System (VCS) initially developed at Oak Ridge National Laboratory (ORNL). The discrete ordinates calculation determines the fluence on a coupling surface surrounding the shielding geometry due to an external neutron/gamma-ray source. The Monte Carlo calculation determines the effectiveness of the fluence at that surface in causing a response in a detector within the shielding geometry, i.e., the "dose importance" of the coupling surface fluence. A coupling code folds the fluence together with the dose importance, giving the desired dose response. The coupling code can determine the dose response as a function of the shielding geometry orientation relative to the source, distance from the source, and energy response of the detector. This user's manual includes a short description of each code, the input required to execute the code along with some helpful input data notes, and a representative sample problem (input data and selected output edits) for each code.

  9. A User's Manual for MASH V1.5 - A Monte Carlo Adjoint Shielding Code System

    Energy Technology Data Exchange (ETDEWEB)

    C. O. Slater; J. M. Barnes; J. O. Johnson; J.D. Drischler

    1998-10-01

    The Monte Carlo Adjoint Shielding Code System, MASH, calculates neutron and gamma-ray environments and radiation protection factors for armored military vehicles, structures, trenches, and other shielding configurations by coupling a forward discrete ordinates air-over-ground transport calculation with an adjoint Monte Carlo treatment of the shielding geometry. Efficiency and optimum use of computer time are emphasized. The code system includes the GRTUNCL and DORT codes for air-over-ground transport calculations, the MORSE code with the GIFT5 combinatorial geometry package for adjoint shielding calculations, and several peripheral codes that perform the required data preparations, transformations, and coupling functions. The current version, MASH v1.5, is the successor to the original MASH v1.0 code system initially developed at Oak Ridge National Laboratory (ORNL). The discrete ordinates calculation determines the fluence on a coupling surface surrounding the shielding geometry due to an external neutron/gamma-ray source. The Monte Carlo calculation determines the effectiveness of the fluence at that surface in causing a response in a detector within the shielding geometry, i.e., the "dose importance" of the coupling surface fluence. A coupling code folds the fluence together with the dose importance, giving the desired dose response. The coupling code can determine the dose response as a function of the shielding geometry orientation relative to the source, distance from the source, and energy response of the detector. This user's manual includes a short description of each code, the input required to execute the code along with some helpful input data notes, and a representative sample problem.

  10. Coupling MCNP-DSP and LAHET Monte Carlo codes for designing subcriticality monitors for accelerator-driven systems

    Energy Technology Data Exchange (ETDEWEB)

    Valentine, T.; Perez, R. [Oak Ridge National Lab., TN (United States); Rugama, Y.; Munoz-Cobo, J.L. [Poly. Tech. Univ. of Valencia (Spain). Chemical and Nuclear Engineering Dept.

    2001-07-01

    The design of reactivity monitoring systems for accelerator-driven systems must be investigated to ensure that such systems remain subcritical during operation. The Monte Carlo codes LAHET and MCNP-DSP were combined to facilitate the design of reactivity monitoring systems. The coupling of LAHET and MCNP-DSP provides a tool that can be used to simulate a variety of subcritical measurements such as pulsed neutron, Rossi-α, or noise analysis measurements. (orig.)
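
    As an illustration of one of the measurements named above, the Rossi-α distribution is commonly fitted as p(t) = A + B exp(-αt), where the prompt-neutron decay constant α indicates how far the system is from critical. The sketch below fits purely synthetic data; it is not MCNP-DSP/LAHET output, and the parameter values are invented.

      import numpy as np
      from scipy.optimize import curve_fit

      def rossi_alpha(t, A, B, alpha):
          # Correlated-count distribution: uncorrelated floor A plus prompt decay B*exp(-alpha*t).
          return A + B * np.exp(-alpha * t)

      t = np.linspace(0.0, 2e-3, 200)                         # time after a trigger count [s]
      true_curve = rossi_alpha(t, A=50.0, B=400.0, alpha=3000.0)
      counts = np.random.default_rng(3).poisson(true_curve)   # synthetic counting noise

      popt, _ = curve_fit(rossi_alpha, t, counts, p0=(40.0, 300.0, 2000.0))
      print("fitted prompt decay constant alpha [1/s]:", popt[2])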

  11. Coupling MCNP-DSP and LAHET Monte Carlo Codes for Designing Subcriticality Monitors for Accelerator-Driven Systems

    Energy Technology Data Exchange (ETDEWEB)

    Valentine, T.E.; Rugama, Y.; Munoz-Cobos, J.; Perez, R.

    2000-10-23

    The design of reactivity monitoring systems for accelerator-driven systems must be investigated to ensure that such systems remain subcritical during operation. The Monte Carlo codes LAHET and MCNP-DSP were combined to facilitate the design of reactivity monitoring systems. The coupling of LAHET and MCNP-DSP provides a tool that can be used to simulate a variety of subcritical measurements such as pulsed neutron, Rossi-α, or noise analysis measurements.

  12. Quality control of the treatment planning systems dose calculations in external radiation therapy using the Penelope Monte Carlo code; Controle qualite des systemes de planification dosimetrique des traitements en radiotherapie externe au moyen du code Monte-Carlo Penelope

    Energy Technology Data Exchange (ETDEWEB)

    Blazy-Aubignac, L

    2007-09-15

    The treatment planning systems (T.P.S.) occupy a key position in the radiotherapy department: they compute the predicted dose distribution and the treatment duration. Traditionally, the quality control of the calculated dose distributions relies on comparisons with dose distributions measured on the treatment machine. This thesis proposes to replace these dosimetric measurements with reference dose calculations obtained with the PENELOPE Monte Carlo code. The Monte Carlo simulations offer a broad choice of test configurations and make it possible to envisage a quality control of the dosimetric aspects of a T.P.S. without monopolizing the treatment machines. This quality control, based on Monte Carlo simulations, has been tested on a clinical T.P.S. and has made it possible to simplify the T.P.S. quality procedures. Being more thorough, more precise and simpler to implement, this quality control could be generalized to every radiotherapy centre. (N.C.)

  13. Blind Decoding of Multiple Description Codes over OFDM Systems via Sequential Monte Carlo

    Directory of Open Access Journals (Sweden)

    Guo Dong

    2005-01-01

    We consider the problem of transmitting a continuous source through an OFDM system. Multiple description scalar quantization (MDSQ) is applied to the source signal, resulting in two correlated source descriptions. The two descriptions are then OFDM modulated and transmitted through two parallel frequency-selective fading channels. At the receiver, a blind turbo receiver is developed for joint OFDM demodulation and MDSQ decoding. Transformed extrinsic information of the two descriptions is exchanged between the component decoders to improve system performance. A blind soft-input soft-output OFDM detector is developed, based on the techniques of importance sampling and resampling. Such a detector is capable of exchanging the so-called extrinsic information with the other component of the above turbo receiver, successively improving the overall receiver performance. Finally, we also treat channel-coded systems, and a novel blind turbo receiver is developed for joint demodulation, channel decoding, and MDSQ source decoding.
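
    A minimal sketch of the resampling step used in such importance-sampling/resampling (sequential Monte Carlo) detectors is shown below. It uses generic systematic resampling over particle weights; it is not the authors' receiver code, and the weights are random placeholders.

      import numpy as np

      def systematic_resample(weights, rng):
          # Return particle indices drawn (approximately) in proportion to their weights.
          n = len(weights)
          positions = (rng.random() + np.arange(n)) / n     # one stratified point per particle
          cdf = np.cumsum(weights / np.sum(weights))
          return np.searchsorted(cdf, positions)

      rng = np.random.default_rng(0)
      weights = rng.random(10)
      print(systematic_resample(weights, rng))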

  14. The application of the Monte-Carlo neutron transport code MCNP to a small "nuclear battery" system

    OpenAIRE

    Puigdellívol Sadurní, Roger

    2009-01-01

    The project consists in calculating the keff of a small nuclear battery. The Monte Carlo neutron transport code MCNP is used to calculate the keff. The calculations are done at the beginning of life to determine the capacity of the core to become critical under different conditions. These conditions are the study parameters that determine the criticality of the core: the uranium enrichment, the coated-particle (TRISO) packing factor and the size of the core.

  15. Monte Carlo simulation code modernization

    CERN Document Server

    CERN. Geneva

    2015-01-01

    The continual development of sophisticated transport simulation algorithms allows increasingly accurate description of the effect of the passage of particles through matter. This modelling capability finds applications in a large spectrum of fields from medicine to astrophysics, and of course HEP. These new capabilities however come at the cost of a greater computational intensity of the new models, which has the effect of increasing the demands on computing resources. This is particularly true for HEP, where the demand for more simulation is driven by the need for both more accuracy and more precision, i.e. better models and more events. HEP has usually relied on the "Moore's law" evolution, but for almost ten years the increase in clock speed has withered and computing capacity now comes in the form of hardware architectures of many-core or accelerated processors. To harness these opportunities we need to adapt our code to concurrent programming models, taking advantage of both SIMD and SIMT architectures. Th...

  16. Comparative study among simulations of an internal monitoring system using different Monte Carlo codes; Estudo comparativo entre simulacoes de um sistema de monitoracao ocupacional interna utilizando diferentes codigos de Monte Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Fonseca, T.C.F.; Bastos, F.M.; Figueiredo, M.T.T.; Souza, L.S.; Guimaraes, M.C.; Silva, C.R.E.; Mello, O.A.; Castelo e Silva, L.A.; Paixao, L., E-mail: tcff01@gmail.com [Centro de Desenvolvimento da Tecnologia Nuclear (CDTN/CNEN-MG), Belo Horizonte, MG (Brazil); Benavente, J.A.; Paiva, F.G. [Universidade Federal de Minas Gerais (PCTN/UFMG), Belo Horizonte, MG (Brazil). Curso de Pos-Graduacao em Ciencias e Tecnicas Nucleares

    2015-07-01

    Computational Monte Carlo (MC) codes have been used for the simulation of nuclear installations, notably for the internal monitoring of workers with Whole Body Counters (WBC). The main goal of this project was the modeling and simulation of the counting efficiency (CE) of a WBC system using three different MC codes: MCNPX, EGSnrc and VMC in-vivo. The simulations were performed by three different groups of analysts. The results show differences between the three codes, as well as between results obtained with the same code when modeled by different analysts. Moreover, all the results were also compared to experimental results obtained in the laboratory for validation and final comparison. In conclusion, it was possible to detect the influence on the results when the system is modeled by different analysts using the same MC code, and to determine which MC code gave results best suited to the experimental data. (author)

  17. MCOR - Monte Carlo depletion code for reference LWR calculations

    Energy Technology Data Exchange (ETDEWEB)

    Puente Espel, Federico, E-mail: fup104@psu.edu [Department of Mechanical and Nuclear Engineering, Pennsylvania State University (United States); Tippayakul, Chanatip, E-mail: cut110@psu.edu [Department of Mechanical and Nuclear Engineering, Pennsylvania State University (United States); Ivanov, Kostadin, E-mail: kni1@psu.edu [Department of Mechanical and Nuclear Engineering, Pennsylvania State University (United States); Misu, Stefan, E-mail: Stefan.Misu@areva.com [AREVA, AREVA NP GmbH, Erlangen (Germany)

    2011-04-15

    Research highlights: > Introduction of a reference Monte Carlo based depletion code with extended capabilities. > Verification and validation results for MCOR. > Utilization of MCOR for benchmarking deterministic lattice physics (spectral) codes. - Abstract: The MCOR (MCnp-kORigen) code system is a Monte Carlo based depletion system for reference fuel assembly and core calculations. The MCOR code is designed as an interfacing code that provides depletion capability to the LANL Monte Carlo code by coupling two codes: MCNP5 with the AREVA NP depletion code, KORIGEN. The physical quality of both codes is unchanged. The MCOR code system has been maintained and continuously enhanced since it was initially developed and validated. The verification of the coupling was made by evaluating the MCOR code against similar sophisticated code systems like MONTEBURNS, OCTOPUS and TRIPOLI-PEPIN. After its validation, the MCOR code has been further improved with important features. The MCOR code presents several valuable capabilities such as: (a) a predictor-corrector depletion algorithm, (b) utilization of KORIGEN as the depletion module, (c) individual depletion calculation of each burnup zone (no burnup zone grouping is required, which is particularly important for the modeling of gadolinium rings), and (d) on-line burnup cross-section generation by the Monte Carlo calculation for 88 isotopes and usage of the KORIGEN libraries for PWR and BWR typical spectra for the remaining isotopes. Besides the just mentioned capabilities, the MCOR code newest enhancements focus on the possibility of executing the MCNP5 calculation in sequential or parallel mode, a user-friendly automatic re-start capability, a modification of the burnup step size evaluation, and a post-processor and test-matrix, just to name the most important. The article describes the capabilities of the MCOR code system; from its design and development to its latest improvements and further ameliorations. Additionally
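
    A hedged sketch of a predictor-corrector depletion step of the kind listed in capability (a) above is given below. A single-nuclide toy problem stands in for the MCNP5/KORIGEN coupling: the "transport solve" is a made-up flux-dependent reaction rate, and depletion is analytic exponential decay; none of the numbers come from MCOR.

      import math

      def reaction_rate(n_density):
          # Stand-in for a Monte Carlo transport solve: a one-group removal rate [1/s]
          # that depends (weakly) on the composition, purely for illustration.
          return 1.0e-8 * (1.0 + 0.1 * n_density)

      def deplete(n0, rate, dt):
          # Analytic single-nuclide depletion over the step.
          return n0 * math.exp(-rate * dt)

      n_bos, dt = 1.0, 30 * 86400.0                       # beginning-of-step density, 30-day step
      r_bos = reaction_rate(n_bos)                        # predictor: rate from the BOS solution
      n_pred = deplete(n_bos, r_bos, dt)
      r_eos = reaction_rate(n_pred)                       # corrector: rate re-evaluated at predicted EOS
      n_eos = deplete(n_bos, 0.5 * (r_bos + r_eos), dt)   # deplete again with the averaged rate
      print(f"predicted: {n_pred:.6f}  corrected: {n_eos:.6f}")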

  18. THE MCNPX MONTE CARLO RADIATION TRANSPORT CODE

    Energy Technology Data Exchange (ETDEWEB)

    WATERS, LAURIE S. [Los Alamos National Laboratory; MCKINNEY, GREGG W. [Los Alamos National Laboratory; DURKEE, JOE W. [Los Alamos National Laboratory; FENSIN, MICHAEL L. [Los Alamos National Laboratory; JAMES, MICHAEL R. [Los Alamos National Laboratory; JOHNS, RUSSELL C. [Los Alamos National Laboratory; PELOWITZ, DENISE B. [Los Alamos National Laboratory

    2007-01-10

    MCNPX (Monte Carlo N-Particle eXtended) is a general-purpose Monte Carlo radiation transport code with three-dimensional geometry and continuous-energy transport of 34 particles and light ions. It contains flexible source and tally options, interactive graphics, and support for both sequential and multi-processing computer platforms. MCNPX is based on MCNP4B, and has been upgraded to most MCNP5 capabilities. MCNP is a highly stable code tracking neutrons, photons and electrons, and using evaluated nuclear data libraries for low-energy interaction probabilities. MCNPX has extended this base to a comprehensive set of particles and light ions, with heavy ion transport in development. Models have been included to calculate interaction probabilities when libraries are not available. Recent additions focus on the time evolution of residual nuclei decay, allowing calculation of transmutation and delayed particle emission. MCNPX is now a code of great dynamic range, and the excellent neutronics capabilities allow new opportunities to simulate devices of interest to experimental particle physics, particularly calorimetry. This paper describes the capabilities of the current MCNPX version 2.6.C, and also discusses ongoing code development.

  19. Dose calculations for a simplified Mammosite system with the Monte Carlo Penelope and MCNPX simulation codes; Calculos de dosis para un sistema Mammosite simplificado con los codigos de simulacion Monte Carlo PENELOPE y MCNPX

    Energy Technology Data Exchange (ETDEWEB)

    Rojas C, E.L.; Varon T, C.F.; Pedraza N, R. [ININ, 52750 La Marquesa, Estado de Mexico (Mexico)]. e-mail: elrc@nuclear.inin.mx

    2007-07-01

    The treatment of breast cancer at early stages is of vital importance. For that reason, most investigations are dedicated to the early detection of the disease and its treatment. As a consequence of this research and of clinical practice, a high dose rate irradiation system known as Mammosite was developed in the U.S.A. in 2002. In this work we carry out dose calculations for a simplified Mammosite system with the Monte Carlo simulation codes PENELOPE and MCNPX, varying the concentration of the contrast material used in it. (Author)

  20. A semianalytic Monte Carlo code for modelling LIDAR measurements

    Science.gov (United States)

    Palazzi, Elisa; Kostadinov, Ivan; Petritoli, Andrea; Ravegnani, Fabrizio; Bortoli, Daniele; Masieri, Samuele; Premuda, Margherita; Giovanelli, Giorgio

    2007-10-01

    LIDAR (LIght Detection and Ranging) is an active optical remote sensing technology with many applications in atmospheric physics. Modelling of LIDAR measurements is a useful approach for evaluating the effects of various environmental variables and scenarios, as well as of different measurement geometries and instrumental characteristics. In this regard a Monte Carlo simulation model can provide a reliable answer to these important requirements. A semianalytic Monte Carlo code for modelling LIDAR measurements has been developed at ISAC-CNR. The backscattered laser signal detected by the LIDAR system is calculated in the code taking into account the contributions of the main atmospheric molecular constituents and of aerosol particles through single and multiple scattering processes. The contributions of molecular absorption and of ground and cloud reflection are evaluated too. The code can simulate both monostatic and bistatic LIDAR systems. To enhance the efficiency of the Monte Carlo simulation, analytical estimates and expected-value calculations are performed. Variance-reduction devices (such as forced collision, local forced collision, splitting and Russian roulette) are also provided by the code, enabling the user to drastically reduce the variance of the calculation.
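
    Two of the variance-reduction devices listed above, Russian roulette and splitting, can be sketched as follows for a weighted photon packet. The thresholds, survival probability and weights are illustrative assumptions, and this is not the ISAC-CNR code itself.

      import numpy as np

      def roulette(weight, rng, w_cut=0.05, survival=0.2):
          # Kill low-weight packets probabilistically while conserving weight on average.
          if weight >= w_cut:
              return weight
          return weight / survival if rng.random() < survival else 0.0

      def split(weight, n_split=4):
          # Split an important packet into n_split copies of reduced weight.
          return [weight / n_split] * n_split

      rng = np.random.default_rng(7)
      print(roulette(0.01, rng))   # 0.05 with probability 0.2, otherwise 0.0
      print(split(1.0))            # four packets of weight 0.25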

  1. SPAMCART: a code for smoothed particle Monte Carlo radiative transfer

    Science.gov (United States)

    Lomax, O.; Whitworth, A. P.

    2016-10-01

    We present a code for generating synthetic spectral energy distributions and intensity maps from smoothed particle hydrodynamics simulation snapshots. The code is based on the Lucy Monte Carlo radiative transfer method, i.e. it follows discrete luminosity packets as they propagate through a density field, and then uses their trajectories to compute the radiative equilibrium temperature of the ambient dust. The sources can be extended and/or embedded, and discrete and/or diffuse. The density is not mapped on to a grid, and therefore the calculation is performed at exactly the same resolution as the hydrodynamics. We present two example calculations using this method. First, we demonstrate that the code strictly adheres to Kirchhoff's law of radiation. Secondly, we present synthetic intensity maps and spectra of an embedded protostellar multiple system. The algorithm uses data structures that are already constructed for other purposes in modern particle codes. It is therefore relatively simple to implement.

  2. SPAMCART: a code for smoothed particle Monte Carlo radiative transfer

    CERN Document Server

    Lomax, O

    2016-01-01

    We present a code for generating synthetic SEDs and intensity maps from Smoothed Particle Hydrodynamics simulation snapshots. The code is based on the Lucy (1999) Monte Carlo Radiative Transfer method, i.e. it follows discrete luminosity packets, emitted from external and/or embedded sources, as they propagate through a density field, and then uses their trajectories to compute the radiative equilibrium temperature of the ambient dust. The density is not mapped onto a grid, and therefore the calculation is performed at exactly the same resolution as the hydrodynamics. We present two example calculations using this method. First, we demonstrate that the code strictly adheres to Kirchhoff's law of radiation. Second, we present synthetic intensity maps and spectra of an embedded protostellar multiple system. The algorithm uses data structures that are already constructed for other purposes in modern particle codes. It is therefore relatively simple to implement.

  3. Applications guide to the MORSE Monte Carlo code

    Energy Technology Data Exchange (ETDEWEB)

    Cramer, S.N.

    1985-08-01

    A practical guide for the implementation of the MORSE-CG Monte Carlo radiation transport computer code system is presented. The various versions of the MORSE code are compared and contrasted, and the many references dealing explicitly with the MORSE-CG code are reviewed. The treatment of angular scattering is discussed, and procedures for obtaining increased differentiality of results in terms of reaction types and nuclides from a multigroup Monte Carlo code are explained in terms of cross-section and geometry data manipulation. Examples of standard cross-section data input and output are shown. Many other features of the code system are also reviewed, including (1) the concept of primary and secondary particles, (2) fission neutron generation, (3) albedo data capability, (4) DOMINO coupling, (5) history file use for post-processing of results, (6) adjoint mode operation, (7) variance reduction, and (8) input/output. In addition, examples of the combinatorial geometry are given, and the new array of arrays geometry feature (MARS) and its three-dimensional plotting code (JUNEBUG) are presented. Realistic examples of user routines for source, estimation, path-length stretching, and cross-section data manipulation are given. A detailed explanation of the coupling between the random walk and estimation procedure is given in terms of both code parameters and physical analogies. The operation of the code in the adjoint mode is covered extensively. The basic concepts of adjoint theory and dimensionality are discussed and examples of adjoint source and estimator user routines are given for all common situations. Adjoint source normalization is explained, a few sample problems are given, and the concept of obtaining forward differential results from adjoint calculations is covered. Finally, the documentation of the standard MORSE-CG sample problem package is reviewed and ongoing and future work is discussed.

  4. Adaptation of penelope Monte Carlo code system to the absorbed dose metrology: characterization of high energy photon beams and calculations of reference dosimeter correction factors; Adaptation du code Monte Carlo penelope pour la metrologie de la dose absorbee: caracterisation des faisceaux de photons X de haute energie et calcul de facteurs de correction de dosimetres de reference

    Energy Technology Data Exchange (ETDEWEB)

    Mazurier, J

    1999-05-28

    This thesis has been performed in the framework of the setting-up of the national reference for absorbed dose in water for the high energy photon beams delivered by the SATURNE-43 medical accelerator of the BNM-LPRI (acronym for National Bureau of Metrology and Primary Standard Laboratory for Ionising Radiation). The aim of this work was to develop and validate different user codes, based on the PENELOPE Monte Carlo code system, to determine the photon beam characteristics and to calculate the correction factors of reference dosimeters such as Fricke dosimeters and the graphite calorimeter. In a first step, the developed user codes made it possible to study the influence of the different components of the irradiation head. Variance reduction techniques were used to reduce the calculation time. The phase space was calculated for 6, 12 and 25 MV at the output surface of the accelerator head, and then used to calculate energy spectra and dose distributions in the reference water phantom. The results obtained were compared with experimental measurements. The second step was devoted to developing a user code for calculating the correction factors associated with both the BNM-LPRI graphite and Fricke dosimeters, using a correlated sampling method starting from the energy spectra obtained in the first step. The calculated correction factors were then compared with experimental results and with results calculated with the EGS4 Monte Carlo code system. The good agreement between experimental and calculated results validates the simulations performed with the PENELOPE code system. (author)

  5. Proton therapy Monte Carlo SRNA-VOX code

    Directory of Open Access Journals (Sweden)

    Ilić Radovan D.

    2012-01-01

    The most powerful feature of the Monte Carlo method is the possibility of simulating all individual particle interactions in three dimensions and performing numerical experiments with a preset error. These facts were the motivation behind the development of a general-purpose Monte Carlo SRNA program for proton transport simulation in technical systems described by standard geometrical forms (plane, sphere, cone, cylinder, cube). Some of the possible applications of the SRNA program are: (a) a general code for proton transport modeling, (b) design of accelerator-driven systems, (c) simulation of proton scattering and degrading shapes and composition, (d) research on proton detectors, and (e) radiation protection at accelerator installations. This wide range of possible applications of the program demands the development of various versions of SRNA-VOX codes for proton transport modeling in voxelized geometries and has finally resulted in the ISTAR package for the calculation of deposited-energy distributions in patients on the basis of CT data in radiotherapy. All of the said codes are capable of using 3-D proton sources with an arbitrary energy spectrum in an interval of 100 keV to 250 MeV.

  6. Computed radiography simulation using the Monte Carlo code MCNPX

    Energy Technology Data Exchange (ETDEWEB)

    Correa, S.C.A. [Programa de Engenharia Nuclear/COPPE, Universidade Federal do Rio de Janeiro, Ilha do Fundao, Caixa Postal 68509, 21945-970, Rio de Janeiro, RJ (Brazil); Centro Universitario Estadual da Zona Oeste (CCMAT)/UEZO, Av. Manuel Caldeira de Alvarenga, 1203, Campo Grande, 23070-200, Rio de Janeiro, RJ (Brazil); Souza, E.M. [Programa de Engenharia Nuclear/COPPE, Universidade Federal do Rio de Janeiro, Ilha do Fundao, Caixa Postal 68509, 21945-970, Rio de Janeiro, RJ (Brazil); Silva, A.X., E-mail: ademir@con.ufrj.b [PEN/COPPE-DNC/Poli CT, Universidade Federal do Rio de Janeiro, Ilha do Fundao, Caixa Postal 68509, 21945-970, Rio de Janeiro, RJ (Brazil); Cassiano, D.H. [Instituto de Radioprotecao e Dosimetria/CNEN Av. Salvador Allende, s/n, Recreio, 22780-160, Rio de Janeiro, RJ (Brazil); Lopes, R.T. [Programa de Engenharia Nuclear/COPPE, Universidade Federal do Rio de Janeiro, Ilha do Fundao, Caixa Postal 68509, 21945-970, Rio de Janeiro, RJ (Brazil)

    2010-09-15

    Simulating X-ray images has been of great interest in recent years as it makes possible an analysis of how X-ray images are affected by relevant operating parameters. In this paper, a procedure for simulating computed radiographic images using the Monte Carlo code MCNPX is proposed. The sensitivity curve of the BaFBr image plate detector as well as the characteristic noise of a 16-bit computed radiography system were considered during the methodology's development. The results obtained confirm that the proposed procedure for simulating computed radiographic images is satisfactory, as it allows obtaining results comparable with experimental data.

  7. Parallelization of a Monte Carlo particle transport simulation code

    Science.gov (United States)

    Hadjidoukas, P.; Bousis, C.; Emfietzoglou, D.

    2010-05-01

    We have developed a high performance version of the Monte Carlo particle transport simulation code MC4. The original application code, developed in Visual Basic for Applications (VBA) for Microsoft Excel, was first rewritten in the C programming language to improve code portability. Several pseudo-random number generators have also been integrated and studied. The new MC4 version was then parallelized for shared- and distributed-memory multiprocessor systems using the Message Passing Interface. Two parallel pseudo-random number generator libraries (SPRNG and DCMT) have been seamlessly integrated. The performance speedup of parallel MC4 has been studied on a variety of parallel computing architectures including an Intel Xeon server with 4 dual-core processors, a Sun cluster consisting of 16 nodes of 2 dual-core AMD Opteron processors and a 200 dual-processor HP cluster. For large problem sizes, which are limited only by the physical memory of the multiprocessor server, the speedup results are almost linear on all systems. We have validated the parallel implementation against the serial VBA and C implementations using the same random number generator. Our experimental results on the transport and energy loss of electrons in a water medium show that the serial and parallel codes are equivalent in accuracy. The present improvements allow the study of higher particle energies with the use of more accurate physical models, and improve statistics as more particle tracks can be simulated in a low response time.
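
    The parallelization pattern described above can be sketched as follows; this is only an illustration of the general MPI scheme, not the MC4 source. Each rank simulates its share of histories with its own random stream and the tallies are summed on the root rank. It assumes mpi4py is installed and the script is started with an MPI launcher (e.g. mpirun -n 4 python script.py).

      from mpi4py import MPI
      import numpy as np

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      n_total = 1_000_000
      n_local = n_total // size + (1 if rank < n_total % size else 0)   # split histories over ranks

      # Independent random stream per rank, spawned from a common root seed.
      rng = np.random.default_rng(np.random.SeedSequence(2024).spawn(size)[rank])

      # Toy "transport": score the free path to first interaction for each history.
      local_tally = rng.exponential(scale=1.0, size=n_local).sum()

      global_tally = comm.reduce(local_tally, op=MPI.SUM, root=0)
      if rank == 0:
          print("mean free path estimate:", global_tally / n_total)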

  8. An Overview of the Monte Carlo Methods, Codes, & Applications Group

    Energy Technology Data Exchange (ETDEWEB)

    Trahan, Travis John [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-08-30

    This report sketches the work of the Group to deliver first-principle Monte Carlo methods, production quality codes, and radiation transport-based computational and experimental assessments using the codes MCNP and MCATK for such applications as criticality safety, non-proliferation, nuclear energy, nuclear threat reduction and response, radiation detection and measurement, radiation health protection, and stockpile stewardship.

  9. Criticality benchmarks validation of the Monte Carlo code TRIPOLI-2

    Energy Technology Data Exchange (ETDEWEB)

    Maubert, L. (Commissariat a l' Energie Atomique, Inst. de Protection et de Surete Nucleaire, Service d' Etudes de Criticite, 92 - Fontenay-aux-Roses (France)); Nouri, A. (Commissariat a l' Energie Atomique, Inst. de Protection et de Surete Nucleaire, Service d' Etudes de Criticite, 92 - Fontenay-aux-Roses (France)); Vergnaud, T. (Commissariat a l' Energie Atomique, Direction des Reacteurs Nucleaires, Service d' Etudes des Reacteurs et de Mathematique Appliquees, 91 - Gif-sur-Yvette (France))

    1993-04-01

    The validation of the three-dimensional, energy-pointwise Monte Carlo code TRIPOLI-2 covers criticality benchmarks including metallic spheres of uranium and plutonium, plutonium nitrate solutions, and square- and triangular-pitch assemblies of uranium oxide. Results show good agreement between experiments and calculations and constitute a partial validation of the code and its ENDF-B4 library. (orig./DG)

  10. TART97 a coupled neutron-photon 3-D, combinatorial geometry Monte Carlo transport code

    Energy Technology Data Exchange (ETDEWEB)

    Cullen, D.E.

    1997-11-22

    TART97 is a coupled neutron-photon, 3 Dimensional, combinatorial geometry, time dependent Monte Carlo transport code. This code can run on any modern computer. It is a complete system to assist you with input preparation, running Monte Carlo calculations, and analysis of output results. TART97 is also incredibly FAST; if you have used similar codes, you will be amazed at how fast this code is compared to other similar codes. Use of the entire system can save you a great deal of time and energy. TART97 is distributed on CD. This CD contains on-line documentation for all codes included in the system, the codes configured to run on a variety of computers, and many example problems that you can use to familiarize yourself with the system. TART97 completely supersedes all older versions of TART, and it is strongly recommended that users only use the most recent version of TART97 and its data files.

  11. Usage of burnt fuel isotopic compositions from engineering codes in Monte-Carlo code calculations

    Energy Technology Data Exchange (ETDEWEB)

    Aleshin, Sergey S.; Gorodkov, Sergey S.; Shcherenko, Anna I. [Nuclear Research Centre 'Kurchatov Institute', Moscow (Russian Federation)]

    2015-09-15

    A burn-up calculation of VVER cores by a Monte Carlo code is a complex process and requires large computational costs. This fact makes the use of Monte Carlo codes complicated for project and operating calculations. Previously prepared isotopic compositions are proposed for use in Monte Carlo code (MCU) calculations of different states of a VVER core with burnt fuel. The isotopic compositions are calculated by an approximation method. The approximation method is based on the usage of a spectral functionality and reference isotopic compositions that are calculated by engineering codes (TVS-M, PERMAK-A). The multiplication factors and power distributions of a FA and of a VVER core of infinite height are calculated in this work by the Monte Carlo code MCU using the previously prepared isotopic compositions. The MCU calculation results were compared with the data obtained by the engineering codes.

  12. TRIPOLI-4: Monte Carlo transport code functionalities and applications; TRIPOLI-4: code de transport Monte Carlo fonctionnalites et applications

    Energy Technology Data Exchange (ETDEWEB)

    Both, J.P.; Lee, Y.K.; Mazzolo, A.; Peneliau, Y.; Petit, O.; Roesslinger, B. [CEA Saclay, Dir. de l' Energie Nucleaire (DEN), Service d' Etudes de Reacteurs et de Modelisation Avancee, 91 - Gif sur Yvette (France)

    2003-07-01

    Tripoli-4 is a three-dimensional calculation code using the Monte Carlo method to simulate the transport of neutrons, photons, electrons and positrons. This code is used in four application fields: radiation protection studies, criticality studies, core studies and instrumentation studies. Geometry, cross sections, description of sources and the principle of the code are presented. (N.C.)

  13. Parallelization of Monte Carlo codes MVP/GMVP

    Energy Technology Data Exchange (ETDEWEB)

    Nagaya, Yasunobu; Mori, Takamasa; Nakagawa, Masayuki [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Sasaki, Makoto

    1998-03-01

    General-purpose Monte Carlo codes MVP/GMVP are well-vectorized and thus enable us to perform high-speed Monte Carlo calculations. In order to achieve further speedups, we parallelized the codes on different types of parallel processing platforms. The platforms reported are a distributed-memory vector-parallel computer Fujitsu VPP500, a distributed-memory massively parallel computer Intel Paragon and a distributed-memory scalar-parallel computer Hitachi SR2201. As generally observed, ideal speedup could be obtained for large-scale problems, but parallelization efficiency degraded as the batch size per processing element (PE) became smaller. (author)

  14. A Monte Carlo code for ion beam therapy

    CERN Multimedia

    Anaïs Schaeffer

    2012-01-01

    Initially developed for applications in detector and accelerator physics, the modern Fluka Monte Carlo code is now used in many different areas of nuclear science. Over the last 25 years, the code has evolved to include new features, such as ion beam simulations. Given the growing use of these beams in cancer treatment, Fluka simulations are being used to design treatment plans in several hadron-therapy centres in Europe.   Fluka calculates the dose distribution for a patient treated at CNAO with proton beams. The colour-bar displays the normalized dose values. Fluka is a Monte Carlo code that very accurately simulates electromagnetic and nuclear interactions in matter. In the 1990s, in collaboration with NASA, the code was developed to predict potential radiation hazards received by space crews during possible future trips to Mars. Over the years, it has become the standard tool to investigate beam-machine interactions, radiation damage and radioprotection issues in the CERN accelerator com...

  15. MCDB Monte Carlo dosimetry code system and its applications; MCDB蒙特卡罗剂量计算系统及应用

    Institute of Scientific and Technical Information of China (English)

    邓力; 李刚; 陈朝斌; 叶涛

    2012-01-01

    MCDB (Monte Carlo dosimetry code for brain), a Monte Carlo dose calculation software system for boron neutron capture therapy (BNCT), has been developed. It consists of a medical pre-processor, a dose calculation module and a post-processor. The pre-processor automatically converts CT and MRI image data into the input files for the dose calculation; the dose calculation is based on the Monte Carlo (MC) method; and the post-processor determines the irradiation directions and irradiation times. To improve the accuracy of the dose calculation and shorten the computing time, MCDB implements a fast particle-tracking algorithm for voxel models, constructs material and tally matrices, and is parallelized with MPI. For a clinical case, MCDB completed the whole process from extraction of the CT and MRI data through dose calculation to post-processing. The results agree with those of the MCNP code, the serial computation speed is more than three times that of MCNP, and the parallel efficiency reaches 90%, fully meeting clinical requirements for calculation accuracy and computing time.

  16. Antiproton annihilation physics annihilation physics in the Monte Carlo particle transport code particle transport code SHIELD-HIT12A

    DEFF Research Database (Denmark)

    Taasti, Vicki Trier; Knudsen, Helge; Holzscheiter, Michael

    2015-01-01

    The Monte Carlo particle transport code SHIELD-HIT12A is designed to simulate therapeutic beams for cancer radiotherapy with fast ions. SHIELD-HIT12A allows creation of antiproton beam kernels for the treatment planning system TRiP98, but first it must be benchmarked against experimental data...

  17. Coupling an analytical description of anti-scatter grids with simulation software of radiographic systems using Monte Carlo code; Couplage d'une methode de description analytique de grilles anti diffusantes avec un logiciel de simulation de systemes radiographiques base sur un code Monte Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Rinkel, J.; Dinten, J.M.; Tabary, J

    2004-07-01

    The use of focused anti-scatter grids on digital radiographic systems with two-dimensional detectors produces acquisitions with a decreased scatter-to-primary ratio and thus improved contrast and resolution. Simulation software is of great interest for optimizing the grid configuration for a specific application. Classical simulators are based on complete, detailed geometric descriptions of the grid. They are accurate but very time consuming, since they use a Monte Carlo code to simulate scatter within the high-frequency grids. We propose a new practical method which couples an analytical simulation of the grid interaction with a radiographic system simulation program. First, a two-dimensional matrix of probabilities depending on the grid is created offline, in which the first dimension represents the angle of impact with respect to the normal to the grid lines and the other the energy of the photon. This probability matrix is then used by the Monte Carlo simulation software in order to provide the final scattered flux image. To evaluate the gain in CPU time, we define the increasing factor as the factor by which the CPU time of the simulation increases with the grid as opposed to without it. Increasing factors were calculated with the new model and with the classical method, which represents the grid by its CAD model as part of the object. With the new method, increasing factors are lower by one to two orders of magnitude than with the classical one. These results were obtained with a difference in calculated scatter of less than five percent between the new and the classical method. (authors)
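
    A minimal sketch of the offline table idea described above is shown below: a two-dimensional transmission-probability matrix indexed by incidence angle (with respect to the normal to the grid lines) and photon energy, consulted during the Monte Carlo run instead of tracking photons through the grid septa. The bin ranges and table values are invented placeholders, not the authors' data.

      import numpy as np

      angles_deg   = np.linspace(0.0, 10.0, 6)     # rows: angle of impact w.r.t. the grid-line normal
      energies_kev = np.linspace(20.0, 120.0, 11)  # columns: photon energy
      prob_table = np.random.default_rng(5).uniform(0.1, 0.9, size=(angles_deg.size, energies_kev.size))

      def transmitted(angle_deg, energy_kev, rng):
          # Decide whether a photon passes the grid, using the nearest table entry.
          i = int(np.abs(angles_deg - angle_deg).argmin())
          j = int(np.abs(energies_kev - energy_kev).argmin())
          return rng.random() < prob_table[i, j]

      rng = np.random.default_rng(6)
      print(transmitted(3.2, 60.0, rng))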

  18. TRIPOLI-3: a neutron/photon Monte Carlo transport code

    Energy Technology Data Exchange (ETDEWEB)

    Nimal, J.C.; Vergnaud, T. [Commissariat a l' Energie Atomique, Gif-sur-Yvette (France). Service d' Etudes de Reacteurs et de Mathematiques Appliquees

    2001-07-01

    The present version of TRIPOLI-3 solves the transport equation for coupled neutron and gamma ray problems in three-dimensional geometries by using the Monte Carlo method. This code is devoted to both shielding and criticality problems. Its most important features for solving the particle transport equation are the fine treatment of the physical phenomena and the sophisticated biasing techniques useful for deep penetration. The code is used either for shielding design studies or as a reference and benchmark to validate cross sections. Neutronic studies are essentially cell or small-core calculations and criticality problems. TRIPOLI-3 has been used as a reference method, for example, for resonance self-shielding qualification. (orig.)

  19. Geometric Templates for Improved Tracking Performance in Monte Carlo Codes

    Science.gov (United States)

    Nease, Brian R.; Millman, David L.; Griesheimer, David P.; Gill, Daniel F.

    2014-06-01

    One of the most fundamental parts of a Monte Carlo code is its geometry kernel. This kernel not only affects particle tracking (i.e., run-time performance), but also shapes how users will input models and collect results for later analyses. A new framework based on geometric templates is proposed that optimizes performance (in terms of tracking speed and memory usage) and simplifies user input for large scale models. While some aspects of this approach currently exist in different Monte Carlo codes, the optimization aspect has not been investigated or applied. If Monte Carlo codes are to be realistically used for full core analysis and design, this type of optimization will be necessary. This paper describes the new approach and the implementation of two template types in MC21: a repeated ellipse template and a box template. Several different models are tested to highlight the performance gains that can be achieved using these templates. Though the exact gains are naturally problem dependent, results show that runtime and memory usage can be significantly reduced when using templates, even as problems reach realistic model sizes.
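
    A hedged sketch of the template idea follows; it is not the MC21 implementation. Here a single box description is replicated on a regular pitch, so locating a particle reduces to integer arithmetic instead of testing every explicit cell, which is the memory and speed advantage the templates aim at. All names and dimensions are illustrative.

      import math

      class BoxTemplate:
          """A single box replicated `counts` times on a regular `pitch` from `origin`."""

          def __init__(self, origin, pitch, counts):
              self.origin, self.pitch, self.counts = origin, pitch, counts

          def locate(self, point):
              # Return the lattice index of the replicated box containing `point`,
              # or None if the point lies outside the template's extent.
              idx = []
              for x, o, p, n in zip(point, self.origin, self.pitch, self.counts):
                  i = math.floor((x - o) / p)
                  if not 0 <= i < n:
                      return None
                  idx.append(i)
              return tuple(idx)

      lattice = BoxTemplate(origin=(0.0, 0.0), pitch=(1.26, 1.26), counts=(17, 17))
      print(lattice.locate((5.0, 2.0)))   # -> (3, 1)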

  20. Longitudinal development of extensive air showers: hybrid code SENECA and full Monte Carlo

    CERN Document Server

    Ortiz, J A; De Souza, V; Ortiz, Jeferson A.; Tanco, Gustavo Medina

    2004-01-01

    New experiments, exploring the ultra-high energy tail of the cosmic ray spectrum with unprecedented detail, are exerting severe pressure on extensive air shower modeling. Fast, detailed codes are needed in order to extract and understand the richness of information now available. Some hybrid simulation codes have been proposed recently to this effect (e.g., the combination of the traditional Monte Carlo scheme with a system of cascade equations or pre-simulated air showers). In this context, we explore the potential of SENECA, an efficient hybrid tridimensional simulation code, as a valid practical alternative to full Monte Carlo simulations of extensive air showers generated by ultra-high energy cosmic rays. We extensively compare the hybrid method with the traditional, but time consuming, full Monte Carlo code CORSIKA, which is the de facto standard in the field. The hybrid scheme of the SENECA code is based on the simulation of each particle with the traditional Monte Carlo method at two steps of the shower devel...

  1. Longitudinal development of extensive air showers: Hybrid code SENECA and full Monte Carlo

    Science.gov (United States)

    Ortiz, Jeferson A.; Medina-Tanco, Gustavo; de Souza, Vitor

    2005-06-01

    New experiments, exploring the ultra-high energy tail of the cosmic ray spectrum with unprecedented detail, are exerting severe pressure on extensive air shower modelling. Fast, detailed codes are needed in order to extract and understand the richness of information now available. Some hybrid simulation codes have been proposed recently to this effect (e.g., the combination of the traditional Monte Carlo scheme with a system of cascade equations or pre-simulated air showers). In this context, we explore the potential of SENECA, an efficient hybrid tri-dimensional simulation code, as a valid practical alternative to full Monte Carlo simulations of extensive air showers generated by ultra-high energy cosmic rays. We extensively compare the hybrid method with the traditional, but time consuming, full Monte Carlo code CORSIKA which is the de facto standard in the field. The hybrid scheme of the SENECA code is based on the simulation of each particle with the traditional Monte Carlo method at two steps of the shower development: the first step predicts the large fluctuations in the very first particle interactions at high energies while the second step provides a well detailed lateral distribution simulation of the final stages of the air shower. Both Monte Carlo simulation steps are connected by a cascade equation system which reproduces correctly the hadronic and electromagnetic longitudinal profile. We study the influence of this approach on the main longitudinal characteristics of proton, iron nucleus and gamma induced air showers and compare the predictions of the well known CORSIKA code using the QGSJET hadronic interaction model.

  2. Validation of the Monte Carlo code MCNP-DSP

    Energy Technology Data Exchange (ETDEWEB)

    Valentine, T.E.; Mihalczo, J.T. [Oak Ridge National Lab., TN (United States)

    1996-09-12

    Several calculations were performed to validate MCNP-DSP, which is a Monte Carlo code that calculates all the time and frequency analysis parameters associated with the 252Cf-source-driven time and frequency analysis method. The frequency analysis parameters are obtained in two ways: directly, by Fourier transforming the detector responses, and indirectly, by taking the Fourier transform of the autocorrelation and cross-correlation functions. The direct and indirect Fourier processing methods were shown to produce the same frequency spectra and convergence, thus verifying the way the frequency analysis parameters are obtained from the time sequences of detector pulses. (Author).
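
    The equivalence of the direct and indirect routes to the frequency spectra can be illustrated on a synthetic signal: the periodogram from the signal's Fourier transform matches the Fourier transform of its autocorrelation (the Wiener-Khinchin relation), shown here for the autospectrum only. This is a check of the general property, not the MCNP-DSP processing itself.

      import numpy as np

      rng = np.random.default_rng(0)
      x = rng.standard_normal(4096)                     # synthetic detector signal

      # Direct route: power spectrum from the Fourier transform of the signal.
      X = np.fft.rfft(x)
      psd_direct = (X * np.conj(X)).real / x.size

      # Indirect route: Fourier transform of the (circular) autocorrelation function.
      acf = np.fft.irfft(np.fft.rfft(x) * np.conj(np.fft.rfft(x)))
      psd_indirect = np.fft.rfft(acf).real / x.size

      print(np.allclose(psd_direct, psd_indirect))      # True to round-off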

  3. Monte-Carlo code calculation of 3D reactor core model with usage of burnt fuel isotopic compositions, obtained by engineering codes

    Energy Technology Data Exchange (ETDEWEB)

    Aleshin, Sergey S.; Gorodkov, Sergey S.; Shcherenko, Anna I. [National Research Centre 'Kurchatov Institute', Moscow (Russian Federation)

    2016-09-15

    A burn-up calculation of large systems with a Monte-Carlo code (MCU) is a complex process that requires large computational costs. Previously prepared isotopic compositions are proposed to be used in Monte-Carlo calculations of different system states with burnt fuel. The isotopic compositions are calculated by an approximation method based on a spectral functional and on reference isotopic compositions calculated by the engineering codes (TVS-M, BIPR-7A and PERMAK-A). In this work, the multiplication factors and power distributions of FAs from a 3-D reactor core are calculated by the Monte-Carlo code MCU using the previously prepared isotopic compositions. Several states of the burnt core are considered. The MCU results are compared with those obtained by the engineering codes.

  4. TART98 a coupled neutron-photon 3-D, combinatorial geometry time dependent Monte Carlo Transport code

    Energy Technology Data Exchange (ETDEWEB)

    Cullen, D E

    1998-11-22

    TART98 is a coupled neutron-photon, 3-dimensional, combinatorial-geometry, time-dependent Monte Carlo radiation transport code. This code can run on any modern computer. It is a complete system that assists with input preparation, running Monte Carlo calculations, and analysis of output results. TART98 is also very fast; users of similar codes will be surprised at how quickly it runs. Use of the entire system can save a great deal of time and effort. TART98 is distributed on CD. This CD contains on-line documentation for all codes included in the system, the codes configured to run on a variety of computers, and many example problems that can be used to become familiar with the system. TART98 completely supersedes all older versions of TART, and it is strongly recommended that users only use the most recent version of TART98 and its data files.

  5. Validation of the Monte Carlo criticality program KENO IV and the Hansen-Roach sixteen-energy-group cross sections for high-assay uranium systems. [KENO IV criticality code

    Energy Technology Data Exchange (ETDEWEB)

    Handley, G. R.; Masters, L. C.; Stachowiak, R. V.

    1981-04-10

    Validation of the Monte Carlo criticality code KENO IV and the Hansen-Roach sixteen-energy-group cross sections was accomplished by calculating the effective neutron multiplication constant, k-eff, of 29 experimentally critical assemblies with uranium enrichments of 92.6% or higher in the uranium-235 isotope. The experiments were chosen so that a large variety of geometries and of neutron energy spectra were covered. Difficulties in calculating the k-eff of minimally reflected or unreflected systems containing high-concentration uranyl nitrate solution led to the separate examination of five such cases.

  6. Comparative Criticality Analysis of Two Monte Carlo Codes on a Centrifugal Atomizer: MCNP5 and SCALE

    Energy Technology Data Exchange (ETDEWEB)

    Kang, H-S; Jang, M-S; Kim, S-R [NESS, Daejeon (Korea, Republic of); Park, J-M; Kim, K-N [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2015-10-15

    There are two well-known Monte Carlo codes for criticality analysis, MCNP5 and SCALE. MCNP5 is a general-purpose Monte Carlo N-Particle code that can be used for neutron, photon, electron or coupled neutron/photon/electron transport, including the capability to calculate eigenvalues for critical systems, and serves as the main analysis code. SCALE provides a comprehensive, verified and validated, user-friendly tool set for criticality safety, reactor physics, radiation shielding, radioactive source term characterization, and sensitivity and uncertainty analysis. SCALE was conceived and funded by the US NRC to perform standardized computer analyses for licensing evaluation and is used widely around the world. We performed a validation test of MCNP5 and a comparative analysis of the Monte Carlo codes MCNP5 and SCALE for the criticality analysis of a centrifugal atomizer. In the criticality analysis using the MCNP5 code, we obtained statistically reliable results by using a large number of source histories per cycle and by performing an uncertainty analysis.

  7. The Physical Models and Statistical Procedures Used in the RACER Monte Carlo Code

    Energy Technology Data Exchange (ETDEWEB)

    Sutton, T.M.; Brown, F.B.; Bischoff, F.G.; MacMillan, D.B.; Ellis, C.L.; Ward, J.T.; Ballinger, C.T.; Kelly, D.J.; Schindler, L.

    1999-07-01

    This report describes the MCV (Monte Carlo - Vectorized) Monte Carlo neutron transport code [Brown, 1982, 1983; Brown and Mendelson, 1984a]. MCV is a module in the RACER system of codes that is used for Monte Carlo reactor physics analysis. The MCV module contains all of the neutron transport and statistical analysis functions of the system, while other modules perform various input-related functions such as geometry description, material assignment, output edit specification, etc. MCV is very closely related to the 05R neutron Monte Carlo code [Irving et al., 1965] developed at Oak Ridge National Laboratory. 05R evolved into the 05RR module of the STEMB system, which was the forerunner of the RACER system. Much of the overall logic and physics treatment of 05RR has been retained and, indeed, the original verification of MCV was achieved through comparison with STEMB results. MCV has been designed to be very computationally efficient [Brown, 1981; Brown and Martin, 1984b; Brown, 1986]. It was originally programmed to make use of vector-computing architectures such as those of the CDC Cyber-205 and Cray X-MP. MCV was the first full-scale production Monte Carlo code to effectively utilize vector-processing capabilities. Subsequently, MCV was modified to utilize both distributed-memory [Sutton and Brown, 1994] and shared-memory parallelism. The code has been compiled and run on platforms ranging from 32-bit UNIX workstations to clusters of 64-bit vector-parallel supercomputers. The computational efficiency of the code allows the analyst to perform calculations using many more neutron histories than is practical with most other Monte Carlo codes, thereby yielding results with smaller statistical uncertainties. MCV also utilizes variance reduction techniques such as survival biasing, splitting, and rouletting to permit additional reduction in uncertainties. While a general-purpose neutron Monte Carlo code, MCV is optimized for reactor physics calculations. It has the
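
    To make the variance-reduction terminology above concrete, the following sketch plays the survival-biasing and Russian-roulette game on a fictitious history; the cross sections and weight thresholds are invented for illustration and are unrelated to RACER/MCV internals.

```python
import numpy as np

# Survival biasing (implicit capture) with weight reduction, followed by
# Russian roulette on low-weight histories. All constants are placeholders.
rng = np.random.default_rng(2)
SIGMA_T, SIGMA_A = 1.0, 0.3       # hypothetical total / absorption macroscopic XS
W_CUTOFF, W_SURVIVE = 0.25, 0.5   # roulette threshold and post-roulette weight

def track_history():
    weight, alive, collisions = 1.0, True, 0
    while alive:
        collisions += 1
        # survival biasing: absorb a fraction of the weight instead of killing
        weight *= (1.0 - SIGMA_A / SIGMA_T)
        # Russian roulette on low-weight particles to keep the population lean
        if weight < W_CUTOFF:
            if rng.random() < weight / W_SURVIVE:
                weight = W_SURVIVE
            else:
                alive = False
        if collisions > 1000:      # safety cap for this toy example
            alive = False
    return collisions

lengths = [track_history() for _ in range(10000)]
print("mean number of collisions per history:", np.mean(lengths))
```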

  8. A fast Monte Carlo code for proton transport in radiation therapy based on MCNPX.

    Science.gov (United States)

    Jabbari, Keyvan; Seuntjens, Jan

    2014-07-01

    An important requirement for proton therapy is software for dose calculation. Monte Carlo is the most accurate method for dose calculation, but it is very slow. In this work, a method is developed to improve the speed of dose calculation. The method is based on pre-generated tracks for particle transport. The MCNPX code has been used for generation of the tracks. A set of data including the track of the particle was produced for each material of interest (water, air, lung tissue, bone, and soft tissue). The code can transport protons over a wide range of energies (up to 200 MeV). The validity of the fast Monte Carlo (MC) code is evaluated with MCNPX as the reference code. While an analytical pencil beam algorithm shows large errors (up to 10%) near small high-density heterogeneities, our dose calculations and isodose distributions deviated from the MCNPX results by less than 2%. In terms of speed, the code runs 200 times faster than MCNPX. With the fast MC code developed in this work, it takes less than 2 minutes to calculate the dose for 10^6 particles on an Intel Core 2 Duo 2.66 GHz desktop computer.
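
    A minimal sketch of the pre-generated-track idea follows; the per-material step energies are placeholders rather than MCNPX output, and the names (TRACK_LIBRARY, fast_dose) are hypothetical and exist only for this illustration.

```python
import numpy as np

# Pre-generated tracks: energy-deposition steps are computed once per material
# and then replayed instead of re-sampling every interaction.
rng = np.random.default_rng(3)
MATERIALS = ["water", "lung", "bone"]

def pregenerate_track(material, n_steps=200):
    """Fake per-step energy losses (MeV) standing in for a pre-computed track."""
    scale = {"water": 1.0, "lung": 0.3, "bone": 1.8}[material]
    return scale * np.abs(rng.normal(1.0, 0.1, n_steps))

TRACK_LIBRARY = {m: [pregenerate_track(m) for _ in range(50)] for m in MATERIALS}

def fast_dose(material, depth_bins=50, histories=1000):
    dose = np.zeros(depth_bins)
    for _ in range(histories):
        track = TRACK_LIBRARY[material][rng.integers(50)]  # replay a stored track
        steps = track[:depth_bins]
        dose[: len(steps)] += steps                        # deposit along depth
    return dose / histories

print("mean dose per history in water, first 5 bins:", fast_dose("water")[:5])
```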

  9. A fast Monte Carlo code for proton transport in radiation therapy based on MCNPX

    Directory of Open Access Journals (Sweden)

    Keyvan Jabbari

    2014-01-01

    Full Text Available An important requirement for proton therapy is software for dose calculation. Monte Carlo is the most accurate method for dose calculation, but it is very slow. In this work, a method is developed to improve the speed of dose calculation. The method is based on pre-generated tracks for particle transport. The MCNPX code has been used for generation of the tracks. A set of data including the track of the particle was produced for each material of interest (water, air, lung tissue, bone, and soft tissue). The code can transport protons over a wide range of energies (up to 200 MeV). The validity of the fast Monte Carlo (MC) code is evaluated with MCNPX as the reference code. While an analytical pencil beam algorithm shows large errors (up to 10%) near small high-density heterogeneities, our dose calculations and isodose distributions deviated from the MCNPX results by less than 2%. In terms of speed, the code runs 200 times faster than MCNPX. With the fast MC code developed in this work, it takes less than 2 minutes to calculate the dose for 10^6 particles on an Intel Core 2 Duo 2.66 GHz desktop computer.

  10. Modification of codes NUALGAM and BREMRAD. Volume 3: Statistical considerations of the Monte Carlo method

    Science.gov (United States)

    Firstenberg, H.

    1971-01-01

    The statistics of the Monte Carlo method are considered in relation to the interpretation of the NUGAM2 and NUGAM3 computer code results. A numerical experiment using the NUGAM2 code is presented and the results are statistically interpreted.

  11. The Monte Carlo code MCSHAPE: Main features and recent developments

    Energy Technology Data Exchange (ETDEWEB)

    Scot, Viviana, E-mail: viviana.scot@unibo.it; Fernandez, Jorge E.

    2015-06-01

    MCSHAPE is a general purpose Monte Carlo code developed at the University of Bologna to simulate the diffusion of X- and gamma-ray photons, with the special feature of describing the full evolution of the photon polarization state through its interactions with the target. The prevailing photon–matter interactions in the energy range 1–1000 keV, Compton and Rayleigh scattering and the photoelectric effect, are considered. All the parameters that characterize the photon transport can be suitably defined: (i) the source intensity, (ii) its full polarization state as a function of energy, (iii) the number of collisions, and (iv) the energy interval and resolution of the simulation. It is possible to visualize the results for selected groups of interactions. MCSHAPE simulates the propagation in heterogeneous media of polarized photons (from synchrotron sources) or of partially polarized sources (from X-ray tubes). In this paper, the main features of MCSHAPE are illustrated with some examples and a comparison with experimental data. - Highlights: • MCSHAPE is an MC code for the simulation of the diffusion of photons in matter. • It includes the proper description of the evolution of the photon polarization state. • The polarization state is described by means of the Stokes vector, I, Q, U, V. • MCSHAPE includes the computation of the detector influence on the measured spectrum. • MCSHAPE features are illustrated with examples and comparison with experiments.
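
    As a reminder of the bookkeeping implied by the Stokes description above, the following sketch (not MCSHAPE code) computes the degree of polarization of a Stokes vector (I, Q, U, V) and rotates its reference frame.

```python
import numpy as np

def degree_of_polarization(S):
    """Degree of polarization of a Stokes vector S = (I, Q, U, V)."""
    I, Q, U, V = S
    return np.sqrt(Q**2 + U**2 + V**2) / I

def rotate_frame(S, phi):
    """Stokes vector after rotating the reference frame by angle phi (radians)."""
    I, Q, U, V = S
    c, s = np.cos(2 * phi), np.sin(2 * phi)
    return np.array([I, c * Q + s * U, -s * Q + c * U, V])

S = np.array([1.0, 0.6, 0.2, 0.0])        # partially linearly polarized beam
print(degree_of_polarization(S))           # ~0.632
print(rotate_frame(S, np.pi / 4))          # Q and U exchange roles
```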

  12. Recent developments in the Los Alamos radiation transport code system

    Energy Technology Data Exchange (ETDEWEB)

    Forster, R.A.; Parsons, K. [Los Alamos National Lab., NM (United States)

    1997-06-01

    A brief progress report on updates to the Los Alamos Radiation Transport Code System (LARTCS) for solving criticality and fixed-source problems is provided. LARTCS integrates the Diffusion Accelerated Neutral Transport (DANT) discrete ordinates codes with the Monte Carlo N-Particle (MCNP) code. The LARTCS code is being developed with a graphical user interface for problem setup and analysis. Progress in the DANT system for criticality applications includes a two-dimensional module which can be linked to a mesh-generation code and a faster iteration scheme. Updates to MCNP Version 4A allow statistical checks of calculated Monte Carlo results.

  13. Dose Calculations for Lung Inhomogeneity in High-Energy Photon Beams and Small Beamlets: A Comparison between XiO and TiGRT Treatment Planning Systems and MCNPX Monte Carlo Code

    Directory of Open Access Journals (Sweden)

    Asghar Mesbahi

    2015-09-01

    Full Text Available Introduction Radiotherapy with small fields is widely used in newly developed techniques. Additionally, the dose calculation accuracy of treatment planning systems in small fields plays a crucial role in treatment outcome. In the present study, the dose calculation accuracy of two commercial treatment planning systems was evaluated against the Monte Carlo method. Materials and Methods A Siemens Oncor linear accelerator was simulated, using the MCNPX Monte Carlo code, according to the manufacturer's instructions. Three analytical algorithms for dose calculation, full scatter convolution (FSC) in TiGRT along with convolution and superposition in the XiO system, were evaluated for a small solid liver tumor. This solid tumor, with a diameter of 1.8 cm, was evaluated in a thorax phantom, and calculations were performed for different field sizes (1×1, 2×2, 3×3 and 4×4 cm2). The results obtained with these treatment planning systems were compared with calculations by the MC method (regarded as the most reliable method). Results For the FSC and convolution algorithms, comparison with MC calculations indicated dose overestimations of up to 120% and 25% inside the lung and tumor, respectively, for the 1×1 cm2 field size, using an 18 MV photon beam. Regarding superposition, close agreement with MC simulation was seen in all studied field sizes. Conclusion The results showed that the FSC and convolution algorithms significantly overestimated doses in the lung and solid tumor; therefore, significant errors could arise in treatment plans of the lung region, affecting treatment outcomes. Use of MC-based methods and superposition is therefore recommended for lung treatments using small fields and beamlets.

  14. ITS Version 6 : the integrated TIGER series of coupled electron/photon Monte Carlo transport codes.

    Energy Technology Data Exchange (ETDEWEB)

    Franke, Brian Claude; Kensek, Ronald Patrick; Laub, Thomas William

    2008-04-01

    ITS is a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. Our goal has been to simultaneously maximize operational simplicity and physical accuracy. Through a set of preprocessor directives, the user selects one of the many ITS codes. The ease with which the makefile system is applied combines with an input scheme based on order-independent descriptive keywords that makes maximum use of defaults and internal error checking to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Physical rigor is provided by employing accurate cross sections, sampling distributions, and physical models for describing the production and transport of the electron/photon cascade from 1.0 GeV down to 1.0 keV. The availability of source code permits the more sophisticated user to tailor the codes to specific applications and to extend the capabilities of the codes to more complex applications. Version 6, the latest version of ITS, contains (1) improvements to the ITS 5.0 codes, and (2) conversion to Fortran 90. The general user friendliness of the software has been enhanced through memory allocation to reduce the need for users to modify and recompile the code.

  15. OpenMC: A State-of-the-Art Monte Carlo Code for Research and Development

    Science.gov (United States)

    Romano, Paul K.; Horelik, Nicholas E.; Herman, Bryan R.; Nelson, Adam G.; Forget, Benoit; Smith, Kord

    2014-06-01

    This paper gives an overview of OpenMC, an open source Monte Carlo particle transport code recently developed at the Massachusetts Institute of Technology. OpenMC uses continuous-energy cross sections and a constructive solid geometry representation, enabling high-fidelity modeling of nuclear reactors and other systems. Modern, portable input/output file formats are used in OpenMC: XML for input, and HDF5 for output. High performance parallel algorithms in OpenMC have demonstrated near-linear scaling to over 100,000 processors on modern supercomputers. Other topics discussed in this paper include plotting, CMFD acceleration, variance reduction, eigenvalue calculations, and software development processes.

  16. Monte Carlo simulation of nuclear energy study (II). Annual report on Nuclear Code Evaluation Committee

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2001-01-01

    This report summarizes the research results discussed during the 1999 fiscal year by the Nuclear Code Evaluation Committee of the Nuclear Code Research Committee. The present status of Monte Carlo simulation in nuclear energy studies is described, covering not only criticality, shielding and core analyses but also applications to risk and radiation damage analyses, high-energy transport, and nuclear theory calculations. The 18 papers are indexed individually. (J.P.N.)

  17. Coded aperture coherent scatter imaging for breast cancer detection: a Monte Carlo evaluation

    Science.gov (United States)

    Lakshmanan, Manu N.; Morris, Robert E.; Greenberg, Joel A.; Samei, Ehsan; Kapadia, Anuj J.

    2016-03-01

    It is known that conventional x-ray imaging provides a maximum contrast between cancerous and healthy fibroglandular breast tissues of 3% based on their linear x-ray attenuation coefficients at 17.5 keV, whereas coherent scatter signal provides a maximum contrast of 19% based on their differential coherent scatter cross sections. Therefore in order to exploit this potential contrast, we seek to evaluate the performance of a coded-aperture coherent scatter imaging system for breast cancer detection and investigate its accuracy using Monte Carlo simulations. In the simulations we modeled our experimental system, which consists of a raster-scanned pencil beam of x-rays, a bismuth-tin coded aperture mask comprised of a repeating slit pattern with 2-mm periodicity, and a linear-array of 128 detector pixels with 6.5-keV energy resolution. The breast tissue that was scanned comprised a 3-cm sample taken from a patient-based XCAT breast phantom containing a tomosynthesis-based realistic simulated lesion. The differential coherent scatter cross section was reconstructed at each pixel in the image using an iterative reconstruction algorithm. Each pixel in the reconstructed image was then classified as being either air or the type of breast tissue with which its normalized reconstructed differential coherent scatter cross section had the highest correlation coefficient. Comparison of the final tissue classification results with the ground truth image showed that the coded aperture imaging technique has a cancerous pixel detection sensitivity (correct identification of cancerous pixels), specificity (correctly ruling out healthy pixels as not being cancer) and accuracy of 92.4%, 91.9% and 92.0%, respectively. Our Monte Carlo evaluation of our experimental coded aperture coherent scatter imaging system shows that it is able to exploit the greater contrast available from coherently scattered x-rays to increase the accuracy of detecting cancerous regions within the breast.
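
    The pixel-labelling rule described above (assign each pixel to the tissue whose reference differential cross section it correlates with best) can be sketched as follows; the reference curves and momentum-transfer grid are synthetic placeholders, not the authors' measured form factors.

```python
import numpy as np

# Classify a pixel by the highest correlation coefficient against reference curves.
rng = np.random.default_rng(4)
q = np.linspace(0.5, 3.0, 60)                       # momentum-transfer grid (arb. units)
references = {
    "air":            np.exp(-q),
    "fibroglandular": np.exp(-(q - 1.1) ** 2 / 0.1),
    "cancerous":      np.exp(-(q - 1.3) ** 2 / 0.12),
}

def classify_pixel(measured):
    scores = {label: np.corrcoef(measured, ref)[0, 1] for label, ref in references.items()}
    return max(scores, key=scores.get)

# a noisy "cancerous" pixel should be labelled as such
noisy = references["cancerous"] + rng.normal(0.0, 0.05, q.size)
print(classify_pixel(noisy))
```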

  18. A 3DHZETRN Code in a Spherical Uniform Sphere with Monte Carlo Verification

    Science.gov (United States)

    Wilson, John W.; Slaba, Tony C.; Badavi, Francis F.; Reddell, Brandon D.; Bahadori, Amir A.

    2014-01-01

    The computationally efficient HZETRN code has been used in recent trade studies for lunar and Martian exploration and is currently being used in the engineering development of the next generation of space vehicles, habitats, and extravehicular activity equipment. A new version (3DHZETRN), capable of transporting high charge (Z) and energy (HZE) ions and light ions (including neutrons) under space-like boundary conditions with enhanced neutron and light ion propagation, is under development. In the present report, new algorithms for light ion and neutron propagation with well-defined convergence criteria in 3D objects are developed and tested against Monte Carlo simulations to verify the solution methodology. The code will be available through the software system OLTARIS for shield design and validation, and provides a basis for personal computer software capable of space shield analysis and optimization.

  19. Data libraries as a collaborative tool across Monte Carlo codes

    CERN Document Server

    Augelli, Mauro; Han, Mincheol; Hauf, Steffen; Kim, Chan-Hyeung; Kuster, Markus; Pia, Maria Grazia; Quintieri, Lina; Saracco, Paolo; Seo, Hee; Sudhakar, Manju; Eidenspointner, Georg; Zoglauer, Andreas

    2010-01-01

    The role of data libraries in Monte Carlo simulation is discussed. A number of data libraries currently in preparation are reviewed; their data are critically examined with respect to the state-of-the-art in the respective fields. Extensive tests with respect to experimental data have been performed for the validation of their content.

  20. TRIPOLI: a general Monte Carlo code, present state and future prospects. [Neutron and gamma ray transport

    Energy Technology Data Exchange (ETDEWEB)

    Nimal, J.C.; Vergnaud, T. (CEA Centre d'Etudes Nucleaires de Saclay, 91 - Gif-sur-Yvette (France))

    1990-01-01

    This paper describes the most important features of the Monte Carlo code TRIPOLI-2. This code solves the Boltzmann equation in three-dimensional geometries for coupled neutron and gamma-ray problems. Particular emphasis is devoted to the biasing techniques, which are very important for deep penetration. Future developments of TRIPOLI are described in the conclusion. (author).

  1. Advanced video coding systems

    CERN Document Server

    Gao, Wen

    2015-01-01

    This comprehensive and accessible text/reference presents an overview of the state of the art in video coding technology. Specifically, the book introduces the tools of the AVS2 standard, describing how AVS2 can help to achieve a significant improvement in coding efficiency for future video networks and applications by incorporating smarter coding tools such as scene video coding. Topics and features: introduces the basic concepts in video coding, and presents a short history of video coding technology and standards; reviews the coding framework, main coding tools, and syntax structure of AV

  2. FREYA-a new Monte Carlo code for improved modeling of fission chains

    Energy Technology Data Exchange (ETDEWEB)

    Hagmann, C A; Randrup, J; Vogt, R L

    2012-06-12

    A new simulation capability for modeling individual fission events and chains and the transport of fission products in materials is presented. FREYA (Fission Reaction Event Yield Algorithm) is a Monte Carlo code for generating fission events that provides correlated kinematic information for prompt neutrons, gammas, and fragments. As a standalone code, FREYA calculates quantities such as multiplicity-energy, angular, and gamma-neutron energy sharing correlations. To study materials with multiplication, shielding effects, and detectors, we have integrated FREYA into the general-purpose Monte Carlo code MCNP. This new tool will allow more accurate modeling of detector responses, including correlations, and the development of SNM detectors with increased sensitivity.

  3. Spread-out Bragg peak and monitor units calculation with the Monte Carlo code MCNPX.

    Science.gov (United States)

    Hérault, J; Iborra, N; Serrano, B; Chauvel, P

    2007-02-01

    The aim of this work was to study the dosimetric potential of the Monte Carlo code MCNPX applied to the proton therapy field. For a series of clinical configurations, a comparison between simulated and experimental data was carried out, using the proton beam line of the MEDICYC isochronous cyclotron installed in the Centre Antoine Lacassagne in Nice. The dosimetric quantities tested were depth-dose distributions, output factors, and monitor units. For each parameter, the simulation reproduced the experiment accurately, which attests to the quality of the choices made both in the geometrical description and in the physics parameters for beam definition. These encouraging results enable us today to consider a simplification of quality control measurements in the future. Monitor unit calculation is planned to be carried out with pre-established Monte Carlo simulation data. The measurement, which was until now our main patient dose calibration system, will be progressively replaced by computation based on the MCNPX code. This determination of monitor units will be controlled by an independent semi-empirical calculation.

  4. Progress and status of the OpenMC Monte Carlo code

    Energy Technology Data Exchange (ETDEWEB)

    Romano, P. K.; Herman, B. R.; Horelik, N. E.; Forget, B.; Smith, K. [Massachusetts Institute of Technology, Department of Nuclear Science and Engineering, 77 Massachusetts Avenue, Cambridge, MA 02139 (United States); Siegel, A. R. [Argonne National Laboratory, Theory and Computing Sciences and Nuclear Engineering Division (United States)

    2013-07-01

    The present work describes the latest advances and progress in the development of the OpenMC Monte Carlo code, an open-source code originating from the Massachusetts Institute of Technology. First, an overview of the development workflow of OpenMC is given. Various enhancements to the code such as real-time XML input validation, state points, plotting, OpenMP threading, and coarse mesh finite difference acceleration are described. (authors)

  5. Neutral Particle Transport in Cylindrical Plasma Simulated by a Monte Carlo Code

    Institute of Scientific and Technical Information of China (English)

    YU Deliang; YAN Longwen; ZHONG Guangwu; LU Jie; YI Ping

    2007-01-01

    A Monte Carlo code (MCHGAS) has been developed to investigate neutral particle transport. The code can calculate the radial profile and energy spectrum of neutral particles in cylindrical plasmas. The calculation time of the code is dramatically reduced when the splitting and roulette schemes are applied. A plasma model of an infinite cylinder is assumed in the code, which is very convenient for simulating neutral particle transport in small and medium-sized tokamaks. The design of the multi-channel neutral particle analyser (NPA) on HL-2A can be optimized by using this code.

  6. SU-E-T-578: MCEBRT, A Monte Carlo Code for External Beam Treatment Plan Verifications

    Energy Technology Data Exchange (ETDEWEB)

    Chibani, O; Ma, C [Fox Chase Cancer Center, Philadelphia, PA (United States); Eldib, A [Fox Chase Cancer Center, Philadelphia, PA (United States); Al-Azhar University, Cairo (Egypt)

    2014-06-01

    Purpose: To present a new Monte Carlo code (MCEBRT) for patient-specific dose calculations in external beam radiotherapy. The code's MLC model is benchmarked and real patient plans are re-calculated using MCEBRT and compared with a commercial TPS. Methods: MCEBRT is based on the GEPTS system (Med. Phys. 29 (2002) 835–846). Phase space data generated for Varian linac photon beams (6 – 15 MV) are used as the source term. MCEBRT uses a realistic MLC model (tongue and groove, rounded ends). Patient CT and DICOM RT files are used to generate a 3D patient phantom and simulate the treatment configuration (gantry, collimator and couch angles; jaw positions; MLC sequences; MUs). MCEBRT dose distributions and DVHs are compared with those from the TPS in an absolute way (Gy). Results: Calculations based on the developed MLC model closely match transmission measurements (pin-point ionization chamber at selected positions and film for the lateral dose profile). See Fig.1. Dose calculations for two clinical cases (whole brain irradiation with opposed beams and a lung case with eight fields) are carried out and the outcomes are compared with the Eclipse AAA algorithm. Good agreement is observed for the brain case (Figs 2-3) except at the surface, where the MCEBRT dose can be higher by 20%. This is due to better modeling of electron contamination by MCEBRT. For the lung case an overall good agreement (91% gamma index passing rate with 3%/3mm DTA criterion) is observed (Fig.4), but dose in lung can be over-estimated by up to 10% by AAA (Fig.5). CTV and PTV DVHs from the TPS and MCEBRT are nevertheless close (Fig.6). Conclusion: A new Monte Carlo code is developed for plan verification. Contrary to phantom-based QA measurements, MCEBRT simulates the exact patient geometry and tissue composition. MCEBRT can be used as an extra verification layer for plans where surface dose and tissue heterogeneity are an issue.
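
    The 3%/3 mm gamma-index passing rate quoted above can be illustrated with a simplified one-dimensional, global-normalization version of the test (the clinical comparison is of course done on full 3D dose grids); the dose profiles below are synthetic.

```python
import numpy as np

def gamma_passing_rate(reference, evaluated, spacing_mm, dose_tol=0.03, dist_tol_mm=3.0):
    """Fraction of reference points with gamma <= 1 (global dose normalization)."""
    x = np.arange(reference.size) * spacing_mm
    d_norm = dose_tol * reference.max()          # global dose criterion
    passing = 0
    for xi, dref in zip(x, reference):
        # gamma at this point is the minimum over all evaluated points
        gamma_sq = ((x - xi) / dist_tol_mm) ** 2 + ((evaluated - dref) / d_norm) ** 2
        if np.sqrt(gamma_sq.min()) <= 1.0:
            passing += 1
    return passing / reference.size

ref = np.exp(-((np.arange(100) - 50.0) ** 2) / 400.0)      # reference dose profile
eva = np.roll(ref, 1) * 1.02                                # shifted, 2% rescaled copy
print("passing rate:", gamma_passing_rate(ref, eva, spacing_mm=1.0))
```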

  7. The FLUKA code for application of Monte Carlo methods to promote high precision ion beam therapy

    CERN Document Server

    Parodi, K; Cerutti, F; Ferrari, A; Mairani, A; Paganetti, H; Sommerer, F

    2010-01-01

    Monte Carlo (MC) methods are increasingly being utilized to support several aspects of commissioning and clinical operation of ion beam therapy facilities. In this contribution two emerging areas of MC applications are outlined. The value of MC modeling to promote accurate treatment planning is addressed via examples of application of the FLUKA code to proton and carbon ion therapy at the Heidelberg Ion Beam Therapy Center in Heidelberg, Germany, and at the Proton Therapy Center of Massachusetts General Hospital (MGH), Boston, USA. These include generation of basic data for input into the treatment planning system (TPS) and validation of the TPS analytical pencil-beam dose computations. Moreover, we review the implementation of PET/CT (Positron-Emission-Tomography / Computed-Tomography) imaging for in-vivo verification of proton therapy at MGH. Here, MC is used to calculate irradiation-induced positron-emitter production in tissue for comparison with the β+-activity measurement in order to infer indirect infor...

  8. TRIPOLI capabilities proved by a set of solved problems. [Monte Carlo neutron and gamma ray transport code

    Energy Technology Data Exchange (ETDEWEB)

    Vergnaud, T.; Nimal, J.C. (CEA Centre d'Etudes Nucleaires de Saclay, 91 - Gif-sur-Yvette (France))

    1990-01-01

    The three-dimensional polykinetic Monte Carlo particle transport code TRIPOLI has been under development in the French Shielding Laboratory at Saclay since 1965. TRIPOLI-1 began to run in 1970 and became TRIPOLI-2 in 1978; since then its capabilities have been improved and many studies have been performed. TRIPOLI can treat stationary or time-dependent problems in shielding and in neutronics. Some examples of solved problems are presented to demonstrate the many possibilities of the system. (author).

  9. Domain Decomposition strategy for pin-wise full-core Monte Carlo depletion calculation with the reactor Monte Carlo Code

    Energy Technology Data Exchange (ETDEWEB)

    Liang, Jingang; Wang, Kan; Qiu, Yishu [Dept. of Engineering Physics, LiuQing Building, Tsinghua University, Beijing (China); Chai, Xiao Ming; Qiang, Sheng Long [Science and Technology on Reactor System Design Technology Laboratory, Nuclear Power Institute of China, Chengdu (China)

    2016-06-15

    Because of prohibitive data storage requirements in large-scale simulations, memory is an obstacle for Monte Carlo (MC) codes in accomplishing pin-wise three-dimensional (3D) full-core calculations, particularly for whole-core depletion analyses. Various kinds of data are evaluated and the total memory requirements are quantified based on the Reactor Monte Carlo (RMC) code, showing that tally data, material data, and isotope densities in depletion are the three major parts of memory storage. The domain decomposition method is investigated as a means of saving memory, by dividing the spatial geometry into domains that are simulated separately by parallel processors. For the validity of particle tracking during transport simulations, particles need to be communicated between domains. For efficiency, an asynchronous particle communication algorithm is designed and implemented. Furthermore, we couple the domain decomposition method with the MC burnup process, under a strategy of utilizing a consistent domain partition in both the transport and depletion modules. A numerical test of 3D full-core burnup calculations is carried out, indicating that the RMC code, with the domain decomposition method, is capable of pin-wise full-core burnup calculations with millions of depletion regions.
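
    The essence of the domain-decomposition strategy above (each domain tracks its own particles and hands border-crossing ones to its neighbour) can be caricatured with a purely sequential toy; there is no MPI here and nothing RMC-specific, and the slab widths, mean free path and buffer scheme are invented for the illustration.

```python
import random

# Toy spatial domain decomposition: particles crossing a domain boundary are
# placed in an outgoing buffer and handed to the neighbour in the next pass.
random.seed(5)
N_DOMAINS = 4
DOMAIN_WIDTH = 10.0                      # each domain spans 10 cm in x

def track_in_domain(domain_id, particles):
    """Advance particles; return (terminated_count, outgoing buffers per neighbour)."""
    terminated, outgoing = 0, {domain_id - 1: [], domain_id + 1: []}
    lo, hi = domain_id * DOMAIN_WIDTH, (domain_id + 1) * DOMAIN_WIDTH
    for x, direction in particles:
        x += direction * random.expovariate(1.0 / 3.0)   # free flight, mfp = 3 cm
        if x < lo and domain_id > 0:
            outgoing[domain_id - 1].append((x, direction))
        elif x >= hi and domain_id < N_DOMAINS - 1:
            outgoing[domain_id + 1].append((x, direction))
        else:
            terminated += 1                               # absorbed or leaked
    return terminated, outgoing

# start with particles in domain 0, iterate until all buffers are empty
buffers = {0: [(1.0, +1) for _ in range(1000)], 1: [], 2: [], 3: []}
total_terminated = 0
while any(buffers.values()):
    new_buffers = {d: [] for d in range(N_DOMAINS)}
    for d in range(N_DOMAINS):
        done, outgoing = track_in_domain(d, buffers[d])
        total_terminated += done
        for nbr, plist in outgoing.items():
            if 0 <= nbr < N_DOMAINS:
                new_buffers[nbr].extend(plist)
    buffers = new_buffers
print("histories terminated:", total_terminated)
```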

  10. Development of Monte Carlo code for Z-pinch driven fusion neutron imaging diagnosis system simulation

    Institute of Scientific and Technical Information of China (English)

    贾清刚; 张天奎; 张凤娜; 胡华四

    2013-01-01

    A simulation platform for a Z-pinch driven fusion neutron coded imaging diagnosis system was developed with a Monte Carlo code based on the Geant4 simulation toolkit, providing a complete model of all key components of the diagnostic system. All relevant physical processes are taken into consideration in the simulation. The light image formed in the scintillator array by neutrons passing through the coded aperture was obtained for a low neutron yield (of order 10^10). Three image reconstruction algorithms, Richardson-Lucy (RL), Wiener filtering, and a genetic algorithm (GA), were employed to reconstruct the very low signal-to-noise ratio (SNR) images obtained at low neutron yield, and the effects of neutron yield and SNR on reconstruction performance were compared. The results show that the genetic algorithm is very robust for reconstructing neutron coded images with a low SNR, and that the accuracy of the genetic-algorithm reconstruction is proportional to the SNR of the neutron coded image.
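
    Of the three reconstruction methods named above, Richardson-Lucy is the easiest to sketch; the following one-dimensional toy (synthetic source and blur kernel, not the authors' Geant4-based pipeline) shows the iterative update applied to a noisy, low-count image.

```python
import numpy as np

def richardson_lucy(observed, kernel, iterations=50):
    """1D Richardson-Lucy deconvolution of `observed` blurred by `kernel`."""
    kernel_mirror = kernel[::-1]
    estimate = np.full_like(observed, observed.mean())
    for _ in range(iterations):
        blurred = np.convolve(estimate, kernel, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)
        estimate *= np.convolve(ratio, kernel_mirror, mode="same")
    return estimate

rng = np.random.default_rng(6)
truth = np.zeros(128); truth[60:68] = 1.0                  # compact neutron source
kernel = np.ones(7) / 7.0                                  # stand-in aperture blur
observed = np.convolve(truth, kernel, mode="same")
observed = rng.poisson(200 * observed) / 200.0             # low-count (noisy) image
restored = richardson_lucy(observed, kernel)
print("peak position recovered near:", int(np.argmax(restored)))
```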

  11. Verification of Three Dimensional Triangular Prismatic Discrete Ordinates Transport Code ENSEMBLE-TRIZ by Comparison with Monte Carlo Code GMVP

    Science.gov (United States)

    Homma, Yuto; Moriwaki, Hiroyuki; Ohki, Shigeo; Ikeda, Kazumi

    2014-06-01

    This paper deals with the verification of the three-dimensional triangular prismatic discrete ordinates transport calculation code ENSEMBLE-TRIZ by comparison with the multi-group Monte Carlo calculation code GMVP in a large fast breeder reactor. The reactor is a 750 MWe sodium-cooled reactor. Nuclear characteristics are calculated at the beginning of cycle of an initial core and at the beginning and end of cycle of the equilibrium core. According to the calculations, the differences between the two methodologies are smaller than 0.0002 Δk in the multiplication factor, about 1% (relative) in the control rod reactivity, and 1% in the sodium void reactivity.

  12. egs_brachy: a versatile and fast Monte Carlo code for brachytherapy

    Science.gov (United States)

    Chamberland, Marc J. P.; Taylor, Randle E. P.; Rogers, D. W. O.; Thomson, Rowan M.

    2016-12-01

    egs_brachy is a versatile and fast Monte Carlo (MC) code for brachytherapy applications. It is based on the EGSnrc code system, enabling simulation of photons and electrons. Complex geometries are modelled using the EGSnrc C++ class library and egs_brachy includes a library of geometry models for many brachytherapy sources, in addition to eye plaques and applicators. Several simulation efficiency enhancing features are implemented in the code. egs_brachy is benchmarked by comparing TG-43 source parameters of three source models to previously published values. 3D dose distributions calculated with egs_brachy are also compared to ones obtained with the BrachyDose code. Well-defined simulations are used to characterize the effectiveness of many efficiency improving techniques, both as an indication of the usefulness of each technique and to find optimal strategies. Efficiencies and calculation times are characterized through single source simulations and simulations of idealized and typical treatments using various efficiency improving techniques. In general, egs_brachy shows agreement within uncertainties with previously published TG-43 source parameter values. 3D dose distributions from egs_brachy and BrachyDose agree at the sub-percent level. Efficiencies vary with radionuclide and source type, number of sources, phantom media, and voxel size. The combined effects of efficiency-improving techniques in egs_brachy lead to short calculation times: simulations approximating prostate and breast permanent implant (both with (2 mm)3 voxels) and eye plaque (with (1 mm)3 voxels) treatments take between 13 and 39 s, on a single 2.5 GHz Intel Xeon E5-2680 v3 processor core, to achieve 2% average statistical uncertainty on doses within the PTV. egs_brachy will be released as free and open source software to the research community.

  13. Update on the Development and Validation of MERCURY: A Modern, Monte Carlo Particle Transport Code

    Energy Technology Data Exchange (ETDEWEB)

    Procassini, R J; Taylor, J M; McKinley, M S; Greenman, G M; Cullen, D E; O'Brien, M J; Beck, B R; Hagmann, C A

    2005-06-06

    An update on the development and validation of the MERCURY Monte Carlo particle transport code is presented. MERCURY is a modern, parallel, general-purpose Monte Carlo code being developed at the Lawrence Livermore National Laboratory. During the past year, several major algorithm enhancements have been completed. These include the addition of particle trackers for 3-D combinatorial geometry (CG), 1-D radial meshes, and 2-D quadrilateral unstructured meshes, as well as a feature known as templates for defining recursive, repeated structures in CG. New physics capabilities include an elastic-scattering neutron thermalization model, support for continuous-energy cross sections, and S(α,β) molecular bound scattering. Each of these new physics features has been validated through code-to-code comparisons with another Monte Carlo transport code. Several important computer science features have been developed, including an extensible input-parameter parser based upon the XML data description language, and a dynamic load-balance methodology for efficient parallel calculations. This paper discusses the recent work in each of these areas, and describes a plan for future extensions that are required to meet the needs of our ever-expanding user base.

  14. ERSN-OpenMC, a Java-based GUI for OpenMC Monte Carlo code

    Directory of Open Access Journals (Sweden)

    Jaafar EL Bakkali

    2016-07-01

    Full Text Available OpenMC is a new Monte Carlo particle transport simulation code focused on solving two types of neutronic problems: k-eigenvalue criticality fission source problems and external fixed fission source problems. OpenMC does not have a graphical user interface, and our Java-based application ERSN-OpenMC provides one. The main feature of this application is to give users an easy-to-use and flexible graphical interface for building better and faster simulations, with less effort and greater reliability. Additionally, this graphical tool offers several features, such as the ability to automate the building process of the OpenMC code and related libraries, and users are given the freedom to customize their installation of this Monte Carlo code. A full description of the ERSN-OpenMC application is presented in this paper.

  15. A new Monte Carlo code for absorption simulation of laser-skin tissue interaction

    Institute of Scientific and Technical Information of China (English)

    Afshan Shirkavand; Saeed Sarkar; Marjaneh Hejazi; Leila Ataie-Fashtami; Mohammad Reza Alinaghizadeh

    2007-01-01

    In laser clinical applications, the processes of photon absorption and thermal energy diffusion in the target tissue and its surrounding tissue during laser irradiation are crucial. Such information allows the selection of proper operating parameters, such as laser power and exposure time, for an optimal therapeutic outcome. The Monte Carlo method is a useful tool for studying laser-tissue interaction and for simulating energy absorption in tissue during laser irradiation. We use the principles of this technique to write a new code in MATLAB 6.5, and then validate it against the Monte Carlo multi-layer (MCML) code. The new code proves to be accurate. It can be used to calculate the total power absorbed in the region of interest, and this can be combined with other computer programs for heat modelling.
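
    A bare-bones version of such an absorption Monte Carlo is sketched below, in the spirit of the weighted-photon scheme used by MCML-type codes; the optical coefficients are placeholders and angular scattering is crudely simplified, so this is an illustration rather than a validated model.

```python
import numpy as np

# Weighted-photon absorption Monte Carlo in a single homogeneous slab.
rng = np.random.default_rng(7)
MU_A, MU_S = 0.3, 10.0                 # absorption / scattering coefficients (1/cm)
MU_T = MU_A + MU_S
THICKNESS, N_BINS, N_PHOTONS = 0.5, 50, 20000
absorbed = np.zeros(N_BINS)

for _ in range(N_PHOTONS):
    z, cos_theta, weight = 0.0, 1.0, 1.0
    while weight > 1e-4:
        z += cos_theta * rng.exponential(1.0 / MU_T)   # sample free path
        if z <= 0.0 or z >= THICKNESS:
            break                                       # photon escapes the slab
        # deposit the absorbed fraction of the weight in the local depth bin
        absorbed[int(z / THICKNESS * N_BINS)] += weight * MU_A / MU_T
        weight *= MU_S / MU_T
        cos_theta = rng.uniform(-1.0, 1.0)              # isotropic re-scattering

print("fraction of incident energy absorbed:", absorbed.sum() / N_PHOTONS)
```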

  16. Applications of FLUKA Monte Carlo code for nuclear and accelerator physics

    CERN Document Server

    Battistoni, Giuseppe; Brugger, Markus; Campanella, Mauro; Carboni, Massimo; Empl, Anton; Fasso, Alberto; Gadioli, Ettore; Cerutti, Francesco; Ferrari, Alfredo; Ferrari, Anna; Lantz, Matthias; Mairani, Andrea; Margiotta, M; Morone, Christina; Muraro, Silvia; Parodi, Katerina; Patera, Vincenzo; Pelliccioni, Maurizio; Pinsky, Lawrence; Ranft, Johannes; Roesler, Stefan; Rollet, Sofia; Sala, Paola R; Santana, Mario; Sarchiapone, Lucia; Sioli, Maximiliano; Smirnov, George; Sommerer, Florian; Theis, Christian; Trovati, Stefania; Villari, R; Vincke, Heinz; Vincke, Helmut; Vlachoudis, Vasilis; Vollaire, Joachim; Zapp, Neil

    2011-01-01

    FLUKA is a general-purpose Monte Carlo code capable of handling all radiation components from thermal energies (for neutrons) or 1 keV (for all other particles) up to cosmic ray energies and can be applied in many different fields. Presently the code is maintained on Linux. The validity of the physical models implemented in FLUKA has been benchmarked against a variety of experimental data over a wide energy range, from accelerator data to cosmic ray showers in the Earth's atmosphere. FLUKA is widely used for studies related both to basic research and to applications in particle accelerators, radiation protection and dosimetry, including the specific issue of radiation damage in space missions, radiobiology (including radiotherapy) and cosmic ray calculations. After a short description of the main features that make FLUKA valuable for these topics, the present paper summarizes some of the recent applications of the FLUKA Monte Carlo code in nuclear as well as high-energy physics. In particular it addresses such top...

  17. ITS version 5.0 : the integrated TIGER series of coupled electron/photon Monte Carlo transport codes.

    Energy Technology Data Exchange (ETDEWEB)

    Franke, Brian Claude; Kensek, Ronald Patrick; Laub, Thomas William

    2004-06-01

    ITS is a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. Our goal has been to simultaneously maximize operational simplicity and physical accuracy. Through a set of preprocessor directives, the user selects one of the many ITS codes. The ease with which the makefile system is applied combines with an input scheme based on order-independent descriptive keywords that makes maximum use of defaults and internal error checking to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Physical rigor is provided by employing accurate cross sections, sampling distributions, and physical models for describing the production and transport of the electron/photon cascade from 1.0 GeV down to 1.0 keV. The availability of source code permits the more sophisticated user to tailor the codes to specific applications and to extend the capabilities of the codes to more complex applications. Version 5.0, the latest version of ITS, contains (1) improvements to the ITS 3.0 continuous-energy codes, (2) multigroup codes with adjoint transport capabilities, and (3) parallel implementations of all ITS codes. Moreover, the general user friendliness of the software has been enhanced through increased internal error checking and improved code portability.

  18. Elements of algebraic coding systems

    CERN Document Server

    Cardoso da Rocha, Jr, Valdemar

    2014-01-01

    Elements of Algebraic Coding Systems is an introductory text to algebraic coding theory. In the first chapter, you'll gain inside knowledge of coding fundamentals, which is essential for a deeper understanding of state-of-the-art coding systems. This book is a quick reference for those who are unfamiliar with this topic, as well as for use with specific applications such as cryptography and communication. Linear error-correcting block codes through elementary principles span eleven chapters of the text. Cyclic codes, some finite field algebra, Goppa codes, algebraic decoding algorithms, and applications in public-key cryptography and secret-key cryptography are discussed, including problems and solutions at the end of each chapter. Three appendices cover the Gilbert bound and some related derivations, a derivation of the MacWilliams' identities based on the probability of undetected error, and two important tools for algebraic decoding, namely the finite field Fourier transform and the Euclidean algorithm for polynomials.

  19. Development of a space radiation Monte Carlo computer simulation based on the FLUKA and ROOT codes

    CERN Document Server

    Pinsky, L; Ferrari, A; Sala, P; Carminati, F; Brun, R

    2001-01-01

    This NASA funded project is proceeding to develop a Monte Carlo-based computer simulation of the radiation environment in space. With actual funding only initially in place at the end of May 2000, the study is still in the early stage of development. The general tasks have been identified and personnel have been selected. The code to be assembled will be based upon two major existing software packages. The radiation transport simulation will be accomplished by updating the FLUKA Monte Carlo program, and the user interface will employ the ROOT software being developed at CERN. The end-product will be a Monte Carlo-based code which will complement the existing analytic codes such as BRYNTRN/HZETRN presently used by NASA to evaluate the effects of radiation shielding in space. The planned code will possess the ability to evaluate the radiation environment for spacecraft and habitats in Earth orbit, in interplanetary space, on the lunar surface, or on a planetary surface such as Mars. Furthermore, it will be usef...

  20. Srna - Monte Carlo codes for proton transport simulation in combined and voxelized geometries

    Directory of Open Access Journals (Sweden)

    Ilić Radovan D.

    2002-01-01

    Full Text Available This paper describes new Monte Carlo codes for proton transport simulations in complex geometrical forms and in materials of different composition. The SRNA codes were developed for three-dimensional (3D) dose distribution calculation in proton therapy and dosimetry. The model of these codes is based on the theory of proton multiple scattering and a simple model of compound nucleus decay. The developed package consists of two codes: SRNA-2KG and SRNA-VOX. The first code simulates proton transport in combined geometry that can be described by planes and second-order surfaces. The second one uses the voxelized geometry of material zones and is specifically adapted to the use of patient computed tomography data. Transition probabilities for both codes are given by the SRNADAT program. In this paper, we present the models and algorithms of our programs, as well as the results of the numerical experiments we have carried out applying them, along with the results of proton transport simulation obtained through the PETRA and GEANT programs. The simulation of proton beam characterization by means of the multi-layer Faraday cup and the spatial distribution of positron emitters obtained by our program indicate the imminent application of Monte Carlo techniques in clinical practice.

  1. DgSMC-B code: A robust and autonomous direct simulation Monte Carlo code for arbitrary geometries

    Science.gov (United States)

    Kargaran, H.; Minuchehr, A.; Zolfaghari, A.

    2016-07-01

    In this paper, we describe the structure of a new direct simulation Monte Carlo (DSMC) code that takes advantage of combinatorial geometry (CG) to simulate rarefied gas flows in arbitrary media. The developed code, called DgSMC-B, has been written in Fortran 90 with the capability of parallel processing using the OpenMP framework. DgSMC-B is capable of handling 3-dimensional (3D) geometries created with first- and second-order surfaces. It performs independent particle tracking for complex geometry without the intervention of a mesh. In addition, it resolves the computational domain boundary and volume computation in border grids using a hexahedral mesh. The developed code is robust and self-contained, requiring no separate codes such as mesh generators. The results of six test cases are presented to indicate its ability to deal with a wide range of benchmark problems with sophisticated geometries, such as the NACA 0012 airfoil. The DgSMC-B code demonstrates its performance and accuracy in a variety of problems. The results are found to be in good agreement with reference and experimental data.

  2. Accuracy Evaluation of Oncentra™ TPS in HDR Brachytherapy of Nasopharynx Cancer Using EGSnrc Monte Carlo Code

    Directory of Open Access Journals (Sweden)

    Hadad K

    2015-03-01

    Full Text Available Background: HDR brachytherapy is one of the most common methods of nasopharyngeal cancer treatment. In this method, depending on how advanced the tumor is, a dose of 2 to 6 Gy is prescribed as intracavitary brachytherapy. Due to the high dose rate and the tumor location, accuracy evaluation of the treatment planning system (TPS) is particularly important. Common methods used in TPS dosimetry are based on computations in a homogeneous phantom. Heterogeneous phantoms, especially patient-specific voxel phantoms, can increase dosimetric accuracy. Materials and Methods: In this study, a patient-specific phantom was made using CT images taken from a patient and ctcreate, which is part of the DOSXYZnrc computational code. The dose distribution was computed by DOSXYZnrc and compared with the TPS one. Also, by extracting the voxel absorbed doses in the treatment volume, dose-volume histograms (DVHs) were plotted and compared with the Oncentra™ TPS DVHs. Results: The results of the calculations were compared with data from the Oncentra™ treatment planning system, and it was observed that the TPS calculation predicts a lower dose in areas near the source and a higher dose in areas far from the source relative to the MC code. The absorbed dose values in the voxels also showed that the D90 value reported by the TPS is 40% higher than that obtained with the Monte Carlo method. Conclusion: Today, most treatment planning systems use the TG-43 protocol. This protocol may result in errors such as neglecting tissue heterogeneity and scattered radiation, as well as applicator attenuation. Due to these errors, the AAPM has emphasized departing from the TG-43 protocol and approaching the new brachytherapy protocol TG-186, in which patient-specific phantoms are used and heterogeneities are accounted for in dosimetry.
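
    To make the DVH and D90 comparison above concrete, the following sketch builds a cumulative DVH from synthetic voxel doses and reads off D90; the numbers are invented and only mimic the kind of 40% offset reported above.

```python
import numpy as np

def cumulative_dvh(voxel_doses, n_bins=200):
    """Cumulative dose-volume histogram: volume fraction receiving at least each dose."""
    dose_axis = np.linspace(0.0, voxel_doses.max(), n_bins)
    volume_fraction = np.array([(voxel_doses >= d).mean() for d in dose_axis])
    return dose_axis, volume_fraction

def d90(voxel_doses):
    return np.percentile(voxel_doses, 10.0)   # dose exceeded by 90% of the voxels

rng = np.random.default_rng(8)
mc_doses = rng.normal(5.0, 0.6, 50000).clip(min=0)        # "Monte Carlo" voxel doses
tps_doses = mc_doses * 1.4                                 # TPS overestimating by 40%
print("D90 (MC):  %.2f Gy" % d90(mc_doses))
print("D90 (TPS): %.2f Gy" % d90(tps_doses))
```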

  3. Effects of physics change in Monte Carlo code on electron pencil beam dose distributions

    Energy Technology Data Exchange (ETDEWEB)

    Toutaoui, Abdelkader, E-mail: toutaoui.aek@gmail.com [Departement de Physique Medicale, Centre de Recherche Nucleaire d'Alger, 2 Bd Frantz Fanon BP399 Alger RP, Algiers (Algeria); Khelassi-Toutaoui, Nadia, E-mail: nadiakhelassi@yahoo.fr [Departement de Physique Medicale, Centre de Recherche Nucleaire d'Alger, 2 Bd Frantz Fanon BP399 Alger RP, Algiers (Algeria); Brahimi, Zakia, E-mail: zsbrahimi@yahoo.fr [Departement de Physique Medicale, Centre de Recherche Nucleaire d'Alger, 2 Bd Frantz Fanon BP399 Alger RP, Algiers (Algeria); Chami, Ahmed Chafik, E-mail: chafik_chami@yahoo.fr [Laboratoire de Sciences Nucleaires, Faculte de Physique, Universite des Sciences et de la Technologie Houari Boumedienne, BP 32 El Alia, Bab Ezzouar, Algiers (Algeria)

    2012-01-15

    Pencil beam algorithms used in computerized electron beam dose planning are usually described using the small-angle multiple scattering theory. Alternatively, the pencil beams can be generated by Monte Carlo simulation of electron transport. In a previous work, the 4th version of the Electron Gamma Shower (EGS) Monte Carlo code was used to obtain dose distributions from monoenergetic electron pencil beams, with incident energies between 1 MeV and 50 MeV, interacting at the surface of a large cylindrical homogeneous water phantom. In 2000, a new version of this Monte Carlo code was made available by the National Research Council of Canada (NRC), which includes various improvements in its electron-transport algorithms. In the present work, we were interested to see whether the new physics in this version produces pencil beam dose distributions very different from those calculated with the older one. The purpose of this study is to quantify as well as to understand these differences. We have compared a series of pencil beam dose distributions scored in cylindrical geometry, for electron energies between 1 MeV and 50 MeV, calculated with the two versions of the Electron Gamma Shower Monte Carlo code. Data calculated and compared include isodose distributions, radial dose distributions and fractions of energy deposition. Our results for radial dose distributions show agreement within 10% between doses calculated by the two codes for voxels closer to the pencil beam central axis, while the differences are up to 30% at larger distances. For fractions of energy deposition, the results of EGS4 are in good agreement (within 2%) with those calculated by EGSnrc at shallow depths for all energies, whereas a slightly worse agreement (15%) is observed at greater depths. These differences may be mainly attributed to the different multiple scattering for electron transport adopted in these two codes and the inclusion of spin effects, which produces an increase of the effective range of

  4. Development and validation of MCNPX-based Monte Carlo treatment plan verification system

    OpenAIRE

    Iraj Jabbari; Shahram Monadi

    2015-01-01

    A Monte Carlo treatment plan verification (MCTPV) system was developed for clinical treatment plan verification (TPV), especially for conformal and intensity-modulated radiotherapy (IMRT) plans. In MCTPV, the MCNPX code was used for particle transport through the accelerator head and the patient body. MCTPV has an interface with the TiGRT planning system and reads the information needed for the Monte Carlo calculation, transferred in digital imaging and communications in medicine-radiation ...

  5. On the use of SERPENT Monte Carlo code to generate few group diffusion constants

    Energy Technology Data Exchange (ETDEWEB)

    Piovezan, Pamela, E-mail: pamela.piovezan@ctmsp.mar.mil.b [Centro Tecnologico da Marinha em Sao Paulo (CTMSP), Sao Paulo, SP (Brazil); Carluccio, Thiago; Domingos, Douglas Borges; Rossi, Pedro Russo; Mura, Luiz Felipe, E-mail: fermium@cietec.org.b, E-mail: thiagoc@ipen.b [Fermium Tecnologia Nuclear, Sao Paulo, SP (Brazil); Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2011-07-01

    The accuracy of diffusion reactor codes strongly depends on the quality of the group constants processing. For many years, the generation of such constants was based on 1-D infinite-cell transport calculations. Some developments using collision probabilities or the method of characteristics nowadays allow 2-D assembly group constant calculations. However, these 1-D and 2-D codes show some limitations, for example, for complex geometries and in the neighborhood of heavy absorbers. On the other hand, since Monte Carlo (MC) codes provide accurate neutron flux distributions, the possibility of using these solutions to provide group constants to full-core reactor diffusion simulators has been recently investigated, especially for cases in which the geometry and reactor type are beyond the capability of conventional deterministic lattice codes. The two greatest difficulties in the use of MC codes for group constant generation are the computational costs and the methodological incompatibility between analog MC particle transport simulation and deterministic transport methods based on several approximations. The SERPENT code is a 3-D continuous-energy MC transport code with built-in burnup capability that was specially optimized to generate these group constants. In this work, we present preliminary results of using the SERPENT MC code to generate 3-D two-group diffusion constants for a PWR-like assembly. These constants were used in the CITATION diffusion code to investigate the effect of the MC group constant determination on the diffusion estimate of the neutron multiplication factor. (author)
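
    The basic operation behind generating few-group constants is a flux-weighted condensation of fine-group cross sections, sketched below with arbitrary stand-in data (not SERPENT tallies).

```python
import numpy as np

# Flux-weighted collapse of fine-group cross sections into two broad groups.
n_fine = 70
flux = np.exp(-np.linspace(0.0, 5.0, n_fine))        # fine-group scalar flux (stand-in)
sigma_a = np.linspace(0.002, 0.12, n_fine)           # fine-group absorption XS (1/cm)
groups = [slice(0, 40), slice(40, n_fine)]           # fast / thermal group boundaries

def collapse(xs, phi, group_slices):
    """Flux-weighted condensation: Sigma_G = sum(Sigma_g * phi_g) / sum(phi_g)."""
    return [float(np.sum(xs[g] * phi[g]) / np.sum(phi[g])) for g in group_slices]

print("two-group absorption XS:", collapse(sigma_a, flux, groups))
```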

  6. Evaluation of CASMO-3 and HELIOS for Fuel Assembly Analysis from Monte Carlo Code

    Energy Technology Data Exchange (ETDEWEB)

    Shim, Hyung Jin; Song, Jae Seung; Lee, Chung Chan

    2007-05-15

    This report presents a study comparing deterministic lattice physics calculations with Monte Carlo calculations for LWR fuel pin and assembly problems. The study has focused on comparing results from the lattice physics codes CASMO-3 and HELIOS against those from the continuous-energy Monte Carlo code McCARD. The comparisons include k{sub inf}, isotopic number densities, and pin power distributions. The CASMO-3 and HELIOS calculations for the k{sub inf}'s of the LWR fuel pin problems show good agreement with McCARD, within 956 pcm and 658 pcm, respectively. For the assembly problems with gadolinia burnable poison rods, the largest difference between the k{sub inf}'s is 1463 pcm with CASMO-3 and 1141 pcm with HELIOS. RMS errors for the pin power distributions of CASMO-3 and HELIOS are within 1.3% and 1.5%, respectively.

  7. Academic Training - The use of Monte Carlo radiation transport codes in radiation physics and dosimetry

    CERN Multimedia

    Françoise Benz

    2006-01-01

    2005-2006 ACADEMIC TRAINING PROGRAMME LECTURE SERIES 27, 28, 29 June 11:00-12:00 - TH Conference Room, bldg. 4 The use of Monte Carlo radiation transport codes in radiation physics and dosimetry F. Salvat Gavalda, Univ. de Barcelona, A. FERRARI, CERN-AB, M. SILARI, CERN-SC Lecture 1. Transport and interaction of electromagnetic radiation F. Salvat Gavalda, Univ. de Barcelona Interaction models and simulation schemes implemented in modern Monte Carlo codes for the simulation of coupled electron-photon transport will be briefly reviewed. Different schemes for simulating electron transport will be discussed. Condensed algorithms, which rely on multiple-scattering theories, are comparatively fast, but less accurate than mixed algorithms, in which hard interactions (with energy loss or angular deflection larger than certain cut-off values) are simulated individually. The reliability, and limitations, of electron-interaction models and multiple-scattering theories will be analyzed. Benchmark comparisons of simu...

  8. TRIPOLI-4{sup ®} Monte Carlo code ITER A-lite neutronic model validation

    Energy Technology Data Exchange (ETDEWEB)

    Jaboulay, Jean-Charles, E-mail: jean-charles.jaboulay@cea.fr [CEA, DEN, Saclay, DM2S, SERMA, F-91191 Gif-sur-Yvette (France); Cayla, Pierre-Yves; Fausser, Clement [MILLENNIUM, 16 Av du Québec Silic 628, F-91945 Villebon sur Yvette (France); Damian, Frederic; Lee, Yi-Kang; Puma, Antonella Li; Trama, Jean-Christophe [CEA, DEN, Saclay, DM2S, SERMA, F-91191 Gif-sur-Yvette (France)

    2014-10-15

    3D Monte Carlo transport codes are extensively used in neutronic analysis, especially in radiation protection and shielding analyses for fission and fusion reactors. TRIPOLI-4{sup ®} is a Monte Carlo code developed by CEA. The aim of this paper is to show its capability to model a large-scale fusion reactor with a complex neutron source and geometry. A benchmark between MCNP5 and TRIPOLI-4{sup ®} on the ITER A-lite model was carried out; the neutron flux, the nuclear heating in the blankets and the tritium production rate in the European TBMs were evaluated and compared. The methodology to build the TRIPOLI-4{sup ®} A-lite model is based on MCAM and the MCNP A-lite model. Simplified TBMs, from KIT, were integrated in the equatorial port. A good agreement between MCNP and TRIPOLI-4{sup ®} is shown; the discrepancies remain mostly within the statistical uncertainties.

  9. Calculations for a BWR Lattice with Adjacent Gadolinium Pins Using the Monte Carlo Cell Code Serpent v.1.1.7

    Directory of Open Access Journals (Sweden)

    Diego Ferraro

    2011-01-01

    Full Text Available Monte Carlo neutron transport codes are usually used to perform criticality calculations and to solve shielding problems due to their capability to model complex systems without major approximations. However, these codes demand high computational resources. The improvement in computer capabilities leads to several new applications of Monte Carlo neutron transport codes. An interesting one is to use this method to perform cell-level fuel assembly calculations in order to obtain few-group constants to be used in core calculations. In the present work the recently developed Serpent v.1.1.7 cell-oriented neutronic calculation code from VTT is used to perform cell calculations of a theoretical BWR lattice benchmark with burnable poisons, and the main results are compared with reported ones and with calculations performed with Condor v.2.61, INVAP's collision probability neutronic cell code.

  10. Benchmark Verification of the Multi-group Monte Carlo Transport and Point-Burnup Coupled Code System TRITON

    Institute of Scientific and Technical Information of China (English)

    武祥; 若夕子; 于涛; 谢金森; 陈昊威

    2014-01-01

    TRITON couples the multi-group Monte Carlo transport code KENO V.a and the point-burnup code ORIGEN-S. It features good adaptability to complex geometries, flexible cross-section processing and fast calculation speed. Based on the thorium-based fuel cell benchmark of the Idaho National Laboratory (INL), a verification of the TRITON burnup calculation was performed, which showed good agreement with the results obtained by INL with the MOCUP code. Furthermore, an analysis of the burnup nuclide selection schemes in TRITON showed that, for thorium-based fuel, correct results can be obtained only when the important nuclides of the Th-U cycle are included. These conclusions support further applications of TRITON.

  11. The Serpent Monte Carlo Code: Status, Development and Applications in 2013

    Science.gov (United States)

    Leppänen, Jaakko; Pusa, Maria; Viitanen, Tuomas; Valtavirta, Ville; Kaltiaisenaho, Toni

    2014-06-01

    The Serpent Monte Carlo reactor physics burnup calculation code has been developed at VTT Technical Research Centre of Finland since 2004, and is currently used in 100 universities and research organizations around the world. This paper presents the brief history of the project, together with the currently available methods and capabilities and plans for future work. Typical user applications are introduced in the form of a summary review on Serpent-related publications over the past few years.

  12. NEPHTIS: 2D/3D validation elements using MCNP4c and TRIPOLI4 Monte-Carlo codes

    Energy Technology Data Exchange (ETDEWEB)

    Courau, T.; Girardi, E. [EDF R and D/SINETICS, 1av du General de Gaulle, F92141 Clamart CEDEX (France); Damian, F.; Moiron-Groizard, M. [DEN/DM2S/SERMA/LCA, CEA Saclay, F91191 Gif-sur-Yvette CEDEX (France)

    2006-07-01

    High Temperature Reactors (HTRs) appear as a promising concept for the next generation of nuclear power applications. The CEA, in collaboration with AREVA-NP and EDF, is developing a core modeling tool dedicated to the prismatic block-type reactor. NEPHTIS (Neutronics Process for HTR Innovating System) is a deterministic code system based on a standard two-step transport-diffusion approach (APOLLO2/CRONOS2). Validation of such deterministic schemes usually relies on Monte-Carlo (MC) codes used as a reference. However, when dealing with large HTR cores, the fission source stabilization of MC codes is rather poor. In spite of this, it is shown in this paper that MC simulations may be used as a reference for a wide range of configurations. The first part of the paper is devoted to 2D and 3D MC calculations of a HTR core with control devices. Comparisons between the MCNP4c and TRIPOLI4 MC codes are performed and show very consistent results. Finally, the last part of the paper is devoted to the code-to-code validation of the NEPHTIS deterministic scheme. (authors)

  13. ASCOT: redesigned Monte Carlo code for simulations of minority species in tokamak plasmas

    CERN Document Server

    Hirvijoki, Eero; Koskela, Tuomas; Kurki-Suonio, Taina; Miettunen, Juho; Sipilä, Seppo; Snicker, Antti; Äkäslompolo, Simppa

    2013-01-01

    A comprehensive description of methods for Monte Carlo studies of fast ions and impurity species in tokamak plasmas is presented. The described methods include Hamiltonian orbit-following in particle and guiding center phase space, test particle or guiding center solution of the kinetic equation applying stochastic differential equations in the presence of Coulomb collisions, neoclassical tearing modes and Alfvén eigenmodes as electromagnetic perturbations relevant for fast ions, together with plasma flow and atomic reactions relevant for impurity studies. Applying the methods, a complete reimplementation of a well-established minority species code is carried out as a response both to the increase in computing power during the last twenty years and to the weakly structured growth of the previous code, which has made the implementation of additional models impractical. Also, a thorough benchmark between the previous code and the reimplementation is accomplished, showing good agreement between the codes.

  14. Validation of deterministic and Monte Carlo codes for neutronics calculation of the IRT-type research reactor

    Science.gov (United States)

    Shchurovskaya, M. V.; Alferov, V. P.; Geraskin, N. I.; Radaev, A. I.

    2017-01-01

    The results of the validation of a research reactor calculation using Monte Carlo and deterministic codes against experimental data and based on code-to-code comparison are presented. The continuous-energy Monte Carlo code MCU-PTR and the nodal diffusion-based deterministic code TIGRIS were used for full 3-D calculation of the IRT MEPhI research reactor. The validation included investigations of the reactor with the existing high-enriched uranium fuel (HEU, 90 w/o) and with low-enriched uranium fuel (LEU, 19.7 w/o, U-9%Mo).

  15. Parallel Monte Carlo transport modeling in the context of a time-dependent, three-dimensional multi-physics code

    Energy Technology Data Exchange (ETDEWEB)

    Procassini, R.J. [Lawrence Livermore National lab., CA (United States)

    1997-12-31

    The fine-scale, multi-space resolution that is envisioned for accurate simulations of complex weapons systems in three spatial dimensions implies flop-rate and memory-storage requirements that will only be obtained in the near future through the use of parallel computational techniques. Since the Monte Carlo transport models in these simulations usually stress both of these computational resources, they are prime candidates for parallelization. The MONACO Monte Carlo transport package, which is currently under development at LLNL, will utilize two types of parallelism within the context of a multi-physics design code: decomposition of the spatial domain across processors (spatial parallelism) and distribution of particles in a given spatial subdomain across additional processors (particle parallelism). This implementation of the package will utilize explicit data communication between domains (message passing). Such a parallel implementation of a Monte Carlo transport model will result in non-deterministic communication patterns. The communication of particles between subdomains during a Monte Carlo time step may require a significant level of effort to achieve a high parallel efficiency.

  16. Simulation of the Mg(Ar) ionization chamber currents by different Monte Carlo codes in benchmark gamma fields

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Yi-Chun [Department of Biomedical Engineering and Environmental Sciences, National Tsing Hua University, Taiwan (China); Liu, Yuan-Hao, E-mail: yhl.taiwan@gmail.com [Boron Neutron Capture Therapy Center, Nuclear Science and Technology Development Center, National Tsing Hua University, No. 101, Section 2, Kuang-Fu Road, Hsinchu City 30013, Taiwan (China); Nievaart, Sander [Institute for Energy, Joint Research Centre, European Commission, Petten (Netherlands); Chen, Yen-Fu [Department of Engineering and System Science, National Tsing Hua University, Taiwan (China); Wu, Shu-Wei; Chou, Wen-Tsae [Department of Biomedical Engineering and Environmental Sciences, National Tsing Hua University, Taiwan (China); Jiang, Shiang-Huei [Institute of Nuclear Engineering and Science, National Tsing Hua University, Taiwan (China)

    2011-10-01

    High-energy photon (over 10 MeV) and neutron beams adopted in radiobiology and radiotherapy always produce mixed neutron/gamma-ray fields. Mg(Ar) ionization chambers are commonly applied to determine the gamma-ray dose because of their neutron-insensitive characteristics. Nowadays, many perturbation corrections for accurate dose estimation and many treatment planning systems are based on the Monte Carlo technique. The Monte Carlo codes EGSnrc, FLUKA, GEANT4, MCNP5, and MCNPX were used to evaluate energy-dependent response functions of the Exradin M2 Mg(Ar) ionization chamber to a parallel photon beam with mono-energies from 20 keV to 20 MeV. For the sake of validation, measurements were carefully performed in well-defined (a) primary M-100 X-ray calibration field, (b) primary {sup 60}Co calibration beam, (c) 6-MV, and (d) 10-MV therapeutic beams in hospital. At energies below 100 keV, MCNP5 and MCNPX both had lower responses than the other codes. For energies above 1 MeV, the MCNP ITS-mode results closely resembled those of the other three codes and the differences were within 5%. Compared to the measured currents, MCNP5 and MCNPX using the ITS-mode showed excellent agreement for the {sup 60}Co and 10-MV beams, but in the X-ray energy region the deviations reached 17%. This work gives a better insight into the performance of different Monte Carlo codes in photon-electron transport calculations. Regarding applications to mixed-field dosimetry such as BNCT, MCNP with ITS-mode is recognized by this work as the most suitable tool.
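
    A sketch of how such energy-dependent response functions are typically used once they have been computed: fold the response with the photon fluence spectrum of the beam to predict the relative chamber signal. The grids and values below are placeholders, not the published response functions.

        import numpy as np

        # Hypothetical energy grid (MeV), Monte Carlo chamber response (relative charge per unit
        # fluence) and relative photon fluence spectrum of the beam being measured.
        energy   = np.array([0.05, 0.1, 0.5, 1.0, 2.0, 6.0, 10.0])
        response = np.array([0.60, 0.80, 1.00, 1.05, 1.10, 1.20, 1.25])
        fluence  = np.array([0.00, 0.05, 0.25, 0.30, 0.25, 0.10, 0.05])

        # Spectrum-averaged response: fluence-weighted mean of R(E) over the beam spectrum.
        mean_response = np.sum(response * fluence) / np.sum(fluence)
        print(f"spectrum-averaged chamber response = {mean_response:.3f} (relative units)")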

  17. FAST CONVERGENT MONTE CARLO RECEIVER FOR OFDM SYSTEMS

    Institute of Scientific and Technical Information of China (English)

    Wu Lili; Liao Guisheng; Bao Zheng; Shang Yong

    2005-01-01

    The paper investigates the problem of the design of an optimal Orthogonal Frequency Division Multiplexing (OFDM) receiver against unknown frequency selective fading. A fast convergent Monte Carlo receiver is proposed. In the proposed method, the Markov Chain Monte Carlo (MCMC) methods are employed for the blind Bayesian detection without channel estimation. Meanwhile, with the exploitation of the characteristics of OFDM systems, two methods are employed to improve the convergence rate and enhance the efficiency of MCMC algorithms. One is the integration of the posterior distribution function with respect to the associated channel parameters, which is involved in the derivation of the objective distribution function; the other is the intra-symbol differential coding for the elimination of the bimodality problem resulting from the presence of unknown fading channels. Moreover, no matrix inversion is needed with the use of the orthogonality property of OFDM modulation and hence the computational load is significantly reduced. Computer simulation results show the effectiveness of the fast convergent Monte Carlo receiver.

  18. Experimental validation of the DPM Monte Carlo code using minimally scattered electron beams in heterogeneous media

    Science.gov (United States)

    Chetty, Indrin J.; Moran, Jean M.; Nurushev, Teamor S.; McShan, Daniel L.; Fraass, Benedick A.; Wilderman, Scott J.; Bielajew, Alex F.

    2002-06-01

    A comprehensive set of measurements and calculations has been conducted to investigate the accuracy of the Dose Planning Method (DPM) Monte Carlo code for electron beam dose calculations in heterogeneous media. Measurements were made using 10 MeV and 50 MeV minimally scattered, uncollimated electron beams from a racetrack microtron. Source distributions for the Monte Carlo calculations were reconstructed from in-air ion chamber scans and then benchmarked against measurements in a homogeneous water phantom. The in-air spatial distributions were found to have FWHM of 4.7 cm and 1.3 cm, at 100 cm from the source, for the 10 MeV and 50 MeV beams respectively. Energy spectra for the electron beams were determined by simulating the components of the microtron treatment head using the code MCNP4B. Profile measurements were made using an ion chamber in a water phantom with slabs of lung or bone-equivalent materials submerged at various depths. DPM calculations are, on average, within 2% agreement with measurement for all geometries except for the 50 MeV incident on a 6 cm lung-equivalent slab. Measurements using approximately monoenergetic, 50 MeV, 'pencil-beam'-type electrons in heterogeneous media provide conditions for maximum electronic disequilibrium and hence present a stringent test of the code's electron transport physics; the agreement noted between calculation and measurement illustrates that the DPM code is capable of accurate dose calculation even under such conditions.

  19. ETR/ITER systems code

    Energy Technology Data Exchange (ETDEWEB)

    Barr, W.L.; Bathke, C.G.; Brooks, J.N.; Bulmer, R.H.; Busigin, A.; DuBois, P.F.; Fenstermacher, M.E.; Fink, J.; Finn, P.A.; Galambos, J.D.; Gohar, Y.; Gorker, G.E.; Haines, J.R.; Hassanein, A.M.; Hicks, D.R.; Ho, S.K.; Kalsi, S.S.; Kalyanam, K.M.; Kerns, J.A.; Lee, J.D.; Miller, J.R.; Miller, R.L.; Myall, J.O.; Peng, Y-K.M.; Perkins, L.J.; Spampinato, P.T.; Strickler, D.J.; Thomson, S.L.; Wagner, C.E.; Willms, R.S.; Reid, R.L. (ed.)

    1988-04-01

    A tokamak systems code capable of modeling experimental test reactors has been developed and is described in this document. The code, named TETRA (for Tokamak Engineering Test Reactor Analysis), consists of a series of modules, each describing a tokamak system or component, controlled by an optimizer/driver. This code development was a national effort in that the modules were contributed by members of the fusion community and integrated into a code by the Fusion Engineering Design Center. The code has been checked out on the Cray computers at the National Magnetic Fusion Energy Computing Center and has satisfactorily simulated the Tokamak Ignition/Burn Experimental Reactor II (TIBER) design. A feature of this code is the ability to perform optimization studies through the use of a numerical software package, which iterates prescribed variables to satisfy a set of prescribed equations or constraints. This code will be used to perform sensitivity studies for the proposed International Thermonuclear Experimental Reactor (ITER). 22 figs., 29 tabs.

  20. METHES: A Monte Carlo collision code for the simulation of electron transport in low temperature plasmas

    Science.gov (United States)

    Rabie, M.; Franck, C. M.

    2016-06-01

    We present a freely available MATLAB code for the simulation of electron transport in arbitrary gas mixtures in the presence of uniform electric fields. For steady-state electron transport, the program provides the transport coefficients, reaction rates and the electron energy distribution function. The program uses established Monte Carlo techniques and is compatible with the electron scattering cross section files from the open-access Plasma Data Exchange Project LXCat. The code is written in object-oriented design, allowing the tracing and visualization of the spatiotemporal evolution of electron swarms and the temporal development of the mean energy and the electron number due to attachment and/or ionization processes. We benchmark our code with well-known model gases as well as the real gases argon, N2, O2, CF4, SF6 and mixtures of N2 and O2.
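
    The abstract above describes the classic swarm Monte Carlo loop: sample an exponential free-flight time, accelerate the electron in the field, then sample a collision. A heavily simplified sketch of that loop is given below (one electron, a single assumed gas, constant collision frequency, elastic collisions only); a real code such as METHES uses tabulated LXCat cross sections, includes inelastic channels and averages over many electrons to obtain transport coefficients.

        import numpy as np

        rng = np.random.default_rng(1)
        QE, ME = 1.602e-19, 9.109e-31            # electron charge (C) and mass (kg)
        M_GAS = 39.95 * 1.661e-27                # argon atom mass (kg); single-gas toy example
        E_FIELD = np.array([0.0, 0.0, -1.0e5])   # uniform electric field (V/m), assumed value
        NU_COLL = 1.0e12                         # assumed constant total collision frequency (1/s)
        ACC = (-QE / ME) * E_FIELD               # electron acceleration between collisions

        v = np.zeros(3)                          # one test electron, initially at rest
        energies_eV = []
        for _ in range(50000):
            dt = -np.log(rng.random()) / NU_COLL          # exponentially distributed free-flight time
            v = v + ACC * dt                              # free flight in the field up to the collision
            speed = np.linalg.norm(v)
            mu = 2.0 * rng.random() - 1.0                 # isotropic scattering: new direction cosine
            speed *= np.sqrt(1.0 - (2.0 * ME / M_GAS) * (1.0 - mu))   # small elastic energy loss
            phi = 2.0 * np.pi * rng.random()
            sin_t = np.sqrt(1.0 - mu * mu)
            v = speed * np.array([sin_t * np.cos(phi), sin_t * np.sin(phi), mu])
            energies_eV.append(0.5 * ME * speed**2 / QE)

        # Illustrative number only: real swarm parameters follow from the actual cross sections.
        print(f"mean electron energy over the run: {np.mean(energies_eV):.1f} eV")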

  1. SU-E-T-254: Optimization of GATE and PHITS Monte Carlo Code Parameters for Uniform Scanning Proton Beam Based On Simulation with FLUKA General-Purpose Code

    Energy Technology Data Exchange (ETDEWEB)

    Kurosu, K [Department of Radiation Oncology, Osaka University Graduate School of Medicine, Osaka (Japan); Department of Medical Physics and Engineering, Osaka University Graduate School of Medicine, Osaka (Japan); Takashina, M; Koizumi, M [Department of Medical Physics and Engineering, Osaka University Graduate School of Medicine, Osaka (Japan); Das, I; Moskvin, V [Department of Radiation Oncology, Indiana University School of Medicine, Indianapolis, IN (United States)

    2014-06-01

    Purpose: Monte Carlo codes are becoming important tools for proton beam dosimetry. However, the relationships between the customizing parameters and the percentage depth dose (PDD) of the GATE and PHITS codes have not been reported; in this study they are evaluated for PDD and proton range and compared with the FLUKA code and experimental data. Methods: The beam delivery system of the Indiana University Health Proton Therapy Center was modeled for the uniform scanning beam in FLUKA and transferred identically into GATE and PHITS. This computational model was built from the blueprint and validated with the commissioning data. The three parameters evaluated are the maximum step size, the cut-off energy, and the physical and transport model. The dependence of the PDDs on the customizing parameters was compared with the published results of previous studies. Results: The optimal parameters for the simulation of the whole beam delivery system were defined by referring to the calculation results obtained with each parameter. Although the PDDs from FLUKA and the experimental data show a good agreement, those of GATE and PHITS obtained with our optimal parameters show a minor discrepancy. The measured proton range R90 was 269.37 mm, compared to the calculated ranges of 269.63 mm, 268.96 mm, and 270.85 mm with FLUKA, GATE and PHITS, respectively. Conclusion: We evaluated the dependence of the PDDs obtained with the GATE and PHITS general-purpose Monte Carlo codes on the customizing parameters by using the whole computational model of the treatment nozzle. The optimal parameters for the simulation were then defined by referring to the calculation results. The physical model, the particle transport mechanics and the different geometry-based descriptions need accurate customization in the three simulation codes to agree with experimental data for artifact-free Monte Carlo simulation. This study was supported by Grants-in-Aid for Cancer Research (H22-3rd Term Cancer Control-General-043) from the Ministry of Health
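
    A short sketch of how a range such as the R90 quoted above can be extracted from a simulated or measured PDD curve by interpolating on the distal fall-off. The depth-dose points below are placeholders, not data from the study.

        import numpy as np

        # Hypothetical percentage-depth-dose points around the Bragg peak (depth in mm).
        depth = np.array([240.0, 250.0, 260.0, 265.0, 268.0, 270.0, 272.0, 275.0])
        pdd   = np.array([ 60.0,  75.0,  95.0, 100.0,  96.0,  85.0,  55.0,  10.0])

        # R90: depth on the distal edge where the dose has fallen to 90% of the maximum.
        i_max = int(np.argmax(pdd))
        distal_depth, distal_pdd = depth[i_max:], pdd[i_max:]
        # np.interp needs increasing x, so interpolate depth as a function of the (falling) dose.
        r90 = np.interp(90.0, distal_pdd[::-1], distal_depth[::-1])
        print(f"R90 = {r90:.2f} mm")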

  2. Srna-Monte Carlo codes for proton transport simulation in combined and voxelized geometries

    CERN Document Server

    Ilic, R D; Stankovic, S J

    2002-01-01

    This paper describes new Monte Carlo codes for proton transport simulations in complex geometrical forms and in materials of different composition. The SRNA codes were developed for three-dimensional (3D) dose distribution calculation in proton therapy and dosimetry. The model of these codes is based on the theory of proton multiple scattering and a simple model of compound nucleus decay. The developed package consists of two codes: SRNA-2KG and SRNA-VOX. The first code simulates proton transport in combined geometry that can be described by planes and second-order surfaces. The second one uses a voxelized geometry of material zones and is specifically adapted for the use of patient computed tomography data. Transition probabilities for both codes are given by the SRNADAT program. In this paper, we present the models and algorithms of our programs, as well as the results of the numerical experiments we have carried out applying them, along with the results of proton transport simulation obtaine...

  3. Comparison of Geant4-DNA simulation of S-values with other Monte Carlo codes

    Energy Technology Data Exchange (ETDEWEB)

    André, T. [Université Bordeaux 1, CENBG, UMR 5797, F-33170 Gradignan (France); CNRS, IN2P3, CENBG, UMR 5797, F-33170 Gradignan (France); Morini, F. [Research Group of Theoretical Chemistry and Molecular Modelling, Hasselt University, Agoralaan Gebouw D, B-3590 Diepenbeek (Belgium); Karamitros, M. [Université Bordeaux 1, CENBG, UMR 5797, F-33170 Gradignan (France); CNRS, IN2P3, CENBG, UMR 5797, F-33170 Gradignan (France); CNRS, INCIA, UMR 5287, F-33400 Talence (France); Delorme, R. [LPSC, Université Joseph Fourier Grenoble 1, CNRS/IN2P3, Grenoble INP, 38026 Grenoble (France); CEA, LIST, F-91191 Gif-sur-Yvette (France); Le Loirec, C. [CEA, LIST, F-91191 Gif-sur-Yvette (France); Campos, L. [Departamento de Física, Universidade Federal de Sergipe, São Cristóvão (Brazil); Champion, C. [Université Bordeaux 1, CENBG, UMR 5797, F-33170 Gradignan (France); CNRS, IN2P3, CENBG, UMR 5797, F-33170 Gradignan (France); Groetz, J.-E.; Fromm, M. [Université de Franche-Comté, Laboratoire Chrono-Environnement, UMR CNRS 6249, Besançon (France); Bordage, M.-C. [Laboratoire Plasmas et Conversion d’Énergie, UMR 5213 CNRS-INPT-UPS, Université Paul Sabatier, Toulouse (France); Perrot, Y. [Laboratoire de Physique Corpusculaire, UMR 6533, Aubière (France); Barberet, Ph. [Université Bordeaux 1, CENBG, UMR 5797, F-33170 Gradignan (France); CNRS, IN2P3, CENBG, UMR 5797, F-33170 Gradignan (France); and others

    2014-01-15

    Monte Carlo simulations of S-values have been carried out with the Geant4-DNA extension of the Geant4 toolkit. The S-values have been simulated for monoenergetic electrons with energies ranging from 0.1 keV up to 20 keV, in liquid water spheres (for four radii, chosen between 10 nm and 1 μm), and for electrons emitted by five isotopes of iodine (131, 132, 133, 134 and 135), in liquid water spheres of varying radius (from 15 μm up to 250 μm). The results have been compared to those obtained from other Monte Carlo codes and from other published data. The use of the Kolmogorov–Smirnov test has made it possible to confirm the statistical compatibility of all simulation results.

  4. Neutron cross-section probability tables in TRIPOLI-3 Monte Carlo transport code

    Energy Technology Data Exchange (ETDEWEB)

    Zheng, S.H.; Vergnaud, T.; Nimal, J.C. [Commissariat a l'Energie Atomique, Gif-sur-Yvette (France). Lab. d'Etudes de Protection et de Probabilite

    1998-03-01

    Neutron transport calculations need an accurate treatment of cross sections. Two methods (multi-group and pointwise) are usually used. A third one, the probability table (PT) method, has been developed to produce a set of cross-section libraries, well adapted to describe the neutron interaction in the unresolved resonance energy range. Its advantage is to present properly the neutron cross-section fluctuation within a given energy group, allowing correct calculation of the self-shielding effect. Also, this PT cross-section representation is suitable for simulation of neutron propagation by the Monte Carlo method. The implementation of PTs in the TRIPOLI-3 three-dimensional general Monte Carlo transport code, developed at Commissariat a l'Energie Atomique, and several validation calculations are presented. The PT method is proved to be valid not only in the unresolved resonance range but also in all the other energy ranges.
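
    A sketch of how a cross section is drawn from a probability table at a given unresolved-resonance energy: a band is selected from the cumulative band probabilities and that band's cross-section value is used for the next flight. The table below is illustrative, not an actual PT library entry.

        import numpy as np

        rng = np.random.default_rng(42)

        # Hypothetical probability table at one energy point: band probabilities and the
        # corresponding total cross-section values (barns) in the unresolved resonance range.
        band_prob = np.array([0.20, 0.30, 0.30, 0.15, 0.05])
        band_xs   = np.array([ 8.0, 11.0, 15.0, 25.0, 60.0])
        cdf = np.cumsum(band_prob)

        def sample_total_xs():
            """Sample a total cross section from the probability table (one band per draw)."""
            band = int(np.searchsorted(cdf, rng.random()))
            return band_xs[band]

        samples = [sample_total_xs() for _ in range(100000)]
        print(f"sampled average sigma_t = {np.mean(samples):.2f} b "
              f"(expected table average {np.dot(band_prob, band_xs):.2f} b)")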

  5. A fast Monte Carlo code for proton transport in radiation therapy based on MCNPX

    OpenAIRE

    Keyvan Jabbari; Jan Seuntjens

    2014-01-01

    An important requirement for proton therapy is a software for dose calculation. Monte Carlo is the most accurate method for dose calculation, but it is very slow. In this work, a method is developed to improve the speed of dose calculation. The method is based on pre-generated tracks for particle transport. The MCNPX code has been used for generation of tracks. A set of data including the track of the particle was produced in each particular material (water, air, lung tissue, bone, and soft t...

  6. Automated importance generation and biasing techniques for Monte Carlo shielding techniques by the TRIPOLI-3 code

    Energy Technology Data Exchange (ETDEWEB)

    Both, J.P.; Nimal, J.C.; Vergnaud, T. (CEA Centre d' Etudes Nucleaires de Saclay, 91 - Gif-sur-Yvette (France). Service d' Etudes des Reacteurs et de Mathematiques Appliquees)

    1990-01-01

    We discuss an automated biasing procedure for generating the parameters necessary to achieve efficient biased Monte Carlo shielding calculations. The biasing techniques considered here are the exponential transform and collision biasing, deriving from the concept of the biased game based on the importance function. We use a simple model of the importance function with exponential attenuation as the distance to the detector increases. This importance function is generated on a three-dimensional mesh covering the geometry, using graph theory algorithms. This scheme is currently being implemented in the third version of the neutron and gamma-ray transport code TRIPOLI-3. (author).
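
    A sketch of the exponential-transform idea referred to above: the total cross section seen along the flight toward the detector is artificially reduced so that path lengths are stretched, and the particle weight is multiplied by the ratio of the true to the biased path-length density so the game remains fair. Parameter names and values are illustrative, not TRIPOLI input.

        import numpy as np

        rng = np.random.default_rng(0)

        SIGMA_T = 0.5   # true total cross section (1/cm), assumed value
        P_BIAS  = 0.6   # exponential-transform parameter, 0 <= p < 1
        MU      = 1.0   # cosine between the flight direction and the direction to the detector

        def biased_flight(weight):
            """Sample one flight distance with the exponential transform and correct the weight."""
            sigma_star = SIGMA_T * (1.0 - P_BIAS * MU)        # biased (reduced) cross section
            s = -np.log(rng.random()) / sigma_star            # stretched path length
            # Weight correction = true pdf / biased pdf evaluated at the sampled distance.
            weight *= (SIGMA_T / sigma_star) * np.exp(-(SIGMA_T - sigma_star) * s)
            return s, weight

        s, w = biased_flight(1.0)
        print(f"sampled path = {s:.2f} cm, corrected weight = {w:.3f}")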

  7. The EGS5 Code System

    Energy Technology Data Exchange (ETDEWEB)

    Hirayama, Hideo; Namito, Yoshihito; /KEK, Tsukuba; Bielajew, Alex F.; Wilderman, Scott J.; U., Michigan; Nelson, Walter R.; /SLAC

    2005-12-20

    In the nineteen years since EGS4 was released, it has been used in a wide variety of applications, particularly in medical physics, radiation measurement studies, and industrial development. Every new user and every new application brings new challenges for Monte Carlo code designers, and code refinements and bug fixes eventually result in a code that becomes difficult to maintain. Several of the code modifications represented significant advances in electron and photon transport physics, and required a more substantial renovation than code patching. Moreover, the arcane MORTRAN3[48] computer language of EGS4 was highest on the complaint list of the users of EGS4. The size of the EGS4 user base is difficult to measure, as there never existed a formal user registration process. However, some idea of the numbers may be gleaned from the number of EGS4 manuals that were produced and distributed at SLAC: almost three thousand. Consequently, the EGS5 project was undertaken. It was decided to employ the FORTRAN 77 compiler, yet include, as much as possible, the structural beauty and power of MORTRAN3. This report consists of four chapters and several appendices. Chapter 1 is an introduction to EGS5 and to this report in general. We suggest that you read it. Chapter 2 is a major update of similar chapters in the old EGS4 report[126] (SLAC-265) and the old EGS3 report[61] (SLAC-210), in which all the details of the old physics (i.e., models which were carried over from EGS4) and the new physics are gathered together. The descriptions of the new physics are extensive, and not for the faint of heart. Detailed knowledge of the contents of Chapter 2 is not essential in order to use EGS, but sophisticated users should be aware of its contents. In particular, details of the restrictions on the range of applicability of EGS are dispersed throughout the chapter. First-time users of EGS should skip Chapter 2 and come back to it later if necessary. With the release of the EGS4 version ...

  8. Antiproton annihilation physics in the Monte Carlo particle transport code SHIELD-HIT12A

    Energy Technology Data Exchange (ETDEWEB)

    Taasti, Vicki Trier; Knudsen, Helge [Dept. of Physics and Astronomy, Aarhus University (Denmark); Holzscheiter, Michael H. [Dept. of Physics and Astronomy, Aarhus University (Denmark); Dept. of Physics and Astronomy, University of New Mexico (United States); Sobolevsky, Nikolai [Institute for Nuclear Research of the Russian Academy of Sciences (INR), Moscow (Russian Federation); Moscow Institute of Physics and Technology (MIPT), Dolgoprudny (Russian Federation); Thomsen, Bjarne [Dept. of Physics and Astronomy, Aarhus University (Denmark); Bassler, Niels, E-mail: bassler@phys.au.dk [Dept. of Physics and Astronomy, Aarhus University (Denmark)

    2015-03-15

    The Monte Carlo particle transport code SHIELD-HIT12A is designed to simulate therapeutic beams for cancer radiotherapy with fast ions. SHIELD-HIT12A allows creation of antiproton beam kernels for the treatment planning system TRiP98, but first it must be benchmarked against experimental data. An experimental depth dose curve obtained by the AD-4/ACE collaboration was compared with an earlier version of SHIELD-HIT, but since then inelastic annihilation cross sections for antiprotons have been updated and a more detailed geometric model of the AD-4/ACE experiment was applied. Furthermore, the Fermi–Teller Z-law, which is implemented by default in SHIELD-HIT12A, has been shown not to be a good approximation for the capture probability of negative projectiles by nuclei. We investigate other theories which have been developed and give a better agreement with experimental findings. The consequence of these updates is tested by comparing simulated data with the antiproton depth dose curve in water. It is found that the implementation of these new capture probabilities results in an overestimation of the depth dose curve in the Bragg peak. This can be mitigated by scaling the antiproton collision cross sections, which restores the agreement, but some small deviations still remain. Best agreement is achieved by using the most recent antiproton collision cross sections and the Fermi–Teller Z-law, even though experimental data conclude that the Z-law describes annihilation on compounds inadequately. We conclude that more experimental cross section data are needed in the lower energy range in order to resolve this contradiction, ideally combined with more rigorous models for annihilation on compounds.

  9. Quality control of the new Monte Carlo code of the MultiPlan planner for CyberKnife; Control de calidad del nuevo codigo Monte Carlo de planificador multiplan para Cyberknife

    Energy Technology Data Exchange (ETDEWEB)

    Fayos Ferrer, F.; Antolin Sanmartin, E.; Simon de Blas, R.; Palazon Cano, I.; Bertomeu Padin, T.; Gutierrez Sarraga, J.; Rey Portoles, G.

    2011-07-01

    In this work the Monte Carlo dose calculation code included in the latest versions of the MultiPlan planner is subjected to various dosimetric tests. Its results, and those of the Ray-Tracing (RT) algorithm present since the earliest versions, are compared with the experimental results obtained by photographic dosimetry and ionization chamber measurements.

  10. A portable, parallel, object-oriented Monte Carlo neutron transport code in C++

    Energy Technology Data Exchange (ETDEWEB)

    Lee, S.R.; Cummings, J.C. [Los Alamos National Lab., NM (United States); Nolen, S.D. [Texas A and M Univ., College Station, TX (United States)]|[Los Alamos National Lab., NM (United States)

    1997-05-01

    We have developed a multi-group Monte Carlo neutron transport code using C++ and the Parallel Object-Oriented Methods and Applications (POOMA) class library. This transport code, called MC++, currently computes k and {alpha}-eigenvalues and is portable to and runs parallel on a wide variety of platforms, including MPPs, clustered SMPs, and individual workstations. It contains appropriate classes and abstractions for particle transport and, through the use of POOMA, for portable parallelism. Current capabilities of MC++ are discussed, along with physics and performance results on a variety of hardware, including all Accelerated Strategic Computing Initiative (ASCI) hardware. Current parallel performance indicates the ability to compute {alpha}-eigenvalues in seconds to minutes rather than hours to days. Future plans and the implementation of a general transport physics framework are also discussed.
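
    A minimal sketch of the cycle-by-cycle k-eigenvalue estimation such a code performs, reduced to a one-group, infinite homogeneous medium so that the whole estimator fits in a few lines (the expected answer is simply nu-Sigma_f/Sigma_a). Cross sections and batch sizes are assumptions; a real code also iterates the spatial fission source between cycles.

        import numpy as np

        rng = np.random.default_rng(7)

        # Assumed one-group, infinite-medium data (1/cm); analytic k = nu*Sigma_f / Sigma_a.
        NU_SIGMA_F, SIGMA_A, SIGMA_S = 0.35, 0.30, 0.70
        SIGMA_T = SIGMA_A + SIGMA_S

        n_hist, n_cycles, n_skip = 5000, 40, 10
        k_cycles = []
        for cycle in range(n_cycles):
            score = 0.0
            for _ in range(n_hist):
                alive = True
                while alive:                                  # follow one neutron, collision by collision
                    score += NU_SIGMA_F / SIGMA_T             # collision estimator of fission production
                    alive = rng.random() > SIGMA_A / SIGMA_T  # survives only if the collision is a scatter
            if cycle >= n_skip:                               # discard the first cycles, as in a real code
                k_cycles.append(score / n_hist)

        k = np.mean(k_cycles)
        err = np.std(k_cycles) / np.sqrt(len(k_cycles))
        print(f"k-inf = {k:.4f} +/- {err:.4f}   (analytic {NU_SIGMA_F / SIGMA_A:.4f})")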

  11. Preliminary analyses for HTTR's start-up physics tests by Monte Carlo code MVP

    Energy Technology Data Exchange (ETDEWEB)

    Nojiri, Naoki [Science and Technology Agency, Tokyo (Japan); Nakano, Masaaki; Ando, Hiroei; Fujimoto, Nozomu; Takeuchi, Mitsuo; Fujisaki, Shingo; Yamashita, Kiyonobu

    1998-08-01

    Analyses of the start-up physics tests for the High Temperature Engineering Test Reactor (HTTR) have been carried out with the Monte Carlo code MVP, which is based on the continuous energy method. Heterogeneous core structures were modeled precisely, such as the fuel compacts, fuel rods, coolant channels, burnable poisons, control rods, control rod insertion holes, reserved shutdown pellet insertion holes, gaps between graphite blocks, etc. Such precise modeling of the core structures is difficult with diffusion calculations. From the analytical results, the following were confirmed: the first criticality will be achieved at around 16 fuel columns loaded, and the reactivity at the first criticality can be controlled by only one control rod located at the center of the core, with the other fifteen control rods fully withdrawn. The excess reactivity, reactor shutdown margin and control rod criticality positions have been evaluated. These results were used for planning of the start-up physics tests. This report presents the analyses of the start-up physics tests for HTTR by the MVP code. (author)

  12. Generation of XS library for the reflector of VVER reactor core using Monte Carlo code Serpent

    Science.gov (United States)

    Usheva, K. I.; Kuten, S. A.; Khruschinsky, A. A.; Babichev, L. F.

    2017-01-01

    A physical model of the radial and axial reflector of a VVER-1200-like reactor core has been developed. Five types of radial reflector with different material compositions exist for the VVER reactor core, and 1D and 2D models were developed for all of them. The axial top and bottom reflectors are described by the 1D model. A two-group XS library for the diffusion code DYN3D has been generated for all types of reflectors using the Serpent 2 Monte Carlo code. The power distribution in the reactor core calculated with DYN3D is flattened in the core central region to a greater extent with the 2D model of the radial reflector than with its 1D model.

  13. Domain Decomposition of a Constructive Solid Geometry Monte Carlo Transport Code

    Energy Technology Data Exchange (ETDEWEB)

    O' Brien, M J; Joy, K I; Procassini, R J; Greenman, G M

    2008-12-07

    Domain decomposition has been implemented in a Constructive Solid Geometry (CSG) Monte Carlo neutron transport code. Previous methods to parallelize a CSG code relied entirely on particle parallelism; but in our approach we distribute the geometry as well as the particles across processors. This enables calculations whose geometric description is larger than what could fit in memory of a single processor, thus it must be distributed across processors. In addition to enabling very large calculations, we show that domain decomposition can speed up calculations compared to particle parallelism alone. We also show results of a calculation of the proposed Laser Inertial-Confinement Fusion-Fission Energy (LIFE) facility, which has 5.6 million CSG parts.
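
    A sketch of the bookkeeping behind spatial domain decomposition: each processor owns one slab of the geometry, tracks its own particles, and hands particles that stream across a slab boundary to the neighbouring processor. Plain Python lists stand in for the MPI exchange of a real code, and the slab geometry and cross section are invented.

        import numpy as np

        rng = np.random.default_rng(3)

        N_DOM, WIDTH = 4, 1.0            # geometry split into 4 slabs along x, one slab per "rank"
        SIGMA_T = 1.5                    # assumed total cross section (1/cm); absorb at first collision
        banks = [[] for _ in range(N_DOM)]
        banks[0] = [0.05] * 2000         # particle = its x position; all start in domain 0, moving in +x
        collisions = [0] * N_DOM

        while any(banks):
            outgoing = [[] for _ in range(N_DOM)]
            for d, bank in enumerate(banks):
                hi = (d + 1) * WIDTH
                for x in bank:
                    x_new = x - np.log(rng.random()) / SIGMA_T   # distance to the next collision
                    if x_new < hi:
                        collisions[d] += 1                       # collided (absorbed) in this domain
                    elif d + 1 < N_DOM:
                        outgoing[d + 1].append(hi)               # crossed the right face: hand over
                    # else: the particle leaked out of the last domain
            banks = outgoing                                     # stands in for the MPI particle exchange

        print("collisions per domain:", collisions)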

  14. The use of Monte Carlo radiation transport codes in radiation physics and dosimetry

    CERN Document Server

    CERN. Geneva; Ferrari, Alfredo; Silari, Marco

    2006-01-01

    Transport and interaction of electromagnetic radiation Interaction models and simulation schemes implemented in modern Monte Carlo codes for the simulation of coupled electron-photon transport will be briefly reviewed. In these codes, photon transport is simulated by using the detailed scheme, i.e., interaction by interaction. Detailed simulation is easy to implement, and the reliability of the results is only limited by the accuracy of the adopted cross sections. Simulations of electron and positron transport are more difficult, because these particles undergo a large number of interactions in the course of their slowing down. Different schemes for simulating electron transport will be discussed. Condensed algorithms, which rely on multiple-scattering theories, are comparatively fast, but less accurate than mixed algorithms, in which hard interactions (with energy loss or angular deflection larger than certain cut-off values) are simulated individually. The reliability, and limitations, of electron-interacti...

  15. Monte Carlo simulation of MOSFET detectors for high-energy photon beams using the PENELOPE code

    Energy Technology Data Exchange (ETDEWEB)

    Panettieri, Vanessa [Institut de Tecniques Energetiques, Universitat Politecnica de Catalunya, Diagonal 647, 08028 Barcelona (Spain); Duch, Maria Amor [Institut de Tecniques Energetiques, Universitat Politecnica de Catalunya, Diagonal 647, 08028 Barcelona (Spain); Jornet, Nuria [Servei de RadiofIsica i Radioproteccio, Hospital de la Santa Creu i San Pau Sant Antoni Maria Claret 167, 08025 Barcelona (Spain); Ginjaume, Merce [Institut de Tecniques Energetiques, Universitat Politecnica de Catalunya, Diagonal 647, 08028 Barcelona (Spain); Carrasco, Pablo [Servei de RadiofIsica i Radioproteccio, Hospital de la Santa Creu i San Pau Sant Antoni Maria Claret 167, 08025 Barcelona (Spain); Badal, Andreu [Institut de Tecniques Energetiques, Universitat Politecnica de Catalunya, Diagonal 647, 08028 Barcelona (Spain); Ortega, Xavier [Institut de Tecniques Energetiques, Universitat Politecnica de Catalunya, Diagonal 647, 08028 Barcelona (Spain); Ribas, Montserrat [Servei de RadiofIsica i Radioproteccio, Hospital de la Santa Creu i San Pau Sant Antoni Maria Claret 167, 08025 Barcelona (Spain)

    2007-01-07

    The aim of this work was the Monte Carlo (MC) simulation of the response of commercially available dosimeters based on metal oxide semiconductor field effect transistors (MOSFETs) for radiotherapeutic photon beams using the PENELOPE code. The studied Thomson and Nielsen TN-502-RD MOSFETs have a very small sensitive area of 0.04 mm{sup 2} and a thickness of 0.5 {mu}m which is placed on a flat kapton base and covered by a rounded layer of black epoxy resin. The influence of different metallic and Plastic water(TM) build-up caps, together with the orientation of the detector have been investigated for the specific application of MOSFET detectors for entrance in vivo dosimetry. Additionally, the energy dependence of MOSFET detectors for different high-energy photon beams (with energy >1.25 MeV) has been calculated. Calculations were carried out for simulated 6 MV and 18 MV x-ray beams generated by a Varian Clinac 1800 linear accelerator, a Co-60 photon beam from a Theratron 780 unit, and monoenergetic photon beams ranging from 2 MeV to 10 MeV. The results of the validation of the simulated photon beams show that the average difference between MC results and reference data is negligible, within 0.3%. MC simulated results of the effect of the build-up caps on the MOSFET response are in good agreement with experimental measurements, within the uncertainties. In particular, for the 18 MV photon beam the response of the detectors under a tungsten cap is 48% higher than for a 2 cm Plastic water(TM) cap and approximately 26% higher when a brass cap is used. This effect is demonstrated to be caused by positron production in the build-up caps of higher atomic number. This work also shows that the MOSFET detectors produce a higher signal when their rounded side is facing the beam (up to 6%) and that there is a significant variation (up to 50%) in the response of the MOSFET for photon energies in the studied energy range. All the results have shown that the PENELOPE code system

  16. Experimental validation of the DPM Monte Carlo code using minimally scattered electron beams in heterogeneous media

    Energy Technology Data Exchange (ETDEWEB)

    Chetty, Indrin J. [Department of Radiation Oncology, University of Michigan, Ann Arbor, MI (United States)]. E-mail: indrin@med.umich.edu; Moran, Jean M.; Nurushev, Teamor S.; McShan, Daniel L.; Fraass, Benedick A. [Department of Radiation Oncology, University of Michigan, Ann Arbor, MI (United States); Wilderman, Scott J.; Bielajew, Alex F. [Department of Nuclear Engineering, University of Michigan, Ann Arbor, MI (United States)

    2002-06-07

    A comprehensive set of measurements and calculations has been conducted to investigate the accuracy of the Dose Planning Method (DPM) Monte Carlo code for electron beam dose calculations in heterogeneous media. Measurements were made using 10 MeV and 50 MeV minimally scattered, uncollimated electron beams from a racetrack microtron. Source distributions for the Monte Carlo calculations were reconstructed from in-air ion chamber scans and then benchmarked against measurements in a homogeneous water phantom. The in-air spatial distributions were found to have FWHM of 4.7 cm and 1.3 cm, at 100 cm from the source, for the 10 MeV and 50 MeV beams respectively. Energy spectra for the electron beams were determined by simulating the components of the microtron treatment head using the code MCNP4B. Profile measurements were made using an ion chamber in a water phantom with slabs of lung or bone-equivalent materials submerged at various depths. DPM calculations are, on average, within 2% agreement with measurement for all geometries except for the 50 MeV incident on a 6 cm lung-equivalent slab. Measurements using approximately monoenergetic, 50 MeV, 'pencil-beam'-type electrons in heterogeneous media provide conditions for maximum electronic disequilibrium and hence present a stringent test of the code's electron transport physics; the agreement noted between calculation and measurement illustrates that the DPM code is capable of accurate dose calculation even under such conditions. (author)

  17. Load balancing in highly parallel processing of Monte Carlo code for particle transport

    Energy Technology Data Exchange (ETDEWEB)

    Higuchi, Kenji; Takemiya, Hiroshi [Japan Atomic Energy Research Inst., Tokyo (Japan); Kawasaki, Takuji [Fuji Research Institute Corporation, Tokyo (Japan)

    2001-01-01

    In parallel processing of Monte Carlo (MC) codes for neutron, photon and electron transport problems, particle histories are assigned to processors, making use of the independence of the calculation for each particle. Although the main part of an MC code can easily be parallelized by this method, it is necessary, and practically difficult, to optimize the code with regard to load balancing in order to attain a high speedup ratio in highly parallel processing. In fact, the speedup ratio in the case of 128 processors remains at only about one hundred when using the test bed for the performance evaluation. Through the parallel processing of the MCNP code, which is widely used in the nuclear field, it is shown that it is difficult to attain high performance with static load balancing, especially in neutron transport problems, and that a load balancing method which dynamically changes the number of assigned particles, minimizing the sum of the computational and communication costs, overcomes the difficulty, resulting in nearly a fifteen percent reduction in execution time. (author)
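
    A sketch of the kind of reassignment rule described above: after each batch, the number of histories given to each processor is adjusted in proportion to its measured processing rate, so that the estimated computation time is equalized. A real implementation would also fold the communication cost into the objective; the timings below are invented.

        def rebalance(total_histories, times, counts):
            """Redistribute histories in proportion to each processor's observed rate."""
            rates = [n / t for n, t in zip(counts, times)]          # histories per second last batch
            total_rate = sum(rates)
            new_counts = [max(1, round(total_histories * r / total_rate)) for r in rates]
            new_counts[-1] += total_histories - sum(new_counts)     # fix rounding so the sum matches
            return new_counts

        # Hypothetical batch on 4 processors: equal work assigned, very unequal timings observed.
        counts = [25000, 25000, 25000, 25000]
        times  = [12.0, 9.5, 20.0, 35.0]            # seconds spent by each processor
        print(rebalance(100000, times, counts))     # slower processors get fewer histories next batch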

  18. New Codes for Spectral Amplitude Coding Optical CDMA Systems

    Directory of Open Access Journals (Sweden)

    Hassan Yousif Ahmed

    2011-03-01

    Full Text Available A new code structure with zero in-phase cross correlation for spectral amplitude coding optical code division multiple access (SAC-OCDMA) systems is proposed, called zero vectors combinatorial (ZVC). This code is constructed in a simple algebraic way using Euclidean vectors and combinatorial theories based on the relationship between the number of users N and the weight W. One of the important properties of this code is that the maximum cross correlation (CC) is always zero, which means that multi-user interference (MUI) and phase-induced intensity noise (PIIN) are reduced. Bit error rate (BER) performance is compared with previously reported codes. We therefore demonstrate the performance of the ZVC code theoretically with the related equations. In addition, the structure of the encoder/decoder based on fiber Bragg gratings (FBGs) and the proposed system have been analyzed theoretically by taking into consideration the effects of some noises. The results characterizing BER with respect to the total number of active users show that the ZVC code offers a significantly improved performance over previously reported codes by supporting large numbers of users at BER ≥ 10⁻⁹. A comprehensive simulation study has been carried out using a commercial optical system simulator “VPI™”. Moreover, it was shown that the proposed code managed to reduce the hardware complexity and eventually the cost.

  19. COOL: A code for Dynamic Monte Carlo Simulation of molecular dynamics

    Science.gov (United States)

    Barletta, Paolo

    2012-02-01

    Cool is a program to simulate evaporative and sympathetic cooling for a mixture of two gases co-trapped in an harmonic potential. The collisions involved are assumed to be exclusively elastic, and losses are due to evaporation from the trap. Each particle is followed individually in its trajectory, consequently properties such as spatial densities or energy distributions can be readily evaluated. The code can be used sequentially, by employing one output as input for another run. The code can be easily generalised to describe more complicated processes, such as the inclusion of inelastic collisions, or the possible presence of more than two species in the trap. New version program summaryProgram title: COOL Catalogue identifier: AEHJ_v2_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEHJ_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 1 097 733 No. of bytes in distributed program, including test data, etc.: 18 425 722 Distribution format: tar.gz Programming language: C++ Computer: Desktop Operating system: Linux RAM: 500 Mbytes Classification: 16.7, 23 Catalogue identifier of previous version: AEHJ_v1_0 Journal reference of previous version: Comput. Phys. Comm. 182 (2011) 388 Does the new version supersede the previous version?: Yes Nature of problem: Simulation of the sympathetic process occurring for two molecular gases co-trapped in a deep optical trap. Solution method: The Direct Simulation Monte Carlo method exploits the decoupling, over a short time period, of the inter-particle interaction from the trapping potential. The particle dynamics is thus exclusively driven by the external optical field. The rare inter-particle collisions are considered with an acceptance/rejection mechanism, that is, by comparing a random number to the collisional probability
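
    A sketch of the acceptance/rejection collision test at the heart of such direct-simulation Monte Carlo schemes: candidate pairs in a cell are drawn at random and a collision is accepted with probability proportional to the pair's relative speed (the standard no-time-counter scheme). The cross section, density and cell parameters are invented, and this is not the COOL code itself.

        import numpy as np

        rng = np.random.default_rng(11)

        def dsmc_collision_step(v, sigma, n_density, dt, vr_max):
            """One DSMC collision step for the particles of a single cell (elastic, isotropic, equal masses)."""
            n = len(v)
            # No-time-counter estimate of the number of candidate pairs to test in this cell.
            n_cand = int(0.5 * (n - 1) * sigma * vr_max * n_density * dt)
            for _ in range(n_cand):
                i, j = rng.choice(n, size=2, replace=False)
                vr = np.linalg.norm(v[i] - v[j])
                if rng.random() < vr / vr_max:               # accept with probability ~ relative speed
                    vcm = 0.5 * (v[i] + v[j])                # centre-of-mass velocity
                    mu = 2.0 * rng.random() - 1.0            # isotropic post-collision direction
                    phi = 2.0 * np.pi * rng.random()
                    s = np.sqrt(1.0 - mu * mu)
                    vrel = vr * np.array([s * np.cos(phi), s * np.sin(phi), mu])
                    v[i], v[j] = vcm + 0.5 * vrel, vcm - 0.5 * vrel
            return v

        v = rng.normal(0.0, 300.0, size=(500, 3))            # hypothetical particle velocities (m/s)
        v = dsmc_collision_step(v, sigma=1e-19, n_density=1e20, dt=1e-5, vr_max=2000.0)
        print("mean speed after the step:", np.linalg.norm(v, axis=1).mean())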

  20. Tripoli-3: monte Carlo transport code for neutral particles - version 3.5 - users manual; Tripoli-3: code de transport des particules neutres par la methode de monte carlo - version 3.5 - manuel d'utilisation

    Energy Technology Data Exchange (ETDEWEB)

    Vergnaud, Th.; Nimal, J.C.; Chiron, M

    2001-07-01

    The TRIPOLI-3 code applies the Monte Carlo method to neutron, gamma-ray and coupled neutron and gamma-ray transport calculations in three-dimensional geometries, either in steady-state conditions or with a time dependence. It can be used to study problems where there is a high flux attenuation between the source zone and the result zone (studies of shielding configurations or source-driven sub-critical systems, with fission being taken into account), as well as problems where there is a low flux attenuation (neutronic calculations -- in a fuel lattice cell, for example -- where fission is taken into account, usually with the calculation of the effective multiplication factor, fine structure studies, numerical experiments to investigate method approximations, etc). TRIPOLI-3 has been operational since 1995 and is the version of the TRIPOLI code that follows on from TRIPOLI-2; it can be used on SUN, RISC6000 and HP workstations and on PCs under the Linux or Windows/NT operating systems. The code uses nuclear data libraries generated with the THEMIS/NJOY system. The current libraries were derived from ENDF/B6 and JEF2. There is also a response function library based on a number of evaluations, notably the dosimetry libraries IRDF/85 and IRDF/90, as well as evaluations from JEF2. The treatment of particle transport is the same in version 3.5 as in version 3.4 of the TRIPOLI code, but version 3.5 is more convenient for preparing the input data and for reading the output. A French version of the user's manual also exists. (authors)

  1. Mesh-based Monte Carlo code for fluorescence modeling in complex tissues with irregular boundaries

    Science.gov (United States)

    Wilson, Robert H.; Chen, Leng-Chun; Lloyd, William; Kuo, Shiuhyang; Marcelo, Cynthia; Feinberg, Stephen E.; Mycek, Mary-Ann

    2011-07-01

    There is a growing need for the development of computational models that can account for complex tissue morphology in simulations of photon propagation. We describe the development and validation of a user-friendly, MATLAB-based Monte Carlo code that uses analytically-defined surface meshes to model heterogeneous tissue geometry. The code can use information from non-linear optical microscopy images to discriminate the fluorescence photons (from endogenous or exogenous fluorophores) detected from different layers of complex turbid media. We present a specific application of modeling a layered human tissue-engineered construct (Ex Vivo Produced Oral Mucosa Equivalent, EVPOME) designed for use in repair of oral tissue following surgery. Second-harmonic generation microscopic imaging of an EVPOME construct (oral keratinocytes atop a scaffold coated with human type IV collagen) was employed to determine an approximate analytical expression for the complex shape of the interface between the two layers. This expression can then be inserted into the code to correct the simulated fluorescence for the effect of the irregular tissue geometry.

  2. X-ray simulation with the Monte Carlo code PENELOPE. Application to Quality Control.

    Science.gov (United States)

    Pozuelo, F; Gallardo, S; Querol, A; Verdú, G; Ródenas, J

    2012-01-01

    A realistic knowledge of the energy spectrum is very important in Quality Control (QC) of X-ray tubes in order to reduce the dose to patients. However, due to the inherent difficulty of measuring the X-ray spectrum accurately, it is not normally obtained in routine QC. Instead, some parameters are measured and/or calculated. The PENELOPE and MCNP5 codes, based on the Monte Carlo method, can be used as complementary tools to verify parameters measured in QC. These codes allow estimating the Bremsstrahlung and characteristic lines from the anode, taking into account the specific characteristics of the equipment. They have been applied to simulate an X-ray spectrum. Results are compared with the theoretical IPEM 78 spectrum. A sensitivity analysis has been performed to estimate the influence on the simulated spectra of important parameters used in the simulation codes. This analysis showed that the FORCE factor is the most important parameter in the PENELOPE simulations. The FORCE factor, which is a variance reduction method, improves the simulation but strongly increases the computing time. The value of FORCE should therefore be optimized so that good agreement between simulated and theoretical spectra is reached with an acceptable computing time. Quality parameters such as the Half Value Layer (HVL) can be obtained with the PENELOPE model developed, but FORCE then takes such a high value that the computing time is greatly increased. On the other hand, depth dose assessment can be achieved with acceptable results for small values of FORCE.
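
    A toy sketch of why a forcing factor such as FORCE reduces the variance: the interaction probability is multiplied by the forcing factor and every forced event is scored with a weight reduced by the same factor, so the expected tally is unchanged while its statistical uncertainty drops. The Bernoulli-per-step model below is an illustration only, not PENELOPE's actual transport.

        import numpy as np

        rng = np.random.default_rng(5)

        SIGMA = 0.02        # assumed interaction probability per step (analogue value)
        FORCE = 20.0        # forcing factor: interactions become FORCE times more likely
        N_HIST, N_STEP = 10000, 50

        def run(forced):
            p_int = min(1.0, SIGMA * FORCE) if forced else SIGMA
            weight = SIGMA / p_int                      # score carried by each (possibly forced) event
            total, total_sq = 0.0, 0.0
            for _ in range(N_HIST):
                score = sum(weight for _ in range(N_STEP) if rng.random() < p_int)
                total += score
                total_sq += score * score
            mean = total / N_HIST
            var = total_sq / N_HIST - mean * mean
            return mean, np.sqrt(var / N_HIST)

        print("analogue:", run(False))   # same expected tally ...
        print("forced:  ", run(True))    # ... but a much smaller statistical uncertainty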

  3. Monte Carlo Techniques for Nuclear Systems - Theory Lectures

    Energy Technology Data Exchange (ETDEWEB)

    Brown, Forrest B. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Monte Carlo Methods, Codes, and Applications Group; Univ. of New Mexico, Albuquerque, NM (United States). Nuclear Engineering Dept.

    2016-11-29

    These are lecture notes for a Monte Carlo class given at the University of New Mexico. The following topics are covered: course information; nuclear eng. review & MC; random numbers and sampling; computational geometry; collision physics; tallies and statistics; eigenvalue calculations I; eigenvalue calculations II; eigenvalue calculations III; variance reduction; parallel Monte Carlo; parameter studies; fission matrix and higher eigenmodes; doppler broadening; Monte Carlo depletion; HTGR modeling; coupled MC and T/H calculations; fission energy deposition. Solving particle transport problems with the Monte Carlo method is simple - just simulate the particle behavior. The devil is in the details, however. These lectures provide a balanced approach to the theory and practice of Monte Carlo simulation codes. The first lectures provide an overview of Monte Carlo simulation methods, covering the transport equation, random sampling, computational geometry, collision physics, and statistics. The next lectures focus on the state-of-the-art in Monte Carlo criticality simulations, covering the theory of eigenvalue calculations, convergence analysis, dominance ratio calculations, bias in Keff and tallies, bias in uncertainties, a case study of a realistic calculation, and Wielandt acceleration techniques. The remaining lectures cover advanced topics, including HTGR modeling and stochastic geometry, temperature dependence, fission energy deposition, depletion calculations, parallel calculations, and parameter studies. This portion of the class focuses on using MCNP to perform criticality calculations for reactor physics and criticality safety applications. It is an intermediate level class, intended for those with at least some familiarity with MCNP. Class examples provide hands-on experience at running the code, plotting both geometry and results, and understanding the code output. The class includes lectures & hands-on computer use for a variety of Monte Carlo calculations

  4. THE INVESTIGATION OF BURNUP CHARACTERISTICS USING THE SERPENT MONTE CARLO CODE FOR A SODIUM COOLED FAST REACTOR

    Directory of Open Access Journals (Sweden)

    MEHMET E. KORKMAZ

    2014-06-01

    Full Text Available In this research, we investigated the burnup characteristics and the conversion of fertile 232Th into fissile 233U in the core of a Sodium-Cooled Fast Reactor (SFR). The SFR fuel assemblies were designed for burning 232Th fuel (fuel pin 1) and 233U fuel (fuel pin 2) and include mixed minor actinide compositions. Monte Carlo simulations were performed using Serpent Code 1.1.19 to compare the CRAM (Chebyshev Rational Approximation Method) and TTA (Transmutation Trajectory Analysis) methods in the burnup calculation mode. The total heating power generated in the system was assumed to be 2000 MWth. During the reactor operation period of 600 days, the effective multiplication factor (keff) was between 0.964 and 0.954 and the peaking factor was 1.88867.

  5. The investigation of burnup characteristics using the serpent Monte Carlo code for a sodium cooled fast reactor

    Energy Technology Data Exchange (ETDEWEB)

    Korkmaz, Mehmet E.; Agar, Osman [Karamanoglu Mehmetbey University, Faculty of Kamil Oezdag Science, Karaman (Turkey)

    2014-06-15

    In this research, we investigated the burnup characteristics and the conversion of fertile {sup 232}Th into fissile {sup 233}U in the core of a Sodium-Cooled Fast Reactor (SFR). The SFR fuel assemblies were designed for burning {sup 232}Th fuel (fuel pin 1) and {sup 233}U fuel (fuel pin 2) and include mixed minor actinide compositions. Monte Carlo simulations were performed using Serpent Code 1.1.19 to compare the CRAM (Chebyshev Rational Approximation Method) and TTA (Transmutation Trajectory Analysis) methods in the burnup calculation mode. The total heating power generated in the system was assumed to be 2000 MWth. During the reactor operation period of 600 days, the effective multiplication factor (keff) was between 0.964 and 0.954 and the peaking factor was 1.88867.

  6. PENELOPE, an algorithm and computer code for Monte Carlo simulation of electron-photon showers

    Energy Technology Data Exchange (ETDEWEB)

    Salvat, F.; Fernandez-Varea, J.M.; Baro, J.; Sempau, J.

    1996-10-01

    The FORTRAN 77 subroutine package PENELOPE performs Monte Carlo simulation of electron-photon showers in arbitrary materials for a wide energy range, from about 1 keV to several hundred MeV. Photon transport is simulated by means of the standard, detailed simulation scheme. Electron and positron histories are generated on the basis of a mixed procedure, which combines detailed simulation of hard events with condensed simulation of soft interactions. A simple geometry package permits the generation of random electron-photon showers in material systems consisting of homogeneous bodies limited by quadric surfaces, i.e. planes, spheres, cylinders, etc. This report is intended not only to serve as a manual of the simulation package, but also to provide the user with the necessary information to understand the details of the Monte Carlo algorithm.

  7. PENELOPE, an algorithm and computer code for Monte Carlo simulation of electron-photon showers

    Energy Technology Data Exchange (ETDEWEB)

    Salvat, F.; Fernandez-Varea, J.M.; Baro, J.; Sempau, J.

    1996-07-01

    The FORTRAN 77 subroutine package PENELOPE performs Monte Carlo simulation of electron-photon showers in arbitrary materials for a wide energy range, from 1 keV to several hundred MeV. Photon transport is simulated by means of the standard, detailed simulation scheme. Electron and positron histories are generated on the basis of a mixed procedure, which combines detailed simulation of hard events with condensed simulation of soft interactions. A simple geometry package permits the generation of random electron-photon showers in material systems consisting of homogeneous bodies limited by quadric surfaces, i.e. planes, spheres, cylinders, etc. This report is intended not only to serve as a manual of the simulation package, but also to provide the user with the necessary information to understand the details of the Monte Carlo algorithm. (Author) 108 refs.

  8. Simulation of a nuclear densimeter using the Monte Carlo MCNP-4C code; Simulacao de um densimetro nuclear utilizando o codigo Monte Carlo MCNP-4C

    Energy Technology Data Exchange (ETDEWEB)

    Penna, Rodrigo [UNI-BH, Belo Horizonte, MG (Brazil). Dept. de Ciencias Biologicas, Ambientais e da Saude (DCBAS/DCET); Silva, Clemente Jose Gusmao Carneiro da [Universidade Estadual de Santa Cruz, UESC, Ilheus, BA (Brazil); Gomes, Paulo Mauricio Costa [Universidade FUMEC, Belo Horizonte, MG (Brazil)

    2008-07-01

    The viability of building a nuclear wood densimeter based on Compton scattering of low-energy photons was assessed using the Monte Carlo code MCNP-4C. A collimated 60 keV beam of gamma rays emitted by a {sup 241}Am source and reaching wood blocks was simulated, and the radiation backscattered by these blocks was calculated. The scattered photons were correlated with blocks of different wood densities. Results showed a linear relationship between wood density and scattered photons, demonstrating the viability of this wood densimeter. (author)

  9. SCALE Code System 6.2.1

    Energy Technology Data Exchange (ETDEWEB)

    Rearden, Bradley T. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Jessee, Matthew Anderson [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2016-08-01

    The SCALE Code System is a widely-used modeling and simulation suite for nuclear safety analysis and design that is developed, maintained, tested, and managed by the Reactor and Nuclear Systems Division (RNSD) of Oak Ridge National Laboratory (ORNL). SCALE provides a comprehensive, verified and validated, user-friendly tool set for criticality safety, reactor and lattice physics, radiation shielding, spent fuel and radioactive source term characterization, and sensitivity and uncertainty analysis. Since 1980, regulators, licensees, and research institutions around the world have used SCALE for safety analysis and design. SCALE provides an integrated framework with dozens of computational modules including three deterministic and three Monte Carlo radiation transport solvers that are selected based on the desired solution strategy. SCALE includes current nuclear data libraries and problem-dependent processing tools for continuous-energy (CE) and multigroup (MG) neutronics and coupled neutron-gamma calculations, as well as activation, depletion, and decay calculations. SCALE includes unique capabilities for automated variance reduction for shielding calculations, as well as sensitivity and uncertainty analysis. SCALE’s graphical user interfaces assist with accurate system modeling, visualization of nuclear data, and convenient access to desired results.

  10. Applicability of the SCALE code system to MOX fuel transport systems for criticality safety analysis

    Energy Technology Data Exchange (ETDEWEB)

    Yamamoto, Toshihiro; Naito, Yoshitaka [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Hayashi, Toshiaki; Takasugi, Masahiro; Natsume, Toshihiro; Tsuda, Kazuaki

    1996-11-01

    In order to ascertain the feasibility of the SCALE code system for MOX fuel transport systems, criticality analyses were performed for MOX fuel (Pu enrichment 3.0 wt.%) criticality experiments at JAERI's TCA and for infinite fuel rod arrays with Pu enrichment and lattice pitch as parameters. The comparison with a combination of the continuous-energy Monte Carlo code MCNP and JENDL-3.2 indicated that the SCALE code system with the GAM-THERMOS 123-group library can produce feasible results. Though the HANSEN-ROACH 16-group library gives poorer results for MOX fuel transport systems, the errors are conservative except for highly enriched fuels. (author)

  11. MC21 v.6.0 - A Continuous-Energy Monte Carlo Particle Transport Code with Integrated Reactor Feedback Capabilities

    Science.gov (United States)

    Griesheimer, D. P.; Gill, D. F.; Nease, B. R.; Sutton, T. M.; Stedry, M. H.; Dobreff, P. S.; Carpenter, D. C.; Trumbull, T. H.; Caro, E.; Joo, H.; Millman, D. L.

    2014-06-01

    MC21 is a continuous-energy Monte Carlo radiation transport code for the calculation of the steady-state spatial distributions of reaction rates in three-dimensional models. The code supports neutron and photon transport in fixed source problems, as well as iterated-fission-source (eigenvalue) neutron transport problems. MC21 has been designed and optimized to support large-scale problems in reactor physics, shielding, and criticality analysis applications. The code also supports many in-line reactor feedback effects, including depletion, thermal feedback, xenon feedback, eigenvalue search, and neutron and photon heating. MC21 uses continuous-energy neutron/nucleus interaction physics over the range from 10^-5 eV to 20 MeV. The code treats all common neutron scattering mechanisms, including fast-range elastic and non-elastic scattering, and thermal- and epithermal-range scattering from molecules and crystalline materials. For photon transport, MC21 uses continuous-energy interaction physics over the energy range from 1 keV to 100 GeV. The code treats all common photon interaction mechanisms, including Compton scattering, pair production, and photoelectric interactions. All of the nuclear data required by MC21 is provided by the NDEX system of codes, which extracts and processes data from EPDL-, ENDF-, and ACE-formatted source files. For geometry representation, MC21 employs a flexible constructive solid geometry system that allows users to create spatial cells from first- and second-order surfaces. The system also allows models to be built up as hierarchical collections of previously defined spatial cells, with interior detail provided by grids and template overlays. Results are collected by a generalized tally capability which allows users to edit integral flux and reaction rate information. Results can be collected over the entire problem or within specific regions of interest through the use of phase filters that control which particles are allowed to score each
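
    As a generic illustration of the constructive-solid-geometry idea described above (not MC21's actual implementation), the sketch below tests whether a point lies inside a cell defined as the intersection of half-spaces bounded by first- and second-order surfaces, using the sign of each surface equation. The surfaces and senses chosen here are arbitrary examples.

      # A surface is f(x, y, z) = 0; a point's "sense" with respect to it is the sign of f.
      def plane_z(z0):
          return lambda p: p[2] - z0

      def cylinder_z(radius):
          # Infinite cylinder about the z axis: x^2 + y^2 - r^2 = 0 (second-order surface)
          return lambda p: p[0]**2 + p[1]**2 - radius**2

      def sense(value):
          return 1 if value > 0.0 else -1

      # A cell is a list of (surface, required_sense) pairs; the point must have
      # the required sign with respect to every bounding surface (intersection).
      finite_cylinder = [
          (cylinder_z(1.0), -1),   # inside the cylindrical surface
          (plane_z(0.0),    +1),   # above the bottom plane
          (plane_z(5.0),    -1),   # below the top plane
      ]

      def in_cell(point, cell):
          return all(sense(f(point)) == s for f, s in cell)

      print(in_cell((0.5, 0.0, 2.0), finite_cylinder))  # True: inside the finite cylinder
      print(in_cell((0.5, 0.0, 6.0), finite_cylinder))  # False: above the top plane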

  12. Numerical system utilising a Monte Carlo calculation method for accurate dose assessment in radiation accidents.

    Science.gov (United States)

    Takahashi, F; Endo, A

    2007-01-01

    A system utilising radiation transport codes has been developed to derive accurate dose distributions in a human body for radiological accidents. A suitable model is quite essential for a numerical analysis. Therefore, two tools were developed to set up a 'problem-dependent' input file, defining a radiation source and an exposed person, to simulate the radiation transport in an accident with the Monte Carlo calculation codes MCNP and MCNPX. For both tools, the necessary resources are defined through a dialogue method on a commonly used personal computer. The tools prepare human body and source models described in the input file format of the employed Monte Carlo codes. The tools were validated for dose assessment by comparison with a past criticality accident and a hypothesized exposure.

  13. Criticality coefficient calculation for a small PWR using Monte Carlo Transport Code

    Energy Technology Data Exchange (ETDEWEB)

    Trombetta, Debora M.; Su, Jian, E-mail: dtrombetta@nuclear.ufrj.br, E-mail: sujian@nuclear.ufrj.br [Coordenacao dos Programas de Pos-Graduacao em Engenharia (COPPE/UFRJ), Rio de Janeiro, RJ (Brazil); Chirayath, Sunil S., E-mail: sunilsc@tamu.edu [Department of Nuclear Engineering and Nuclear Security Science and Policy Institute, Texas A and M University, TX (United States)

    2015-07-01

    Computational models of reactors are increasingly used to predict nuclear reactor physics parameters responsible for reactivity changes which could lead to accidents and losses. In this work, preliminary results for criticality coefficient calculations using the Monte Carlo transport code MCNPX are presented for a small PWR. The computational model developed consists of the core with fuel elements, radial reflectors, and control rods inside a pressure vessel. Three different geometries were simulated, a single fuel pin, a fuel assembly and the core, with the aim of comparing the criticality coefficients among them. The criticality coefficients calculated were: Doppler Temperature Coefficient, Coolant Temperature Coefficient, Coolant Void Coefficient, Power Coefficient, and Control Rod Worth. The coefficient values calculated by the MCNP code were compared with literature results, showing good agreement with reference data, which validates the computational model developed and allows it to be used to perform more complex studies. The criticality coefficient values for the three simulations showed little discrepancy for almost all coefficients investigated; the only exception was the Power Coefficient. These preliminary results show that a simple model such as a fuel assembly can describe the changes in almost all the criticality coefficients, avoiding the need for a complex core simulation. (author)

  14. A GPU-based Monte Carlo dose calculation code for photon transport in a voxel phantom

    Energy Technology Data Exchange (ETDEWEB)

    Bellezzo, M.; Do Nascimento, E.; Yoriyaz, H., E-mail: mbellezzo@gmail.br [Instituto de Pesquisas Energeticas e Nucleares / CNEN, Av. Lineu Prestes 2242, Cidade Universitaria, 05508-000 Sao Paulo (Brazil)

    2014-08-15

    As the most accurate method to estimate absorbed dose in radiotherapy, the Monte Carlo method has been widely used in radiotherapy treatment planning. Nevertheless, its efficiency can be improved for clinical routine applications. In this paper, we present the CUBMC code, a GPU-based MC photon transport algorithm for dose calculation under the Compute Unified Device Architecture platform. The simulation of physical events is based on the algorithm used in PENELOPE, and the cross section table used is the one generated by the material routine also present in the PENELOPE code. Photons are transported in voxel-based geometries with different compositions. To demonstrate the capabilities of the algorithm developed in the present work, four 128 x 128 x 128 voxel phantoms have been considered. One of them is composed of a homogeneous water-based medium, the second is composed of bone, the third is composed of lung and the fourth is composed of a heterogeneous bone and vacuum geometry. Simulations were done considering a 6 MeV monoenergetic photon point source. Two distinct approaches were used for the transport simulation. The first forces the photon to stop at every voxel boundary; the second is the Woodcock method, in which a stop at the voxel boundary is considered only when the material changes along the photon travel line. Dose calculations using these methods are compared for validation with the PENELOPE and MCNP5 codes. Speed-up factors are compared using an NVidia GTX 560-Ti GPU card against a 2.27 GHz Intel Xeon CPU processor. (Author)
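
    For readers unfamiliar with Woodcock (delta) tracking, the sketch below shows the basic idea in one dimension for a two-material slab: flight distances are sampled with a majorant cross section, and each tentative collision is accepted as real with probability sigma(x)/sigma_max, otherwise it is a virtual collision and tracking continues. This is a generic illustration with made-up cross sections, not the CUBMC implementation.

      import math
      import random

      # Hypothetical 1-D geometry: water-like for x < 5 cm, bone-like for 5 cm <= x < 10 cm.
      def total_xs(x):
          """Total macroscopic cross section (1/cm) at position x (illustrative values)."""
          return 0.07 if x < 5.0 else 0.20

      SIGMA_MAX = 0.20  # majorant: must bound total_xs everywhere

      def first_real_collision(x0=0.0, x_end=10.0, rng=random.random):
          """Return the position of the first real collision, or None if the photon escapes."""
          x = x0
          while True:
              # Sample a flight distance using the majorant cross section.
              x += -math.log(rng()) / SIGMA_MAX
              if x >= x_end:
                  return None                      # escaped the slab
              # Accept as a real collision with probability sigma(x)/sigma_max;
              # otherwise it is a virtual (delta) collision and tracking continues.
              if rng() < total_xs(x) / SIGMA_MAX:
                  return x

      random.seed(1)
      print(first_real_collision())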

  15. An Interactive Concatenated Turbo Coding System

    Science.gov (United States)

    Liu, Ye; Tang, Heng; Lin, Shu; Fossorier, Marc

    1999-01-01

    This paper presents a concatenated turbo coding system in which a Reed-Solomon outer code is concatenated with a binary turbo inner code. In the proposed system, the outer code decoder and the inner turbo code decoder interact to achieve both good bit-error and frame-error performance. The outer code decoder helps the inner turbo code decoder to terminate its decoding iterations, while the inner turbo code decoder provides soft-output information to the outer code decoder to carry out reliability-based soft-decision decoding. In the case that the outer code decoding fails, the outer code decoder instructs the inner code decoder to continue its decoding iterations until the outer code decoding is successful or a preset maximum number of decoding iterations is reached. This interaction between outer and inner code decoders reduces decoding delay. Also presented in the paper are an effective criterion for stopping the iteration process of the inner code decoder and a new reliability-based decoding algorithm for nonbinary codes.

  16. A Comparison Between GATE and MCNPX Monte Carlo Codes in Simulation of Medical Linear Accelerator

    Science.gov (United States)

    Sadoughi, Hamid-Reza; Nasseri, Shahrokh; Momennezhad, Mahdi; Sadeghi, Hamid-Reza; Bahreyni-Toosi, Mohammad-Hossein

    2014-01-01

    Radiotherapy dose calculations can be evaluated by Monte Carlo (MC) simulations with acceptable accuracy for dose prediction in complicated treatment plans. In this work, the Standard, Livermore and Penelope electromagnetic (EM) physics packages of the GEANT4 application for tomographic emission (GATE) 6.1 were compared with Monte Carlo N-Particle eXtended (MCNPX) 2.6 in the simulation of a 6 MV photon Linac. To do this, the same geometry was used for the two codes. The reference values of percentage depth dose (PDD) and beam profiles were obtained using a 6 MV Elekta Compact linear accelerator, a Scanditronix water phantom and diode detectors. No significant deviations were found in PDD, dose profile, energy spectrum, radial mean energy and photon radial distribution calculated by the Standard and Livermore EM models and MCNPX, respectively. Nevertheless, the Penelope model showed an extreme difference. Statistical uncertainty was evaluated for all the simulations with the MCNPX, Standard, Livermore and Penelope models. Differences between spectra in various regions, in radial mean energy and in photon radial distribution were due to different cross section and stopping power data and to differences in the simulation of physics processes between MCNPX and the three EM models. For example, in the Standard model, the photoelectron direction was sampled from the Gavrila-Sauter distribution, but the photoelectron moved in the same direction as the incident photons in the photoelectric process of the Livermore and Penelope models. Using the same primary electron beam, the Standard and Livermore EM models of GATE and MCNPX showed similar output, but re-tuning of the primary electron beam is needed for the Penelope model. PMID:24696804

  17. Improved decoding for a concatenated coding system

    DEFF Research Database (Denmark)

    Paaske, Erik

    1990-01-01

    The concatenated coding system recommended by the CCSDS (Consultative Committee for Space Data Systems) uses an outer (255,223) Reed-Solomon (RS) code based on 8-b symbols, followed by the block interleaver and an inner rate 1/2 convolutional code with memory 6. Viterbi decoding is assumed. Two new...

  18. Investigation of Nuclear Data Libraries with TRIPOLI-4 Monte Carlo Code for Sodium-cooled Fast Reactors

    Science.gov (United States)

    Lee, Y.-K.; Brun, E.

    2014-04-01

    The Sodium-cooled fast neutron reactor ASTRID is currently under design and development in France. The traditional ECCO/ERANOS fast reactor code system used for ASTRID core design calculations relies on the multi-group JEFF-3.1.1 data library. To gauge the use of the ENDF/B-VII.0 and JEFF-3.1.1 nuclear data libraries in fast reactor applications, two recent OECD/NEA computational benchmarks specified by Argonne National Laboratory were calculated. Using the continuous-energy TRIPOLI-4 Monte Carlo transport code, both the ABR-1000 MWth MOX core and the metallic (U-Pu) core were investigated. Under two different fast neutron spectra and two data libraries, ENDF/B-VII.0 and JEFF-3.1.1, reactivity impact studies were performed. Using the JEFF-3.1.1 library under the BOEC (beginning of equilibrium cycle) condition, high reactivity effects of 808 ± 17 pcm and 1208 ± 17 pcm were observed for the ABR-1000 MOX core and the metallic core, respectively. To analyze the causes of these differences in reactivity, several TRIPOLI-4 runs using the mixed data libraries feature allowed us to identify the nuclides and the nuclear data accounting for the major part of the observed reactivity discrepancies.

  19. Feasibility study of photo-neutron flux in various irradiation channels of Ghana MNSR using a Monte Carlo code

    Energy Technology Data Exchange (ETDEWEB)

    Birikorang, S.A., E-mail: anddydat@yahoo.com [Department of Nuclear Engineering and Material Science, School of Nuclear and Allied Sciences (SNAS), University of Ghana, P.O. Box AE 1, Atomic Energy, Accra (Ghana); Akaho, E.H.K.; Nyarko, B.J.B. [National Nuclear Research Institute, Ghana Atomic Energy Commission, P.O. Box LG 80, Legon, Accra-Ghana (Ghana); Ampomah-Amoako, E.; Seth, Debrah K.; Gyabour, R.A.; Sogbgaji, R.B.M. [Department of Nuclear Engineering and Material Science, School of Nuclear and Allied Sciences (SNAS), University of Ghana, P.O. Box AE 1, Atomic Energy, Accra (Ghana)

    2011-07-15

    Highlights: > The photo-neutron source was investigated within the Ghana MNSR irradiation channels. > The irradiation channels under study were the inner and outer channels and the fission chamber. > The thermal rated power at the sub-critical state was estimated. > Neutron flux variation was investigated within the channels. > The MCNP code has been used to investigate the flux variation. - Abstract: Computer simulation was carried out for the photo-neutron source variation in the outer irradiation channel, the inner irradiation channels and the fission channel of a tank-in-pool reactor, a Miniature Neutron Source Reactor (MNSR), in a sub-critical condition. Evaluation of the photo-neutron source was done after the reactor had been in a sub-critical condition for a three-month period, using the Monte Carlo N-Particle (MCNP) code. Neutron flux monitoring from the Micro Computer Control Loop System (MCCLS) was also investigated at the sub-critical condition. The neutron fluxes recorded by the MCCLS were used to calculate the power of the reactor in the sub-critical state. The computed power at the sub-critical state was used to normalize the un-normalized results from MCNP.

  20. Domain Decomposition Strategy for Pin-wise Full-Core Monte Carlo Depletion Calculation with the Reactor Monte Carlo Code

    Directory of Open Access Journals (Sweden)

    Jingang Liang

    2016-06-01

    Full Text Available Because of prohibitive data storage requirements in large-scale simulations, memory is an obstacle for Monte Carlo (MC) codes in accomplishing pin-wise three-dimensional (3D) full-core calculations, particularly for whole-core depletion analyses. Various kinds of data are evaluated and the total memory requirements are quantified based on the Reactor Monte Carlo (RMC) code, showing that tally data, material data, and isotope densities in depletion are the three major parts of memory storage. The domain decomposition method is investigated as a means of saving memory, by dividing the spatial geometry into domains that are simulated separately by parallel processors. For particle tracking to remain valid during transport simulations, particles need to be communicated between domains. In consideration of efficiency, an asynchronous particle communication algorithm is designed and implemented. Furthermore, we couple the domain decomposition method with the MC burnup process, under a strategy of utilizing a consistent domain partition in both the transport and depletion modules. A numerical test of 3D full-core burnup calculations is carried out, indicating that the RMC code, with the domain decomposition method, is capable of pin-wise full-core burnup calculations with millions of depletion regions.
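
    The sketch below illustrates, in a purely serial and greatly simplified form, the kind of particle hand-off that spatial domain decomposition requires: each domain tracks only the particles inside its slab of geometry and banks any particle that crosses a domain boundary for the neighbouring domain to continue. It is an assumption-laden toy model (pure absorber, fixed slab widths, serial sweep standing in for parallel ranks), not the RMC algorithm.

      import math
      import random

      random.seed(0)

      N_DOMAINS = 4
      X_MAX = 40.0                       # slab geometry split into 4 domains of width 10 cm
      WIDTH = X_MAX / N_DOMAINS
      SIGMA_T = 0.05                     # total cross section (1/cm), pure absorber (toy value)

      def track(particle):
          """Track one particle inside its current domain (pure absorber).
          Returns ('absorbed', x), ('escaped', x), or ('handoff', x_boundary)."""
          x, d = particle
          x_hi = (d + 1) * WIDTH
          x_new = x - math.log(random.random()) / SIGMA_T
          if x_new >= X_MAX:
              return ('escaped', x_new)
          if x_new >= x_hi:
              return ('handoff', x_hi)   # bank at the boundary for domain d + 1
          return ('absorbed', x_new)

      # Source particles all start in domain 0; banks[d] holds particles waiting for domain d.
      banks = {d: [] for d in range(N_DOMAINS)}
      banks[0] = [(0.0, 0) for _ in range(1000)]
      tallies = {'absorbed': 0, 'escaped': 0}

      for d in range(N_DOMAINS):         # serial sweep standing in for parallel MPI ranks
          for p in banks[d]:
              fate, x = track(p)
              if fate == 'handoff':
                  banks[d + 1].append((x, d + 1))
              else:
                  tallies[fate] += 1

      print(tallies)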

  1. Assessment of ocular beta radiation dose distribution due to 106Ru/106Rh brachytherapy applicators using MCNPX Monte Carlo code

    Directory of Open Access Journals (Sweden)

    Nilseia Aparecida Barbosa

    2014-08-01

    Full Text Available Purpose: Melanoma of the choroid region is the most common primary cancer that affects the eye in adult patients. Concave ophthalmic applicators with 106Ru/106Rh beta sources are the most commonly used for the treatment of these eye lesions, mainly lesions with small and medium dimensions. The available treatment planning system for 106Ru applicators is based on dose distributions in a homogeneous water-sphere eye model, resulting in a lack of data in the literature on dose distributions in the radiosensitive structures of the eye, information that may be crucial to improve the treatment planning process, with the aim of maintaining visual acuity. Methods: The Monte Carlo code MCNPX was used to calculate the dose distribution in a complete mathematical model of the human eye containing a choroid melanoma, considering the actual dimensions of the eye and its various component structures, for an ophthalmic brachytherapy treatment using 106Ru/106Rh beta-ray sources. Two possibilities were analyzed: a simple water eye and a heterogeneous eye considering all its structures. Two concave applicators, CCA and CCB, manufactured by BEBIG, and a complete mathematical model of the human eye were modeled using the MCNPX code. Results and Conclusion: For both eye models, namely the water model and the heterogeneous model, the mean dose values simulated for the same eye regions are, in general, very similar, except for regions very distant from the applicator, where mean dose values are very low, uncertainties are higher and relative differences may reach 20.4%. For the tumor base and the eye structures closest to the applicator, such as the sclera, choroid and retina, the maximum difference observed was 4%, with the heterogeneous model presenting higher mean dose values. For the other eye regions, the higher doses were obtained when the homogeneous water eye model is taken into consideration. Mean dose distributions determined for the homogeneous water eye model are similar to those obtained for the

  2. HERMES: a Monte Carlo Code for the Propagation of Ultra-High Energy Nuclei

    CERN Document Server

    De Domenico, Manlio; Settimo, Mariangela

    2013-01-01

    Despite the recent experimental efforts to improve the observation of Ultra-High Energy Cosmic Rays (UHECRs) above $10^{18}$ eV, the origin and the composition of such particles are still unknown. In this work, we present HERMES, a novel Monte Carlo code simulating the propagation of UHE nuclei in the energy range between $10^{16}$ and $10^{22}$ eV, accounting for propagation in the intervening extragalactic and Galactic magnetic fields and nuclear interactions with relic photons of the extragalactic background radiation. In order to show the potential applications of HERMES for astroparticle studies, we estimate the expected flux of UHE nuclei in different astrophysical scenarios and the GZK horizons, and we show the expected arrival direction distributions in the presence of turbulent extragalactic magnetic fields. A stable version of HERMES will be released in the near future for public use, together with libraries of already propagated nuclei, to allow the community to perform mass composition and energy sp...

  3. Design and Simulation of Photoneutron Source by MCNPX Monte Carlo Code for Boron Neutron Capture Therapy

    Directory of Open Access Journals (Sweden)

    Mona Zolfaghari

    2015-07-01

    Full Text Available Introduction: An electron linear accelerator (LINAC) can be used for neutron production in Boron Neutron Capture Therapy (BNCT). BNCT is an external radiotherapeutic method for the treatment of some cancers. In this study, a Varian 2300 C/D LINAC was simulated as an electron accelerator-based photoneutron source to provide a suitable neutron flux for BNCT. Materials and Methods: Photoneutron sources were simulated using the MCNPX Monte Carlo code. In this study, a 20 MeV LINAC was utilized for electron-photon reactions. After the evaluation of cross-sections and threshold energies, lead (Pb), uranium (U) and beryllium deuteride (BeD2) were selected as photoneutron sources. Results: According to the simulation results, optimized photoneutron sources with a compact volume and photoneutron yields of 10^7, 10^8 and 10^9 n.cm^-2.s^-1 were obtained for the Pb, U and BeD2 composites. Also, photoneutron production increased by using enriched U (10-60%) as an electron accelerator-based photoneutron source. Conclusion: Optimized photoneutron sources were obtained with compact sizes and yields of 10^7, 10^8 and 10^9 n.cm^-2.s^-1, respectively. These fluxes can be applied for BNCT by moderating the fast neutrons and using a suitable beam-shaping assembly surrounding the electron-photon and photoneutron sources.

  4. Thyroid cell irradiation by radioiodines: a new Monte Carlo electron track-structure code

    Energy Technology Data Exchange (ETDEWEB)

    Champion, Christophe [Universite Paul Verlaine-Metz (France). Lab. de Physique Moleculaire et des Collisions]. E-mail: champion@univ-metz.fr; Elbast, Mouhamad; Colas-Linhart, Nicole [Universite Paris 7 (France). Faculte de Medecine. Lab. de Biophysique; Ting-Di Wu [INSERM U759, Orsay (France). Institut Curie Recherche. Imagerie Integrative

    2007-09-15

    The most significant impact of the Chernobyl accident is the increased incidence of thyroid cancer among children who were exposed to short-lived radioiodines and 131-iodine. In order to accurately estimate the radiation dose delivered by these radioiodines, it is necessary to know where the iodine is incorporated. To do that, the distribution at the cellular level of newly organified iodine in the immature rat thyroid was determined using secondary ion mass microscopy (NanoSIMS{sup 50}). Current dosimetric models take into account only the averaged energy and range of the beta particles of the radio-elements and may, therefore, imperfectly describe the real distribution of the dose deposit at the microscopic level around the point sources. Our approach is radically different, since it is based on a track-structure Monte Carlo code that allows electrons to be followed down to low energies ({approx}= 10 eV), which permits a nanometric description of the irradiation physics. The numerical simulations were then performed by modelling the complete disintegrations of the short-lived iodine isotopes as well as of {sup 131}I in newborn rat thyroids, in order to take into account accurate histological and biological data for the thyroid gland. (author)

  5. Deep-penetration calculation for the ISIS target station shielding using the MARS Monte Carlo code

    CERN Document Server

    Nunomiya, T; Nakamura, T; Nakao, N

    2002-01-01

    A calculation of neutron penetration through a thick shield was performed with a three-dimensional multi-layer technique using the MARS14(02) Monte Carlo code to compare with the experimental shielding data in 1998 at the ISIS spallation neutron source facility. In this calculation, secondary particles from a tantalum target bombarded by 800-MeV protons were transmitted through a bulk shield of approximately 3-m-thick iron and 1-m-thick concrete. To accomplish this deep-penetration calculation with good statistics, the following three techniques were used in this study. First, the geometry of the bulk shield was three-dimensionally divided into several layers of about 50-cm thickness, and a step-by-step calculation was carried out to multiply the number of penetrated particles at the boundaries between the layers. Second, the source particles in the layers were divided into two parts to maintain the statistical balance on the spatial-flux distribution. Third, only high-energy particles above 20 MeV were trans...
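
    The layer-by-layer multiplication described above is essentially a form of boundary splitting. As a rough, generic illustration (not the MARS procedure itself), the sketch below attenuates particles through successive layers, records the survivors at each boundary, and replicates them with correspondingly reduced statistical weight before starting the next layer, so that deep layers still receive a usable number of histories. The cross section, splitting factor and layer count are arbitrary toy values.

      import math
      import random

      random.seed(42)

      SIGMA = 0.05         # removal cross section (1/cm); each 50 cm layer transmits ~8% (toy value)
      LAYER_CM = 50.0      # layer thickness
      N_LAYERS = 6
      SPLIT = 10           # multiply boundary-crossing particles by this factor
      N_SOURCE = 10_000

      # Every history in this toy model starts at the front face of its current layer,
      # so a particle is fully described by its statistical weight.
      particles = [1.0] * N_SOURCE
      for layer in range(N_LAYERS):
          survivors = []
          for w in particles:
              path = -math.log(random.random()) / SIGMA
              if path >= LAYER_CM:      # reaches the next boundary without removal
                  survivors.append(w)
          # Transmitted (weighted) fraction through this boundary:
          transmitted = sum(survivors) / N_SOURCE
          print(f"boundary {layer + 1}: transmission = {transmitted:.3e} "
                f"({len(survivors)} histories)")
          # Split each survivor into SPLIT copies of reduced weight to restore statistics.
          particles = [w / SPLIT for w in survivors for _ in range(SPLIT)]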

  6. Monte Carlo modeling of multileaf collimators using the GEANT4 code

    Energy Technology Data Exchange (ETDEWEB)

    Oliveira, Alex C.H.; Lima, Fernando R.A., E-mail: oliveira.ach@yahoo.com, E-mail: falima@cnen.gov.br [Centro Regional de Ciencias Nucleares do Nordeste (CRCN-NE/CNEN-PE), Recife, PE (Brazil); Lima, Luciano S.; Vieira, Jose W., E-mail: lusoulima@yahoo.com.br [Instituto Federal de Educacao, Ciencia e Tecnologia de Pernambuco (IFPE), Recife, PE (Brazil)

    2014-07-01

    Radiotherapy uses various techniques and equipment for the local treatment of cancer. The equipment most often used in radiotherapy for patient irradiation is the linear accelerator (Linac). Among the many algorithms developed for the evaluation of dose distributions in radiotherapy planning, the algorithms based on Monte Carlo (MC) methods have proven to be very promising in terms of accuracy by providing more realistic results. The MC simulations for applications in radiotherapy are divided into two parts. In the first, the simulation of the production of the radiation beam by the Linac is performed and the phase space is generated. The phase space contains information such as energy, position, direction, etc. of millions of particles (photons, electrons, positrons). In the second part, the simulation of the transport of particles (sampled from the phase space) in certain configurations of the irradiation field is performed to assess the dose distribution in the patient (or phantom). Accurate modeling of the Linac head is of particular interest in the calculation of dose distributions for intensity modulated radiation therapy (IMRT), where complex intensity distributions are delivered using a multileaf collimator (MLC). The objective of this work is to describe a methodology for MC modeling of MLCs using the Geant4 code. To exemplify this methodology, the Varian Millennium 120-leaf MLC was modeled, whose physical description is available in the BEAMnrc Users Manual (2011). The dosimetric characteristics (i.e., penumbra, leakage, and tongue-and-groove effect) of this MLC were evaluated. The results agreed with data published in the literature concerning the same MLC. (author)

  7. Characterizing Video Coding Computing in Conference Systems

    NARCIS (Netherlands)

    Tuquerres, G.

    2000-01-01

    In this paper, a number of coding operations are provided for computing continuous data streams, in particular video streams. The coding capability of the operations is expressed by a pyramidal structure in which the coding processes and requirements of a distributed information system are represented. Th

  8. The specific purpose Monte Carlo code McENL for simulating the response of epithermal neutron lifetime well logging tools

    Energy Technology Data Exchange (ETDEWEB)

    Prettyman, T.H.; Gardner, R.P.; Verghese, K. (North Carolina State Univ., Raleigh, NC (United States). Center for Engineering Applications and Radioisotopes)

    1993-08-01

    A new specific purpose Monte Carlo code called McENL for modeling the time response of epithermal neutron lifetime tools is described. The code was developed so that the Monte Carlo neophyte can easily use it. A minimum amount of input preparation is required and specified fixed values of the parameters used to control the code operation can be used. The weight windows technique, employing splitting and Russian Roulette, is used with an automated importance function based on the solution of an adjoint diffusion model to improve the code efficiency. Complete composition and density correlated sampling is also included in the code and can be used to study the effect on tool response of small variations in the formation, borehole, or logging tool composition and density. An illustration of the latter application is given here for the density of a thermal neutron filter. McENL was benchmarked against test-pit data for the Mobil pulsed neutron porosity (PNP) tool and found to be very accurate. Results of the experimental validation and details of code performance are presented.
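
    As background for the weight-window technique mentioned above (splitting plus Russian roulette driven by an importance function), the sketch below shows the bookkeeping applied to a single particle weight when it enters a region with a given weight window. It is a generic illustration of the standard technique, with arbitrary window bounds, not code taken from McENL.

      import random

      def apply_weight_window(weight, w_low, w_up, survival=None, rng=random.random):
          """Return a list of (possibly zero) particle weights after applying a
          weight window [w_low, w_up] to a particle of the given weight."""
          if survival is None:
              survival = 0.5 * (w_low + w_up)   # common choice for the survival weight
          if weight > w_up:
              # Split into n copies so that each copy falls back inside the window.
              n = min(int(weight / w_up) + 1, 100)
              return [weight / n] * n
          if weight < w_low:
              # Russian roulette: survive with probability weight/survival (weight-conserving on average).
              if rng() < weight / survival:
                  return [survival]
              return []                          # particle killed, weight discarded
          return [weight]                        # inside the window: unchanged

      random.seed(3)
      print(apply_weight_window(5.0, 0.5, 2.0))  # splitting into lighter copies
      print(apply_weight_window(0.1, 0.5, 2.0))  # roulette: killed or promoted to ~1.25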

  9. Comparative Dosimetric Estimates of a 25 keV Electron Micro-beam with three Monte Carlo Codes

    Energy Technology Data Exchange (ETDEWEB)

    Mainardi, Enrico; Donahue, Richard J.; Blakely, Eleanor A.

    2002-09-11

    The calculations presented compare the performance of the three Monte Carlo codes PENELOPE-1999, MCNP-4C and PITS for the evaluation of dose profiles from a 25 keV electron micro-beam traversing individual cells. The overall model of a cell is a water cylinder equivalent for the three codes, but with a different internal scoring geometry: hollow cylinders for PENELOPE and MCNP, whereas spheres are used for the PITS code. A cylindrical cell geometry with scoring volumes in the shape of hollow cylinders was initially selected for PENELOPE and MCNP because of its superior representation of the actual shape and dimensions of a cell and for its improved computer-time efficiency compared to spherical internal volumes. Some of the transfer points and energy transfers that constitute a radiation track may actually fall in the space between spheres, which would be outside the spherical scoring volume. This internal geometry, along with the PENELOPE algorithm, drastically reduced the computer time when using this code compared with event-by-event Monte Carlo codes like PITS. This preliminary work has been important for addressing dosimetric estimates at low electron energies. It demonstrates that codes like PENELOPE can be used for dose evaluation, even with such small geometries and energies involved, which are far below the normal use for which the code was created. Further work (initiated in Summer 2002) is still needed, however, to create a user code for PENELOPE that allows uniform comparison of exact cell geometries, integral volumes and also microdosimetric scoring quantities, a field where track-structure codes like PITS, written for this purpose, are believed to be superior.

  10. Towards scalable parallelism in Monte Carlo particle transport codes using remote memory access

    Energy Technology Data Exchange (ETDEWEB)

    Romano, Paul K [Los Alamos National Laboratory; Brown, Forrest B [Los Alamos National Laboratory; Forget, Benoit [MIT

    2010-01-01

    One forthcoming challenge in the area of high-performance computing is having the ability to run large-scale problems while coping with less memory per compute node. In this work, we investigate a novel data decomposition method that would allow Monte Carlo transport calculations to be performed on systems with limited memory per compute node. In this method, each compute node remotely retrieves a small set of geometry and cross-section data as needed and remotely accumulates local tallies when crossing the boundary of the local spatial domain. Initial results demonstrate that while the method does allow large problems to be run in a memory-limited environment, achieving scalability may be difficult due to inefficiencies in the current implementation of RMA operations.

  11. Applications of quantum Monte Carlo methods in condensed systems

    CERN Document Server

    Kolorenc, Jindrich

    2010-01-01

    The quantum Monte Carlo methods represent a powerful and broadly applicable computational tool for finding very accurate solutions of the stationary Schroedinger equation for atoms, molecules, solids and a variety of model systems. The algorithms are intrinsically parallel and are able to take full advantage of the present-day high-performance computing systems. This review article concentrates on the fixed-node/fixed-phase diffusion Monte Carlo method with emphasis on its applications to electronic structure of solids and other extended many-particle systems.

  12. Analysis of the dead layer of an ultrapure germanium detector with the Monte Carlo code SWORD-GEANT; Analisis del dead layer de un detector de germanio ultrapuro con el codigo de Monte Carlo SWORD-GEANT

    Energy Technology Data Exchange (ETDEWEB)

    Gallardo, S.; Querol, A.; Ortiz, J.; Rodenas, J.; Verdu, G.

    2014-07-01

    In this paper, the use of the Monte Carlo code SWORD-GEANT is proposed to simulate an ultra-pure High Purity Germanium (HPGe) detector, specifically an ORTEC GMX40P4 detector with coaxial geometry. (Author)

  13. EleCa: A Monte Carlo code for the propagation of extragalactic photons at ultra-high energy

    Energy Technology Data Exchange (ETDEWEB)

    Settimo, Mariangela [University of Siegen (Germany); De Domenico, Manlio [Laboratory of Complex Systems, Scuola Superiore di Catania and INFN (Italy); Lyberis, Haris [Federal University of Rio de Janeiro (Brazil)

    2013-06-15

    Ultra high energy photons, above 10{sup 17}–10{sup 18}eV, can interact with the extragalactic background radiation leading to the development of electromagnetic cascades. A Monte Carlo code to simulate the electromagnetic cascades initiated by high-energy photons and electrons is presented. Results from simulations and their impact on the predicted flux at Earth are discussed in different astrophysical scenarios.

  14. Meaningful timescales from Monte Carlo simulations of molecular systems

    CERN Document Server

    Costa, Liborio I

    2016-01-01

    A new Markov Chain Monte Carlo method for simulating the dynamics of molecular systems with atomistic detail is introduced. In contrast to traditional Kinetic Monte Carlo approaches, where the state of the system is associated with minima in the energy landscape, in the proposed method, the state of the system is associated with the set of paths traveled by the atoms and the transition probabilities for an atom to be displaced are proportional to the corresponding velocities. In this way, the number of possible state-to-state transitions is reduced to a discrete set, and a direct link between the Monte Carlo time step and true physical time is naturally established. The resulting rejection-free algorithm is validated against event-driven molecular dynamics: the equilibrium and non-equilibrium dynamics of hard disks converge to the exact results with decreasing displacement size.

  15. Efficiency of Monte Carlo sampling in chaotic systems.

    Science.gov (United States)

    Leitão, Jorge C; Lopes, J M Viana Parente; Altmann, Eduardo G

    2014-11-01

    In this paper we investigate how the complexity of chaotic phase spaces affects the efficiency of importance sampling Monte Carlo simulations. We focus on flat-histogram simulations of the distribution of the finite-time Lyapunov exponent in a simple chaotic system and obtain analytically that the computational effort (i) scales polynomially with the finite time, a tremendous improvement over the exponential scaling obtained in uniform sampling simulations; and (ii) that the polynomial scaling is suboptimal, a phenomenon known as critical slowing down. We show that critical slowing down appears because of the limited possibilities to issue a local proposal in the Monte Carlo procedure when it is applied to chaotic systems. These results show how generic properties of chaotic systems limit the efficiency of Monte Carlo simulations.

  16. Update on the Status of the FLUKA Monte Carlo Transport Code

    Science.gov (United States)

    Pinsky, L.; Anderson, V.; Empl, A.; Lee, K.; Smirnov, G.; Zapp, N; Ferrari, A.; Tsoulou, K.; Roesler, S.; Vlachoudis, V.; Battisoni, G.; Ceruti, F.; Gadioli, M. V.; Garzelli, M.; Muraro, S.; Rancati, T.; Sala, P.; Ballarini, R.; Ottolenghi, A.; Parini, V.; Scannicchio, D.; Pelliccioni, M.; Wilson, T. L.

    2004-01-01

    The FLUKA Monte Carlo transport code is a well-known simulation tool in High Energy Physics. FLUKA is a dynamic tool in the sense that it is being continually updated and improved by the authors. Here we review the progress achieved in the last year on the physics models. From the point of view of hadronic physics, most of the effort is still in the field of nucleus-nucleus interactions. The currently available version of FLUKA already includes the internal capability to simulate inelastic nuclear interactions beginning with lab kinetic energies of 100 MeV/A up to the highest accessible energies, by means of the DPMJET-II.5 event generator to handle the interactions above 5 GeV/A and rQMD for energies below that. The new developments concern, at high energy, the embedding of the DPMJET-III generator, which represents a major change with respect to the DPMJET-II structure. This will also allow a better consistency to be achieved between the nucleus-nucleus section and the original FLUKA model for hadron-nucleus collisions. Work is also in progress to implement a third event generator model based on the Master Boltzmann Equation approach, in order to extend the energy capability from 100 MeV/A down to the threshold for these reactions. In addition to these extended physics capabilities, structural changes to the program's input and scoring capabilities are continually being upgraded. In particular we want to mention the upgrades in the geometry packages, now capable of reaching higher levels of abstraction. Work is also proceeding to provide direct import into ROOT of the FLUKA output files for analysis and to deploy a user-friendly GUI input interface.

  17. Uncertainty analysis in the simulation of an HPGe detector using the Monte Carlo Code MCNP5

    Energy Technology Data Exchange (ETDEWEB)

    Gallardo, Sergio; Pozuelo, Fausto; Querol, Andrea; Verdu, Gumersindo; Rodenas, Jose, E-mail: sergalbe@upv.es [Universitat Politecnica de Valencia, Valencia, (Spain). Instituto de Seguridad Industrial, Radiofisica y Medioambiental (ISIRYM); Ortiz, J. [Universitat Politecnica de Valencia, Valencia, (Spain). Servicio de Radiaciones. Lab. de Radiactividad Ambiental; Pereira, Claubia [Universidade Federal de Minas Gerais (UFMG), Belo Horizonte, MG (Brazil). Departamento de Engenharia Nuclear

    2013-07-01

    A gamma spectrometer including an HPGe detector is commonly used for environmental radioactivity measurements. Many works have focused on the simulation of the HPGe detector using Monte Carlo codes such as MCNP5. However, the simulation of this kind of detector presents important difficulties due to the lack of information from manufacturers and due to the loss of intrinsic properties in aging detectors. Some parameters, such as the active volume or the Ge dead layer thickness, are often unknown and are estimated during simulations. In this work, a detailed model of an HPGe detector and a petri dish containing a certified gamma source has been developed. The certified gamma source contains nuclides covering the energy range between 50 and 1800 keV. As a result of the simulation, the Pulse Height Distribution (PHD) is obtained and the efficiency curve can be calculated from net peak areas, taking into account the certified activity of the source. In order to avoid errors in the net area calculation, the simulated PHD is treated using the GammaVision software. In addition, it is proposed to use the Noether-Wilks formula to carry out an uncertainty analysis of the model, with the main goal of determining the efficiency curve of this detector and its associated uncertainty. The uncertainty analysis has been focused on the dead layer thickness at different positions of the crystal. Results confirm the important role of the dead layer thickness in the low energy range of the efficiency curve. In the high energy range (from 300 to 1800 keV) the main contribution to the absolute uncertainty is due to variations in the active volume. (author)
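
    For reference, the full-energy-peak efficiency that such a curve is built from is simply the net peak area divided by the number of photons emitted at that energy during the counting time. The sketch below applies this relation to a few made-up peaks; the energies, activities, emission probabilities and net areas are placeholders, not data from the work described above.

      import math

      live_time_s = 3600.0          # counting live time (assumed)

      # Hypothetical peaks: energy (keV), net peak area (counts),
      # certified activity at measurement date (Bq), gamma emission probability.
      peaks = [
          (  59.5, 12000.0, 2000.0, 0.359),
          ( 661.7,  9000.0, 1500.0, 0.851),
          (1332.5,  3000.0, 1000.0, 0.9998),
      ]

      def fep_efficiency(net_area, activity_bq, p_gamma, t_s):
          """Full-energy-peak efficiency = detected counts / emitted photons."""
          emitted = activity_bq * p_gamma * t_s
          return net_area / emitted

      for energy, area, activity, p_gamma in peaks:
          eff = fep_efficiency(area, activity, p_gamma, live_time_s)
          # Counting-statistics contribution to the relative uncertainty (sqrt(N)/N):
          rel_unc = 1.0 / math.sqrt(area)
          print(f"{energy:7.1f} keV  efficiency = {eff:.3e}  (+/- {100 * rel_unc:.1f}% stat.)")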

  18. Validation of a personalized dosimetric evaluation tool (Oedipe) for targeted radiotherapy based on the Monte Carlo MCNPX code.

    Science.gov (United States)

    Chiavassa, S; Aubineau-Lanièce, I; Bitar, A; Lisbona, A; Barbet, J; Franck, D; Jourdain, J R; Bardiès, M

    2006-02-07

    Dosimetric studies are necessary for all patients treated with targeted radiotherapy. In order to attain the precision required, we have developed Oedipe, a dosimetric tool based on the MCNPX Monte Carlo code. The anatomy of each patient is considered in the form of a voxel-based geometry created using computed tomography (CT) images or magnetic resonance imaging (MRI). Oedipe enables dosimetry studies to be carried out at the voxel scale. Validation of the results obtained by comparison with existing methods is complex because there are multiple sources of variation: calculation methods (different Monte Carlo codes, point kernel), patient representations (model or specific) and geometry definitions (mathematical or voxel-based). In this paper, we validate Oedipe by taking each of these parameters into account independently. Monte Carlo methodology requires long calculation times, particularly in the case of voxel-based geometries, and this is one of the limits of personalized dosimetric methods. However, our results show that the use of voxel-based geometry as opposed to a mathematically defined geometry decreases the calculation time two-fold, due to an optimization of the MCNPX2.5e code. It is therefore possible to envisage the use of Oedipe for personalized dosimetry in the clinical context of targeted radiotherapy.

  19. Determining MTF of digital detector system with Monte Carlo simulation

    Science.gov (United States)

    Jeong, Eun Seon; Lee, Hyung Won; Nam, Sang Hee

    2005-04-01

    We have designed a detector based on a-Se (amorphous selenium) and simulated the detector with the Monte Carlo method. We will apply the cascaded linear system theory to determine the MTF of the whole detector system. For direct comparison with experiment, we have simulated a 139 um pixel pitch and used a simulated X-ray tube spectrum.
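
    A common way to obtain the MTF in such simulations is to score a line spread function (LSF) across the pixel array and take the modulus of its Fourier transform, normalized to the zero-frequency value. The sketch below does this for a synthetic Gaussian LSF sampled at a 139 um pitch; the LSF itself is a placeholder standing in for simulated detector data, not output from the work described above.

      import numpy as np

      pitch_mm = 0.139                        # sampling pitch (139 um)
      x = (np.arange(128) - 64) * pitch_mm    # positions across the detector (mm)

      # Placeholder line spread function: a Gaussian blur standing in for the
      # simulated energy-deposition profile of a slit exposure.
      sigma_mm = 0.10
      lsf = np.exp(-0.5 * (x / sigma_mm) ** 2)

      # Presampled MTF = |FFT(LSF)| normalized to its zero-frequency value.
      mtf = np.abs(np.fft.rfft(lsf))
      mtf /= mtf[0]
      freq = np.fft.rfftfreq(x.size, d=pitch_mm)   # spatial frequency in cycles/mm

      for f, m in list(zip(freq, mtf))[::8][:8]:
          print(f"{f:5.2f} cycles/mm : MTF = {m:.3f}")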

  20. Multi-microcomputer system for Monte-Carlo calculations

    CERN Document Server

    Berg, B; Krasemann, H

    1981-01-01

    The authors propose a microcomputer system that allows parallel processing for Monte Carlo calculations in lattice gauge theories, simulations of high energy physics experiments and many other fields of current interest. The master-n-slave multiprocessor system is based on the Motorola MC 68000 microprocessor. One attraction of this processor is that it allows up to 16 MByte of random access memory.

  1. Monte Carlo analysis of the accelerator-driven system at Kyoto University Research Reactor Institute

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Won Kyeong; Lee, Deok Jung [Nuclear Engineering Division, Ulsan National Institute of Science and Technology, Ulsan (Korea, Republic of); Lee, Hyun Chul [VHTR Technology Development Division, Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Pyeon, Cheol Ho [Nuclear Engineering Science Division, Kyoto University Research Reactor Institute, Osaka (Japan); Shin, Ho Cheol [Core and Fuel Analysis Group, Korea Hydro and Nuclear Power Central Research Institute, Daejeon (Korea, Republic of)

    2016-04-15

    An accelerator-driven system consists of a subcritical reactor and a controllable external neutron source. The reactor in an accelerator-driven system can sustain fission reactions in a subcritical state using an external neutron source, which is an intrinsic safety feature of the system. The system can provide efficient transmutations of nuclear wastes such as minor actinides and long-lived fission products and generate electricity. Recently at Kyoto University Research Reactor Institute (KURRI; Kyoto, Japan), a series of reactor physics experiments was conducted with the Kyoto University Critical Assembly and a Cockcroft-Walton type accelerator, which generates the external neutron source by deuterium-tritium reactions. In this paper, neutronic analyses of a series of experiments have been re-estimated by using the latest Monte Carlo code and nuclear data libraries. This feasibility study is presented through the comparison of Monte Carlo simulation results with measurements.

  2. Monte Carlo Analysis of the Accelerator-Driven System at Kyoto University Research Reactor Institute

    Directory of Open Access Journals (Sweden)

    Wonkyeong Kim

    2016-04-01

    Full Text Available An accelerator-driven system consists of a subcritical reactor and a controllable external neutron source. The reactor in an accelerator-driven system can sustain fission reactions in a subcritical state using an external neutron source, which is an intrinsic safety feature of the system. The system can provide efficient transmutations of nuclear wastes such as minor actinides and long-lived fission products and generate electricity. Recently at Kyoto University Research Reactor Institute (KURRI; Kyoto, Japan, a series of reactor physics experiments was conducted with the Kyoto University Critical Assembly and a Cockcroft–Walton type accelerator, which generates the external neutron source by deuterium–tritium reactions. In this paper, neutronic analyses of a series of experiments have been re-estimated by using the latest Monte Carlo code and nuclear data libraries. This feasibility study is presented through the comparison of Monte Carlo simulation results with measurements.

  3. Monte Carlo Simulation of three dimensional Edwards Anderson model with multi-spin coding and parallel tempering using MPI and CUDA

    Science.gov (United States)

    Feng, Sheng; Fang, Ye; Tam, Ka-Ming; Thakur, Bhupender; Yun, Zhifeng; Tomko, Karen; Moreno, Juana; Ramanujam, Jagannathan; Jarrell, Mark

    2013-03-01

    The Edwards-Anderson model is a typical example of a random frustrated system. It has been a long-standing problem in computational physics due to its long relaxation time. Some important properties of the low-temperature spin glass phase are still poorly understood after decades of study. Recent advances in GPU computing provide a new opportunity to substantially improve the simulations. We developed an MPI-CUDA hybrid code with multi-spin coding for parallel tempering Monte Carlo simulation of the Edwards-Anderson model. Since the system size is relatively small, and a large number of parallel replicas and Monte Carlo moves are required, the problem is well suited to modern GPUs with the CUDA architecture. We use the code to perform an extensive simulation of the three-dimensional Edwards-Anderson model with an external field. This work is funded by the NSF EPSCoR LA-SiGMA project under award number EPS-1003897. This work is partly done on the machines of the Ohio Supercomputer Center.
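
    To make the parallel tempering step concrete, the sketch below runs a few replicas of a small 3D Edwards-Anderson model at different temperatures and proposes replica exchanges with the standard Metropolis swap probability min(1, exp((beta_i - beta_j)(E_i - E_j))). It is a plain serial illustration of the method with arbitrary lattice size and temperatures; it contains no multi-spin coding, MPI, or CUDA.

      import numpy as np

      rng = np.random.default_rng(0)
      L = 4                                          # 4x4x4 lattice
      betas = np.linspace(0.2, 1.2, 8)               # inverse temperatures of the replicas

      # Random +/-1 couplings along the three lattice directions (same disorder for all replicas).
      J = [rng.choice([-1, 1], size=(L, L, L)) for _ in range(3)]

      def energy(s):
          """Edwards-Anderson energy with periodic boundaries: -sum_<ij> J_ij s_i s_j."""
          e = 0.0
          for axis in range(3):
              e -= np.sum(J[axis] * s * np.roll(s, -1, axis=axis))
          return e

      def sweep(s, beta):
          """One Metropolis sweep of single-spin flips (simple, slow energy differences)."""
          for _ in range(s.size):
              i, j, k = rng.integers(L, size=3)
              s_trial = s.copy()
              s_trial[i, j, k] *= -1
              dE = energy(s_trial) - energy(s)
              if dE <= 0 or rng.random() < np.exp(-beta * dE):
                  s[i, j, k] *= -1

      replicas = [rng.choice([-1, 1], size=(L, L, L)) for _ in betas]
      for step in range(50):
          for s, beta in zip(replicas, betas):
              sweep(s, beta)
          # Parallel tempering: attempt to swap neighbouring-temperature replicas.
          for r in range(len(betas) - 1):
              arg = (betas[r + 1] - betas[r]) * (energy(replicas[r + 1]) - energy(replicas[r]))
              if arg >= 0 or rng.random() < np.exp(arg):
                  replicas[r], replicas[r + 1] = replicas[r + 1], replicas[r]

      print([energy(s) for s in replicas])           # energies roughly ordered with temperature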

  4. Applicability of Quasi-Monte Carlo for lattice systems

    CERN Document Server

    Ammon, Andreas; Jansen, Karl; Leovey, Hernan; Griewank, Andreas; Müller-Preussker, Michael

    2013-01-01

    This project investigates the applicability of quasi-Monte Carlo methods to Euclidean lattice systems in order to improve the asymptotic error scaling of observables for such theories. The error of an observable calculated by averaging over random observations generated from ordinary Monte Carlo simulations scales like $N^{-1/2}$, where $N$ is the number of observations. By means of quasi-Monte Carlo methods it is possible to improve this scaling for certain problems to $N^{-1}$, or even further if the problems are regular enough. We adapted and applied this approach to simple systems like the quantum harmonic and anharmonic oscillator and verified an improved error scaling of all investigated observables in both cases.
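
    The error-scaling claim is easy to reproduce on a toy integral. The sketch below compares plain Monte Carlo with a scrambled Sobol sequence (via SciPy's scipy.stats.qmc module, assumed available) on a smooth one-dimensional Gaussian expectation value; the observed errors fall roughly like N^(-1/2) for Monte Carlo and close to N^(-1) for the quasi-Monte Carlo points. It is only an illustration of the scaling, not the lattice formulation used in the work above.

      import numpy as np
      from scipy.stats import qmc, norm

      # Toy observable: E[x^2] for x ~ N(0, 1), whose exact value is 1.
      def observable(u):
          """Map uniform samples u in (0,1) to x^2 with x standard normal."""
          u = np.clip(u, 1e-12, 1.0 - 1e-12)   # guard against exact 0 or 1
          x = norm.ppf(u)
          return x ** 2

      rng = np.random.default_rng(0)
      exact = 1.0

      print(f"{'N':>8} {'MC error':>12} {'QMC error':>12}")
      for m in range(8, 15):                   # N = 256 ... 16384 (powers of 2 for Sobol)
          n = 2 ** m
          # Plain Monte Carlo estimate.
          mc_est = observable(rng.random((n, 1))).mean()
          # Scrambled Sobol (randomized quasi-Monte Carlo) estimate.
          sobol = qmc.Sobol(d=1, scramble=True, seed=m)
          qmc_est = observable(sobol.random(n)).mean()
          print(f"{n:8d} {abs(mc_est - exact):12.2e} {abs(qmc_est - exact):12.2e}")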

  5. Implementation of Monte Carlo Simulations for the Gamma Knife System

    Energy Technology Data Exchange (ETDEWEB)

    Xiong, W [Memorial Sloan-Kettering Cancer Center/Mercy Medical Center, 1000 N Village Ave., Rockville Centre, NY 11570 (United States); Huang, D [Memorial Sloan-Kettering Cancer Center/Mercy Medical Center, 1000 N Village Ave., Rockville Centre, NY 11570 (United States); Lee, L [Memorial Sloan-Kettering Cancer Center/Mercy Medical Center, 1000 N Village Ave., Rockville Centre, NY 11570 (United States); Feng, J [Memorial Sloan-Kettering Cancer Center/Mercy Medical Center, 1000 N Village Ave., Rockville Centre, NY 11570 (United States); Morris, K [Memorial Sloan-Kettering Cancer Center/Mercy Medical Center, 1000 N Village Ave., Rockville Centre, NY 11570 (United States); Calugaru, E [Memorial Sloan-Kettering Cancer Center/Mercy Medical Center, 1000 N Village Ave., Rockville Centre, NY 11570 (United States); Burman, C [Memorial Sloan-Kettering Cancer Center/Mercy Medical Center, 1000 N Village Ave., Rockville Centre, NY 11570 (United States); Li, J [Fox Chase Cancer Center, 333 Cottman Ave., Philadelphia, PA 17111 (United States); Ma, C-M [Fox Chase Cancer Center, 333 Cottman Ave., Philadelphia, PA 17111 (United States)

    2007-06-15

    Currently the Gamma Knife system is accompanied by a treatment planning system, Leksell GammaPlan (LGP), which is a standard, computer-based treatment planning system for Gamma Knife radiosurgery. In LGP, the dose calculation algorithm does not consider the scatter dose contributions and the inhomogeneity effect due to the skull and air cavities. To improve the dose calculation accuracy, Monte Carlo simulations have been implemented for the Gamma Knife planning system. In this work, the 201 Cobalt-60 sources in the Gamma Knife unit are considered to have the same activity. Each Cobalt-60 source is contained in a cylindrical stainless steel capsule. The particle phase space information is stored in four beam data files, which are collected on the inner sides of the 4 treatment helmets, after the Cobalt beam passes through the stationary and helmet collimators. Patient geometries are rebuilt from patient CT data. Twenty-two patients are included in the Monte Carlo simulation for this study. The dose is calculated using Monte Carlo in both homogeneous and inhomogeneous geometries with identical beam parameters. To investigate the attenuation effect of the skull bone, the dose in a 16 cm diameter spherical QA phantom was measured with and without a 1.5 mm lead covering and also simulated using Monte Carlo. The dose ratios with and without the 1.5 mm lead covering are 89.8% based on measurements and 89.2% according to Monte Carlo for an 18 mm collimator helmet. For patient geometries, the Monte Carlo results show that although the relative isodose lines remain almost the same with and without inhomogeneity corrections, the difference in the absolute dose is clinically significant. The average inhomogeneity correction is (3.9 {+-} 0.90)% for the 22 patients investigated. These results suggest that the inhomogeneity effect should be considered in the dose calculation for Gamma Knife treatment planning.

  6. Thyroid cell irradiation by radioiodines: a new Monte Carlo electron track-structure code

    Directory of Open Access Journals (Sweden)

    Christophe Champion

    2007-09-01

    The most significant impact of the Chernobyl accident is the increased incidence of thyroid cancer among children who were exposed to short-lived radioiodines and iodine-131. In order to accurately estimate the radiation dose delivered by these radioiodines, it is necessary to know where the iodine is incorporated. To do that, the distribution at the cellular level of newly organified iodine in the immature rat thyroid was determined using secondary ion mass microscopy (NanoSIMS50). Current dosimetric models take into account only the average energy and range of the beta particles of the radioelements and may therefore imperfectly describe the real distribution of the deposited dose at the microscopic level around the point sources. Our approach is radically different, since it is based on a track-structure Monte Carlo code that follows electrons down to low energies (~10 eV), which permits a nanometric description of the irradiation physics. The numerical simulations were then performed by modelling the complete disintegrations of the short-lived iodine isotopes, as well as of 131I, in newborn rat thyroids, in order to take into account accurate histological and biological data for the thyroid gland.

  7. Monte Carlo simulation of a multi-leaf collimator design for telecobalt machine using BEAMnrc code

    Directory of Open Access Journals (Sweden)

    Ayyangar Komanduri

    2010-01-01

    This investigation aims to design a practical multi-leaf collimator (MLC) system for the cobalt teletherapy machine and check its radiation properties using the Monte Carlo (MC) method. The cobalt machine was modeled using the BEAMnrc Omega-Beam MC system, which can be freely downloaded from the website of the National Research Council (NRC), Canada. Comparison with standard depth dose data tables and the theoretically modeled beam showed good agreement, within 2%. An MLC design with a low melting point alloy (LMPA) was tested for leaf leakage properties. The LMPA leaves, 7 mm wide and 6 cm high, with a tongue and groove 2 mm wide and 4 cm high, produced only 4% extra leakage compared to 10-cm-high tungsten leaves. With the finite 60Co source size, the interleaf leakage was insignificant. This analysis helped to design a prototype MLC as an accessory mount on a cobalt machine. The complete details of the simulation process and the analysis of the results are discussed.

  8. FitSKIRT: genetic algorithms to automatically fit dusty galaxies with a Monte Carlo radiative transfer code

    CERN Document Server

    De Geyter, Gert; Fritz, Jacopo; Camps, Peter

    2012-01-01

    We present FitSKIRT, a method to efficiently fit radiative transfer models to UV/optical images of dusty galaxies. These images have the advantage that they have better spatial resolution compared to FIR/submm data. FitSKIRT uses the GAlib genetic algorithm library to optimize the output of the SKIRT Monte Carlo radiative transfer code. Genetic algorithms prove to be a valuable tool in handling the multi-dimensional search space as well as the noise induced by the random nature of the Monte Carlo radiative transfer code. FitSKIRT is tested on artificial images of a simulated edge-on spiral galaxy, where we gradually increase the number of fitted parameters. We find that we can recover all model parameters, even if all 11 model parameters are left unconstrained. Finally, we apply the FitSKIRT code to a V-band image of the edge-on spiral galaxy NGC4013. This galaxy has been modeled previously by other authors using different combinations of radiative transfer codes and optimization methods. Given the different...

  9. Effective quantum Monte Carlo algorithm for modeling strongly correlated systems

    NARCIS (Netherlands)

    Kashurnikov, V. A.; Krasavin, A. V.

    2007-01-01

    A new effective Monte Carlo algorithm based on principles of continuous time is presented. It allows calculating, in an arbitrary discrete basis, thermodynamic quantities and linear response of mixed boson-fermion, spin-boson, and other strongly correlated systems which admit no analytic description

  10. On the Performance of Synchronous DS—CDMA Systems with Generalized Orthogonal Spreading Codes

    Institute of Scientific and Technical Information of China (English)

    HAO Li; FAN Pingzhi

    2003-01-01

    A new synchronous DS-CDMA system employing generalized orthogonal (GO) spreading codes and a maximum ratio combining (MRC) scheme is presented in this paper. In particular, the forward link of the system is discussed in detail. The GO codes are used to combat the interference caused by multipath components. The average correlation properties of GO codes are evaluated, and the signal-to-interference ratio (SIR) expressions based on the Rayleigh and Rician fading multipath channel models are derived, respectively. The link performance in terms of bit error rate (BER) is obtained for GO codes with different orthogonal zones by Gaussian approximation and Monte Carlo simulation, respectively. The results reveal that the GO codes show better BER performance than traditional orthogonal codes in synchronous CDMA systems, and the GO code with the larger orthogonal zone exhibits a larger performance gain.

  11. Calculation of electron and isotopes dose point kernels with FLUKA Monte Carlo code for dosimetry in nuclear medicine therapy

    CERN Document Server

    Mairani, A; Valente, M; Battistoni, G; Botta, F; Pedroli, G; Ferrari, A; Cremonesi, M; Di Dia, A; Ferrari, M; Fasso, A

    2011-01-01

    Purpose: The calculation of patient-specific dose distributions can be achieved by Monte Carlo simulations or by analytical methods. In this study, the FLUKA Monte Carlo code has been considered for use in nuclear medicine dosimetry. Up to now, FLUKA has mainly been dedicated to other fields, namely high energy physics, radiation protection, and hadrontherapy. When first employing a Monte Carlo code for nuclear medicine dosimetry, its results concerning electron transport at energies typical of nuclear medicine applications need to be verified. This is commonly achieved by means of the calculation of a representative parameter and comparison with reference data. The dose point kernel (DPK), which quantifies the energy deposition all around a point isotropic source, is often the parameter of choice. Methods: FLUKA DPKs have been calculated in both water and compact bone for monoenergetic electrons (10-3 MeV) and for beta-emitting isotopes commonly used for therapy ((89)Sr, (90)Y, (131)I, (153)Sm, (177)Lu, (186)Re, and (188)Re). Point isotropic...
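
    To make the DPK definition above concrete, here is a minimal, hedged sketch of how a dose point kernel can be scored: energy deposits around a point isotropic source are binned into concentric spherical shells and divided by the shell masses. The synthetic event list, shell binning and water density are assumptions for illustration only, not FLUKA output.

```python
# Minimal sketch (synthetic deposition events stand in for transport-code
# output; water density assumed): builds a dose point kernel by binning energy
# deposits into concentric spherical shells around a point isotropic source
# and dividing by each shell's mass.
import numpy as np

rho = 1.0e-3                       # water density in kg/cm^3
edges = np.linspace(0.0, 0.5, 51)  # shell radii in cm

# Placeholder event list: (radius of deposit [cm], deposited energy [MeV]).
rng = np.random.default_rng(1)
radii = rng.exponential(scale=0.1, size=100_000)
edep = rng.uniform(0.005, 0.02, size=radii.size)

energy_per_shell, _ = np.histogram(radii, bins=edges, weights=edep)
shell_volumes = 4.0 / 3.0 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)  # cm^3
dose_kernel = energy_per_shell * 1.602e-13 / (shell_volumes * rho)      # Gy (per history, if edep is per history)

for r, d in zip(0.5 * (edges[:-1] + edges[1:])[:5], dose_kernel[:5]):
    print(f"r = {r:.3f} cm   dose = {d:.3e} Gy")
```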

  12. Nexus: A modular workflow management system for quantum simulation codes

    Science.gov (United States)

    Krogel, Jaron T.

    2016-01-01

    The management of simulation workflows represents a significant task for the individual computational researcher. Automation of the required tasks involved in simulation work can decrease the overall time to solution and reduce sources of human error. A new simulation workflow management system, Nexus, is presented to address these issues. Nexus is capable of automated job management on workstations and resources at several major supercomputing centers. Its modular design allows many quantum simulation codes to be supported within the same framework. Current support includes quantum Monte Carlo calculations with QMCPACK, density functional theory calculations with Quantum Espresso or VASP, and quantum chemical calculations with GAMESS. Users can compose workflows through a transparent, text-based interface, resembling the input file of a typical simulation code. A usage example is provided to illustrate the process.

  13. Burnup simulations of different fuel grades using the MCNPX Monte Carlo code

    Directory of Open Access Journals (Sweden)

    Asah-Opoku Fiifi

    2014-01-01

    Global energy problems range from the increasing cost of fuel to the unequal distribution of energy resources and the potential climate change resulting from the burning of fossil fuels. Sustainable nuclear energy would augment the current world energy supply and serve as a reliable future energy source. This research focuses on Monte Carlo simulations of pressurized water reactor systems. Three different fuel grades - mixed oxide fuel (MOX), uranium oxide fuel (UOX), and commercially enriched uranium or uranium metal (CEU) - are used in this simulation, and their impact on the effective multiplication factor (Keff) and, hence, criticality, as well as on the total radioactivity of the reactor core after fuel burnup, is analyzed. The effect of different clad materials on Keff is also studied. Burnup calculation results indicate a buildup of plutonium isotopes in UOX and CEU, as opposed to a decline in plutonium radioisotopes over the MOX fuel burnup time. For MOX fuel, a decrease of 31.9% in the fissile plutonium isotope is observed, while for UOX and CEU, fissile plutonium isotopes increased by 82.3% and 83.8%, respectively. Keff results show zircaloy to be a much more effective clad material in comparison with zirconium and stainless steel.
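
    The buildup-versus-decline behaviour reported above can be illustrated with a deliberately simplified one-group point-depletion toy model; all cross sections, the flux and the initial inventories below are assumed round numbers and do not come from the MCNPX study.

```python
# Illustrative sketch only (not MCNPX): a one-group point-depletion toy model
# showing why fissile Pu builds up in initially uranium-only fuel (UOX/CEU)
# but declines in MOX, where Pu-239 destruction outpaces its production from
# U-238 capture. Cross sections, flux and inventories are assumed values.
import numpy as np

BARN = 1.0e-24                  # cm^2
sigma_c_u238 = 2.7 * BARN       # assumed one-group capture cross section
sigma_a_pu239 = 1000.0 * BARN   # assumed one-group absorption cross section
phi = 3.0e13                    # assumed flux, n / (cm^2 s)
dt = 24 * 3600.0                # one-day step
steps = 3 * 365                 # roughly three years of burnup

def deplete(n_u238, n_pu239):
    history = [n_pu239]
    for _ in range(steps):
        prod = sigma_c_u238 * phi * n_u238      # U-238 capture feeding Pu-239
        loss = sigma_a_pu239 * phi * n_pu239    # Pu-239 fission + capture
        n_u238 += -sigma_c_u238 * phi * n_u238 * dt
        n_pu239 += (prod - loss) * dt
        history.append(n_pu239)
    return np.array(history)

uox = deplete(n_u238=2.2e22, n_pu239=0.0)     # uranium-only start
mox = deplete(n_u238=2.1e22, n_pu239=1.5e21)  # Pu-bearing start
print("UOX-like fuel: Pu-239 number density goes from 0 to "
      f"{uox[-1]:.2e} atoms/cm^3 (net buildup)")
change = 100.0 * (mox[-1] - mox[0]) / mox[0]
print(f"MOX-like fuel: Pu-239 inventory changes by {change:+.1f} % (net decline)")
```

    The qualitative trend matches the record: Pu-239 grows toward an equilibrium level in initially uranium-only fuel, while a Pu-rich MOX loading is net-consumed.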

  14. Optimization of GATE and PHITS Monte Carlo code parameters for spot scanning proton beam based on simulation with FLUKA general-purpose code

    Energy Technology Data Exchange (ETDEWEB)

    Kurosu, Keita [Department of Radiation Oncology, Indiana University School of Medicine, Indianapolis, IN 46202 (United States); Department of Radiation Oncology, Osaka University Graduate School of Medicine, Suita, Osaka 565-0871 (Japan); Department of Radiology, Osaka University Hospital, Suita, Osaka 565-0871 (Japan); Das, Indra J. [Department of Radiation Oncology, Indiana University School of Medicine, Indianapolis, IN 46202 (United States); Moskvin, Vadim P. [Department of Radiation Oncology, Indiana University School of Medicine, Indianapolis, IN 46202 (United States); Department of Radiation Oncology, St. Jude Children’s Research Hospital, Memphis, TN 38105 (United States)

    2016-01-15

    Spot scanning, owing to its superior dose-shaping capability, provides unsurpassed dose conformity, in particular for complex targets. However, the robustness of the delivered dose distribution and prescription has to be verified. Monte Carlo (MC) simulation has the potential to generate significant advantages for high-precision particle therapy, especially for media containing inhomogeneities. However, the choice of computational parameters in the GATE, PHITS and FLUKA MC simulation codes, previously established for a uniform scanning proton beam, needs to be evaluated for spot scanning; that is, the relationship between the input parameters and the calculation results should be carefully scrutinized. The objective of this study was, therefore, to determine the optimal parameters for the spot scanning proton beam for both the GATE and PHITS codes, using data from a FLUKA simulation as a reference. The proton beam scanning system of the Indiana University Health Proton Therapy Center was modeled in FLUKA, and the geometry was subsequently and identically transferred to GATE and PHITS. Although the beam transport is managed by the spot scanning system, the spot location is always set at the center of a water phantom of 600 × 600 × 300 mm{sup 3}, which is placed after the treatment nozzle. The percentage depth dose (PDD) is computed along the central axis using 0.5 × 0.5 × 0.5 mm{sup 3} voxels in the water phantom. The PDDs and the proton ranges obtained with several computational parameters are then compared to those of FLUKA, and the optimal parameters are determined from the accuracy of the proton range, the suppression of dose deviations, and the minimization of computational time. Our results indicate that the optimized parameters are different from those for uniform scanning, suggesting that a gold standard for setting computational parameters for any proton therapy application cannot be determined consistently, since the impact of the parameter settings depends on the proton irradiation
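
    Since the comparison above hinges on PDDs and proton ranges, the sketch below shows one common way to extract a range metric (the distal depth at 80% of the maximum dose) from a depth-dose curve and to compare two codes against a reference; the analytic depth-dose shapes are placeholders, not data from the study.

```python
# Minimal sketch (synthetic depth-dose arrays stand in for GATE/PHITS/FLUKA
# output): extracts the distal R80 range from a percentage depth dose curve by
# linear interpolation, the kind of metric used when tuning transport
# parameters against a reference code.
import numpy as np

def distal_r80(depth_cm, dose):
    """Depth beyond the Bragg peak at which the dose falls to 80 % of its maximum."""
    dose = np.asarray(dose, dtype=float) / np.max(dose)
    i_peak = int(np.argmax(dose))
    distal_depth, distal_dose = depth_cm[i_peak:], dose[i_peak:]
    # np.interp needs increasing x, so interpolate on the falling edge reversed.
    return float(np.interp(0.8, distal_dose[::-1], distal_depth[::-1]))

depth = np.linspace(0.0, 20.0, 401)
reference = 0.3 * np.exp(-0.02 * depth) + np.exp(-((depth - 15.0) / 0.6) ** 2)  # stand-in "reference" PDD
candidate = 0.3 * np.exp(-0.02 * depth) + np.exp(-((depth - 15.1) / 0.6) ** 2)  # stand-in "candidate" PDD

print(f"reference R80 = {distal_r80(depth, reference):.2f} cm")
print(f"candidate R80 = {distal_r80(depth, candidate):.2f} cm")
print(f"range difference = {abs(distal_r80(depth, candidate) - distal_r80(depth, reference)) * 10:.1f} mm")
```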

  15. Application of the PENELOPE Monte Carlo code to diverse dosimetric problems in radiotherapy; Aplicacion del codigo Monte Carlo Penelope a diversos problemas dosimetricos en radioterapia

    Energy Technology Data Exchange (ETDEWEB)

    Sanchez, R.A.; Fernandez V, J.M.; Salvat, F. [Servicio de Oncologia Radioterapica. Hospital Clinico de Barcelona. Villarroel 170 08036 Barcelona (Spain)

    1998-12-31

    This communication presents the results of simulations performed with the PENELOPE code (Penetration and Energy Loss of Positrons and Electrons) for several radiotherapy applications, such as the simulation of radioactive sources ({sup 192}Ir, {sup 125}I, {sup 106}Ru) and the simulation of the electron beams of a Siemens KDS linear accelerator. The simulations presented in this communication were run on Pentium PCs of 100 to 300 MHz, with execution times ranging from a few hours to several days depending on the complexity of the problem. It is concluded that PENELOPE is a very useful tool for Monte Carlo calculations owing to its versatility and relative ease of use. (Author)

  16. System Level Numerical Analysis of a Monte Carlo Simulation of the E. Coli Chemotaxis

    CERN Document Server

    Siettos, Constantinos I

    2010-01-01

    Over the past few years it has been demonstrated that "coarse timesteppers" establish a link between traditional numerical analysis and microscopic/stochastic simulation. The underlying assumption of the associated lift-run-restrict-estimate procedure is that macroscopic models exist and close in terms of a few governing moments of microscopically evolving distributions, but they are unavailable in closed form. This leads to a system-identification-based computational approach that sidesteps the necessity of deriving explicit closures. Two-level codes are constructed; the outer code performs macroscopic, continuum-level numerical tasks, while the inner code estimates, through appropriately initialized bursts of microscopic simulation, the quantities required for continuum numerics. Such quantities include residuals, time derivatives, and the action of coarse slow Jacobians. We demonstrate how these coarse timesteppers can be applied to perform equation-free computations of a kinetic Monte Carlo simulation of...
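
    The lift-run-restrict-estimate loop described above can be made concrete with a toy example; the sketch below applies a coarse projective Euler step to an ensemble of Ornstein-Uhlenbeck walkers, whose mean decays like exp(-t), standing in for the microscopic simulator. This is an illustration of the generic procedure under those assumptions, not the E. coli chemotaxis model of the record.

```python
# Sketch of the lift-run-restrict-estimate idea behind coarse timesteppers:
# the inner "microscopic" code is an ensemble of Ornstein-Uhlenbeck walkers,
# the coarse variable is their mean, and the outer code takes projective Euler
# steps using time derivatives estimated from short microscopic bursts.
import numpy as np

rng = np.random.default_rng(2)
N, sigma = 20_000, 0.5
dt_micro, burst_steps = 1e-3, 50        # inner (microscopic) time stepping
dt_coarse = 0.1                         # outer projective step

def lift(mean, n):
    """Create a particle ensemble consistent with the coarse variable."""
    return mean + sigma / np.sqrt(2.0) * rng.standard_normal(n)

def run_micro(x, steps):
    """Short burst of the microscopic simulator: dX = -X dt + sigma dW."""
    for _ in range(steps):
        x = x - x * dt_micro + sigma * np.sqrt(dt_micro) * rng.standard_normal(x.size)
    return x

mean, t = 1.0, 0.0
for _ in range(10):
    x = lift(mean, N)                                    # lift
    x = run_micro(x, burst_steps)                        # run
    m_burst = x.mean()                                   # restrict
    dmdt = (m_burst - mean) / (burst_steps * dt_micro)   # estimate coarse time derivative
    mean = m_burst + dmdt * (dt_coarse - burst_steps * dt_micro)  # projective step
    t += dt_coarse
    print(f"t = {t:.1f}   coarse mean = {mean:.3f}   exact exp(-t) = {np.exp(-t):.3f}")
```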

  17. Kinetic Monte Carlo simulation of dopant-defect systems under submicrosecond laser thermal processes

    Energy Technology Data Exchange (ETDEWEB)

    Fisicaro, G.; Pelaz, Lourdes; Lopez, P.; Italia, M.; Huet, K.; Venturini, J.; La Magna, A. [CNR IMM, Z.I. VIII Strada 5, I -95121 Catania (Italy); Department of Electronics, University of Valladolid, 47011 Valladolid (Spain); CNR IMM, Z.I. VIII Strada 5, I -95121 Catania (Italy); Excico 13-21 Quai des Gresillons, 92230 Gennevilliers (France); CNR IMM, Z.I. VIII Strada 5, I -95121 Catania (Italy)

    2012-11-06

    An innovative kinetic Monte Carlo (KMC) code has been developed, which governs the post-implant kinetics of the defect system under the extremely far-from-equilibrium conditions caused by laser irradiation close to the liquid-solid interface. It considers defect diffusion, annihilation and clustering. The code properly implements, consistently with the stochastic formalism, the fast-varying local event rates related to the evolution of the thermal field T(r,t). This feature of our numerical method represents an important advancement with respect to current state-of-the-art KMC codes. The reduction of the implantation damage and its reorganization into defect aggregates are studied as a function of the process conditions. The phosphorus activation efficiency, experimentally determined under similar conditions, has been related to the emerging damage scenario.
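
    As a hedged illustration of the kind of residence-time KMC step the record refers to, the sketch below selects events with Arrhenius rates evaluated from a time-dependent temperature; the attempt frequency, barriers and cooling law are invented placeholders, and treating the rates as constant within each step is a simplification rather than the consistent time-dependent scheme described above.

```python
# Minimal sketch (assumed Arrhenius prefactor, barriers and cooling law; not
# the CNR IMM code): a residence-time kinetic Monte Carlo step in which event
# rates are re-evaluated from a time-dependent temperature T(t), as a crude
# stand-in for fast-varying local rates during laser annealing.
import numpy as np

KB = 8.617e-5                 # Boltzmann constant, eV/K
NU0 = 1.0e13                  # assumed attempt frequency, 1/s
BARRIERS = {"vacancy_hop": 0.45, "interstitial_hop": 0.8, "cluster_capture": 1.1}  # eV (assumed)

def temperature(t):
    """Assumed cooling of the near-surface region after the laser pulse."""
    return 300.0 + 1200.0 * np.exp(-t / 1e-7)

rng = np.random.default_rng(3)
t = 0.0
for _ in range(5):
    T = temperature(t)
    names = list(BARRIERS)
    rates = np.array([NU0 * np.exp(-BARRIERS[k] / (KB * T)) for k in names])
    total = rates.sum()
    chosen = rng.choice(len(names), p=rates / total)   # pick event proportionally to its rate
    t += rng.exponential(1.0 / total)                  # advance the clock
    print(f"t = {t:.3e} s   T = {T:7.1f} K   event = {names[chosen]}")
```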

  18. Monte Carlo simulations for the calculation of thermodynamic properties of plasmas at thermodynamic equilibrium. Applications to opacity and equation of state calculations; Apport d'un code de simulation Monte Carlo pour l'etude des proprietes thermodynamiques d'un plasma a l'equilibre et application au calcul de l'elargissement des profils de raies ioniques emises dans les plasmas denses, aux opacites spectrales et aux equations d'etat de systemes fluides

    Energy Technology Data Exchange (ETDEWEB)

    Gilles, D

    2005-07-01

    This report is devoted to illustrating the power of a Monte Carlo (MC) simulation code for studying the thermodynamic properties of a plasma composed of classical point particles at thermodynamic equilibrium. Such simulations can help us meet the challenge of taking into account 'exactly' all classical correlations between particles due to density effects, unlike analytical or semi-analytical approaches, which are often restricted to low-density plasmas. MC simulation results make it possible to cover, for laser or astrophysical applications, a wide range of thermodynamic conditions, from dense (and correlated) plasmas to dilute ones (where the potentials are of long-range type). Yukawa potentials, with a Thomas-Fermi temperature- and density-dependent screening length, are therefore used to describe the effective ion-ion potentials. In this report we present two MC codes ('PDE' and 'PUCE') and applications performed with these codes in different fields (spectroscopy, opacity, equation of state). Some examples are discussed and illustrated at the end of the report. (author)
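
    For readers unfamiliar with such simulations, the sketch below shows the core of a Metropolis Monte Carlo sweep for classical point ions interacting through a Yukawa pair potential with periodic boundary conditions; the reduced parameters and system size are assumptions for illustration, and the code is unrelated to the PDE and PUCE implementations.

```python
# Illustrative sketch only (reduced units, assumed screening length and
# particle number): a Metropolis Monte Carlo sweep for classical point ions
# interacting through a Yukawa (screened Coulomb) pair potential in a cubic
# box with periodic boundary conditions.
import numpy as np

rng = np.random.default_rng(4)
N, L, kappa, gamma, beta = 64, 10.0, 1.0, 1.0, 1.0   # assumed reduced parameters
pos = rng.random((N, 3)) * L

def pair_energy(r):
    return gamma * np.exp(-kappa * r) / r              # Yukawa potential

def energy_of(i, trial):
    d = pos - trial
    d -= L * np.round(d / L)                           # minimum-image convention
    r = np.sqrt((d ** 2).sum(axis=1))
    r = np.delete(r, i)                                # skip self-interaction
    return pair_energy(r).sum()

accepted = 0
for sweep in range(50):
    for i in range(N):
        trial = (pos[i] + 0.3 * (rng.random(3) - 0.5)) % L
        dE = energy_of(i, trial) - energy_of(i, pos[i])
        if dE <= 0.0 or rng.random() < np.exp(-beta * dE):
            pos[i], accepted = trial, accepted + 1
print(f"acceptance ratio = {accepted / (50 * N):.2f}")
```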

  19. A new Monte Carlo code for simulation of the effect of irregular surfaces on X-ray spectra

    Energy Technology Data Exchange (ETDEWEB)

    Brunetti, Antonio, E-mail: brunetti@uniss.it; Golosio, Bruno

    2014-04-01

    Generally, quantitative X-ray fluorescence (XRF) analysis estimates the content of chemical elements in a sample based on the areas of the fluorescence peaks in the energy spectrum. Besides the concentration of the elements, the peak areas also depend on the geometrical conditions. In fact, the estimate of the peak areas is simple if the sample surface is smooth and if the spectrum shows good statistics (large-area peaks). For this reason the sample is often prepared as a pellet. However, this approach is not always feasible, for instance when cultural heritage or valuable samples must be analyzed. In this case, the sample surface cannot be smoothed. In order to address this problem, several works have been reported in the literature, based on experimental measurements on a few sets of specific samples or on Monte Carlo simulations. The results obtained with the first approach are limited to the specific class of samples analyzed, while the second approach cannot be applied to arbitrarily irregular surfaces. The present work describes a more general analysis tool based on a new fast Monte Carlo algorithm, which is virtually able to simulate any kind of surface. To the best of our knowledge, it is the first Monte Carlo code with this option. A study of the influence of surface irregularities on the measured spectrum is performed and some results are reported. - Highlights: • We present a fast Monte Carlo code with the ability to simulate arbitrarily rough surfaces. • We show applications to multilayer measurements. • Real-time simulations are available.

  20. ITS version 5.0: the integrated TIGER series of coupled electron/photon Monte Carlo transport codes with CAD geometry.

    Energy Technology Data Exchange (ETDEWEB)

    Franke, Brian Claude; Kensek, Ronald Patrick; Laub, Thomas William

    2005-09-01

    ITS is a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. Our goal has been to simultaneously maximize operational simplicity and physical accuracy. Through a set of preprocessor directives, the user selects one of the many ITS codes. The ease with which the makefile system is applied combines with an input scheme based on order-independent descriptive keywords that makes maximum use of defaults and internal error checking to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Physical rigor is provided by employing accurate cross sections, sampling distributions, and physical models for describing the production and transport of the electron/photon cascade from 1.0 GeV down to 1.0 keV. The availability of source code permits the more sophisticated user to tailor the codes to specific applications and to extend the capabilities of the codes to more complex applications. Version 5.0, the latest version of ITS, contains (1) improvements to the ITS 3.0 continuous-energy codes, (2) multigroup codes with adjoint transport capabilities, (3) parallel implementations of all ITS codes, (4) a general purpose geometry engine for linking with CAD or other geometry formats, and (5) the Cholla facet geometry library. Moreover, the general user friendliness of the software has been enhanced through increased internal error checking and improved code portability.

  1. Benchmarking of the dose planning method (DPM) Monte Carlo code using electron beams from a racetrack microtron.

    Science.gov (United States)

    Chetty, Indrin J; Moran, Jean M; McShan, Daniel L; Fraass, Benedick A; Wilderman, Scott J; Bielajew, Alex F

    2002-06-01

    A comprehensive set of measurements and calculations has been conducted to investigate the accuracy of the Dose Planning Method (DPM) Monte Carlo code for dose calculations from 10 and 50 MeV scanned electron beams produced from a racetrack microtron. Central axis depth dose measurements and a series of profile scans at various depths were acquired in a water phantom using a Scanditronix type RK ion chamber. Source spatial distributions for the Monte Carlo calculations were reconstructed from in-air ion chamber measurements carried out across the two-dimensional beam profile at 100 cm downstream from the source. The in-air spatial distributions were found to have full width at half maximum of 4.7 and 1.3 cm, at 100 cm from the source, for the 10 and 50 MeV beams, respectively. Energy spectra for the 10 and 50 MeV beams were determined by simulating the components of the microtron treatment head using the code MCNP4B. DPM calculations are on average within +/- 2% agreement with measurement for all depth dose and profile comparisons conducted in this study. The accuracy of the DPM code illustrated in this work suggests that DPM may be used as a valuable tool for electron beam dose calculations.

  2. A study of the earth radiation budget using a 3D Monte-Carlo radiative transfer code

    Science.gov (United States)

    Okata, M.; Nakajima, T.; Sato, Y.; Inoue, T.; Donovan, D. P.

    2013-12-01

    The purpose of this study is to evaluate the earth's radiation budget when data are available from satellite-borne active sensors, i.e., cloud profiling radar (CPR) and lidar, and a multi-spectral imager (MSI), in the project of the Earth Explorer/EarthCARE mission. For this purpose, we first developed forward and backward 3D Monte Carlo radiative transfer codes that can treat a broadband solar flux calculation, including a thermal infrared emission calculation, using the k-distribution parameters of Sekiguchi and Nakajima (2008). In order to construct the 3D cloud field, we tried the following three methods: 1) stochastic clouds generated from a randomized optical thickness distribution in each layer together with regularly distributed tilted clouds, 2) numerical simulations by a non-hydrostatic model with a bin cloud microphysics model, and 3) the Minimum cloud Information Deviation Profiling Method (MIDPM), as explained later. As for method 2 (the numerical modeling method), we employed numerical simulation results of Californian summer stratus clouds simulated by a non-hydrostatic atmospheric model with a bin-type cloud microphysics model based on the JMA NHM model (Iguchi et al., 2008; Sato et al., 2009, 2012), with horizontal (vertical) grid spacings of 100 m (20 m) and 300 m (20 m) in a domain of 30 km (x), 30 km (y), 1.5 km (z) and with a horizontally periodic lateral boundary condition. Two different cell systems were simulated depending on the cloud condensation nuclei (CCN) concentration. In the case of 100 m horizontal resolution, the regionally averaged cloud optical thickness (COT) and the standard deviation of COT were 3.0 and 4.3 for the pristine case and 8.5 and 7.4 for the polluted case, respectively. In the MIDPM method, we first construct a library of pairs of observed vertical profiles from the active sensors and collocated imager products at the nadir footprint, i.e., spectral imager radiances, cloud optical thickness (COT), effective particle radius (RE) and cloud-top temperature (Tc). We then select a best

  3. Improved decoding for a concatenated coding system

    OpenAIRE

    Paaske, Erik

    1990-01-01

    The concatenated coding system recommended by CCSDS (Consultative Committee for Space Data Systems) uses an outer (255,223) Reed-Solomon (RS) code based on 8-b symbols, followed by the block interleaver and an inner rate 1/2 convolutional code with memory 6. Viterbi decoding is assumed. Two new decoding procedures based on repeated decoding trials and exchange of information between the two decoders and the deinterleaver are proposed. In the first one, where the improvement is 0.3-0.4 dB, onl...

  4. A Monte Carlo Code for Relativistic Radiation Transport Around Kerr Black Holes

    Science.gov (United States)

    Schnittman, Jeremy David; Krolik, Julian H.

    2013-01-01

    We present a new code for radiation transport around Kerr black holes, including arbitrary emission and absorption mechanisms, as well as electron scattering and polarization. The code is particularly useful for analyzing accretion flows made up of optically thick disks and optically thin coronae. We give a detailed description of the methods employed in the code and also present results from a number of numerical tests to assess its accuracy and convergence.

  5. Fixed-Node Diffusion Monte Carlo of Lithium Systems

    CERN Document Server

    Rasch, Kevin

    2015-01-01

    We study lithium systems over a range of numbers of atoms, e.g., the atomic anion, the dimer, a metallic cluster, and the body-centered cubic crystal, by the diffusion Monte Carlo method. The calculations include both core and valence electrons in order to avoid any possible impact of pseudopotentials. The focus of the study is the fixed-node errors, and for that purpose we test several orbital sets in order to provide the most accurate nodal hypersurfaces. We compare our results to other high-accuracy calculations wherever available and to experimental results so as to quantify the fixed-node errors. The results for these Li systems show that fixed-node quantum Monte Carlo achieves remarkably accurate total energies and recovers 97-99% of the correlation energy.

  6. Accuracy and convergence of coupled finite-volume/Monte Carlo codes for plasma edge simulations of nuclear fusion reactors

    Energy Technology Data Exchange (ETDEWEB)

    Ghoos, K., E-mail: kristel.ghoos@kuleuven.be [KU Leuven, Department of Mechanical Engineering, Celestijnenlaan 300A, 3001 Leuven (Belgium); Dekeyser, W. [KU Leuven, Department of Mechanical Engineering, Celestijnenlaan 300A, 3001 Leuven (Belgium); Samaey, G. [KU Leuven, Department of Computer Science, Celestijnenlaan 200A, 3001 Leuven (Belgium); Börner, P. [Institute of Energy and Climate Research (IEK-4), FZ Jülich GmbH, D-52425 Jülich (Germany); Baelmans, M. [KU Leuven, Department of Mechanical Engineering, Celestijnenlaan 300A, 3001 Leuven (Belgium)

    2016-10-01

    The plasma and neutral transport in the plasma edge of a nuclear fusion reactor is usually simulated using coupled finite volume (FV)/Monte Carlo (MC) codes. However, under conditions of future reactors like ITER and DEMO, convergence issues become apparent. This paper examines the convergence behaviour and the numerical error contributions with a simplified FV/MC model for three coupling techniques: Correlated Sampling, Random Noise and Robbins-Monro. Also, practical procedures to estimate the errors in complex codes are proposed. Moreover, first results with more complex models show that an order-of-magnitude speedup can be achieved without any loss in accuracy by making use of averaging in the Random Noise coupling technique.
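
    The benefit of averaging noisy Monte Carlo feedback inside a coupled iteration, which underlies the Robbins-Monro and averaged Random Noise techniques mentioned above, can be seen on a scalar toy fixed-point problem; the map, noise level and step-size schedule below are illustrative assumptions, not the plasma-edge model.

```python
# Conceptual sketch (scalar toy problem, not the coupled FV/MC plasma model):
# a coupled iteration is mimicked by a fixed-point map x = g(x) whose
# evaluation carries Monte Carlo noise. Plain iteration keeps fluctuating at
# the noise level, while Robbins-Monro style averaging with a decaying step
# size damps the statistical error as the iteration proceeds.
import numpy as np

rng = np.random.default_rng(5)
g = lambda x: 0.5 * x + 1.0          # exact fixed point x* = 2
noisy_g = lambda x: g(x) + 0.2 * rng.standard_normal()

x_plain = x_rm = 0.0
for n in range(1, 201):
    x_plain = noisy_g(x_plain)                       # plain coupling: noise never averages out
    a_n = 1.0 / n                                    # Robbins-Monro step size
    x_rm = (1.0 - a_n) * x_rm + a_n * noisy_g(x_rm)  # averaged update
print(f"plain iteration : x = {x_plain:.3f} (exact 2.000)")
print(f"Robbins-Monro   : x = {x_rm:.3f} (exact 2.000)")
```

    With the 1/n step size the statistical error of the iterate shrinks as the iteration proceeds, whereas the plain iteration keeps fluctuating at the level set by the Monte Carlo noise.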

  7. Generation of discrete scattering cross sections and demonstration of Monte Carlo charged particle transport in the Milagro IMC code package

    Energy Technology Data Exchange (ETDEWEB)

    Walsh, J. A. [Department of Nuclear Science and Engineering, Massachusetts Institute of Technology, NW12-312 Albany, St. Cambridge, MA 02139 (United States); Palmer, T. S. [Department of Nuclear Engineering and Radiation Health Physics, Oregon State University, 116 Radiation Center, Corvallis, OR 97331 (United States); Urbatsch, T. J. [XTD-5: Air Force Systems, Los Alamos National Laboratory, Los Alamos, NM 87545 (United States)

    2013-07-01

    A new method for generating discrete scattering cross sections to be used in charged particle transport calculations is investigated. The method of data generation is presented and compared to current methods for obtaining discrete cross sections. The new, more generalized approach allows greater flexibility in choosing a cross section model from which to derive discrete values. Cross section data generated with the new method is verified through a comparison with discrete data obtained with an existing method. Additionally, a charged particle transport capability is demonstrated in the time-dependent Implicit Monte Carlo radiative transfer code package, Milagro. The implementation of this capability is verified using test problems with analytic solutions as well as a comparison of electron dose-depth profiles calculated with Milagro and an already-established electron transport code. An initial investigation of a preliminary integration of the discrete cross section generation method with the new charged particle transport capability in Milagro is also presented. (authors)

  8. Development and validation of MCNPX-based Monte Carlo treatment plan verification system.

    Science.gov (United States)

    Jabbari, Iraj; Monadi, Shahram

    2015-01-01

    A Monte Carlo treatment plan verification (MCTPV) system was developed for clinical treatment plan verification (TPV), especially for conformal and intensity-modulated radiotherapy (IMRT) plans. In MCTPV, the MCNPX code was used for particle transport through the accelerator head and the patient body. MCTPV has an interface with the TiGRT planning system and reads the information needed for the Monte Carlo calculation, transferred in digital imaging and communications in medicine-radiation therapy (DICOM-RT) format. In MCTPV several methods were applied in order to reduce the simulation time. The relative dose distribution of a clinical prostate conformal plan calculated by MCTPV was compared with that of the TiGRT planning system. The results showed that the beam configuration and patient information were well implemented in this system. For quantitative evaluation of MCTPV, a two-dimensional (2D) diode array (MapCHECK2) and gamma index analysis were used. The gamma passing rate (3%/3 mm) of an IMRT plan was found to be 98.5% for total beams. Also, comparison of the measured and Monte Carlo calculated doses at several points inside an inhomogeneous phantom for 6- and 18-MV photon beams showed good agreement (within 1.5%). The accuracy and timing results of MCTPV showed that MCTPV could be used very efficiently for additional assessment of complicated plans such as IMRT plans.

  9. Development and validation of MCNPX-based Monte Carlo treatment plan verification system

    Directory of Open Access Journals (Sweden)

    Iraj Jabbari

    2015-01-01

    A Monte Carlo treatment plan verification (MCTPV) system was developed for clinical treatment plan verification (TPV), especially for conformal and intensity-modulated radiotherapy (IMRT) plans. In MCTPV, the MCNPX code was used for particle transport through the accelerator head and the patient body. MCTPV has an interface with the TiGRT planning system and reads the information needed for the Monte Carlo calculation, transferred in digital imaging and communications in medicine-radiation therapy (DICOM-RT) format. In MCTPV several methods were applied in order to reduce the simulation time. The relative dose distribution of a clinical prostate conformal plan calculated by MCTPV was compared with that of the TiGRT planning system. The results showed that the beam configuration and patient information were well implemented in this system. For quantitative evaluation of MCTPV, a two-dimensional (2D) diode array (MapCHECK2) and gamma index analysis were used. The gamma passing rate (3%/3 mm) of an IMRT plan was found to be 98.5% for total beams. Also, comparison of the measured and Monte Carlo calculated doses at several points inside an inhomogeneous phantom for 6- and 18-MV photon beams showed good agreement (within 1.5%). The accuracy and timing results of MCTPV showed that MCTPV could be used very efficiently for additional assessment of complicated plans such as IMRT plans.

  10. Arabic Natural Language Processing System Code Library

    Science.gov (United States)

    2014-06-01

    This technical note provides a brief description of a Java library for Arabic natural language processing (NLP) containing code for training and applying the Arabic NLP system described in the paper "A Cross-Task Flexible Transition Model for Arabic Tokenization, Affix...

  11. Microdosimetry of alpha particles for simple and 3D voxelised geometries using MCNPX and Geant4 Monte Carlo codes.

    Science.gov (United States)

    Elbast, M; Saudo, A; Franck, D; Petitot, F; Desbrée, A

    2012-07-01

    Microdosimetry using Monte Carlo simulation is a suitable technique to describe the stochastic nature of energy deposition by alpha particles at the cellular level. Because of its short range, the energy imparted by this particle to the targets is highly non-uniform. Thus, to achieve accurate dosimetric results, the modelling of the geometry should be as realistic as possible. The objectives of the present study were to validate the use of the MCNPX and Geant4 Monte Carlo codes for microdosimetric studies using simple and three-dimensional voxelised geometries and to study their limits of validity in the latter case. To that aim, the specific energy (z) deposited in the cell nucleus, the single-hit density of specific energy f(1)(z) and the mean specific energy were calculated. The results show good agreement with the literature for the simple geometry; the maximum percentage difference found is limited, and the calculation time is 10 times higher with Geant4 than with the MCNPX code under the same conditions.

  12. Voxel2MCNP: a framework for modeling, simulation and evaluation of radiation transport scenarios for Monte Carlo codes.

    Science.gov (United States)

    Pölz, Stefan; Laubersheimer, Sven; Eberhardt, Jakob S; Harrendorf, Marco A; Keck, Thomas; Benzler, Andreas; Breustedt, Bastian

    2013-08-21

    The basic idea of Voxel2MCNP is to provide a framework supporting users in modeling radiation transport scenarios using voxel phantoms and other geometric models, generating corresponding input for the Monte Carlo code MCNPX, and evaluating simulation output. Applications at Karlsruhe Institute of Technology are primarily whole and partial body counter calibration and calculation of dose conversion coefficients. A new generic data model describing data related to radiation transport, including phantom and detector geometries and their properties, sources, tallies and materials, has been developed. It is modular and generally independent of the targeted Monte Carlo code. The data model has been implemented as an XML-based file format to facilitate data exchange, and integrated with Voxel2MCNP to provide a common interface for modeling, visualization, and evaluation of data. Also, extensions to allow compatibility with several file formats, such as ENSDF for nuclear structure properties and radioactive decay data, SimpleGeo for solid geometry modeling, ImageJ for voxel lattices, and MCNPX's MCTAL for simulation results have been added. The framework is presented and discussed in this paper, and example workflows for body counter calibration and calculation of dose conversion coefficients are given to illustrate its application.

  13. MC3: Multi-core Markov-chain Monte Carlo code

    Science.gov (United States)

    Cubillos, Patricio; Harrington, Joseph; Lust, Nate; Foster, AJ; Stemm, Madison; Loredo, Tom; Stevenson, Kevin; Campo, Chris; Hardin, Matt; Hardy, Ryan

    2016-10-01

    MC3 (Multi-core Markov-chain Monte Carlo) is a Bayesian statistics tool that can be executed from the shell prompt or interactively through the Python interpreter with single- or multiple-CPU parallel computing. It offers Markov-chain Monte Carlo (MCMC) posterior-distribution sampling for several algorithms, Levenberg-Marquardt least-squares optimization, and uniform non-informative, Jeffreys non-informative, or Gaussian-informative priors. MC3 can share the same value among multiple parameters and fix the value of parameters to constant values, and offers Gelman-Rubin convergence testing and correlated-noise estimation with time-averaging or wavelet-based likelihood estimation methods.

  14. Verifying compiled file system code

    OpenAIRE

    Mühlberg, Jan Tobias; Lüttgen, Gerald

    2011-01-01

    This article presents a case study on retrospective verification of the Linux Virtual File System (VFS), which is aimed at checking violations of API usage rules and memory properties. Since VFS maintains dynamic data structures and is written in a mixture of C and inlined assembly, modern software model checkers cannot be applied. Our case study centres around our novel automated software verification tool, the SOCA Verifier, which symbolically executes and analyses compi...

  15. Monte Carlo Simulation of Siemens ONCOR Linear Accelerator with BEAMnrc and DOSXYZnrc Code.

    Science.gov (United States)

    Jabbari, Keyvan; Anvar, Hossein Saberi; Tavakoli, Mohammad Bagher; Amouheidari, Alireza

    2013-07-01

    The Monte Carlo method is the most accurate method for the simulation of radiation therapy equipment. Linear accelerators (linacs) are currently the most widely used machines in radiation therapy centers. In this work, Monte Carlo modeling of the Siemens ONCOR linear accelerator for 6 MV and 18 MV beams was performed. The results of the simulation were validated by measurements in water with an ionization chamber and with extended dose range (EDR2) film in solid water. The characteristics of the linac's X-rays are very sensitive to the properties of the primary electron beam. A square field of 10 cm × 10 cm produced by the jaws was compared with ionization chamber and film measurements. The head simulation was performed with BEAMnrc and the dose calculation with DOSXYZnrc; for the film comparisons, the 3ddose files produced by DOSXYZnrc were analyzed using a homemade MATLAB program. At 6 MV, the agreement between the dose calculated by Monte Carlo modeling and direct measurement was within 1%, even in the build-up region. At 18 MV, the agreement was within 1%, except in the build-up region. In the build-up region, the difference was 1% at 6 MV and 2% at 18 MV. The mean difference between measurements and the Monte Carlo simulation is very small at both ONCOR X-ray energies. The results are highly accurate and can be used for many applications, such as patient dose calculation in treatment planning and in studies that model this linac with small field sizes, such as the intensity-modulated radiation therapy technique.

  16. Monte Carlo Simulation for the MAGIC-II System

    CERN Document Server

    Carmona, E; Moralejo, A; Vitale, V; Sobczynska, D; Haffke, M; Bigongiari, C; Otte, N; Cabras, G; De Maria, M; De Sabata, F

    2007-01-01

    Within the year 2007, MAGIC will be upgraded to a two telescope system at La Palma. Its main goal is to improve the sensitivity in the stereoscopic/coincident operational mode. At the same time it will lower the analysis threshold of the currently running single MAGIC telescope. Results from the Monte Carlo simulations of this system will be discussed. A comparison of the two telescope system with the performance of one single telescope will be shown in terms of sensitivity, angular resolution and energy resolution.

  17. Evaluation of PENFAST - A fast Monte Carlo code for dose calculations in photon and electron radiotherapy treatment planning

    Energy Technology Data Exchange (ETDEWEB)

    Habib, B.; Poumarede, B.; Tola, F.; Barthe, J. [CEA, LIST, Dept Technol Capteur et Signal, F-91191 Gif Sur Yvette, (France)

    2010-07-01

    The aim of the present study is to demonstrate the potential of accelerated dose calculations, using the fast Monte Carlo (MC) code referred to as PENFAST, rather than the conventional MC code PENELOPE, without losing accuracy in the computed dose. For this purpose, experimental measurements of dose distributions in homogeneous and inhomogeneous phantoms were compared with simulated results using both PENELOPE and PENFAST. The simulations and experiments were performed using a Saturne 43 linac operated at 12 MV (photons), and at 18 MeV (electrons). Pre-calculated phase space files (PSFs) were used as input data to both the PENELOPE and PENFAST dose simulations. Since depth-dose and dose profile comparisons between simulations and measurements in water were found to be in good agreement (within {+-} 1% to 1 mm), the PSF calculation is considered to have been validated. In addition, measured dose distributions were compared to simulated results in a set of clinically relevant, inhomogeneous phantoms, consisting of lung and bone heterogeneities in a water tank. In general, the PENFAST results agree to within a 1% to 1 mm difference with those produced by PENELOPE, and to within a 2% to 2 mm difference with measured values. Our study thus provides a pre-clinical validation of the PENFAST code. It also demonstrates that PENFAST provides accurate results for both photon and electron beams, equivalent to those obtained with PENELOPE. CPU time comparisons between both MC codes show that PENFAST is generally about 9-21 times faster than PENELOPE. (authors)

  18. The Monte Carlo code CSSE for the simulation of realistic thermal neutron sensor devices for Humanitarian Demining

    Energy Technology Data Exchange (ETDEWEB)

    Palomba, M. E-mail: maurizio.palomba@ba.infn.it; D' Erasmo, G.; Pantaleo, A

    2003-02-11

    The CSSE code, a GEANT3-based Monte Carlo simulation program, has been developed in the framework of the EXPLODET project (Nucl. Instr. and Meth. A 422 (1999) 918) with the aim of simulating experimental set-ups employed in Thermal Neutron Analysis (TNA) for landmine detection. Such a simulation code appears to be useful for studying the background in the {gamma}-ray spectra obtained with this technique, especially in the region where one expects to find the explosive signature (the {gamma}-ray peak at 10.83 MeV coming from neutron capture by nitrogen). The main features of the CSSE code are introduced and its original innovations emphasized. Among the latter, an algorithm simulating the time correlation between primary particles, according to their time distributions, is presented. Such a correlation is not usually achievable within standard GEANT-based codes and makes it possible to reproduce some important phenomena, such as the pulse pile-up inside the NaI(Tl) {gamma}-ray detector employed, producing a more realistic detector response simulation. CSSE has been successfully tested by reproducing a real nuclear sensor prototype assembled at the Physics Department of Bari University.

  19. The Monte Carlo code CSSE for the simulation of realistic thermal neutron sensor devices for Humanitarian Demining

    Science.gov (United States)

    Palomba, M.; D'Erasmo, G.; Pantaleo, A.

    2003-02-01

    The CSSE code, a GEANT3-based Monte Carlo simulation program, has been developed in the framework of the EXPLODET project (Nucl. Instr. and Meth. A 422 (1999) 918) with the aim of simulating experimental set-ups employed in Thermal Neutron Analysis (TNA) for landmine detection. Such a simulation code appears to be useful for studying the background in the γ-ray spectra obtained with this technique, especially in the region where one expects to find the explosive signature (the γ-ray peak at 10.83 MeV coming from neutron capture by nitrogen). The main features of the CSSE code are introduced and its original innovations emphasized. Among the latter, an algorithm simulating the time correlation between primary particles, according to their time distributions, is presented. Such a correlation is not usually achievable within standard GEANT-based codes and makes it possible to reproduce some important phenomena, such as the pulse pile-up inside the NaI(Tl) γ-ray detector employed, producing a more realistic detector response simulation. CSSE has been successfully tested by reproducing a real nuclear sensor prototype assembled at the Physics Department of Bari University.

  20. Subtle Monte Carlo Updates in Dense Molecular Systems

    DEFF Research Database (Denmark)

    Bottaro, Sandro; Boomsma, Wouter; Johansson, Kristoffer E.;

    2012-01-01

    Although Markov chain Monte Carlo (MC) simulation is a potentially powerful approach for exploring conformational space, it has been unable to compete with molecular dynamics (MD) in the analysis of high density structural states, such as the native state of globular proteins. Here, we introduce a kinetic algorithm, CRISP, that greatly enhances the sampling efficiency in all-atom MC simulations of dense systems. The algorithm is based on an exact analytical solution to the classic chain-closure problem, making it possible to express the interdependencies among degrees of freedom in the molecule as correlations in a multivariate Gaussian distribution. We demonstrate that our method reproduces structural variation in proteins with greater efficiency than current state-of-the-art Monte Carlo methods and has real-time simulation performance on par with molecular dynamics simulations. The presented results...

  1. Proton Dose Assessment to the Human Eye Using Monte Carlo N-Particle Transport Code (MCNPX)

    Science.gov (United States)

    2006-08-01

    The objective of this project was to develop a simple MCNPX model of the human eye to approximate the dose delivered during proton therapy. The calculations considered proton interactions and secondary interactions... The MCNPX code has limited ability to compute the volumes of defined cells; the dosimetric volumes in the outer wall of the eye are

  2. Comparative Dosimetric Estimates of a 25 keV Electron Micro-beam with three Monte Carlo Codes

    CERN Document Server

    Mainardi, E; Donahue, R J

    2002-01-01

    The calculations presented compare the performances of the three Monte Carlo codes PENELOPE-1999, MCNP-4C and PITS for the evaluation of dose profiles from a 25 keV electron micro-beam traversing individual cells. The overall model of a cell is an equivalent water cylinder for the three codes, but with a different internal scoring geometry: hollow cylinders for PENELOPE and MCNP, whereas spheres are used for the PITS code. A cylindrical cell geometry with scoring volumes in the shape of hollow cylinders was initially selected for PENELOPE and MCNP because of its superior representation of the actual shape and dimensions of a cell and for its improved computer-time efficiency compared with spherical internal volumes. Some of the transfer points and energy transfers that constitute a radiation track may actually fall in the space between spheres, that is, outside the spherical scoring volumes. This internal geometry, along with the PENELOPE algorithm, drastically reduced the computer time when using ...
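
    Because the comparison above is framed in microdosimetric terms, a short sketch of how per-hit energy deposits are converted into specific energies and a single-hit distribution may help; the nucleus size and the synthetic deposit spectrum are assumptions, not output of PENELOPE, MCNP or PITS.

```python
# Minimal sketch (synthetic single-hit energy deposits, assumed nucleus size):
# turns per-hit energy depositions in a spherical scoring volume into specific
# energies z = epsilon / m, histograms the single-hit density f1(z) and
# evaluates the frequency-mean specific energy.
import numpy as np

MEV_TO_J = 1.602e-13
radius_um = 4.0                                                  # assumed nucleus radius
mass_kg = 1000.0 * 4.0 / 3.0 * np.pi * (radius_um * 1e-6) ** 3   # water density assumed

rng = np.random.default_rng(6)
edep_mev = rng.gamma(shape=3.0, scale=0.3, size=50_000)          # placeholder per-hit deposits

z = edep_mev * MEV_TO_J / mass_kg                      # specific energy per hit, Gy
f1, edges = np.histogram(z, bins=100, density=True)    # single-hit density f1(z)
print(f"mean specific energy z_F = {z.mean():.3f} Gy for {z.size} single hits")
print(f"f1(z) histogrammed over {edges[0]:.2f}-{edges[-1]:.2f} Gy in {f1.size} bins")
```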

  3. Selected organ dose conversion coefficients for external photons calculated using ICRP adult voxel phantoms and Monte Carlo code FLUKA.

    Science.gov (United States)

    Patni, H K; Nadar, M Y; Akar, D K; Bhati, S; Sarkar, P K

    2011-11-01

    The adult reference male and female computational voxel phantoms recommended by the ICRP are adapted into the Monte Carlo transport code FLUKA. The FLUKA code is then utilised for the computation of dose conversion coefficients (DCCs), expressed in absorbed dose per air kerma free-in-air, for the colon, lungs, stomach wall, breast, gonads, urinary bladder, oesophagus, liver and thyroid due to a broad parallel beam of mono-energetic photons impinging in the anterior-posterior and posterior-anterior directions in the energy range of 15 keV-10 MeV. The computed DCCs of the colon, lungs, stomach wall and breast are found to be in good agreement with the results published in ICRP Publication 110. The present work thus validates the use of the FLUKA code in the computation of organ DCCs for photons using the ICRP adult voxel phantoms. Further, the DCCs for the gonads, urinary bladder, oesophagus, liver and thyroid are evaluated and compared with the results published in ICRP 74 in the above-mentioned energy range and geometries. Significant differences in DCCs are observed for the breast, testis and thyroid above 1 MeV, and for most of the organs at energies below 60 keV, in comparison with the results published in ICRP 74. The DCCs of the female voxel phantom were found to be higher than those of the male phantom for almost all organs in both geometries.

  4. Graphics-System Color-Code Interface

    Science.gov (United States)

    Tulppo, J. S.

    1982-01-01

    Circuit originally developed for a flight simulator interfaces a computer graphics system with color monitor. Subsystem is intended for particular display computer (AGT-130, ADAGE Graphics Terminal) and specific color monitor (beam penetration tube--Penetron). Store-and-transmit channel is one of five in graphics/color-monitor interface. Adding 5-bit color code to existing graphics programs requires minimal programming effort.

  5. Development of parallel monte carlo electron and photon transport (PMCEPT) code III: Applications to medical radiation physics

    Science.gov (United States)

    Kum, Oyeon; Han, Youngyih; Jeong, Hae Sun

    2012-05-01

    Minimizing the differences between dose distributions calculated at the treatment planning stage and those delivered to the patient is an essential requirement for successful radiotherapy. Accurate calculation of dose distributions in the treatment planning process is important and can be done only by using a Monte Carlo calculation of particle transport. In this paper, we perform a further validation of our previously developed parallel Monte Carlo electron and photon transport (PMCEPT) code [Kum and Lee, J. Korean Phys. Soc. 47, 716 (2005) and Kim and Kum, J. Korean Phys. Soc. 49, 1640 (2006)] for applications to clinical radiation problems. A linear accelerator, Siemens' Primus 6 MV, was modeled and commissioned. The thorough validation includes both small fields, closely related to intensity-modulated radiation treatment (IMRT), and large fields. Two-dimensional comparisons with film measurements were also performed. The PMCEPT results, in general, agreed well with the measured data, within a maximum error of about 2%. However, considering the experimental errors, the PMCEPT results can provide the gold standard of dose distributions for radiotherapy. The computation was also much faster than the corresponding measurements, although computing time is still a bottleneck for direct application to the daily routine treatment planning procedure.

  6. Comparison of dose estimates using the buildup-factor method and a Baryon transport code (BRYNTRN) with Monte Carlo results

    Science.gov (United States)

    Shinn, Judy L.; Wilson, John W.; Nealy, John E.; Cucinotta, Francis A.

    1990-01-01

    Continuing efforts toward validating the buildup factor method and the BRYNTRN code, which use the deterministic approach in solving radiation transport problems and are the candidate engineering tools in space radiation shielding analyses, are presented. A simplified theory of proton buildup factors assuming no neutron coupling is derived to verify a previously chosen form for parameterizing the dose conversion factor that includes the secondary particle buildup effect. Estimates of dose in tissue made by the two deterministic approaches and the Monte Carlo method are intercompared for cases with various shield thicknesses and various types of proton spectra. The results are found to be in reasonable agreement, but with some overestimation by the buildup factor method when the effect of neutron production in the shield is significant. A future improvement to include neutron coupling in the buildup factor theory is suggested to alleviate this shortcoming. Impressive agreement for individual dose components, such as those from the secondaries and heavy particle recoils, is obtained between BRYNTRN and Monte Carlo results.
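
    A hedged sketch of the buildup-factor idea referred to above: the primary fluence is attenuated exponentially through the shield, and a buildup factor folded into the dose conversion accounts for the extra dose from secondaries; the attenuation coefficient, fluence, conversion factor and linear buildup parameterization are placeholders, not BRYNTRN or Monte Carlo data.

```python
# Hedged sketch of the buildup-factor method (all numbers are assumed
# placeholders): the primary beam is attenuated exponentially through the
# shield and a buildup factor B(x) accounts for secondary-particle dose.
import numpy as np

mu = 0.12                    # assumed removal coefficient, 1/(g/cm^2)
flux0 = 1.0e8                # assumed incident proton fluence, 1/cm^2
dose_conv = 4.0e-10          # assumed fluence-to-dose conversion, Gy cm^2

def buildup(x, a=0.05):
    """Assumed linear parameterization of the secondary-particle buildup."""
    return 1.0 + a * x

for x in (0.0, 5.0, 10.0, 20.0):        # shield areal density, g/cm^2
    primary = flux0 * np.exp(-mu * x)
    dose = dose_conv * primary * buildup(x)
    print(f"x = {x:5.1f} g/cm^2   dose = {dose:.3e} Gy")
```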

  7. Evaluation of a 50-MV photon therapy beam from a racetrack microtron using MCNP4B Monte Carlo code

    Energy Technology Data Exchange (ETDEWEB)

    Gudowska, I.; Svensson, R. [Karolinska Inst. (Sweden). Dept. of Medical Radiation Physics]|[Huddinge Univ. Hospital, Stockholm (Sweden). Dept. of Medical Physics; Sorcini, B. [Karolinska Inst. (Sweden). Dept. of Medical Radiation Physics]|[Stockholm Univ. (Sweden)

    2001-07-01

    A high-energy photon therapy beam from the 50 MV racetrack microtron has been evaluated using the Monte Carlo code MCNP4B. The spatial and energy distributions of photons and the radial and depth dose distributions in the phantom are calculated for the stationary and scanned photon beams from different targets. The calculated dose distributions are compared with experimental data obtained using a silicon diode detector. Measured and calculated depth-dose distributions are in fairly good agreement, within 2-3% for positions in the range 2-30 cm in the phantom, whereas larger discrepancies of up to 10% are observed in the dose build-up region. For the stationary beams, the differences between the calculated and measured radial dose distributions are about 2-10%. (orig.)

  8. Influence of chromatin condensation on the number of direct DSB damages induced by ions studied using a Monte Carlo code.

    Science.gov (United States)

    Dos Santos, M; Clairand, I; Gruel, G; Barquinero, J F; Incerti, S; Villagrasa, C

    2014-10-01

    The purpose of this work is to evaluate the influence of the chromatin condensation on the number of direct double-strand break (DSB) damages induced by ions. Two geometries of chromosome territories containing either condensed or decondensed chromatin were implemented as biological targets in the Geant4 Monte Carlo simulation code and proton and alpha irradiation was simulated using the Geant4-DNA processes. A DBSCAN algorithm was used in order to detect energy deposition clusters that could give rise to single-strand breaks or DSBs on the DNA molecule. The results of this study show an increase in the number and complexity of DNA DSBs in condensed chromatin when compared with decondensed chromatin.

  9. Initial validation of 4D-model for a clinical PET scanner using the Monte Carlo code gate

    Energy Technology Data Exchange (ETDEWEB)

    Vieira, Igor F.; Lima, Fernando R.A.; Gomes, Marcelo S., E-mail: falima@cnen.gov.b [Centro Regional de Ciencias Nucleares do Nordeste (CRCN-NE/CNEN-PE), Recife, PE (Brazil); Vieira, Jose W.; Pacheco, Ludimila M. [Instituto Federal de Educacao, Ciencia e Tecnologia (IFPE), Recife, PE (Brazil); Chaves, Rosa M. [Instituto de Radium e Supervoltagem Ivo Roesler, Recife, PE (Brazil)

    2011-07-01

    Several dedicated computing tools based on Monte Carlo techniques (SimSET, SORTEO, SIMIND, GATE) are currently available for building exposure computational models (ECM) of emission tomography (PET and SPECT). This work is divided into two steps: (1) using the dedicated code GATE (Geant4 Application for Tomographic Emission) to build a 4D model (where the fourth dimension is time) of a clinical PET scanner from General Electric, the GE ADVANCE, simulating the geometric and electronic structures of this scanner as well as some 4D phenomena, for example the rotating gantry; (2) evaluating the performance of the model built here in reproducing the noise equivalent count rate (NEC) test based on the NEMA Standards Publication NU 2-2007 protocols for this tomograph. The results of steps (1) and (2) will be compared with experimental and theoretical values from the literature, showing the current state of the art of the validation. (author)
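    For reference, the noise equivalent count rate targeted by the NEMA NU 2-2007 test has the standard general form below (a textbook relation, not a value or derivation taken from this record):

```latex
% Standard NEMA NU-2 definition of the noise equivalent count rate (general form):
\[
  \mathrm{NECR} \;=\; \frac{T^{2}}{T + S + k\,R},
\]
% with T, S and R the true, scattered and random coincidence rates, and k = 1 or 2
% depending on the randoms-estimation method (k = 2 for delayed-window subtraction).
```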

  10. Energy distribution of cosmic rays in the Earth’s atmosphere and avionic area using Monte Carlo codes

    Indian Academy of Sciences (India)

    MOHAMED M OULD; DIB A S A; BELBACHIR A H

    2016-07-01

    Cosmic rays cause significant damage to the electronic equipment of aircraft. In this paper, we have investigated the accumulation of the energy deposited by cosmic rays in the Earth's atmosphere, especially in the aircraft area. In fact, if a high-energy neutron or proton interacts with a nanodevice having only a few atoms, this particle can change the nature of the device and destroy it. Our Monte Carlo simulation based on the Geant4 code shows that the energy deposited by neutrons with energies between 200 MeV and 5 GeV is strongly concentrated in the region between 10 and 15 km above sea level, which is exactly the avionic area. However, the Bragg peak of protons is localized slightly above the avionic area.

  11. Characterisation of the TRIUMF neutron facility using a Monte Carlo simulation code.

    Science.gov (United States)

    Monk, S D; Abram, T; Joyce, M J

    2015-04-01

    Here, the characterisation of the high-energy neutron field at TRIUMF (The Tri Universities Meson Facility, Vancouver, British Columbia) with Monte Carlo simulation software is described. The package used is MCNPX version 2.6.0, with the neutron fluence rate determined at three locations within the TRIUMF Thermal Neutron Facility (TNF), including the exit of the neutron channel where users of the facility can test devices that may be susceptible to the effects of this form of radiation. The facility is often used to roughly emulate the field likely to be encountered at high altitudes due to radiation of galactic origin, and thus the simulated information is compared with the energy spectrum calculated for neutron radiation of cosmic origin at typical aircraft altitudes. The calculated values were also compared with neutron flux estimates obtained by the facility staff from the activation of various foils, showing agreement within an order of magnitude.

  12. A Monte Carlo transport code study of the space radiation environment using FLUKA and ROOT

    CERN Document Server

    Wilson, T; Carminati, F; Brun, R; Ferrari, A; Sala, P; Empl, A; MacGibbon, J

    2001-01-01

    We report on the progress of a current study aimed at developing a state-of-the-art Monte-Carlo computer simulation of the space radiation environment using advanced computer software techniques recently available at CERN, the European Laboratory for Particle Physics in Geneva, Switzerland. By taking the next-generation computer software appearing at CERN and adapting it to known problems in the implementation of space exploration strategies, this research is identifying changes necessary to bring these two advanced technologies together. The radiation transport tool being developed is tailored to the problem of taking measured space radiation fluxes impinging on the geometry of any particular spacecraft or planetary habitat and simulating the evolution of that flux through an accurate model of the spacecraft material. The simulation uses the latest known results in low-energy and high-energy physics. The output is a prediction of the detailed nature of the radiation environment experienced in space as well a...

  13. User Instructions for the Systems Assessment Capability, Rev. 1, Computer Codes Volume 3: Utility Codes

    Energy Technology Data Exchange (ETDEWEB)

    Eslinger, Paul W.; Aaberg, Rosanne L.; Lopresti, Charles A.; Miley, Terri B.; Nichols, William E.; Strenge, Dennis L.

    2004-09-14

    This document contains detailed user instructions for a suite of utility codes developed for Rev. 1 of the Systems Assessment Capability. The suite of computer codes for Rev. 1 of Systems Assessment Capability performs many functions.

  14. A Students Attendance System Using QR Code

    Directory of Open Access Journals (Sweden)

    Fadi Masalha

    2014-01-01

    Smartphones are becoming more preferred companions to users than desktops or notebooks. Knowing that smartphones are most popular with users around the age of 26, using smartphones to speed up the process of taking attendance by university instructors would save lecturing time and hence enhance the educational process. This paper proposes a system based on a QR code that is displayed for students during or at the beginning of each lecture. The students need to scan the code in order to confirm their attendance. The paper explains the high-level implementation details of the proposed system. It also discusses how the system verifies student identity to eliminate false registrations.
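    A minimal sketch of one way such a system can reject stale or forged scans, assuming a server-side secret and a short validity window (both invented here), is shown below; it is illustrative only and not the implementation described in the paper.

```python
# Illustrative sketch, not the paper's implementation: a per-lecture QR payload that
# carries a short-lived HMAC-signed token, so stale or forged scans can be rejected.
import hashlib
import hmac
import time

SECRET = b"lecture-server-secret"   # assumption: kept only on the attendance server
VALIDITY_SECONDS = 300              # assumption: the QR code rotates every 5 minutes

def make_qr_payload(course_id, now=None):
    ts = int(now if now is not None else time.time())
    msg = f"{course_id}|{ts}".encode()
    sig = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()[:16]
    return f"{course_id}|{ts}|{sig}"   # this string is what gets rendered as a QR code

def verify_scan(payload, now=None):
    try:
        course_id, ts, sig = payload.split("|")
    except ValueError:
        return False
    msg = f"{course_id}|{ts}".encode()
    expected = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()[:16]
    fresh = (int(now if now is not None else time.time()) - int(ts)) <= VALIDITY_SECONDS
    return hmac.compare_digest(sig, expected) and fresh

payload = make_qr_payload("CS101")
print(verify_scan(payload))   # True while the code is still fresh
```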

  15. Interacting multiagent systems kinetic equations and Monte Carlo methods

    CERN Document Server

    Pareschi, Lorenzo

    2014-01-01

    The description of emerging collective phenomena and self-organization in systems composed of large numbers of individuals has gained increasing interest from various research communities in biology, ecology, robotics and control theory, as well as sociology and economics. Applied mathematics is concerned with the construction, analysis and interpretation of mathematical models that can shed light on significant problems of the natural sciences as well as our daily lives. To this set of problems belongs the description of the collective behaviour of complex systems composed of a sufficiently large number of individuals. Examples of such systems are interacting agents in a financial market, potential voters during political elections, or groups of animals with a tendency to flock or herd. Among other possible approaches, this book provides a step-by-step introduction to mathematical modelling based on a mesoscopic description and the construction of efficient simulation algorithms by Monte Carlo methods. The ar...

  16. EAI-oriented information classification code system in manufacturing enterprises

    Institute of Scientific and Technical Information of China (English)

    Junbiao WANG; Hu DENG; Jianjun JIANG; Binghong YANG; Bailing WANG

    2008-01-01

    Although the traditional information classification coding system in manufacturing enterprises (MEs) emphasizes the construction of code standards, it lacks the management of code creation, code data transmission and so on. According to the demands of enterprise application integration (EAI) in manufacturing enterprises, an EAI-oriented information classification code system (EAIO-ICCS) is proposed. EAIO-ICCS expands the connotation of the information classification code system and ensures the consistency of codes in manufacturing enterprises through unified, lifecycle-wide management of the codes.

  17. Time dependent simulations of multiwavelength variability of the blazar Mrk 421 with a Monte Carlo multi-zone code

    CERN Document Server

    Chen, Xuhui; Liang, Edison; Boettcher, Markus

    2011-01-01

    (abridged) We present a new time-dependent multi-zone radiative transfer code and its application to study the SSC emission of Mrk 421. The code couples Fokker-Planck and Monte Carlo methods, in a 2D geometry. For the first time all the light travel time effects (LCTE) are fully considered, along with a proper treatment of Compton cooling, which depends on them. We study a set of simple scenarios where the variability is produced by injection of relativistic electrons as a `shock front' crosses the emission region. We consider emission from two components, with the second one either being pre-existing and co-spatial and participating in the evolution of the active region, or spatially separated and independent, only diluting the observed variability. Temporal and spectral results of the simulation are compared to the multiwavelength observations of Mrk 421 in March 2001. We find parameters that can adequately fit the observed SEDs and multiwavelength light curves and correlations. There remain however a few o...

  18. The FLUKA Monte Carlo code coupled with the local effect model for biological calculations in carbon ion therapy

    CERN Document Server

    Mairani, A; Kraemer, M; Sommerer, F; Parodi, K; Scholz, M; Cerutti, F; Ferrari, A; Fasso, A

    2010-01-01

    Clinical Monte Carlo (MC) calculations for carbon ion therapy have to provide absorbed and RBE-weighted dose. The latter is defined as the product of the dose and the relative biological effectiveness (RBE). At the GSI Helmholtzzentrum fur Schwerionenforschung as well as at the Heidelberg Ion Therapy Center (HIT), the RBE values are calculated according to the local effect model (LEM). In this paper, we describe the approach followed for coupling the FLUKA MC code with the LEM and its application to dose and RBE-weighted dose calculations for a superimposition of two opposed C-12 ion fields as applied in therapeutic irradiations. The obtained results are compared with the available experimental data of CHO (Chinese hamster ovary) cell survival and the outcomes of the GSI analytical treatment planning code TRiP98. Some discrepancies have been observed between the analytical and MC calculations of absorbed physical dose profiles, which can be explained by the differences between the laterally integrated depth-d...

  19. CAD-based Monte Carlo Program for Integrated Simulation of Nuclear System SuperMC

    Science.gov (United States)

    Wu, Yican; Song, Jing; Zheng, Huaqing; Sun, Guangyao; Hao, Lijuan; Long, Pengcheng; Hu, Liqin

    2014-06-01

    Monte Carlo (MC) methods have distinct advantages for simulating complicated nuclear systems and are envisioned as a routine method for nuclear design and analysis in the future. High-fidelity simulation with MC methods coupled with multi-physics simulation has a significant impact on the safety, economy and sustainability of nuclear systems. However, great challenges for current MC methods and codes prevent their application in real engineering projects. SuperMC is a CAD-based Monte Carlo program for integrated simulation of nuclear systems developed by the FDS Team, China, making use of hybrid MC-deterministic methods and advanced computer technologies. The design aims, architecture and main methodology of SuperMC are presented in this paper. SuperMC2.1, the latest version for neutron, photon and coupled neutron-photon transport calculation, has been developed and validated using a series of benchmark cases such as the fusion reactor ITER model and the fast reactor BN-600 model. SuperMC is still evolving toward a general and routine tool for nuclear system analysis.

  20. A mean field theory of coded CDMA systems

    Energy Technology Data Exchange (ETDEWEB)

    Yano, Toru [Graduate School of Science and Technology, Keio University, Hiyoshi, Kohoku-ku, Yokohama-shi, Kanagawa 223-8522 (Japan); Tanaka, Toshiyuki [Graduate School of Informatics, Kyoto University, Yoshida Hon-machi, Sakyo-ku, Kyoto-shi, Kyoto 606-8501 (Japan); Saad, David [Neural Computing Research Group, Aston University, Birmingham B4 7ET (United Kingdom)], E-mail: yano@thx.appi.keio.ac.jp

    2008-08-15

    We present a mean field theory of code-division multiple-access (CDMA) systems with error-control coding. On the basis of the relation between the free energy and mutual information, we obtain an analytical expression of the maximum spectral efficiency of the coded CDMA system, from which a mean-field description of the coded CDMA system is provided in terms of a bank of scalar Gaussian channels whose variances in general vary at different code symbol positions. Regular low-density parity-check (LDPC)-coded CDMA systems are also discussed as an example of the coded CDMA systems.

  1. Calculation of electron and isotopes dose point kernels with fluka Monte Carlo code for dosimetry in nuclear medicine therapy

    Energy Technology Data Exchange (ETDEWEB)

    Botta, F; Di Dia, A; Pedroli, G; Mairani, A; Battistoni, G; Fasso, A; Ferrari, A; Ferrari, M; Paganelli, G

    2011-06-01

    The calculation of patient-specific dose distributions can be achieved by Monte Carlo simulations or by analytical methods. In this study, the fluka Monte Carlo code has been considered for use in nuclear medicine dosimetry. Up to now, fluka has mainly been dedicated to other fields, namely high-energy physics, radiation protection, and hadrontherapy. When first employing a Monte Carlo code for nuclear medicine dosimetry, its results concerning electron transport at energies typical of nuclear medicine applications need to be verified. This is commonly achieved by calculating a representative parameter and comparing it with reference data; the dose point kernel (DPK), quantifying the energy deposition all around a point isotropic source, is often the one chosen. Methods: fluka DPKs have been calculated in both water and compact bone for monoenergetic electrons (10 keV-3 MeV) and for beta-emitting isotopes commonly used for therapy (89Sr, 90Y, 131I, 153Sm, 177Lu, 186Re, and 188Re). Point isotropic sources have been simulated at the center of a water (bone) sphere, and the deposited energy has been tallied in concentric shells. fluka outcomes have been compared to penelope v.2008 results, calculated in this study as well. Moreover, in the case of monoenergetic electrons in water, a comparison with data from the literature (etran, geant4, mcnpx) has been done. Maximum percentage differences within 0.8·R_CSDA and 0.9·R_CSDA for monoenergetic electrons (R_CSDA being the continuous-slowing-down-approximation range) and within 0.8·X_90 and 0.9·X_90 for isotopes (X_90 being the radius of the sphere in which 90% of the emitted energy is absorbed) have been computed, together with the average percentage difference within 0.9·R_CSDA and 0.9·X_90 for electrons and isotopes, respectively. Results: Concerning monoenergetic electrons, within 0.8·R_CSDA (where 90%-97% of the particle energy is deposited), fluka and penelope agree mostly within 7%, except for 10 and 20 keV electrons (12% in water, 8

  2. Calculation of electron and isotopes dose point kernels with fluka Monte Carlo code for dosimetry in nuclear medicine therapy

    Energy Technology Data Exchange (ETDEWEB)

    Botta, F.; Mairani, A.; Battistoni, G.; Cremonesi, M.; Di Dia, A.; Fasso, A.; Ferrari, A.; Ferrari, M.; Paganelli, G.; Pedroli, G.; Valente, M. [Medical Physics Department, European Institute of Oncology, Via Ripamonti 435, 20141 Milan (Italy); Istituto Nazionale di Fisica Nucleare (I.N.F.N.), Via Celoria 16, 20133 Milan (Italy); Medical Physics Department, European Institute of Oncology, Via Ripamonti 435, 20141 Milan (Italy); Jefferson Lab, 12000 Jefferson Avenue, Newport News, Virginia 23606 (United States); CERN, 1211 Geneva 23 (Switzerland); Medical Physics Department, European Institute of Oncology, Milan (Italy); Nuclear Medicine Department, European Institute of Oncology, Via Ripamonti 435, 2014 Milan (Italy); Medical Physics Department, European Institute of Oncology, Via Ripamonti 435, 20141 Milan (Italy); FaMAF, Universidad Nacional de Cordoba and CONICET, Cordoba, Argentina C.P. 5000 (Argentina)

    2011-07-15

    Purpose: The calculation of patient-specific dose distributions can be achieved by Monte Carlo simulations or by analytical methods. In this study, the fluka Monte Carlo code has been considered for use in nuclear medicine dosimetry. Up to now, fluka has mainly been dedicated to other fields, namely high-energy physics, radiation protection, and hadrontherapy. When first employing a Monte Carlo code for nuclear medicine dosimetry, its results concerning electron transport at energies typical of nuclear medicine applications need to be verified. This is commonly achieved by calculating a representative parameter and comparing it with reference data. The dose point kernel (DPK), quantifying the energy deposition all around a point isotropic source, is often the one chosen. Methods: fluka DPKs have been calculated in both water and compact bone for monoenergetic electrons (10 keV-3 MeV) and for beta-emitting isotopes commonly used for therapy (89Sr, 90Y, 131I, 153Sm, 177Lu, 186Re, and 188Re). Point isotropic sources have been simulated at the center of a water (bone) sphere, and the deposited energy has been tallied in concentric shells. fluka outcomes have been compared to penelope v.2008 results, calculated in this study as well. Moreover, in the case of monoenergetic electrons in water, a comparison with data from the literature (etran, geant4, mcnpx) has been done. Maximum percentage differences within 0.8·R_CSDA and 0.9·R_CSDA for monoenergetic electrons (R_CSDA being the continuous-slowing-down-approximation range) and within 0.8·X_90 and 0.9·X_90 for isotopes (X_90 being the radius of the sphere in which 90% of the emitted energy is absorbed) have been computed, together with the average percentage difference within 0.9·R_CSDA and 0.9·X_90 for electrons and isotopes, respectively. Results: Concerning monoenergetic electrons
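    As a schematic illustration of the shell-tallying step described in both records (not the authors' FLUKA or PENELOPE post-processing), per-event energy deposits can be binned in concentric spherical shells and normalized to shell mass to form a dose point kernel; the shell width, density and input arrays below are placeholders.

```python
# Illustrative post-processing sketch: build a dose point kernel (DPK) from
# per-event energy depositions tallied in concentric spherical shells around
# a point isotropic source. The shell width, medium density and input arrays
# are placeholders for what a Monte Carlo code would provide.
import numpy as np

def dose_point_kernel(radii_cm, edep_mev, n_primaries, dr_cm=1e-4, rho_g_cm3=1.0):
    """Return shell mid-radii [cm] and absorbed dose per primary [MeV/g]."""
    edges = np.arange(0.0, radii_cm.max() + dr_cm, dr_cm)
    edep_per_shell, _ = np.histogram(radii_cm, bins=edges, weights=edep_mev)
    shell_volumes = 4.0 / 3.0 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)   # cm^3
    shell_masses = rho_g_cm3 * shell_volumes                                 # g
    dose = edep_per_shell / (shell_masses * n_primaries)                     # MeV / g / primary
    mid_radii = 0.5 * (edges[:-1] + edges[1:])
    return mid_radii, dose

# Example with fake data standing in for Monte Carlo output:
rng = np.random.default_rng(1)
r = rng.exponential(scale=0.02, size=100_000)      # deposition radii [cm]
e = rng.uniform(0.001, 0.01, size=r.size)          # deposited energies [MeV]
radii, dpk = dose_point_kernel(r, e, n_primaries=100_000)
```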

  3. Subtle Monte Carlo Updates in Dense Molecular Systems.

    Science.gov (United States)

    Bottaro, Sandro; Boomsma, Wouter; E Johansson, Kristoffer; Andreetta, Christian; Hamelryck, Thomas; Ferkinghoff-Borg, Jesper

    2012-02-14

    Although Markov chain Monte Carlo (MC) simulation is a potentially powerful approach for exploring conformational space, it has been unable to compete with molecular dynamics (MD) in the analysis of high density structural states, such as the native state of globular proteins. Here, we introduce a kinetic algorithm, CRISP, that greatly enhances the sampling efficiency in all-atom MC simulations of dense systems. The algorithm is based on an exact analytical solution to the classic chain-closure problem, making it possible to express the interdependencies among degrees of freedom in the molecule as correlations in a multivariate Gaussian distribution. We demonstrate that our method reproduces structural variation in proteins with greater efficiency than current state-of-the-art Monte Carlo methods and has real-time simulation performance on par with molecular dynamics simulations. The presented results suggest our method as a valuable tool in the study of molecules in atomic detail, offering a potential alternative to molecular dynamics for probing long time-scale conformational transitions.

  4. Development of Momentum Conserving Monte Carlo Simulation Code for ECCD Study in Helical Plasmas

    Directory of Open Access Journals (Sweden)

    Murakami S.

    2015-01-01

    A parallel momentum-conserving collision model is developed for the GNET code, in which a linearized drift kinetic equation is solved in the five-dimensional phase space to study electron cyclotron current drive (ECCD) in helical plasmas. In order to conserve the parallel momentum, we introduce a field particle collision term in addition to the test particle collision term. Two types of field particle collision term are considered. One is the high-speed-limit model, where the momentum-conserving term does not depend on the velocity of the background plasma and can be expressed in a simple form. The other is the velocity-dependent model, which is derived directly from the Fokker-Planck collision term. In the velocity-dependent model the field particle operator can be expanded in Legendre polynomials and, introducing the Rosenbluth potentials, we derive the field particle term for each Legendre polynomial. In the GNET code, we introduce an iterative process to implement the momentum-conserving collision operator. The high-speed-limit model is applied to the ECCD simulation of the heliotron-J plasma. The simulation results show good conservation of the momentum with the iterative scheme.

  5. Development of Momentum Conserving Monte Carlo Simulation Code for ECCD Study in Helical Plasmas

    Science.gov (United States)

    Murakami, S.; Hasegawa, S.; Moriya, Y.

    2015-03-01

    A parallel momentum-conserving collision model is developed for the GNET code, in which a linearized drift kinetic equation is solved in the five-dimensional phase space to study electron cyclotron current drive (ECCD) in helical plasmas. In order to conserve the parallel momentum, we introduce a field particle collision term in addition to the test particle collision term. Two types of field particle collision term are considered. One is the high-speed-limit model, where the momentum-conserving term does not depend on the velocity of the background plasma and can be expressed in a simple form. The other is the velocity-dependent model, which is derived directly from the Fokker-Planck collision term. In the velocity-dependent model the field particle operator can be expanded in Legendre polynomials and, introducing the Rosenbluth potentials, we derive the field particle term for each Legendre polynomial. In the GNET code, we introduce an iterative process to implement the momentum-conserving collision operator. The high-speed-limit model is applied to the ECCD simulation of the heliotron-J plasma. The simulation results show good conservation of the momentum with the iterative scheme.
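    For context, a momentum-restoring field-particle operator of the high-speed-limit type described in these two records is commonly written in the generic form below; the notation is textbook-style and not necessarily identical to the GNET implementation.

```latex
% Generic momentum-restoring field-particle operator (high-speed-limit style);
% notation is textbook-like and not necessarily identical to the GNET implementation.
\[
  C(f) \;=\; C^{T}(f) \;+\; \nu_D(v)\,\frac{m\, v_\parallel\, u_\parallel}{T}\, f_M(v),
  \qquad
  u_\parallel \ \text{chosen so that}\quad
  \int m\, v_\parallel\, C(f)\, \mathrm{d}^3 v \;=\; 0 ,
\]
% i.e. the field-particle piece restores the parallel momentum removed by the
% test-particle operator C^T, with f_M the background Maxwellian and \nu_D the
% pitch-angle scattering frequency.
```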

  6. Monte Carlo simulations of systems with complex energy landscapes

    Science.gov (United States)

    Wüst, T.; Landau, D. P.; Gervais, C.; Xu, Y.

    2009-04-01

    Non-traditional Monte Carlo simulations are a powerful approach to the study of systems with complex energy landscapes. After reviewing several of these specialized algorithms we shall describe the behavior of typical systems including spin glasses, lattice proteins, and models for "real" proteins. In the Edwards-Anderson spin glass it is now possible to produce probability distributions in the canonical ensemble and thermodynamic results of high numerical quality. In the hydrophobic-polar (HP) lattice protein model Wang-Landau sampling with an improved move set (pull-moves) produces results of very high quality. These can be compared with the results of other methods of statistical physics. A more realistic membrane protein model for Glycophorin A is also examined. Wang-Landau sampling allows the study of the dimerization process including an elucidation of the nature of the process.
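    As a compact illustration of Wang-Landau sampling itself (for a small 2D Ising lattice, not the HP lattice protein or Glycophorin A models discussed above), the sketch below estimates ln g(E) with a deliberately truncated modification-factor schedule and a crude flatness test.

```python
# Illustrative Wang-Landau sketch for a small periodic 2D Ising lattice: accept a
# spin flip with probability min(1, g(E_old)/g(E_new)) and flatten the energy
# histogram, halving ln(f) each time. Schedule and flatness test are simplified.
import numpy as np

L = 8
rng = np.random.default_rng(2)
spins = rng.choice([-1, 1], size=(L, L))

def total_energy(s):
    # Each nearest-neighbour bond counted once via shifted copies of the lattice.
    return int(-np.sum(s * (np.roll(s, 1, axis=0) + np.roll(s, 1, axis=1))))

N = L * L
energies = np.arange(-2 * N, 2 * N + 1, 4)          # reachable levels step by 4
index = {int(e): i for i, e in enumerate(energies)}
ln_g = np.zeros(len(energies))
hist = np.zeros(len(energies))

E = total_energy(spins)
ln_f = 1.0
while ln_f > 1e-2:                                   # truncated schedule (illustrative)
    for _ in range(20_000):
        i, j = rng.integers(L, size=2)
        nn = spins[(i + 1) % L, j] + spins[(i - 1) % L, j] \
           + spins[i, (j + 1) % L] + spins[i, (j - 1) % L]
        dE = int(2 * spins[i, j] * nn)
        d_lng = ln_g[index[E]] - ln_g[index[E + dE]]
        if d_lng >= 0 or rng.random() < np.exp(d_lng):
            spins[i, j] *= -1
            E += dE
        ln_g[index[E]] += ln_f                       # update at the current energy
        hist[index[E]] += 1
    visited = hist > 0
    if hist[visited].min() > 0.8 * hist[visited].mean():   # crude flatness check
        hist[:] = 0
        ln_f /= 2.0

print("ln g(E) estimated on", int(visited.sum()), "energy levels")
```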

  7. SRAC95; general purpose neutronics code system

    Energy Technology Data Exchange (ETDEWEB)

    Okumura, Keisuke; Tsuchihashi, Keichiro [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Kaneko, Kunio

    1996-03-01

    SRAC is a general purpose neutronics code system applicable to core analyses of various types of reactors. Since the publication of JAERI-1302 for the revised SRAC in 1986, a number of additions and modifications have been made to the nuclear data libraries and programs. Thus, the new version SRAC95 has been completed. The system consists of six kinds of nuclear data libraries (ENDF/B-IV, -V, -VI, JENDL-2, -3.1, -3.2); five modular codes integrated into SRAC95: a collision probability calculation module (PIJ) for 16 types of lattice geometries, Sn transport calculation modules (ANISN, TWOTRAN) and diffusion calculation modules (TUD, CITATION); and two optional codes for fuel assembly and core burn-up calculations (the newly developed ASMBURN and the revised COREBN). In this version, many new functions and data are implemented to support nuclear design studies of advanced reactors, especially for burn-up calculations. SRAC95 is available not only on conventional IBM-compatible computers but also on scalar or vector computers with the UNIX operating system. This report is the SRAC95 users manual, which contains a general description, the contents of revisions, input data requirements, detailed information on usage, sample input data and a list of available libraries. (author).

  8. Performance Analysis of Korean Liquid metal type TBM based on Monte Carlo code

    Energy Technology Data Exchange (ETDEWEB)

    Kim, C. H.; Han, B. S.; Park, H. J.; Park, D. K. [Seoul National Univ., Seoul (Korea, Republic of)

    2007-01-15

    The objective of this project is to analyze the nuclear performance of the Korean HCML (Helium Cooled Molten Lithium) TBM (Test Blanket Module) which will be installed in ITER (International Thermonuclear Experimental Reactor). The project analyzes the neutronic design and nuclear performance of the Korean HCML ITER TBM through transport calculations with MCCARD. In detail, we will conduct numerical experiments for analyzing the neutronic design of the Korean HCML TBM and the DEMO fusion blanket, and for improving the nuclear performance. The results of the numerical experiments performed in this project will be utilized further for a design optimization of the Korean HCML TBM. In this project, Monte Carlo transport calculations for evaluating the TBR (tritium breeding ratio) and EMF (energy multiplication factor) were conducted to analyze the nuclear performance of the Korean HCML TBM. The activation characteristics and shielding performance of the Korean HCML TBM were analyzed using ORIGEN and MCCARD. We proposed neutronic methodologies for analyzing the nuclear characteristics of the fusion blanket, which were applied to the blanket analysis of a DEMO fusion reactor. In the results, the TBR of the Korean HCML ITER TBM is 0.1352 and the EMF is 1.362. Taking into account the limitation on the Li amount in an ITER TBM, it is expected that the tritium self-sufficiency condition can be satisfied through a change of the Li quantity and enrichment. In the activation and shielding analysis, the activity drops to 1.5% of the initial value and the decay heat drops to 0.02% of the initial amount 10 years after plasma shutdown.

  9. Overview of Particle and Heavy Ion Transport Code System PHITS

    Science.gov (United States)

    Sato, Tatsuhiko; Niita, Koji; Matsuda, Norihiro; Hashimoto, Shintaro; Iwamoto, Yosuke; Furuta, Takuya; Noda, Shusaku; Ogawa, Tatsuhiko; Iwase, Hiroshi; Nakashima, Hiroshi; Fukahori, Tokio; Okumura, Keisuke; Kai, Tetsuya; Chiba, Satoshi; Sihver, Lembit

    2014-06-01

    A general purpose Monte Carlo Particle and Heavy Ion Transport code System, PHITS, is being developed through the collaboration of several institutes in Japan and Europe. The Japan Atomic Energy Agency is responsible for managing the entire project. PHITS can deal with the transport of nearly all particles, including neutrons, protons, heavy ions, photons, and electrons, over wide energy ranges using various nuclear reaction models and data libraries. It is written in Fortran language and can be executed on almost all computers. All components of PHITS such as its source, executable and data-library files are assembled in one package and then distributed to many countries via the Research organization for Information Science and Technology, the Data Bank of the Organization for Economic Co-operation and Development's Nuclear Energy Agency, and the Radiation Safety Information Computational Center. More than 1,000 researchers have been registered as PHITS users, and they apply the code to various research and development fields such as nuclear technology, accelerator design, medical physics, and cosmic-ray research. This paper briefly summarizes the physics models implemented in PHITS, and introduces some important functions useful for specific applications, such as an event generator mode and beam transport functions.

  10. Blind Recognition Algorithm of Turbo Codes for Communication Intelligence Systems

    Directory of Open Access Journals (Sweden)

    Ali Naseri

    2011-11-01

    Turbo codes are widely used in land and space radio communication systems and, because of their complex structure, are customary in military communication systems. In electronic warfare, COMINT systems attempt to recognize codes blindly. In this paper, an algorithm is proposed for blind recognition of turbo code parameters such as the code type, code-word length, code rate, interleaver length and the number of delay blocks of the convolutional code. The computational complexity of the algorithm is 0.5L^3 + 1.25L, so it is suitable for real-time systems.

  11. Validation of a GPU-based Monte Carlo code (gPMC) for proton radiation therapy: clinical cases study

    Science.gov (United States)

    Giantsoudi, Drosoula; Schuemann, Jan; Jia, Xun; Dowdell, Stephen; Jiang, Steve; Paganetti, Harald

    2015-03-01

    Monte Carlo (MC) methods are recognized as the gold standard for dose calculation; however, they have not yet replaced analytical methods due to their lengthy calculation times. GPU-based applications allow MC dose calculations to be performed on time scales comparable to conventional analytical algorithms. This study focuses on validating our GPU-based MC code for proton dose calculation (gPMC) against an experimentally validated multi-purpose MC code (TOPAS) and on comparing their performance for clinical patient cases. Clinical cases from five treatment sites were selected, covering the full range from very homogeneous patient geometries (liver) to patients with high geometrical complexity (air cavities and density heterogeneities in head-and-neck and lung patients) and from short beam range (breast) to large beam range (prostate). Both gPMC and TOPAS were used to calculate 3D dose distributions for all patients. Comparisons were performed based on target coverage indices (mean dose, V95, D98, D50, D02) and gamma index distributions. Dosimetric indices differed by less than 2% between TOPAS and gPMC dose distributions for most cases. Gamma index analysis with a 1%/1 mm criterion resulted in a passing rate of more than 94% of all patient voxels receiving more than 10% of the mean target dose, for all patients except the prostate cases. Although clinically insignificant, gPMC resulted in a systematic underestimation of the target dose for prostate cases by 1-2% compared to TOPAS. Correspondingly, the gamma index analysis with the 1%/1 mm criterion failed for most beams for this site, while for the 2%/1 mm criterion passing rates of more than 94.6% of all patient voxels were observed. For the same initial number of simulated particles, the calculation time for a single beam for a typical head and neck patient plan decreased from 4 CPU hours per million particles (2.8-2.9 GHz Intel X5600) for TOPAS to 2.4 s per million particles (NVIDIA TESLA C2075) for gPMC. Excellent agreement was
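    For reference, the gamma index used in the comparison follows the standard definition below, in which the quoted 1%/1 mm and 2%/1 mm settings are the dose-difference and distance-to-agreement tolerances; this is the general formula, not a detail specific to gPMC or TOPAS.

```latex
% Standard gamma-index definition; Delta D and Delta d are the dose-difference and
% distance-to-agreement tolerances, e.g. 1%/1 mm or 2%/1 mm as quoted in the record.
\[
  \gamma(\mathbf{r}_e) \;=\; \min_{\mathbf{r}_r}
  \sqrt{ \frac{\lvert \mathbf{r}_r - \mathbf{r}_e \rvert^{2}}{\Delta d^{2}}
       + \frac{\bigl[ D_r(\mathbf{r}_r) - D_e(\mathbf{r}_e) \bigr]^{2}}{\Delta D^{2}} },
  \qquad \gamma \le 1 \ \text{counts as a pass.}
\]
```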

  12. Thermal neutron response of a boron-coated GEM detector via GEANT4 Monte Carlo code.

    Science.gov (United States)

    Jamil, M; Rhee, J T; Kim, H G; Ahmad, Farzana; Jeon, Y J

    2014-10-22

    In this work, we report the design configuration and the performance of a hybrid Gas Electron Multiplier (GEM) detector. In order to make the detector sensitive to thermal neutrons, the forward electrode of the GEM has been coated with enriched boron-10 material, which acts as a neutron converter. A GEM configuration of 5×5 cm² has been used for the thermal neutron studies. The response of the detector has been estimated using the GEANT4 MC code with two different physics lists. With the QGSP_BIC_HP physics list, the neutron detection efficiency was determined to be about 3%, while with the QGSP_BERT_HP physics list the efficiency was around 2.5%, at an incident thermal neutron energy of 25 meV. The response of the detector shows that coating the GEM with a boron converter improves the efficiency for thermal neutron detection.

  13. Development of NRESP98 Monte Carlo codes for the calculation of neutron response functions of neutron detectors. Calculation of the response function of spherical BF3 proportional counter

    Energy Technology Data Exchange (ETDEWEB)

    Hashimoto, M.; Saito, K.; Ando, H. [Power Reactor and Nuclear Fuel Development Corp., Oarai, Ibaraki (Japan). Oarai Engineering Center

    1998-05-01

    A method to calculate the response function of the spherical BF3 proportional counter, which is commonly used as a neutron dose rate meter and as a neutron spectrometer with a multi-moderator system, is developed. As the calculation code for evaluating the response function, the existing NRESP code series, Monte Carlo codes for the calculation of the response functions of neutron detectors, is selected. However, since the application scope of the existing NRESP is restricted, NRESP98 has been tuned as a generally applicable code by expanding the geometrical conditions, the applicable elements, etc. NRESP98 is tested with the response function of the spherical BF3 proportional counter. Including the effect of the distribution of the amplification factor, the detailed evaluation of charged-particle transport and the effect of the statistical distribution, the NRESP98 results fit the experimental data within ±10%. (author)

  14. Mercury + VisIt: Integration of a Real-Time Graphical Analysis Capability into a Monte Carlo Transport Code

    Energy Technology Data Exchange (ETDEWEB)

    O' Brien, M J; Procassini, R J; Joy, K I

    2009-03-09

    Validation of the problem definition and analysis of the results (tallies) produced during a Monte Carlo particle transport calculation can be a complicated, time-intensive process. The time required for a person to create an accurate, validated combinatorial geometry (CG) or mesh-based representation of a complex problem, free of common errors such as gaps and overlapping cells, can range from days to weeks. The ability to interrogate the internal structure of a complex, three-dimensional (3-D) geometry prior to running the transport calculation can improve the user's confidence in the validity of the problem definition. With regard to the analysis of results, the process of extracting tally data from printed tables within a file is laborious and not an intuitive approach to understanding the results. The ability to display tally information overlaid on top of the problem geometry can decrease the time required for analysis and increase the user's understanding of the results. To this end, our team has integrated VisIt, a parallel, production-quality visualization and data analysis tool, into Mercury, a massively parallel Monte Carlo particle transport code. VisIt provides an API for real-time visualization of a simulation as it is running. The user may select which plots to display from the VisIt GUI, or by sending VisIt a Python script from Mercury. The frequency at which plots are updated can be set, and the user can visualize the simulation results as the simulation is running.

  15. Integrated burnup calculation code system SWAT

    Energy Technology Data Exchange (ETDEWEB)

    Suyama, Kenya; Hirakawa, Naohiro [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Iwasaki, Tomohiko

    1997-11-01

    SWAT is an integrated burnup code system developed for the analysis of post-irradiation examinations, transmutation of radioactive waste, and burnup credit problems. It enables burnup problems to be analyzed using a neutron spectrum that depends on the irradiation environment, by combining SRAC, the Japanese standard thermal reactor analysis code system, with ORIGEN2, a burnup code widely used all over the world. SWAT builds an effective cross-section library based on the SRAC results and performs the burnup analysis with ORIGEN2 using that library. SRAC and ORIGEN2 can be called as external modules. SWAT has its own cross-section library based on JENDL-3.2 and libraries of fission yield and decay data prepared from the JNDC FP Library, second version. Using these libraries, users can employ the latest data in SWAT calculations besides the effective cross sections prepared by SRAC. Users can also make their own ORIGEN2 library using the SWAT output file. This report presents the concept of SWAT and its user's manual. (author)

  16. Introduction to the simulation with MCNP Monte Carlo code and its applications in Medical Physics; Introduccion a la simulacion con el codigo de Monte Carlo MCNP y sus aplicaciones en Fisica Medica

    Energy Technology Data Exchange (ETDEWEB)

    Parreno Z, F.; Paucar J, R.; Picon C, C. [Instituto Peruano de Energia Nuclear, Av. Canada 1470, San Borja, Lima 41 (Peru)

    1998-12-31

    Monte Carlo simulation is a tool that Medical Physics relies on for the development of its research; interest in this tool is growing, as may be observed in the main scientific journals for the years 1995-1997, where more than 27% of the papers deal with Monte Carlo and/or its applications in radiation transport. At the Peruvian Institute of Nuclear Energy we are implementing and making use of the MCNP4 and EGS4 codes. In this work the general features of the Monte Carlo method and its most useful applications in Medical Physics are presented. Likewise, a simulation of the calculation of isodose curves in an interstitial treatment with Ir-192 wires in a mammary gland carcinoma is carried out. (Author)

  17. Commissioning of a Monte Carlo treatment planning system for clinical use in radiation therapy; Evaluacion de un sistema de planificacion Monte Carlo de uso clinico para radioterapia

    Energy Technology Data Exchange (ETDEWEB)

    Zucca Aparcio, D.; Perez Moreno, J. M.; Fernandez Leton, P.; Garcia Ruiz-Zorrila, J.

    2016-10-01

    The commissioning procedure of a Monte Carlo (MC) treatment planning system for photon beams from a dedicated stereotactic body radiosurgery (SBRT) unit is reported in this document. XVMC is the MC code available in the evaluated treatment planning system (BrainLAB iPlan RT Dose); it is based on virtual source models that simulate the primary and scattered radiation, as well as the electron contamination, using Gaussian components whose modelling requires measurements of dose profiles, percentage depth dose curves and output factors, performed both in water and in air. The dosimetric accuracy of the particle transport simulation has been analyzed by validating the calculations in homogeneous and heterogeneous media against measurements made under the same conditions as the dose calculation, and by checking the stochastic behaviour of the Monte Carlo calculations when different statistical variances are used. Likewise, it has been verified how the planning system performs the conversion from dose to medium to dose to water, applying the water-to-medium stopping power ratio, in the presence of heterogeneities where this phenomenon is relevant, such as high-density media (cortical bone). (Author)
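    The dose-to-medium to dose-to-water conversion checked at the end is conventionally written through the water-to-medium stopping-power ratio; the relation below is the standard Bragg-Gray form, not a detail extracted from the planning system's documentation.

```latex
% Conventional conversion of Monte Carlo dose-to-medium into dose-to-water
% (Bragg-Gray picture); \bar{s}_{w,m} is the mean mass collision stopping-power
% ratio of water to medium, averaged over the local electron fluence spectrum.
\[
  D_{w} \;=\; D_{m}\, \bar{s}_{w,m},
  \qquad
  \bar{s}_{w,m} \;=\;
  \frac{\displaystyle\int \Phi_E \left( S_{\mathrm{col}}/\rho \right)_{w} \mathrm{d}E}
       {\displaystyle\int \Phi_E \left( S_{\mathrm{col}}/\rho \right)_{m} \mathrm{d}E}.
\]
```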

  18. Monte Carlo simulation using the PENELOPE code with an ant colony algorithm to study MOSFET detectors

    Energy Technology Data Exchange (ETDEWEB)

    Carvajal, M A; Palma, A J [Departamento de Electronica y Tecnologia de Computadores, Universidad de Granada, E-18071 Granada (Spain); Garcia-Pareja, S [Servicio de Radiofisica Hospitalaria, Hospital Regional Universitario 'Carlos Haya', Avda Carlos Haya, s/n, E-29010 Malaga (Spain); Guirado, D [Servicio de Radiofisica, Hospital Universitario 'San Cecilio', Avda Dr Oloriz, 16, E-18012 Granada (Spain); Vilches, M [Servicio de Fisica y Proteccion Radiologica, Hospital Regional Universitario 'Virgen de las Nieves', Avda Fuerzas Armadas, 2, E-18014 Granada (Spain); Anguiano, M; Lallena, A M [Departamento de Fisica Atomica, Molecular y Nuclear, Universidad de Granada, E-18071 Granada (Spain)], E-mail: carvajal@ugr.es, E-mail: garciapareja@gmail.com, E-mail: dguirado@ugr.es, E-mail: mvilches@ugr.es, E-mail: mangui@ugr.es, E-mail: ajpalma@ugr.es, E-mail: lallena@ugr.es

    2009-10-21

    In this work we have developed a simulation tool, based on the PENELOPE code, to study the response of MOSFET devices to irradiation with high-energy photons. The energy deposited in the extremely thin silicon dioxide layer has been calculated. To reduce the statistical uncertainties, an ant colony algorithm has been implemented to drive the application of splitting and Russian roulette as variance reduction techniques. In this way, the uncertainty has been reduced by a factor of ~5, while the efficiency is increased by a factor of above 20. As an application, we have studied the dependence of the response of the pMOS transistor 3N163, used as a dosimeter, on the incidence angle of the radiation for three common photon sources used in radiotherapy: a 60Co Theratron-780 and the 6 and 18 MV beams produced by a Mevatron KDS LINAC. Experimental and simulated results have been obtained for gantry angles of 0, 15, 30, 45, 60 and 75 degrees. The agreement obtained has permitted validation of the simulation tool. We have studied how to reduce the angular dependence of the MOSFET response by using an additional encapsulation made of brass in the case of the two LINAC qualities considered.
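    As a generic illustration of the splitting and Russian-roulette moves that the ant colony algorithm is used to steer (not the authors' PENELOPE implementation), the sketch below adjusts particle number and weight while preserving the expected weight; the importance ratio is a placeholder for whatever the ant-colony scheme supplies.

```python
# Generic splitting / Russian-roulette step (illustrative, not the PENELOPE or
# ant-colony implementation): when a particle moves into a region of higher
# importance it is split, when it moves into a lower-importance region it plays
# roulette. Either way the expected statistical weight is preserved.
import random

def split_or_roulette(weight, importance_old, importance_new, rng=random):
    """Return a list of (possibly zero) surviving particle weights."""
    ratio = importance_new / importance_old      # > 1 -> split, < 1 -> roulette
    if ratio >= 1.0:
        n = int(ratio)
        # keep the expectation exact by sometimes emitting one extra copy
        if rng.random() < ratio - n:
            n += 1
        return [weight / ratio] * n
    # Russian roulette: survive with probability `ratio`, boosting the weight if kept
    if rng.random() < ratio:
        return [weight / ratio]
    return []

# Example: a unit-weight particle crossing into a 3.5x more important cell
print(split_or_roulette(1.0, importance_old=1.0, importance_new=3.5))
```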

  19. Monte Carlo Alpha Iteration Algorithm for a Subcritical System Analysis

    Directory of Open Access Journals (Sweden)

    Hyung Jin Shim

    2015-01-01

    The α-k iteration method, which searches for the fundamental-mode alpha-eigenvalue via iterative updates of the fission source distribution, has been successfully used for Monte Carlo (MC) alpha-static calculations of supercritical systems. However, the α-k iteration method for deep subcritical system analysis suffers from a gigantic number of neutron generations or a huge neutron weight, which leads to an abnormal termination of the MC calculations. In order to stably estimate the prompt neutron decay constant (α) of prompt subcritical systems regardless of the subcriticality, we propose a new MC alpha-static calculation method named the α-iteration algorithm. The new method is derived by directly applying the power method to the α-mode eigenvalue equation, and its calculation stability is achieved by controlling the number of time source neutrons, which are generated in proportion to α divided by the neutron speed in the MC neutron transport simulations. The effectiveness of the α-iteration algorithm is demonstrated for two-group homogeneous problems with varying subcriticality by comparison with analytic solutions. The applicability of the proposed method is evaluated for an experimental benchmark of a thorium-loaded accelerator-driven system.
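    As a deterministic cross-check of the kind used for validation above, the α-mode eigenvalue of an infinite homogeneous two-group medium reduces to a small matrix eigenproblem; the cross sections and speeds below are invented purely for illustration and are not taken from the paper.

```python
# Deterministic sketch of the alpha-eigenvalue problem for an infinite homogeneous
# two-group medium (the kind of analytic benchmark mentioned above). All numbers
# are invented for illustration; this subcritical set gives a negative alpha.
import numpy as np

v        = np.array([1.0e7, 2.2e5])        # group speeds [cm/s]
sig_t    = np.array([0.20, 0.80])          # total cross sections [1/cm]
nu_sig_f = np.array([0.005, 0.10])         # nu * Sigma_f [1/cm]
chi      = np.array([1.0, 0.0])            # fission spectrum
# scattering matrix S[g, g'] = Sigma_s(g' -> g) [1/cm], within-group terms included
sig_s    = np.array([[0.17, 0.00],
                     [0.02, 0.70]])

# (alpha / v_g) phi_g = sum_g' [chi_g * nu_sig_f_g' + S_{g g'}] phi_g' - sig_t_g phi_g
M = np.diag(v) @ (np.outer(chi, nu_sig_f) + sig_s - np.diag(sig_t))
alphas, modes = np.linalg.eig(M)
fundamental = alphas[np.argmax(alphas.real)].real
print(f"fundamental alpha = {fundamental:.4e} 1/s")   # negative -> subcritical
```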

  20. Investigation of Dosimetric Parameters of $^{192}$Ir MicroSelectron v2 HDR Brachytherapy Source Using EGSnrc Monte Carlo Code

    CERN Document Server

    Naeem, Hamza; Zheng, Huaqing; Cao, Ruifen; Pei, Xi; Hu, Liqin; Wu, Yican

    2016-01-01

    The $^{192}$Ir sources are widely used for high dose rate (HDR) brachytherapy treatments. The aim of this study is to simulate the $^{192}$Ir MicroSelectron v2 HDR brachytherapy source and calculate the air kerma strength, dose rate constant, radial dose function and anisotropy function established in the updated AAPM Task Group 43 protocol. The EGSnrc Monte Carlo (MC) code package is used to calculate these dosimetric parameters, including the dose contribution from the secondary electron source and the contribution of bremsstrahlung photons to the air kerma strength. The air kerma strength, dose rate constant and radial dose function, as well as the anisotropy function at distances greater than 0.5 cm from the source center, are in good agreement with previously published studies. The air kerma strength obtained from the MC simulation is $9.762\times 10^{-8}\ \textrm{U}\,\textrm{Bq}^{-1}$ and the dose rate constant is $1.108\ \textrm{cGy}\,\textrm{h}^{-1}\,\textrm{U}^{-1} \pm 0.13\%$.
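    For context, the quantities quoted (air kerma strength, dose rate constant, radial dose function and anisotropy function) enter the dose-rate calculation through the standard AAPM TG-43 formalism reproduced below; this is the general expression, not a result from the record.

```latex
% General AAPM TG-43 dose-rate formalism in which the quoted parameters appear:
\[
  \dot{D}(r,\theta) \;=\; S_K \,\Lambda\,
  \frac{G_L(r,\theta)}{G_L(r_0,\theta_0)}\, g_L(r)\, F(r,\theta),
\]
% with S_K the air kerma strength, \Lambda the dose rate constant, G_L the line-source
% geometry function, g_L(r) the radial dose function, F(r,\theta) the anisotropy
% function, and (r_0, \theta_0) = (1 cm, 90 degrees) the reference point.
```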

  1. EleCa: a Monte Carlo code for the propagation of extragalactic photons at ultra-high energy

    CERN Document Server

    Settimo, Mariangela

    2013-01-01

    Ultra-high energy photons play an important role as an independent probe of the photo-pion production mechanism by UHE cosmic rays. Their observation, or non-observation, may constrain astrophysical scenarios for the origin of UHECRs and help to understand the nature of the flux suppression observed by several experiments at energies above 10$^{19.5}$ eV. Whereas the interaction length of UHE photons above 10$^{17}$ eV is only a few hundred kpc up to tens of Mpc, photons can interact with the extragalactic background radiation, leading to the development of electromagnetic cascades which affect the fluxes of photons observed at Earth. The interpretation of the current experimental results relies on simulations of UHE photon propagation. In this contribution, we present the novel Monte Carlo code "EleCa" to simulate the Electromagnetic Cascading initiated by high-energy photons and electrons. The distance within which we expect to observe UHE photons is discussed and the flux of GZK pho...

  2. The Monte Carlo Simulation Method for System Reliability and Risk Analysis

    CERN Document Server

    Zio, Enrico

    2013-01-01

    Monte Carlo simulation is one of the best tools for performing realistic analysis of complex systems as it allows most of the limiting assumptions on system behavior to be relaxed. The Monte Carlo Simulation Method for System Reliability and Risk Analysis comprehensively illustrates the Monte Carlo simulation method and its application to reliability and system engineering. Readers are given a sound understanding of the fundamentals of Monte Carlo sampling and simulation and its application to realistic system modeling. Whilst many of the topics rely on a high-level understanding of calculus, probability and statistics, simple academic examples will be provided in support of the explanation of the theoretical foundations to facilitate comprehension of the subject matter. Case studies will be introduced to provide the practical value of the most advanced techniques. This detailed approach makes The Monte Carlo Simulation Method for System Reliability and Risk Analysis a key reference for senior undergra...

  3. Study of cold neutron sources: Implementation and validation of a complete computation scheme for research reactor using Monte Carlo codes TRIPOLI-4.4 and McStas

    Energy Technology Data Exchange (ETDEWEB)

    Campioni, Guillaume; Mounier, Claude [Commissariat a l' Energie Atomique, CEA, 31-33, rue de la Federation, 75752 Paris cedex (France)

    2006-07-01

    The main goal of this thesis on cold neutron source (CNS) studies in research reactors was to create a complete set of tools to design CNS efficiently. The work addresses the problem of running accurate simulations of experimental devices inside the reactor reflector that remain valid for parametric studies. On one hand, deterministic codes have reasonable computation times but introduce problems in describing the geometry. On the other hand, Monte Carlo codes make it possible to compute on a precise geometry, but need computation times so large that parametric studies are impossible. To decrease this computation time, several developments were made in the Monte Carlo code TRIPOLI-4.4. An uncoupling technique is used to isolate a study zone in the complete reactor geometry. By recording boundary conditions (incoming flux), further simulations can be launched for parametric studies with a computation time reduced by a factor of 60 (case of the cold neutron source of the Orphee reactor). The short response time allows parametric studies to be carried out with a Monte Carlo code. Moreover, using biasing methods, the flux can be recorded on the surface of the neutron guide entries (low solid angle) with a further gain in running time. Finally, the implementation of a coupling module between TRIPOLI-4.4 and the Monte Carlo code McStas for condensed matter research gives the possibility of obtaining fluxes after transmission through the neutron guides, and thus the neutron flux received by the samples studied by condensed matter scientists. This set of developments, involving TRIPOLI-4.4 and McStas, represents a complete computation scheme for research reactors: from the nuclear core, where neutrons are created, to the exit of the neutron guides, on the samples of matter. This complete calculation scheme is tested against ILL4 measurements of the flux in cold neutron guides. (authors)

  4. SINFAC - SYSTEMS IMPROVED NUMERICAL FLUIDS ANALYSIS CODE

    Science.gov (United States)

    Costello, F. A.

    1994-01-01

    The Systems Improved Numerical Fluids Analysis Code, SINFAC, consists of additional routines added to the April 1983 revision of SINDA, a general thermal analyzer program. The purpose of the additional routines is to allow for the modeling of active heat transfer loops. The modeler can simulate the steady-state and pseudo-transient operations of 16 different heat transfer loop components including radiators, evaporators, condensers, mechanical pumps, reservoirs and many types of valves and fittings. In addition, the program contains a property analysis routine that can be used to compute the thermodynamic properties of 20 different refrigerants. SINFAC can simulate the response to transient boundary conditions. SINFAC was first developed as a method for computing the steady-state performance of two-phase systems. It was then modified using CNFRWD, SINDA's explicit time-integration scheme, to accommodate transient thermal models. However, SINFAC cannot simulate pressure drops due to time-dependent fluid acceleration, transient boil-out, or transient fill-up, except in the accumulator. SINFAC also requires the user to be familiar with SINDA. The solution procedure used by SINFAC is similar to that which an engineer would use to solve a system manually. The solution to a system requires the determination of all of the outlet conditions of each component such as the flow rate, pressure, and enthalpy. To obtain these values, the user first estimates the inlet conditions to the first component of the system, then computes the outlet conditions from the data supplied by the manufacturer of the first component. The user then estimates the temperature at the outlet of the third component and computes the corresponding flow resistance of the second component. With the flow resistance of the second component, the user computes the conditions downstream, namely the inlet conditions of the third component. The computations follow for the rest of the system, back to the first component

  5. Optimization of GATE and PHITS Monte Carlo code parameters for uniform scanning proton beam based on simulation with FLUKA general-purpose code

    Energy Technology Data Exchange (ETDEWEB)

    Kurosu, Keita [Department of Medical Physics and Engineering, Osaka University Graduate School of Medicine, Suita, Osaka 565-0871 (Japan); Department of Radiation Oncology, Osaka University Graduate School of Medicine, Suita, Osaka 565-0871 (Japan); Takashina, Masaaki; Koizumi, Masahiko [Department of Medical Physics and Engineering, Osaka University Graduate School of Medicine, Suita, Osaka 565-0871 (Japan); Das, Indra J. [Department of Radiation Oncology, Indiana University School of Medicine, Indianapolis, IN 46202 (United States); Moskvin, Vadim P., E-mail: vadim.p.moskvin@gmail.com [Department of Radiation Oncology, Indiana University School of Medicine, Indianapolis, IN 46202 (United States)

    2014-10-01

    Although three general-purpose Monte Carlo (MC) simulation tools, Geant4, FLUKA and PHITS, have been used extensively, differences in their calculation results have been reported. The major causes are the implementation of the physical models, the preset value of the ionization potential and the definition of the maximum step size. In order to achieve artifact-free MC simulation, an optimized parameter list for each simulation system is required. Several authors have already proposed such optimized lists, but those studies were performed with a simple system such as a water phantom only. Since particle beams undergo transport, interaction and electromagnetic processes during beam delivery, establishing an optimized parameter list for the whole beam delivery system is therefore of major importance. The purpose of this study was to determine the optimized parameter lists for GATE and PHITS using a proton treatment nozzle computational model. The simulation was performed with a broad scanning proton beam. The influence of the customized parameters on the percentage depth dose (PDD) profile and the proton range was investigated by comparison with the FLUKA results, and the optimal parameters were then determined. The PDD profile and the proton range obtained with our optimized parameter list showed different characteristics from the results obtained with the simple system. This led to the conclusion that the physical model, particle transport mechanics and different geometry-based descriptions need accurate customization in planning computational experiments for artifact-free MC simulation.

  6. Simulation about Self-absorption of Ni-63 Nuclear Battery Using Monte Carlo Code

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Tae Ho; Kim, Ji Hyun [Ulsan National Institute of Science and Technology, Ulsan (Korea, Republic of)

    2014-05-15

    Radioisotope batteries have an energy density 100-10000 times greater than chemical batteries. In addition, Li-ion batteries have fundamental problems such as a short lifetime and the need for a recharging system, and existing batteries are difficult to operate inside the human body, in national defense equipment or in space environments. With the development of semiconductor processes and materials technology, microdevices have become much more highly integrated. It is expected that, based on new semiconductor technology, the conversion efficiency of betavoltaic batteries will be greatly increased. Furthermore, the beta particles emitted by the radioisotope cannot penetrate human skin, so such a battery is safer than a Li battery, which carries a risk of explosion. In other words, interest in radioisotope batteries is increasing because they are applicable as power sources for artificial internal organs without recharge or replacement, for micro sensors used in arctic and other special environments, for small military equipment and for the space industry. However, there are not enough data on the beta particle fluence from radioisotope sources used in nuclear batteries. The beta particle fluence directly influences the battery efficiency, and it is seriously affected by the radioisotope source thickness because of the self-absorption effect. Therefore, in this article, we present a basic design of a Ni-63 nuclear battery and simulation data for the beta particle fluence with various thicknesses of the radioisotope source and designs of the battery.

  7. Energy and resolution calibration of NaI(Tl) and LaBr3(Ce) scintillators and validation of an EGS5 Monte Carlo user code for efficiency calculations

    Energy Technology Data Exchange (ETDEWEB)

    Casanovas, R., E-mail: ramon.casanovas@urv.cat [Unitat de Fisica Medica, Facultat de Medicina i Ciencies de la Salut, Universitat Rovira i Virgili, ES-43201 Reus (Tarragona) (Spain); Morant, J.J. [Servei de Proteccio Radiologica, Facultat de Medicina i Ciencies de la Salut, Universitat Rovira i Virgili, ES-43201 Reus (Tarragona) (Spain); Salvado, M. [Unitat de Fisica Medica, Facultat de Medicina i Ciencies de la Salut, Universitat Rovira i Virgili, ES-43201 Reus (Tarragona) (Spain)

    2012-05-21

    Radiation detectors yield optimal performance only if they are accurately calibrated. This paper presents the energy, resolution and efficiency calibrations for two scintillation detectors, NaI(Tl) and LaBr₃(Ce). For the first two calibrations, several fitting functions were tested. To perform the efficiency calculations, a Monte Carlo user code for the EGS5 code system was developed with several important extensions. The correct performance of the simulations was validated by comparing the simulated spectra with the experimental spectra and by reproducing a number of efficiency and activity calculations. - Highlights: NaI(Tl) and LaBr₃(Ce) scintillation detectors are used for gamma-ray spectrometry. Energy, resolution and efficiency calibrations are discussed for both detectors. For the first two calibrations, several fitting functions are tested. A Monte Carlo user code for EGS5 was developed for the efficiency calculations. The code was validated by reproducing some efficiency and activity calculations.
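
    As a generic illustration of the resolution calibration described in this record (the specific fitting functions used in the paper are not given here), a commonly fitted empirical form is FWHM(E) = a + b*sqrt(E) + c*E. The Python sketch below fits that form to made-up calibration points with scipy.optimize.curve_fit.

      import numpy as np
      from scipy.optimize import curve_fit

      def fwhm_model(E, a, b, c):
          # Common empirical resolution model for scintillators:
          # FWHM(E) = a + b*sqrt(E) + c*E
          return a + b * np.sqrt(E) + c * E

      # Hypothetical calibration points (energy in keV, FWHM in keV).
      E = np.array([59.5, 356.0, 661.7, 1173.2, 1332.5])
      fwhm = np.array([9.1, 23.5, 33.0, 44.8, 48.2])

      popt, pcov = curve_fit(fwhm_model, E, fwhm, p0=(1.0, 1.0, 0.01))
      a, b, c = popt
      print(f"a = {a:.3f}, b = {b:.3f}, c = {c:.4f}")
      print("Relative resolution at 661.7 keV:",
            f"{100.0 * fwhm_model(661.7, *popt) / 661.7:.2f} %")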

  8. Optical code division multiple access secure communications systems with rapid reconfigurable polarization shift key user code

    Science.gov (United States)

    Gao, Kaiqiang; Wu, Chongqing; Sheng, Xinzhi; Shang, Chao; Liu, Lanlan; Wang, Jian

    2015-09-01

    An optical code division multiple access (OCDMA) secure communications system scheme with a rapidly reconfigurable polarization shift keying (Pol-SK) bipolar user code is proposed and demonstrated. Compared to fixed-code OCDMA, constantly changing the user code greatly improves the anti-eavesdropping performance. A Pol-SK OCDMA experiment with a 10 Gchip/s user code and 1.25 Gb/s user payload data has been realized, which shows that this scheme has good tolerance and can be realized easily.

  9. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-03-01

    This Manual represents Revision 5 of the user documentation for the modular code system referred to as SCALE. The history of the SCALE code system dates back to 1969 when the current Computational Physics and Engineering Division at Oak Ridge National Laboratory (ORNL) began providing the transportation package certification staff at the U.S. Atomic Energy Commission with computational support in the use of the new KENO code for performing criticality safety assessments with the statistical Monte Carlo method. From 1969 to 1976 the certification staff relied on the ORNL staff to assist them in the correct use of codes and data for criticality, shielding, and heat transfer analyses of transportation packages. However, the certification staff learned that, with only occasional use of the codes, it was difficult to become proficient in performing the calculations often needed for an independent safety review. Thus, shortly after the move of the certification staff to the U.S. Nuclear Regulatory Commission (NRC), the NRC staff proposed the development of an easy-to-use analysis system that provided the technical capabilities of the individual modules with which they were familiar. With this proposal, the concept of the Standardized Computer Analyses for Licensing Evaluation (SCALE) code system was born. This manual covers an array of modules written for the SCALE package, consisting of drivers, system libraries, cross section and materials properties libraries, input/output routines, storage modules, and help files.

  10. Improved FEC Code Based on Concatenated Code for Optical Transmission Systems

    Institute of Scientific and Technical Information of China (English)

    YUAN Jian-guo; JIANG Ze; MAO You-ju

    2006-01-01

    Three improved schemes for super forward error correction (super-FEC) concatenated codes are proposed, based on an analysis of the development trend of long-haul optical transmission systems and of the shortcomings of existing FEC codes. The performance of the Reed-Solomon (RS) + Bose-Chaudhuri-Hocquenghem (BCH) inner-outer serially concatenated code is simulated, and the concepts of encoding/decoding the parallel-concatenated code are presented. Furthermore, the simulation results for the RS(255,239)+RS(255,239) code and the RS(255,239)+RS(255,223) code show that these two concatenated codes are superior coding schemes, offering better error correction, moderate redundancy and easy implementation compared with the classic RS(255,239) code and other codes; their signal-to-noise-ratio gains are respectively 2-3 dB higher than that of the RS(255,239) code at a bit error rate of 1×10⁻¹³. Finally, the frame structure of the new concatenated code is arranged, laying a firm foundation for designing its hardware.

  11. On Analyzing LDPC Codes over Multiantenna MC-CDMA System

    Directory of Open Access Journals (Sweden)

    S. Suresh Kumar

    2014-01-01

    The multiantenna multicarrier code-division multiple access (MC-CDMA) technique has been attracting much attention for the design of future broadband wireless systems. In addition, the low-density parity-check (LDPC) code, a promising near-optimal error correction code, is widely considered for next-generation communication systems. In this paper, we propose a simple method to construct a regular quasi-cyclic low-density parity-check (QC-LDPC) code to improve the transmission performance over a precoded MC-CDMA system with limited feedback. Simulation results show that the coding gain of the proposed QC-LDPC codes is larger than that of Reed-Solomon codes, and that the performance of the multiantenna MC-CDMA system can be greatly improved by these QC-LDPC codes when the data rate is high.
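
    As a generic illustration of the quasi-cyclic structure mentioned in this record (the exponent matrix below is made up and is not the construction from the paper), a QC-LDPC parity-check matrix can be assembled from circulant permutation matrices, each a cyclically shifted identity block:

      import numpy as np

      def circulant(shift, size):
          """Size x size identity matrix cyclically shifted by 'shift' columns."""
          return np.roll(np.eye(size, dtype=int), shift, axis=1)

      def qc_ldpc_parity_matrix(exponents, block_size):
          """Expand an exponent matrix into a quasi-cyclic parity-check matrix.
          Entry -1 denotes an all-zero block; entry s >= 0 denotes a circulant
          permutation matrix with shift s."""
          rows = []
          for row in exponents:
              blocks = [np.zeros((block_size, block_size), dtype=int) if s < 0
                        else circulant(s, block_size) for s in row]
              rows.append(np.hstack(blocks))
          return np.vstack(rows)

      # Hypothetical 2 x 4 exponent matrix with block size 5.
      E = [[0, 1, 2, -1],
           [3, -1, 4, 0]]
      H = qc_ldpc_parity_matrix(E, 5)
      print(H.shape)          # (10, 20)
      print(H.sum(axis=0))    # column weights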

  12. Efficiencies of dynamic Monte Carlo algorithms for off-lattice particle systems with a single impurity

    KAUST Repository

    Novotny, M.A.

    2010-02-01

    The efficiency of dynamic Monte Carlo algorithms for off-lattice systems composed of particles is studied for the case of a single impurity particle. The theoretical efficiencies of the rejection-free method and of the Monte Carlo with Absorbing Markov Chains method are given. Simulation results are presented that confirm the theoretical efficiencies.
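
    As a sketch of what "rejection-free" means here (a generic n-fold-way/kinetic Monte Carlo move selection, not the specific algorithm analysed in the record), each step picks an event with probability proportional to its rate and advances time by an exponentially distributed increment, so no proposed move is ever rejected:

      import numpy as np

      rng = np.random.default_rng(0)

      def rejection_free_step(rates, rng):
          """One rejection-free (kinetic MC) step for a set of event rates.
          Returns the chosen event index and the time increment."""
          total = rates.sum()
          # Choose an event with probability rate_i / total.
          event = rng.choice(len(rates), p=rates / total)
          # Time advances by an exponential waiting time with mean 1/total.
          dt = rng.exponential(1.0 / total)
          return event, dt

      # Hypothetical rates for a few possible moves of an impurity particle.
      rates = np.array([0.1, 2.0, 0.5, 0.05])
      t = 0.0
      for _ in range(5):
          event, dt = rejection_free_step(rates, rng)
          t += dt
          print(f"t = {t:8.3f}  executed event {event}")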

  13. Application of a non-steady-state orbit-following Monte-Carlo code to neutron modeling in the MAST spherical tokamak

    Science.gov (United States)

    Tani, K.; Shinohara, K.; Oikawa, T.; Tsutsui, H.; McClements, K. G.; Akers, R. J.; Liu, Y. Q.; Suzuki, M.; Ide, S.; Kusama, Y.; Tsuji-Iio, S.

    2016-11-01

    As part of the verification and validation of a newly developed non-steady-state orbit-following Monte-Carlo code, application studies of time-dependent neutron rates have been made for a specific shot in the Mega Amp Spherical Tokamak (MAST), using 3D fields representing vacuum resonant magnetic perturbations (RMPs) and toroidal field (TF) ripple. The time evolutions of density, temperature and rotation rate used in the application of the code to MAST are taken directly from experiment. The calculated results agree approximately with the experimental data. It is also found that a full orbit-following scheme is essential to reproduce the neutron rates in MAST.

  14. TARTNP: a coupled neutron-photon Monte Carlo transport code [10⁻⁹ to 20 MeV; in LLL FORTRAN]

    Energy Technology Data Exchange (ETDEWEB)

    Plechaty, E.F.; Kimlinger, J.R.

    1976-07-04

    A Monte Carlo code was written that calculates the transport of neutrons, photons, and neutron-induced photons. The cross sections of these particles are derived from TARTNP's data base, the Evaluated Nuclear Data Library. The energy range of the neutron data in the Library is 10⁻⁹ MeV to 20 MeV; the photon energy range is 1 keV to 20 MeV. One of the chief advantages of the code is its flexibility: it allows up to 17 different kinds of output to be evaluated in the same problem.

  15. Verification of Monte Carlo transport codes against measured small angle p-, d-, and t-emission in carbon fragmentation at 600 MeV/nucleon

    Energy Technology Data Exchange (ETDEWEB)

    Abramov, B. M. [Inst. of Theoretical and Experimental Physics (ITEP), Moscow (Russian Federation); Alekseev, P. N. [Inst. of Theoretical and Experimental Physics (ITEP), Moscow (Russian Federation); Borodin, Yu. A. [Inst. of Theoretical and Experimental Physics (ITEP), Moscow (Russian Federation); Bulychjov, S. A. [Inst. of Theoretical and Experimental Physics (ITEP), Moscow (Russian Federation); Dukhovskoy, I. A. [Inst. of Theoretical and Experimental Physics (ITEP), Moscow (Russian Federation); Krutenkova, A. P. [Inst. of Theoretical and Experimental Physics (ITEP), Moscow (Russian Federation); Martemianov, M. A. [Inst. of Theoretical and Experimental Physics (ITEP), Moscow (Russian Federation); Matsyuk, M. A. [Inst. of Theoretical and Experimental Physics (ITEP), Moscow (Russian Federation); Turdakina, E. N. [Inst. of Theoretical and Experimental Physics (ITEP), Moscow (Russian Federation); Khanov, A. I. [Inst. of Theoretical and Experimental Physics (ITEP), Moscow (Russian Federation); Mashnik, Stepan Georgievich [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-02-03

    Momentum spectra of hydrogen isotopes have been measured at 3.5° from ¹²C fragmentation on a Be target. The momentum spectra cover both the region of the fragmentation maximum and the cumulative region, and the differential cross sections span five orders of magnitude. The data are compared to the predictions of four Monte Carlo codes: QMD, LAQGSM, BC, and INCL++. There are large differences between the data and the predictions of some models in the high-momentum region. The INCL++ code gives the best, almost perfect, description of the data.

  16. Absorbed dose estimations of ¹³¹I for critical organs using the GEANT4 Monte Carlo simulation code

    Institute of Scientific and Technical Information of China (English)

    Ziaur Rahman; Shakeel ur Rehman; Waheed Arshed; Nasir M Mirza; Abdul Rashid; Jahan Zeb

    2012-01-01

    The aim of this study is to compare the absorbed doses in critical organs from ¹³¹I calculated with the MIRD (Medical Internal Radiation Dose) methodology with the corresponding predictions made by GEANT4 simulations. S-values (mean absorbed dose rate per unit activity) and the energy deposition per decay in the critical organs for ¹³¹I were estimated for various ages, using a standard cylindrical phantom comprising water and ICRP soft-tissue material. The effect of the volume reduction of the thyroid during radiation therapy on the calculated absorbed dose was also estimated using GEANT4, as was the photon specific energy deposition in the other organs of the neck due to ¹³¹I decay in the thyroid. The maximum relative difference between MIRD and the GEANT4 simulated results is 5.64% for the critical organs of an adult. Excellent agreement was found between the results for water and for ICRP soft tissue using the cylindrical model. S-values are tabulated for the critical organs for individuals of 1, 5, 10, 15 and 18 years (adults). S-values for cylindrical thyroids of different sizes show a 3.07% relative difference between GEANT4 and the results of Siegel & Stabin. Comparison of ionization-chamber measurements at 0.5 and 1 m from the neck with the GEANT4-based Monte Carlo simulations shows good agreement. This study shows that the GEANT4 code is an important tool for internal dosimetry calculations.
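
    For reference, the standard MIRD formalism referred to in this record defines the S value (mean absorbed dose to a target region per unit cumulated activity in a source region) in its simplest form as

      S(r_T \leftarrow r_S) = \frac{1}{m_{r_T}} \sum_i E_i \, Y_i \, \phi_i(r_T \leftarrow r_S)

    where E_i and Y_i are the energy and yield of the i-th emission, \phi_i is the absorbed fraction in the target, and m_{r_T} is the target mass. The comparison in the record is, in effect, between S values built from tabulated absorbed fractions and S values obtained directly from GEANT4 energy-deposition tallies.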

  17. Event-chain Monte Carlo algorithm for continuous spin systems and its application

    Science.gov (United States)

    Nishikawa, Yoshihiko; Hukushima, Koji

    2016-09-01

    The event-chain Monte Carlo (ECMC) algorithm is described for hard-sphere systems and for general potential systems, including interacting particle systems and continuous spin systems. Using the ECMC algorithm, large-scale equilibrium Monte Carlo simulations are performed for a three-dimensional chiral helimagnetic model in a magnetic field. It is found that the critical behavior of the phase transition changes as the magnetic field increases.

  18. Fast quantum Monte Carlo on a GPU

    CERN Document Server

    Lutsyshyn, Y

    2013-01-01

    We present a scheme for the parallelization of quantum Monte Carlo on graphical processing units, focusing on bosonic systems and variational Monte Carlo. We use asynchronous execution schemes with shared memory persistence and obtain an excellent acceleration: compared with single-core execution, the GPU-accelerated code runs over 100 times faster. The CUDA code is provided along with the package necessary to execute variational Monte Carlo for a system representing liquid helium-4. The program was benchmarked on several models of Nvidia GPU, including the Fermi GTX560 and M2090 and the latest Kepler-architecture K20 GPU. Kepler-specific optimization is discussed.

  19. Calculation of extrapolation curves in the 4π(LS)β-γ coincidence technique with the Monte Carlo code Geant4.

    Science.gov (United States)

    Bobin, C; Thiam, C; Bouchard, J

    2016-03-01

    At LNE-LNHB, a liquid scintillation (LS) detection setup designed for Triple to Double Coincidence Ratio (TDCR) measurements is also used in the β-channel of a 4π(LS)β-γ coincidence system. This LS counter, based on three photomultipliers, was first modeled using the Monte Carlo code Geant4 to enable the simulation of the optical photons produced by scintillation and Cerenkov effects. This stochastic modeling was especially designed for the calculation of double and triple coincidences between photomultipliers in TDCR measurements. In the present paper, the TDCR-Geant4 model is extended to 4π(LS)β-γ coincidence counting to enable the simulation of the efficiency-extrapolation technique by the addition of a γ-channel. This simulation tool aims at predicting systematic biases in activity determination due to possible non-linearity of the efficiency-extrapolation curves. First results are described for the standardization of ⁵⁹Fe. The variation of the γ-efficiency in the β-channel due to Cerenkov emission is investigated for the activity measurements of ⁵⁴Mn. The problem of non-linearity between β-efficiencies is illustrated for the efficiency-tracing technique applied to activity measurements of ¹⁴C using ⁶⁰Co as a tracer.

  20. Next generation Zero-Code control system UI

    CERN Document Server

    CERN. Geneva

    2017-01-01

    Developing ergonomic user interfaces for control systems is challenging, especially during machine upgrade and commissioning, where several small changes may suddenly be required. Zero-code systems, such as *Inspector*, provide agile features for creating and maintaining control system interfaces. Moreover, these next-generation Zero-code systems bring simplicity and uniformity and break the boundaries between users and developers. In this talk we present *Inspector*, a Zero-code application development system made at CERN, and we introduce the major differences and advantages of using Zero-code control systems to develop operational UIs.

  1. A New Arithmetic Coding System Combining Source Channel Coding and MAP Decoding

    Institute of Scientific and Technical Information of China (English)

    PANG Yu-ye; SUN Jun; WANG Jia

    2007-01-01

    A new arithmetic coding system combining source-channel coding and maximum a posteriori (MAP) decoding is proposed. It combines source coding and error correction into one unified process by introducing an adaptive forbidden symbol. The proposed system achieves fixed-length code words by adaptively adjusting the probability of the forbidden symbol and adding a variable number of tail digits. The corresponding improved MAP decoding metric is derived. The proposed system can improve the performance. Simulations were performed on AWGN channels with various noise levels using both hard and soft decisions with BPSK modulation. The results show that its performance is slightly better than that of our previous adaptive arithmetic error-correcting coding system using a forbidden symbol.
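
    To make the forbidden-symbol idea concrete (a minimal floating-point sketch for short messages with made-up probabilities, not the adaptive scheme or MAP metric of the record), the encoder reserves a sub-interval that it never uses, so a decoder that lands in it knows the received value is corrupted:

      # Minimal sketch of arithmetic coding with a "forbidden symbol".
      EPS = 0.1                      # probability reserved for the forbidden symbol
      PROBS = {"a": 0.6, "b": 0.4}   # source probabilities, scaled into [EPS, 1)

      def intervals():
          """Cumulative sub-intervals of [0, 1): forbidden region first."""
          table = {"!": (0.0, EPS)}          # '!' marks the forbidden symbol
          start = EPS
          for sym, p in PROBS.items():
              width = p * (1.0 - EPS)
              table[sym] = (start, start + width)
              start += width
          return table

      def encode(message):
          low, high = 0.0, 1.0
          table = intervals()
          for sym in message:
              a, b = table[sym]
              low, high = low + (high - low) * a, low + (high - low) * b
          return (low + high) / 2.0          # any value inside the final interval

      def decode(value, length):
          low, high = 0.0, 1.0
          table = intervals()
          out = []
          for _ in range(length):
              scaled = (value - low) / (high - low)
              for sym, (a, b) in table.items():
                  if a <= scaled < b:
                      if sym == "!":
                          return out, True   # fell into forbidden region: error detected
                      out.append(sym)
                      low, high = low + (high - low) * a, low + (high - low) * b
                      break
          return out, False

      msg = list("abba")
      code = encode(msg)
      print(decode(code, len(msg)))   # (['a', 'b', 'b', 'a'], False)
      print(decode(0.05, len(msg)))   # value inside forbidden region -> ([], True)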

  2. MCNP-X Monte Carlo Code Application for Mass Attenuation Coefficients of Concrete at Different Energies by Modeling 3 × 3 Inch NaI(Tl) Detector and Comparison with XCOM and Monte Carlo Data

    Directory of Open Access Journals (Sweden)

    Huseyin Ozan Tekin

    2016-01-01

    Gamma-ray measurements in various research fields require efficient detectors. One of these research fields is the determination of mass attenuation coefficients of different materials. Apart from experimental studies, the Monte Carlo (MC) method has become one of the most popular tools in detector studies. An NaI(Tl) detector has been modeled and, as a validation study, the absolute efficiency of a 3 × 3 inch cylindrical NaI(Tl) detector has been calculated using the general-purpose Monte Carlo code MCNP-X (version 2.4.0) and compared with previous studies in the literature in the range of 661-2620 keV. In the present work, the applicability of the MCNP-X Monte Carlo code to the mass attenuation of a concrete sample, as a building material, at photon energies of 59.5 keV, 80 keV, 356 keV, 661.6 keV, 1173.2 keV, and 1332.5 keV has been tested using the validated NaI(Tl) detector model. The mass attenuation coefficients of the concrete sample have been calculated, and the results agree well with experimental and other theoretical results. The results indicate that this procedure can be followed to determine gamma-ray attenuation data at other required energies in other or new complex materials. It can be concluded that Monte Carlo data are a strong tool not only for efficiency studies but also for mass attenuation coefficient calculations.
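
    For orientation (generic narrow-beam physics, not the specific analysis in the record), the mass attenuation coefficient follows from the Beer-Lambert law, mu/rho = ln(I0/I)/(rho*t), given the counts without and with the absorber. A minimal Python sketch with made-up numbers:

      import math

      def mass_attenuation(I0, I, density_g_cm3, thickness_cm):
          """Mass attenuation coefficient (cm^2/g) from narrow-beam transmission,
          mu/rho = ln(I0/I) / (rho * t)."""
          return math.log(I0 / I) / (density_g_cm3 * thickness_cm)

      # Hypothetical transmission measurement for a concrete slab at 661.6 keV.
      I0 = 125_000      # counts without absorber
      I = 61_000        # counts with absorber
      rho = 2.3         # g/cm^3
      t = 4.0           # cm
      print(f"mu/rho = {mass_attenuation(I0, I, rho, t):.4f} cm^2/g")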

  3. User manual for version 4.3 of the Tripoli-4 Monte-Carlo method particle transport computer code; Notice d'utilisation du code Tripoli-4, version 4.3: code de transport de particules par la methode de Monte Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Both, J.P.; Mazzolo, A.; Peneliau, Y.; Petit, O.; Roesslinger, B

    2003-07-01

    This manual relates to version 4.3 of the TRIPOLI-4 code. TRIPOLI-4 is a computer code simulating the transport of neutrons, photons, electrons and positrons. It can be used for radiation shielding calculations (long-distance propagation with flux attenuation in non-multiplying media) and for neutronics calculations (fissile media, on a criticality or sub-criticality basis). This makes it possible to calculate k_eff (for criticality), fluxes, currents, reaction rates and multi-group cross-sections. TRIPOLI-4 is a three-dimensional code that uses the Monte-Carlo method. It allows both a point-wise description of cross-sections in energy and multi-group homogenized cross-sections, and it features two modes of geometrical representation: surface-based and combinatorial. The code uses cross-section libraries in ENDF/B format (such as JEF2-2, ENDF/B-VI and JENDL) for the point-wise description, and cross-sections in APOTRIM format (from the APOLLO2 code) or in a format specific to TRIPOLI-4 for the multi-group description. (authors)

  4. Deductive Glue Code Synthesis for Embedded Software Systems Based on Code Patterns

    Science.gov (United States)

    Liu, Jian; Fu, Jicheng; Zhang, Yansheng; Bastani, Farokh; Yen, I-Ling; Tai, Ann; Chau, Savio N.

    2006-01-01

    Automated code synthesis is a constructive process that can be used to generate programs from specifications. It can thus greatly reduce software development cost and time. The use of a formal code synthesis approach for software generation further increases the dependability of the system. Though code synthesis has many potential benefits, the synthesis techniques are still limited. Meanwhile, components are widely used in embedded system development. Applying code synthesis to the component-based software development (CBSD) process can greatly enhance the capability of code synthesis while reducing the component composition effort. In this paper, we discuss the issues and techniques involved in applying deductive code synthesis techniques to CBSD. For deductive synthesis in CBSD, a rule base is the key to inferring appropriate component compositions. We use code patterns to guide the development of rules. Code patterns have been proposed to capture the typical usages of the components. Several general composition operations have been identified to facilitate systematic composition. We present the technique for rule development and for the automated generation of new patterns from existing code patterns. A case study of using this method in building a real-time control system is also presented.

  5. Highly Efficient Monte-Carlo for Estimating the Unavailability of Markov Dynamic System

    Institute of Scientific and Technical Information of China (English)

    XIAOGang; DENGLi; ZHANGBen-Ai; ZHUJian-Shi

    2004-01-01

    Monte Carlo simulation has become an important tool for estimating the reliability and availability of dynamic systems, since conventional numerical methods are no longer efficient when the size of the system to solve is large. However, evaluating by simulation the probability of occurrence of very rare events means playing a very large number of histories of the system, which leads to unacceptable computing time. Highly efficient Monte Carlo schemes should therefore be worked out. In this paper, based on the integral equation describing the state transitions of a Markov dynamic system, a uniform Monte Carlo method for estimating unavailability is presented. Using a free-flight estimator, direct statistical estimation Monte Carlo is achieved; using both the free-flight estimator and a biased sampling probability space, weighted statistical estimation Monte Carlo is also achieved. Five Monte Carlo schemes, including crude simulation, analog simulation, statistical estimation based on crude and analog simulation, and weighted statistical estimation, are used to calculate the unavailability of a repairable Con/3/30:F system, and their efficiencies are compared with each other. The results show that the weighted statistical estimation Monte Carlo has the smallest variance and the highest efficiency in very-rare-event simulation.
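
    As a minimal illustration of the crude-simulation end of the spectrum described in this record (a single two-state repairable component, not the Con/3/30:F system of the paper), the sketch below estimates point unavailability by simulating alternating exponential up/down times and compares it with the analytic steady-state value lambda/(lambda+mu):

      import numpy as np

      rng = np.random.default_rng(1)

      def is_down(t_mission, lam, mu, rng):
          """True if a single repairable component (failure rate lam,
          repair rate mu) is down at time t_mission."""
          t, up = 0.0, True
          while True:
              t += rng.exponential(1.0 / (lam if up else mu))
              if t >= t_mission:
                  return not up
              up = not up

      lam, mu, t_mission, n = 1e-3, 1e-1, 1000.0, 50_000
      down = sum(is_down(t_mission, lam, mu, rng) for _ in range(n))
      print("crude MC unavailability :", down / n)
      print("steady-state analytic   :", lam / (lam + mu))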

  6. Private Computing and Mobile Code Systems

    NARCIS (Netherlands)

    Cartrysse, K.

    2005-01-01

    This thesis' objective is to provide privacy to mobile code. A practical example of mobile code is a mobile software agent that performs a task on behalf of its user. The agent travels over the network and is executed at different locations of which beforehand it is not known whether or not these ca

  7. Monte Carlo Simulations of Random Frustrated Systems on Graphics Processing Units

    Science.gov (United States)

    Feng, Sheng; Fang, Ye; Hall, Sean; Papke, Ariane; Thomasson, Cade; Tam, Ka-Ming; Moreno, Juana; Jarrell, Mark

    2012-02-01

    We study the implementation of classical Monte Carlo simulation for random frustrated models using the multithreaded computing environment provided by the Compute Unified Device Architecture (CUDA) on modern Graphics Processing Units (GPUs) with hundreds of cores and high memory bandwidth. The key to optimizing the performance of GPU computing is the proper handling of the data structures. Utilizing multi-spin coding, we obtain an efficient GPU implementation of the parallel tempering Monte Carlo simulation for the Edwards-Anderson spin glass model. In typical simulations, we find a speed-up of over two thousand times compared with the single-threaded CPU implementation.
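
    To make the multi-spin-coding idea concrete (a generic bit-packing illustration on a 1D ferromagnetic chain, not the CUDA kernel or spin-glass model of the record), up to 64 independent replicas of an Ising spin can be stored in the bits of one 64-bit word, so a single XOR evaluates the bond state for all replicas at once:

      import numpy as np

      # 64 replicas of a periodic 1D Ising chain of length L packed into 64-bit
      # words, one word per site (bit k of word i is the spin of site i in
      # replica k; 0 -> +1, 1 -> -1).
      rng = np.random.default_rng(2)
      L = 16
      spins = rng.integers(0, 2**63, size=L, dtype=np.uint64)

      def replica_energy(spins, k, J=1.0):
          """Energy of replica k: a set bit in (s_i XOR s_{i+1}) marks an
          antiparallel bond, so E = -J * (number of bonds - 2 * antiparallel)."""
          x = spins ^ np.roll(spins, -1)              # all replicas at once
          anti = sum((int(w) >> k) & 1 for w in x)    # unsatisfied bonds, replica k
          return -J * (len(spins) - 2 * anti)

      for k in range(4):
          print(f"replica {k}: E = {replica_energy(spins, k):+.0f}")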

  8. Communication Systems Simulator with Error Correcting Codes Using MATLAB

    Science.gov (United States)

    Gomez, C.; Gonzalez, J. E.; Pardo, J. M.

    2003-01-01

    In this work, the characteristics of a simulator for channel coding techniques used in communication systems are described. This software has been designed for engineering students in order to facilitate the understanding of how error correcting codes work. To help students understand easily the concepts related to these kinds of codes, a…

  9. EMPIRE: Nuclear Reaction Model Code System for Data Evaluation

    Science.gov (United States)

    Herman, M.; Capote, R.; Carlson, B. V.; Obložinský, P.; Sin, M.; Trkov, A.; Wienke, H.; Zerkin, V.

    2007-12-01

    accompanying code EMPEND and completed with neutron resonances extracted from the existing evaluations. The package contains the full EXFOR (CSISRS) library of experimental reaction data that are automatically retrieved during the calculations. Publication quality graphs can be obtained using the powerful and flexible plotting package ZVView. The graphic user interface, written in Tcl/Tk, provides for easy operation of the system. This paper describes the capabilities of the code, outlines physical models and indicates parameter libraries used by EMPIRE to predict reaction cross sections and spectra, mainly for nucleon-induced reactions. Selected applications of EMPIRE are discussed, the most important being an extensive use of the code in evaluations of neutron reactions for the new US library ENDF/B-VII.0. Future extensions of the system are outlined, including neutron resonance module as well as capabilities of generating covariances, using both KALMAN and Monte-Carlo methods, that are still being advanced and refined.

  10. Simulating Strongly Correlated Electron Systems with Hybrid Monte Carlo

    Institute of Scientific and Technical Information of China (English)

    LIU Chuan

    2000-01-01

    Using the path integral representation, the Hubbard model and the periodic Anderson model on a D-dimensional cubic lattice are transformed into field theories of fermions in D + 1 dimensions. At half-filling these theories possess a positive-definite real symmetric fermion matrix and can be simulated using the hybrid Monte Carlo method.

  11. Simulation of clinical X-ray tube using the Monte Carlo Method - PENELOPE code; Simulacao de um tubo de raios X clinico atraves do Metodo de Monte Carlo usando codigo PENELOPE

    Energy Technology Data Exchange (ETDEWEB)

    Albuquerque, M.A.G.; David, M.G.; Almeida, C.E. de; Magalhaes, L.A.G., E-mail: malbuqueque@hotmail.com [Universidade do Estado do Rio de Janeiro (UERJ), Rio de Janeiro, RJ (Brazil). Lab. de Ciencias Radiologicas; Bernal, M. [Universidade Estadual de Campinas (UNICAMP), SP (Brazil); Braz, D. [Universidade Federal do Rio de Janeiro (UFRJ), RJ (Brazil)

    2015-07-01

    Breast cancer is the most common type of cancer among women. The main strategy to increase the long-term survival of patients with this disease is early detection of the tumor, and mammography is the most appropriate method for this purpose. Despite the reduction in cancer deaths, there is great concern about the damage caused by ionizing radiation to the breast tissue. To evaluate this, a mammography unit was modeled and the depth spectra were obtained using the Monte Carlo method (PENELOPE code). The average energies of the spectra in depth and the half-value layer of the mammography output spectrum were determined. (author)

  12. MELCOR Accident Consequence Code System (MACCS)

    Energy Technology Data Exchange (ETDEWEB)

    Rollstin, J.A. (GRAM, Inc., Albuquerque, NM (USA)); Chanin, D.I. (Technadyne Engineering Consultants, Inc., Albuquerque, NM (USA)); Jow, H.N. (Sandia National Labs., Albuquerque, NM (USA))

    1990-02-01

    This report describes the MACCS computer code. The purpose of this code is to simulate the impact of severe accidents at nuclear power plants on the surrounding environment. MACCS has been developed for the US Nuclear Regulatory Commission to replace the previously used CRAC2 code, and it incorporates many improvements in modeling flexibility in comparison to CRAC2. The principal phenomena considered in MACCS are atmospheric transport, mitigative actions based on dose projections, dose accumulation by a number of pathways including food and water ingestion, early and latent health effects, and economic costs. The MACCS code can be used for a variety of applications. These include (1) probabilistic risk assessment (PRA) of nuclear power plants and other nuclear facilities, (2) sensitivity studies to gain a better understanding of the parameters important to PRA, and (3) cost-benefit analysis. This report is composed of three volumes. Volume I, the User's Guide, describes the input data requirements of the MACCS code and provides directions for its use as illustrated by three sample problems. Volume II, the Model Description, describes the underlying models that are implemented in the code, and Volume III, the Programmer's Reference Manual, describes the code's structure and database management.

  13. MELCOR Accident Consequence Code System (MACCS)

    Energy Technology Data Exchange (ETDEWEB)

    Jow, H.N.; Sprung, J.L.; Ritchie, L.T. (Sandia National Labs., Albuquerque, NM (USA)); Rollstin, J.A. (GRAM, Inc., Albuquerque, NM (USA)); Chanin, D.I. (Technadyne Engineering Consultants, Inc., Albuquerque, NM (USA))

    1990-02-01

    This report describes the MACCS computer code. The purpose of this code is to simulate the impact of severe accidents at nuclear power plants on the surrounding environment. MACCS has been developed for the US Nuclear Regulatory Commission to replace the previously used CRAC2 code, and it incorporates many improvements in modeling flexibility in comparison to CRAC2. The principal phenomena considered in MACCS are atmospheric transport, mitigative actions based on dose projection, dose accumulation by a number of pathways including food and water ingestion, early and latent health effects, and economic costs. The MACCS code can be used for a variety of applications. These include (1) probabilistic risk assessment (PRA) of nuclear power plants and other nuclear facilities, (2) sensitivity studies to gain a better understanding of the parameters important to PRA, and (3) cost-benefit analysis. This report is composed of three volumes. Volume I, the User's Guide, describes the input data requirements of the MACCS code and provides directions for its use as illustrated by three sample problems. Volume II, the Model Description, describes the underlying models that are implemented in the code, and Volume III, the Programmer's Reference Manual, describes the code's structure and database management. 59 refs., 14 figs., 15 tabs.

  14. Quasi-Monte Carlo methods for lattice systems: a first look

    CERN Document Server

    Jansen, K; Nube, A; Griewank, A; Müller-Preussker, M

    2013-01-01

    We investigate the applicability of Quasi-Monte Carlo methods to Euclidean lattice systems for quantum mechanics in order to improve the asymptotic error behavior of observables for such theories. In most cases the error of an observable calculated by averaging over random observations generated from an ordinary Markov chain Monte Carlo simulation behaves like 1/√N, where N is the number of observations. By means of Quasi-Monte Carlo methods it is possible to improve this behavior for certain problems up to 1/N. We adapted and applied this approach to simple systems such as the quantum harmonic and anharmonic oscillators and verified the improved error scaling.
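
    As a toy demonstration of the error-scaling difference mentioned in this record (plain one-dimensional integration with a hand-rolled van der Corput sequence, not the lattice application of the paper), compare ordinary Monte Carlo and quasi-Monte Carlo estimates of the integral of x^2 over [0, 1], whose exact value is 1/3:

      import numpy as np

      def van_der_corput(n, base=2):
          """First n points of the base-2 van der Corput low-discrepancy sequence."""
          pts = np.empty(n)
          for i in range(n):
              x, denom, k = 0.0, 1.0, i + 1
              while k > 0:
                  k, digit = divmod(k, base)
                  denom *= base
                  x += digit / denom
              pts[i] = x
          return pts

      f = lambda x: x ** 2          # exact integral on [0, 1] is 1/3
      rng = np.random.default_rng(3)

      for n in (10**2, 10**3, 10**4):
          mc = f(rng.random(n)).mean()
          qmc = f(van_der_corput(n)).mean()
          print(f"N = {n:6d}  MC error = {abs(mc - 1/3):.2e}  "
                f"QMC error = {abs(qmc - 1/3):.2e}")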

  15. Monte Carlo based verification of a beam model used in a treatment planning system

    Science.gov (United States)

    Wieslander, E.; Knöös, T.

    2008-02-01

    Modern treatment planning systems (TPSs) usually separate the dose modelling into a beam modelling phase, describing the beam exiting the accelerator, followed by a subsequent dose calculation in the patient. The aim of this work is to use the Monte Carlo code system EGSnrc to study the modelling of head scatter as well as the transmission through the multi-leaf collimator (MLC) and diaphragms in the beam model used in a commercial TPS (MasterPlan, Nucletron B.V.). An Elekta Precise linear accelerator equipped with an MLC has been modelled in BEAMnrc, based on information available from the vendor regarding the material and geometry of the treatment head. The collimation in the MLC direction consists of leaves which are complemented by a backup diaphragm. The characteristics of the electron beam impinging on the target, i.e., energy and spot size, have been tuned to match measured data. Phase spaces from simulations of the treatment head are used to extract the scatter from, e.g., the flattening filter and the collimating structures. Similar data for the source models used in the TPS are extracted from the treatment planning system, so that a comprehensive analysis is possible. Simulations in a water phantom, with DOSXYZnrc, are also used to study the modelling of the MLC and the diaphragms by the TPS. The results from this study will be helpful in understanding the limitations of the model in the TPS and will provide knowledge for further improvements of the TPS source modelling.

  16. AVS 3D Video Coding Technology and System

    Institute of Scientific and Technical Information of China (English)

    Siwei Ma; Shiqi Wang; Wen Gao

    2012-01-01

    Following the success of the audio video standard (AVS) for 2D video coding, in 2008, the China AVS workgroup started developing 3D video (3DV) coding techniques. In this paper, we discuss the background, technical features, and applications of AVS 3DV coding technology. We introduce two core techniques used in AVS 3DV coding: inter-view prediction and enhanced stereo packing coding. We elaborate on these techniques, which are used in the AVS real-time 3DV encoder. An application of the AVS 3DV coding system is presented to show the great practical value of this system. Simulation results show that the advanced techniques used in AVS 3DV coding provide remarkable coding gain compared with techniques used in a simulcast scheme.

  17. Noncoherent Spectral Optical CDMA System Using 1D Active Weight Two-Code Keying Codes

    Directory of Open Access Journals (Sweden)

    Bih-Chyun Yeh

    2016-01-01

    We propose a new family of one-dimensional (1D) active weight two-code keying (TCK) codes for spectral amplitude coding (SAC) optical code division multiple access (OCDMA) networks. We use encoding and decoding transfer functions to operate the 1D active weight TCK. The proposed structure includes an optical line terminal (OLT) and optical network units (ONUs), which produce the encoding and decoding codes, respectively. The proposed ONU uses the modified cross-correlation to remove interference from other simultaneous users, that is, multiuser interference (MUI). When phase-induced intensity noise (PIIN) is the dominant noise, the modified cross-correlation suppresses the PIIN. In the numerical results, we find that the bit error rate (BER) for the proposed system using the 1D active weight TCK codes outperforms that of two other systems using 1D M-sequence codes and 1D balanced incomplete block design (BIBD) codes. The effective source power for the proposed system can reach −10 dBm, which is less than that required by the other systems.

  18. Comparative study of Monte Carlo particle transport code PHITS and nuclear data processing code NJOY for recoil cross section spectra under neutron irradiation

    Science.gov (United States)

    Iwamoto, Yosuke; Ogawa, Tatsuhiko

    2017-04-01

    Because primary knock-on atoms (PKAs) create point defects and clusters in materials that are irradiated with neutrons, it is important to validate the calculations of recoil cross section spectra that are used to estimate radiation damage in materials. Here, the recoil cross section spectra of fission- and fusion-relevant materials were calculated using the Event Generator Mode (EGM) of the Particle and Heavy Ion Transport code System (PHITS) and also using the data processing code NJOY2012 with the nuclear data libraries TENDL2015, ENDF/B-VII.1, and JEFF3.2. The heating number, which is the integral of the recoil cross section spectra, was also calculated using PHITS-EGM and compared with data extracted from the ACE files of TENDL2015, ENDF/B-VII.1, and JENDL4.0. In general, only a small difference was found between the PKA spectra of PHITS + TENDL2015 and NJOY + TENDL2015. From analyzing the recoil cross section spectra extracted from the nuclear data libraries using NJOY2012, we found that the recoil cross section spectra were incorrect for ⁷²Ge, ⁷⁵As, ⁸⁹Y, and ¹⁰⁹Ag in the ENDF/B-VII.1 library, and for ⁹⁰Zr and ⁵⁵Mn in the JEFF3.2 library. From analyzing the heating number, we found that the data extracted from the ACE file of TENDL2015 for all nuclides were problematic in the neutron capture region because of incorrect data regarding the emitted gamma energy. However, PHITS + TENDL2015 can calculate PKA spectra and heating numbers correctly.
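
    To make the statement "the heating number is the integral of the recoil cross section spectrum" concrete (a generic numerical-integration sketch with made-up spectrum values, not data from PHITS or NJOY), one integrates the tabulated spectrum, optionally weighted by the recoil energy, over the recoil-energy grid:

      import numpy as np

      def trapezoid(y, x):
          """Simple trapezoidal rule."""
          return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

      # Hypothetical recoil spectrum d(sigma)/dE_R (barn/MeV) on a recoil-energy
      # grid E_R (MeV) for one incident neutron energy.
      E_R = np.array([0.001, 0.01, 0.05, 0.1, 0.5, 1.0])          # MeV
      dsigma_dER = np.array([120.0, 40.0, 8.0, 3.0, 0.4, 0.05])   # barn/MeV

      # Integral of the spectrum: total recoil cross section (barn).
      sigma_recoil = trapezoid(dsigma_dER, E_R)

      # Recoil-energy-weighted integral (barn*MeV): the quantity that feeds a
      # heating/damage-energy estimate per incident neutron.
      heating = trapezoid(E_R * dsigma_dER, E_R)

      print(f"total recoil cross section = {sigma_recoil:.3f} barn")
      print(f"energy-weighted integral   = {heating:.4f} barn*MeV")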

  19. MELCOR Accident Consequence Code System (MACCS)

    Energy Technology Data Exchange (ETDEWEB)

    Chanin, D.I. (Technadyne Engineering Consultants, Inc., Albuquerque, NM (USA)); Sprung, J.L.; Ritchie, L.T.; Jow, Hong-Nian (Sandia National Labs., Albuquerque, NM (USA))

    1990-02-01

    This report describes the MACCS computer code. The purpose of this code is to simulate the impact of severe accidents at nuclear power plants on the surrounding environment. MACCS has been developed for the US Nuclear Regulatory Commission to replace the previous CRAC2 code, and it incorporates many improvements in modeling flexibility in comparison to CRAC2. The principal phenomena considered in MACCS are atmospheric transport, mitigative actions based on dose projection, dose accumulation by a number of pathways including food and water ingestion, early and latent health effects, and economic costs. The MACCS code can be used for a variety of applications. These include (1) probabilistic risk assessment (PRA) of nuclear power plants and other nuclear facilities, (2) sensitivity studies to gain a better understanding of the parameters important to PRA, and (3) cost-benefit analysis. This report is composed of three volumes. This document, Volume 1, the User's Guide, describes the input data requirements of the MACCS code and provides directions for its use as illustrated by three sample problems.

  20. Cell-veto Monte Carlo algorithm for long-range systems

    Science.gov (United States)

    Kapfer, Sebastian C.; Krauth, Werner

    2016-09-01

    We present a rigorous efficient event-chain Monte Carlo algorithm for long-range interacting particle systems. Using a cell-veto scheme within the factorized Metropolis algorithm, we compute each single-particle move with a fixed number of operations. For slowly decaying potentials such as Coulomb interactions, screening line charges allow us to take into account periodic boundary conditions. We discuss the performance of the cell-veto Monte Carlo algorithm for general inverse-power-law potentials, and illustrate how it provides a new outlook on one of the prominent bottlenecks in large-scale atomistic Monte Carlo simulations.

  1. ScintSim1: A new Monte Carlo simulation code for transport of optical photons in 2D arrays of scintillation detectors.

    Science.gov (United States)

    Mosleh-Shirazi, Mohammad Amin; Zarrini-Monfared, Zinat; Karbasi, Sareh; Zamani, Ali

    2014-01-01

    Two-dimensional (2D) arrays of thick segmented scintillators are of interest as X-ray detectors for both 2D and 3D image-guided radiotherapy (IGRT). Their detection process involves ionizing-radiation energy deposition followed by the production and transport of optical photons. Only a very limited number of optical Monte Carlo simulation models exist, which has limited the number of modeling studies that consider both stages of the detection process. We present ScintSim1, an in-house optical Monte Carlo simulation code for 2D arrays of scintillation crystals, developed in the MATLAB programming environment. The code was rewritten and revised based on an existing program for single-element detectors, with the additional capability to model 2D arrays of elements with configurable dimensions, materials, etc. The code generates and follows each optical photon history through the detector element (and, in case of cross-talk, the surrounding ones) until it reaches a configurable receptor or is attenuated. The new model was verified by testing against relevant theoretically known behaviors or quantities and against the results of a validated single-element model. For both sets of comparisons, the discrepancies in the calculated quantities were small, making ScintSim1 suitable for detector optimization studies.

  2. Parallel J-W Monte Carlo Simulations of Thermal Phase Changes in Finite-size Systems

    CERN Document Server

    Radev, R

    2002-01-01

    The thermodynamic properties of 59 TeF6 clusters that undergo temperature-driven phase transitions have been calculated with a canonical J-walking Monte Carlo technique. A parallel code for simulations has been developed and optimized on SUN3500 and CRAY-T3E computers. The Lindemann criterion shows that the clusters transform from liquid to solid and then from one solid structure to another in the temperature region 60-130 K.

  3. Study of adaptive modulation and LDPC coding in multicarrier systems

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    An adaptive modulation (AM) algorithm is proposed, and the application of the algorithm together with low-density parity-check (LDPC) codes in multicarrier systems is investigated. The AM algorithm is based on minimizing the average bit error rate (BER) of the system; the combination of the AM algorithm with LDPC codes of different code rates (one-half and three-fourths) is studied. The proposed AM algorithm is compared with that of Fischer et al. Simulation results show that the performance of the proposed AM algorithm is better than that of Fischer's algorithm. The results also show that applying the proposed AM algorithm together with LDPC codes can greatly improve the performance of multicarrier systems, and that the performance of the proposed algorithm degrades as the code rate increases for a fixed code length.

  4. Use of the ETA-1 reactor for the validation of the multi-group APOLLO2-MORET 5 code and the Monte Carlo continuous energy MORET 5 code

    Science.gov (United States)

    Leclaire, N.; Cochet, B.; Le Dauphin, F. X.; Haeck, W.; Jacquet, O.

    2014-06-01

    The present paper aims at providing experimental validation for the use of the MORET 5 code for advanced reactor concepts involving thorium and heavy water. It therefore constitutes an opportunity to test and improve the thermal-scattering data of heavy water and also to test the recent implementation of probability tables in the MORET 5 code.

  5. RELAP5/MOD3 code manual: Code structure, system models, and solution methods. Volume 1

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1995-08-01

    The RELAP5 code has been developed for best-estimate transient simulation of light water reactor coolant systems during postulated accidents. The code models the coupled behavior of the reactor coolant system and the core for loss-of-coolant accidents and operational transients, such as anticipated transient without scram, loss of offsite power, loss of feedwater, and loss of flow. A generic modeling approach is used that permits simulating a variety of thermal-hydraulic systems. Control system and secondary system components are included to permit modeling of plant controls, turbines, condensers, and secondary feedwater systems. RELAP5/MOD3 code documentation is divided into seven volumes: Volume I provides modeling theory and associated numerical schemes.

  6. Monte Carlo analysis of a control technique for a tunable white lighting system

    DEFF Research Database (Denmark)

    Chakrabarti, Maumita; Thorseth, Anders; Jepsen, Jørgen

    2017-01-01

    A simulated colour control mechanism for a multi-coloured LED lighting system is presented. The system achieves adjustable and stable white light output and allows for system-to-system reproducibility after application of the control mechanism. The control unit works using a pre-calibrated lookup table for an experimentally realized system, with a calibrated tristimulus colour sensor. A Monte Carlo simulation is used to examine the system performance with respect to the variation of the luminous flux and chromaticity of the light output. The inputs to the Monte Carlo simulation are variations of the LED peak wavelength, the LED rated luminous flux bin, the influence of the operating conditions, the ambient temperature, the driving current, and the spectral response of the colour sensor. The system performance is investigated by evaluating the outputs of the Monte Carlo simulation.
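
    As a stripped-down illustration of this kind of Monte Carlo tolerance analysis (made-up channel fluxes and Gaussian tolerances, not the calibrated system of the record), one can sample per-channel flux variations and inspect the spread of the combined output:

      import numpy as np

      rng = np.random.default_rng(4)

      # Hypothetical nominal luminous flux (lm) of four LED channels and their
      # relative 1-sigma tolerances from binning and operating conditions.
      nominal = np.array([120.0, 250.0, 60.0, 400.0])
      rel_sigma = np.array([0.05, 0.04, 0.07, 0.03])

      n_trials = 100_000
      samples = nominal * (1.0 + rel_sigma * rng.standard_normal((n_trials, 4)))
      total = samples.sum(axis=1)

      print(f"mean total flux   = {total.mean():8.1f} lm")
      print(f"std of total flux = {total.std():8.1f} lm")
      print(f"95% interval      = [{np.percentile(total, 2.5):.1f}, "
            f"{np.percentile(total, 97.5):.1f}] lm")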

  7. CELLDOSE: A Monte Carlo code to assess electron dose distribution - S values for {sup 131}I in spheres of various sizes

    Energy Technology Data Exchange (ETDEWEB)

    Champion, C. [Univ Metz, Lab Phys Mol et Collis, Inst Phys, F-57078 Metz 3 (France); Zanotti-Fregonara, P. [Commissariat Energie Atom, DSV, I2BM, SHFJ, LIME, Orsay (France); Hindie, E [Hop St Louis, AP-HP, Paris (France); Hindie, E. [Imagerie Mol Diagnost et Ciblage Therapeut, Ecole Doctorale B2T, IUH, Paris, Univ Paris 07 (France)

    2008-07-01

    Monte Carlo simulation can be particularly suitable for modeling the microscopic distribution of energy received by normal tissues or cancer cells and for evaluating the relative merits of different radiopharmaceuticals. We used a new code, CELLDOSE, to assess the electron dose for isolated spheres with radii varying from 2,500 µm down to 0.05 µm, in which ¹³¹I is homogeneously distributed. Methods: All electron emissions of ¹³¹I were considered, including the whole β⁻ spectrum of ¹³¹I, 108 internal conversion electrons, and 21 Auger electrons. The Monte Carlo track-structure code used follows all electrons down to an energy threshold E_cutoff = 7.4 eV. Results: The calculated S values were in good agreement with published analytic methods, lying in between the reported results for all experimental points. Our S values were also close to other published data obtained with a Monte Carlo code. Contrary to the latter published results, our results show that the dose distribution inside the spheres is not homogeneous, the dose at the outermost layer being approximately half that at the center. The fraction of electron energy retained within the spheres decreased with decreasing radius (r): 87.1% for r = 2,500 µm, 8.73% for r = 50 µm, and 1.18% for r = 5 µm. Thus, a radioiodine concentration that delivers a dose of 100 Gy to a micrometastasis of 2,500 µm radius would deliver 10 Gy to a cluster of 50 µm and only 1.4 Gy to an isolated cell. The specific contribution from Auger electrons varied from 0.25% for the largest sphere up to 76.8% for the smallest sphere. Conclusion: The dose to a tumor cell will depend on its position in a metastasis. For the treatment of very small metastases, ¹³¹I may not be the isotope of choice. When trying to kill isolated cells or a small cluster of cells with ¹³¹I, it is important to get the iodine as close as possible to the nucleus to benefit from the enhancement factor of the Auger electrons. The Monte Carlo code

  8. Rebuilding for Array Codes in Distributed Storage Systems

    CERN Document Server

    Wang, Zhiying; Bruck, Jehoshua

    2010-01-01

    In distributed storage systems that use coding, the issue of minimizing the communication required to rebuild a storage node after a failure arises. We consider the problem of repairing an erased node in a distributed storage system that uses an EVENODD code. EVENODD codes are maximum distance separable (MDS) array codes that are used to protect against erasures, and only require XOR operations for encoding and decoding. We show that when there are two redundancy nodes, to rebuild one erased systematic node, only 3/4 of the information needs to be transmitted. Interestingly, in many cases, the required disk I/O is also minimized.
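
    To illustrate the flavour of XOR-based rebuilding discussed in this record (a plain single-parity example, not the EVENODD construction itself, which adds a second, diagonal parity), an erased data node can be recovered as the XOR of the surviving nodes:

      import numpy as np

      rng = np.random.default_rng(5)

      # Four systematic data nodes, each storing a block of bytes, plus one
      # XOR parity node (row parity only; EVENODD would add diagonal parity).
      data = [rng.integers(0, 256, size=8, dtype=np.uint8) for _ in range(4)]
      parity = np.bitwise_xor.reduce(data)

      # Erase node 2 and rebuild it from the survivors and the parity node.
      erased = 2
      survivors = [d for i, d in enumerate(data) if i != erased]
      rebuilt = np.bitwise_xor.reduce(survivors + [parity])

      print("original :", data[erased])
      print("rebuilt  :", rebuilt)
      assert np.array_equal(rebuilt, data[erased])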

  9. Path Weight Complementary Convolutional Code for Type-II Bit-Interleaved Coded Modulation Hybrid ARQ System

    Institute of Scientific and Technical Information of China (English)

    CHENG Yuxin; ZHANG Lei; YI Na; XIANG Haige

    2007-01-01

    Bit-interleaved coded modulation (BICM) is suitable for bandwidth-efficient communication systems. Hybrid automatic repeat request (HARQ) can provide more reliability for high-speed wireless data transmission. A new path weight complementary convolutional (PWCC) code for use in a type-II BICM-HARQ system is proposed. The PWCC code is composed of the original code and a complementary code. The paths in the trellis with large Hamming weight in the complementary code are designed to compensate for the paths in the trellis with small Hamming weight in the original code. Hence, both the original code and the complementary code can achieve the performance of the good-code criterion for the corresponding code rate. The throughput efficiency of the BICM-HARQ system with the PWCC code is higher than that of the repeat-code system, slightly higher than that of the puncture-code system at low signal-to-noise ratio (SNR) values, and much higher than that of the puncture-code system and the same as that of the repeat-code system at high SNR values. These results are confirmed by simulation.

  10. Performance optimization of spectral amplitude coding OCDMA system using new enhanced multi diagonal code

    Science.gov (United States)

    Imtiaz, Waqas A.; Ilyas, M.; Khan, Yousaf

    2016-11-01

    This paper proposes a new code to optimize the performance of spectral amplitude coding-optical code division multiple access (SAC-OCDMA) systems. The unique two-matrix structure of the proposed enhanced multi-diagonal (EMD) code and its effective correlation properties between intended and interfering subscribers significantly elevate the performance of the SAC-OCDMA system by negating multiple access interference (MAI) and the associated phase-induced intensity noise (PIIN). The performance of a SAC-OCDMA system based on the proposed code is thoroughly analyzed for two detection techniques through analytical and simulation analysis, by referring to the bit error rate (BER), signal-to-noise ratio (SNR) and eye patterns at the receiving end. It is shown that the EMD code, while using the SDD technique, provides high transmission capacity, reduces the receiver complexity, and gives better performance than the complementary subtraction detection (CSD) technique. Furthermore, the analysis shows that, for a minimum acceptable BER of 10⁻⁹, the proposed system supports 64 subscribers at data rates of up to 2 Gbps for both uplink and downlink transmission.

  11. Effect of the electron transport through thin slabs on the simulation of linear electron accelerators of use in therapy: A comparative study of various Monte Carlo codes

    Energy Technology Data Exchange (ETDEWEB)

    Vilches, M. [Servicio de Fisica y Proteccion Radiologica, Hospital Regional Universitario ' Virgen de las Nieves' , Avda. de las Fuerzas Armadas, 2, E-18014 Granada (Spain)], E-mail: mvilches@ugr.es; Garcia-Pareja, S. [Servicio de Radiofisica Hospitalaria, Hospital Regional Universitario ' Carlos Haya' , Avda. Carlos Haya, s/n, E-29010 Malaga (Spain); Guerrero, R. [Servicio de Radiofisica, Hospital Universitario ' San Cecilio' , Avda. Dr. Oloriz, 16, E-18012 Granada (Spain); Anguiano, M.; Lallena, A.M. [Departamento de Fisica Atomica, Molecular y Nuclear, Universidad de Granada, E-18071 Granada (Spain)

    2007-09-21

    When a therapeutic electron linear accelerator is simulated using a Monte Carlo (MC) code, the tuning of the initial spectra and the renormalization of dose (e.g., to maximum axial dose) constitute a common practice. As a result, very similar depth dose curves are obtained for different MC codes. However, if renormalization is turned off, the results obtained with the various codes disagree noticeably. The aim of this work is to investigate in detail the reasons of this disagreement. We have found that the observed differences are due to non-negligible differences in the angular scattering of the electron beam in very thin slabs of dense material (primary foil) and thick slabs of very low density material (air). To gain insight, the effects of the angular scattering models considered in various MC codes on the dose distribution in a water phantom are discussed using very simple geometrical configurations for the LINAC. The MC codes PENELOPE 2003, PENELOPE 2005, GEANT4, GEANT3, EGSnrc and MCNPX have been used.

  12. Interface requirements for coupling a containment code to a reactor system thermal hydraulic codes

    Energy Technology Data Exchange (ETDEWEB)

    Baratta, A.J.

    1997-07-01

    To perform a complete analysis of a reactor transient, not only the primary system response but the containment response must also be accounted for. Such transients and accidents as a loss of coolant accident in both pressurized water and boiling water reactors and inadvertent operation of safety relief valves all challenge the containment and may influence flows because of containment feedback. More recently, the advanced reactor designs put forth by General Electric and Westinghouse in the US and by Framatome and Seimens in Europe rely on the containment to act as the ultimate heat sink. Techniques used by analysts and engineers to analyze the interaction of the containment and the primary system were usually iterative in nature. Codes such as RELAP or RETRAN were used to analyze the primary system response and CONTAIN or CONTEMPT the containment response. The analysis was performed by first running the system code and representing the containment as a fixed pressure boundary condition. The flows were usually from the primary system to the containment initially and generally under choked conditions. Once the mass flows and timing are determined from the system codes, these conditions were input into the containment code. The resulting pressures and temperatures were then calculated and the containment performance analyzed. The disadvantage of this approach becomes evident when one performs an analysis of a rapid depressurization or a long term accident sequence in which feedback from the containment can occur. For example, in a BWR main steam line break transient, the containment heats up and becomes a source of energy for the primary system. Recent advances in programming and computer technology are available to provide an alternative approach. The author and other researchers have developed linkage codes capable of transferring data between codes at each time step allowing discrete codes to be coupled together.

  13. Quantum Monte Carlo for large chemical systems: implementing efficient strategies for petascale platforms and beyond.

    Science.gov (United States)

    Scemama, Anthony; Caffarel, Michel; Oseret, Emmanuel; Jalby, William

    2013-04-30

    Various strategies to implement efficiently quantum Monte Carlo (QMC) simulations for large chemical systems are presented. These include: (i) the introduction of an efficient algorithm to calculate the computationally expensive Slater matrices. This novel scheme is based on the use of the highly localized character of atomic Gaussian basis functions (not the molecular orbitals as usually done), (ii) the possibility of keeping the memory footprint minimal, (iii) the important enhancement of single-core performance when efficient optimization tools are used, and (iv) the definition of a universal, dynamic, fault-tolerant, and load-balanced framework adapted to all kinds of computational platforms (massively parallel machines, clusters, or distributed grids). These strategies have been implemented in the QMC=Chem code developed at Toulouse and illustrated with numerical applications on small peptides of increasing sizes (158, 434, 1056, and 1731 electrons). Using 10-80 k computing cores of the Curie machine (GENCI-TGCC-CEA, France), QMC=Chem has been shown to be capable of running at the petascale level, thus demonstrating that for this machine a large part of the peak performance can be achieved. Implementation of large-scale QMC simulations for future exascale platforms with a comparable level of efficiency is expected to be feasible.

  14. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation: Functional modules, F9-F11

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-03-01

    This Manual represents Revision 5 of the user documentation for the modular code system referred to as SCALE. The history of the SCALE code system dates back to 1969 when the current Computational Physics and Engineering Division at Oak Ridge National Laboratory (ORNL) began providing the transportation package certification staff at the U.S. Atomic Energy Commission with computational support in the use of the new KENO code for performing criticality safety assessments with the statistical Monte Carlo method. From 1969 to 1976 the certification staff relied on the ORNL staff to assist them in the correct use of codes and data for criticality, shielding, and heat transfer analyses of transportation packages. However, the certification staff learned that, with only occasional use of the codes, it was difficult to become proficient in performing the calculations often needed for an independent safety review. Thus, shortly after the move of the certification staff to the U.S. Nuclear Regulatory Commission (NRC), the NRC staff proposed the development of an easy-to-use analysis system that provided the technical capabilities of the individual modules with which they were familiar. With this proposal, the concept of the Standardized Computer Analyses for Licensing Evaluation (SCALE) code system was born. This volume consists of the section of the manual dealing with three of the functional modules in the code. Those are the Morse-SGC for the SCALE system, Heating 7.2, and KENO V.a. The manual describes the latest released versions of the codes.

  15. System Measures Errors Between Time-Code Signals

    Science.gov (United States)

    Cree, David; Venkatesh, C. N.

    1993-01-01

    System measures timing errors between signals produced by three asynchronous time-code generators. Errors between 1-second clock pulses resolved to 2 microseconds. Basic principle of computation of timing errors as follows: central processing unit in microcontroller constantly monitors time data received from time-code generators for changes in 1-second time-code intervals. In response to any such change, microprocessor buffers count of 16-bit internal timer.

  16. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation: Functional modules F1-F8

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-03-01

    This Manual represents Revision 5 of the user documentation for the modular code system referred to as SCALE. The history of the SCALE code system dates back to 1969 when the current Computational Physics and Engineering Division at Oak Ridge National Laboratory (ORNL) began providing the transportation package certification staff at the U.S. Atomic Energy Commission with computational support in the use of the new KENO code for performing criticality safety assessments with the statistical Monte Carlo method. From 1969 to 1976 the certification staff relied on the ORNL staff to assist them in the correct use of codes and data for criticality, shielding, and heat transfer analyses of transportation packages. However, the certification staff learned that, with only occasional use of the codes, it was difficult to become proficient in performing the calculations often needed for an independent safety review. Thus, shortly after the move of the certification staff to the U.S. Nuclear Regulatory Commission (NRC), the NRC staff proposed the development of an easy-to-use analysis system that provided the technical capabilities of the individual modules with which they were familiar. With this proposal, the concept of the Standardized Computer Analyses for Licensing Evaluation (SCALE) code system was born. This volume consists of the section of the manual dealing with eight of the functional modules in the code. Those are: BONAMI - resonance self-shielding by the Bondarenko method; NITAWL-II - SCALE system module for performing resonance shielding and working library production; XSDRNPM - a one-dimensional discrete-ordinates code for transport analysis; XSDOSE - a module for calculating fluxes and dose rates at points outside a shield; KENO IV/S - an improved monte carlo criticality program; COUPLE; ORIGEN-S - SCALE system module to calculate fuel depletion, actinide transmutation, fission product buildup and decay, and associated radiation source terms; ICE.

  17. Development And Implementation Of Photonuclear Cross-section Data For Mutually Coupled Neutron-photon Transport Calculations In The Monte Carlo N-particle (mcnp) Radiation Transport Code

    CERN Document Server

    White, M C

    2000-01-01

    The fundamental motivation for the research presented in this dissertation was the need to develop a more accurate prediction method for characterization of mixed radiation fields around medical electron accelerators (MEAs). Specifically, a model is developed for simulation of neutron and other particle production from photonuclear reactions and incorporated in the Monte Carlo N-Particle (MCNP) radiation transport code. This extension of the capability within the MCNP code provides for the more accurate assessment of the mixed radiation fields. The Nuclear Theory and Applications group of the Los Alamos National Laboratory has recently provided first-of-a-kind evaluated photonuclear data for a select group of isotopes. These data provide the reaction probabilities as functions of incident photon energy with angular and energy distribution information for all reaction products. The availability of these data is the cornerstone of the new methodology for state-of-the-art mutually coupled photon-neutron tran...

  18. The EGS4 Code System: Solution of Gamma-ray and Electron Transport Problems

    Science.gov (United States)

    Nelson, W. R.; Namito, Yoshihito

    1990-03-01

    In this paper we present an overview of the EGS4 Code System -- a general purpose package for the Monte Carlo simulation of the transport of electrons and photons. During the last 10-15 years EGS has been widely used to design accelerators and detectors for high-energy physics. More recently the code has been found to be of tremendous use in medical radiation physics and dosimetry. The problem-solving capabilities of EGS4 will be demonstrated by means of a variety of practical examples. To facilitate this review, we will take advantage of a new add-on package, called SHOWGRAF, to display particle trajectories in complicated geometries. These are shown as 2-D laser pictures in the written paper and as photographic slides of a 3-D high-resolution color monitor during the oral presentation. 11 refs., 15 figs.

  19. New Dosimetric Interpretation of the DV50 Vessel-Steel Experiment Irradiated in the OSIRIS MTR Reactor Using the Monte-Carlo Code TRIPOLI-4®

    Directory of Open Access Journals (Sweden)

    Malouch Fadhel

    2016-01-01

    An irradiation program DV50 was carried out from 2002 to 2006 in the OSIRIS material testing reactor (CEA-Saclay center) to assess the pressure vessel steel toughness curve for a fast neutron fluence (E > 1 MeV) equivalent to a French 900-MWe PWR lifetime of 50 years. This program allowed the irradiation of 120 vessel-steel specimens, subdivided into two successive irradiations, DV50 n°1 and DV50 n°2. To measure the fast neutron fluence (E > 1 MeV) received by the specimens after each irradiation, the sample holders were equipped with activation foils that were withdrawn at the end of irradiation for activity counting and processing. The fast effective cross-sections used in the dosimeter processing were determined with a specific calculation scheme based on the Monte-Carlo code TRIPOLI-3 and the nuclear data ENDF/B-VI and IRDF-90. In order to put the vessel-steel experiments on the same standard, a new dosimetric interpretation of the DV50 experiment has been performed by using the Monte-Carlo code TRIPOLI-4 and more recent nuclear data (JEFF3.1.1 and IRDF-2002). This paper presents a comparison of the previous and recent calculations performed for the DV50 vessel-steel experiment to assess the impact on the dosimetric interpretation.

  20. 14 CFR Sec. 1-4 - System of accounts coding.

    Science.gov (United States)

    2010-01-01

    ... General Accounting Provisions Sec. 1-4 System of accounts coding. (a) A four digit control number is... digit code assigned to each profit and loss account denote a detailed area of financial activity or... sequentially within blocks, designating more general classifications of financial activity and...

  1. Synchronous parallel kinetic Monte Carlo Diffusion in Heterogeneous Systems

    Energy Technology Data Exchange (ETDEWEB)

    Martinez Saez, Enrique [Los Alamos National Laboratory; Hetherly, Jeffery [Los Alamos National Laboratory; Caro, Jose A [Los Alamos National Laboratory

    2010-12-06

    A new hybrid Molecular Dynamics-kinetic Monte Carlo algorithm has been developed in order to study the basic mechanisms taking place in diffusion in concentrated alloys under the action of chemical and stress fields. Parallel implementation of the k-MC part, based on a recently developed synchronous algorithm [J. Comput. Phys. 227 (2008) 3804-3823] relying on the introduction of a set of null events that synchronize the time for the different subdomains, added to the parallel efficiency of MD, provides the computer power required to evaluate jump rates 'on the fly', incorporating in this way the actual driving force emerging from chemical potential gradients, and the actual environment-dependent jump rates. The time gain has been analyzed and the parallel performance reported. The algorithm is tested on simple diffusion problems to verify its accuracy.
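
    The null-event synchronization idea can be sketched in a few lines. The subdomain rates, the absence of boundary communication, and the do-nothing event bodies below are simplifying assumptions made for illustration; in the hybrid scheme the rates would come from MD-evaluated, environment-dependent barriers.

      import math, random

      # Sketch of null-event synchronization: every subdomain is padded with a "null"
      # (do-nothing) event so that all subdomains share the same total rate R_max and
      # therefore advance with a common time step.  Rates are toy numbers.

      random.seed(1)
      subdomain_rates = [[1.2, 0.7, 0.4], [2.0, 0.1], [0.9, 0.9, 0.9, 0.3]]
      R_max = max(sum(rates) for rates in subdomain_rates)   # fastest subdomain sets the clock

      t, null_events = 0.0, 0
      for step in range(1000):
          t += -math.log(1.0 - random.random()) / R_max      # common synchronous time step
          for rates in subdomain_rates:
              channels = rates + [R_max - sum(rates)]        # last channel is the null event
              x, acc, chosen = random.random() * R_max, 0.0, len(channels) - 1
              for i, r in enumerate(channels):
                  acc += r
                  if x < acc:
                      chosen = i
                      break
              if chosen == len(rates):
                  null_events += 1                           # nothing happens in this subdomain
              # a real code would execute the chosen jump here and exchange boundary data

      print(f"t = {t:.2f} (arb. units), null events = {null_events} of {1000 * len(subdomain_rates)}")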

  2. Emission from Very Small Grains and PAH Molecules in Monte Carlo Radiation Transfer Codes: Application to the Edge-On Disk of Gomez's Hamburger

    Science.gov (United States)

    Wood, Kenneth; Whitney, Barbara A.; Robitaille, Thomas; Draine, Bruce T.

    2008-12-01

    We have modeled optical to far-infrared images, photometry, and spectroscopy of the object known as Gomez's Hamburger. We reproduce the images and spectrum with an edge-on disk of mass 0.3 M⊙ and radius 1600 AU, surrounding an A0 III star at a distance of 280 pc. Our mass estimate is in excellent agreement with recent CO observations. However, our distance determination is more than an order of magnitude smaller than previous analyses, which inaccurately interpreted the optical spectrum. To accurately model the infrared spectrum we have extended our Monte Carlo radiation transfer codes to include emission from polycyclic aromatic hydrocarbon (PAH) molecules and very small grains (VSG). We do this using precomputed PAH/VSG emissivity files for a wide range of values of the mean intensity of the exciting radiation field. When Monte Carlo energy packets are absorbed by PAHs/VSGs, we reprocess them to other wavelengths by sampling from the emissivity files, thus simulating the absorption and reemission process without reproducing lengthy computations of statistical equilibrium, excitation, and de-excitation in the complex many-level molecules. Using emissivity lookup tables in our Monte Carlo codes gives us the flexibility to use the latest grain physics calculations of PAH/VSG emissivity and opacity that are being continually updated in the light of higher resolution infrared spectra. We find our approach gives a good representation of the observed PAH spectrum from the disk of Gomez's Hamburger. Our models also indicate that the PAHs/VSGs in the disk have a larger scale height than larger radiative equilibrium grains, providing evidence for dust coagulation and settling to the midplane.
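
    The table-lookup reemission step described here can be sketched compactly. In the sketch below, the wavelength grid, the mean-intensity bins, and the emissivity curves are synthetic placeholders standing in for precomputed PAH/VSG emissivity files; only the sampling logic is of interest.

      import numpy as np

      # Sketch of re-emitting an absorbed Monte Carlo energy packet by sampling a
      # wavelength from a precomputed emissivity table.  The table below is synthetic;
      # real codes read tabulated emissivities for a grid of mean-intensity values.

      rng = np.random.default_rng(0)
      wavelengths = np.logspace(0, 3, 200)                  # wavelength grid in microns (placeholder)
      intensity_grid = np.array([0.1, 1.0, 10.0, 100.0])    # mean-intensity bins (placeholder)
      # one emissivity curve per intensity bin; arbitrary smooth shapes for illustration
      emissivity = np.array([np.exp(-(np.log(wavelengths) - np.log(10 * (i + 1))) ** 2)
                             for i in range(intensity_grid.size)])

      def reemit_wavelength(local_mean_intensity):
          """Pick the emissivity curve for the local radiation field and sample a
          re-emission wavelength from it, treating the curve as a probability density."""
          j = np.argmin(np.abs(intensity_grid - local_mean_intensity))
          pdf = emissivity[j] / emissivity[j].sum()
          return rng.choice(wavelengths, p=pdf)

      print(reemit_wavelength(local_mean_intensity=5.0))    # one sampled wavelength (microns)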

  3. Application of Photon Transport Monte Carlo Module with GPU-based Parallel System

    Energy Technology Data Exchange (ETDEWEB)

    Park, Chang Je [Sejong University, Seoul (Korea, Republic of); Shon, Heejeong [Golden Eng. Co. LTD, Seoul (Korea, Republic of); Lee, Donghak [CoCo Link Inc., Seoul (Korea, Republic of)

    2015-05-15

    In general, it takes a great deal of computing time to obtain reliable results in Monte Carlo simulations, especially in deep-penetration problems with a thick shielding medium. To mitigate this weakness of Monte Carlo methods, many variance reduction algorithms have been proposed, including geometry splitting and Russian roulette, weight windows, exponential transform, and forced collision. At the same time, advanced computing hardware such as GPU (Graphics Processing Units)-based parallel machines is used to improve the performance of the Monte Carlo simulation. A GPU is much easier to access and to manage than a CPU cluster system, and it has also become less expensive due to advances in computer technology. Therefore, many engineering areas have adopted GPU-based massively parallel computation techniques. In this work, a photon transport Monte Carlo module was applied on a GPU-based parallel system. It provides almost a 30-fold speedup without any optimization, and an almost 200-fold speedup is expected with a fully supported GPU system. It is expected that a GPU system with an advanced parallelization algorithm will contribute successfully to the development of Monte Carlo modules that require quick and accurate simulations.

  4. Monte Carlo based treatment planning systems for Boron Neutron Capture Therapy in Petten, The Netherlands

    Science.gov (United States)

    Nievaart, V. A.; Daquino, G. G.; Moss, R. L.

    2007-06-01

    Boron Neutron Capture Therapy (BNCT) is a bimodal form of radiotherapy for the treatment of tumour lesions. Since the cancer cells in the treatment volume are targeted with 10B, a higher dose is given to these cancer cells due to the 10B(n,α)7Li reaction, in comparison with the surrounding healthy cells. In Petten (The Netherlands), at the High Flux Reactor, a specially tailored neutron beam has been designed and installed. Over 30 patients have been treated with BNCT in 2 clinical protocols: a phase I study for the treatment of glioblastoma multiforme and a phase II study on the treatment of malignant melanoma. Furthermore, activities concerning the extracorporeal treatment of metastases in the liver (from colorectal cancer) are in progress. The irradiation beam at the HFR contains both neutrons and gammas that, together with the complex geometries of both patient and beam set-up, demand very detailed treatment planning calculations. A well designed Treatment Planning System (TPS) should obey the following general scheme: (1) a pre-processing phase (CT and/or MRI scans to create the geometric solid model, cross-section files for neutrons and/or gammas); (2) calculations (3D radiation transport, estimation of neutron and gamma fluences, macroscopic and microscopic dose); (3) post-processing phase (displaying of the results, iso-doses and -fluences). Treatment planning in BNCT is performed making use of Monte Carlo codes incorporated in a framework, which includes also the pre- and post-processing phases. In particular, the glioblastoma multiforme protocol used BNCT_rtpe, while the melanoma metastases protocol uses NCTPlan. In addition, an ad hoc Positron Emission Tomography (PET) based treatment planning system (BDTPS) has been implemented in order to integrate the real macroscopic boron distribution obtained from PET scanning. BDTPS is patented and uses MCNP as the calculation engine. The precision obtained by the Monte Carlo based TPSs exploited at Petten

  5. Network coding and its applications to satellite systems

    DEFF Research Database (Denmark)

    Vieira, Fausto; Roetter, Daniel Enrique Lucani

    2015-01-01

    Network coding has its roots in information theory where it was initially proposed as a way to improve a two-node communication using a (broadcasting) relay. For this theoretical construct, a satellite communications system was proposed as an illustrative example, where the relay node would...... be a satellite covering the two nodes. The benefits in terms of throughput, resilience, and flexibility of network coding are quite relevant for wireless networks in general, and for satellite systems in particular. This chapter presents some of the basics in network coding, as well as an overview of specific...... scenarios where network coding provides a significant improvement compared to existing solutions, for example, in broadcast and multicast satellite networks, hybrid satellite-terrestrial networks, and broadband multibeam satellites. The chapter also compares coding perspectives and revisits the layered...

  6. Meaningful timescales from Monte Carlo simulations of particle systems with hard-core interactions

    Science.gov (United States)

    Costa, Liborio I.

    2016-12-01

    A new Markov Chain Monte Carlo method for simulating the dynamics of particle systems characterized by hard-core interactions is introduced. In contrast to traditional Kinetic Monte Carlo approaches, where the state of the system is associated with minima in the energy landscape, in the proposed method, the state of the system is associated with the set of paths traveled by the atoms and the transition probabilities for an atom to be displaced are proportional to the corresponding velocities. In this way, the number of possible state-to-state transitions is reduced to a discrete set, and a direct link between the Monte Carlo time step and true physical time is naturally established. The resulting rejection-free algorithm is validated against event-driven molecular dynamics: the equilibrium and non-equilibrium dynamics of hard disks converge to the exact results with decreasing displacement size.

  7. MULTIPLE TRELLIS CODED ORTHOGONAL TRANSMIT SCHEME FOR MULTIPLE ANTENNA SYSTEMS

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    In this paper, a novel multiple trellis coded orthogonal transmit scheme is proposed to exploit transmit diversity in fading channels. In this scheme, a unique vector from a set of orthogonal vectors is assigned to each transmit antenna. Each of the output symbols from the multiple trellis encoder is multiplied with one of these orthogonal vectors and transmitted from the corresponding transmit antenna. By correlating with the corresponding orthogonal vectors, the receiver separates the symbols transmitted from different transmit antennas. This scheme can be adopted in coherent/differential systems with any number of transmit antennas. It is shown that the proposed scheme encompasses the conventional trellis coded unitary space-time modulation based on the optimal cyclic group codes as a special case. We also propose two designs that improve on the conventional trellis coded unitary space-time modulation. The first design uses 8 Phase Shift Keying (8-PSK) constellations instead of 16 Phase Shift Keying (16-PSK) constellations in the conventional trellis coded unitary space-time modulation. As a result, the product distance of this new design is much larger than that of the conventional trellis coded unitary space-time modulation. The second design introduces constellations with multiple levels of amplitudes into the design of the multiple trellis coded orthogonal transmit scheme. For both designs, simulations show that multiple trellis coded orthogonal transmit schemes can achieve better performance than the conventional trellis coded unitary space-time schemes.
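
    The separation step, correlating the received chips with each antenna's orthogonal vector, can be shown in a few lines. The length-2 Walsh vectors, the flat noiseless channel, and the symbol values below are illustrative assumptions; the trellis coding itself is not modelled.

      import numpy as np

      # Illustrative separation of symbols sent from two transmit antennas, each symbol
      # spread by its own orthogonal (Walsh) vector.  Noiseless flat channel, single
      # receive antenna; the multiple trellis encoder is omitted.

      H = np.array([[1, 1], [1, -1]])               # one length-2 orthogonal vector per antenna
      symbols = np.array([1 + 1j, -1 + 1j])         # symbols currently sent by antennas 0 and 1
      channel = np.array([0.8 + 0.2j, 0.5 - 0.7j])  # flat fading gain seen from each antenna

      # each antenna transmits its symbol times its own orthogonal vector (two chips)
      received = sum(channel[a] * symbols[a] * H[a] for a in range(2))

      # correlating with each orthogonal vector separates the two antenna streams
      for a in range(2):
          estimate = received @ H[a] / np.linalg.norm(H[a]) ** 2
          print("antenna", a, "sent h*s =", channel[a] * symbols[a], "recovered", estimate)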

  8. Code Design and Shuffled Iterative Decoding of a Quasi-Cyclic LDPC Coded OFDM System

    Institute of Scientific and Technical Information of China (English)

    LIU Binbin; BAI Dong; GE Qihong; MEI Shunliang

    2009-01-01

    In multipath environments, the error rate performance of orthogonal frequency division multiplexing (OFDM) is severely degraded by the deep fading subcarriers. Powerful error-correcting codes must be used with OFDM. This paper presents a quasi-cyclic low-density parity-check (LDPC) coded OFDM system, in which the redundant bits of each codeword are mapped to a higher-order modulation constellation. The optimal degree distribution was calculated using density evolution. The corresponding quasi-cyclic LDPC code was then constructed using circulant permutation matrices. Group shuffled message passing scheduling was used in the iterative decoding. Simulation results show that the system achieves better error rate performance and faster decoding convergence than conventional approaches on both additive white Gaussian noise (AWGN) and Rayleigh fading channels.
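
    The construction step mentioned above, assembling the parity-check matrix from circulant permutation matrices, can be sketched as follows. The small exponent matrix and lifting size are arbitrary illustrations, not the density-evolution-optimized code of the paper.

      import numpy as np

      # Sketch of assembling a quasi-cyclic LDPC parity-check matrix from circulant
      # permutation matrices.  Exponent matrix and lifting size are invented examples.

      Z = 5                                        # circulant (lifting) size
      exponents = np.array([[0, 1, 3, -1],         # -1 marks an all-zero block
                            [2, -1, 0, 4],
                            [-1, 3, 1, 0]])

      def circulant(shift, z):
          """z-by-z identity cyclically shifted by 'shift' columns; zero block if shift < 0."""
          if shift < 0:
              return np.zeros((z, z), dtype=int)
          return np.roll(np.eye(z, dtype=int), shift, axis=1)

      H = np.block([[circulant(s, Z) for s in row] for row in exponents])
      print(H.shape)                    # (3*Z, 4*Z) parity-check matrix
      print(H.sum(axis=0))              # column weights follow the chosen exponent matrix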

  9. Codes, standards, and PV power systems. A 1996 status report

    Energy Technology Data Exchange (ETDEWEB)

    Wiles, J

    1996-06-01

    As photovoltaic (PV) electrical power systems gain increasing acceptance for both off-grid and utility-interactive applications, the safety, durability, and performance of these systems gains in importance. Local and state jurisdictions in many areas of the country require that all electrical power systems be installed in compliance with the requirements of the National Electrical Code{reg_sign} (NEC{reg_sign}). Utilities and governmental agencies are now requiring that PV installations and components also meet a number of Institute of Electrical and Electronic Engineers (IEEE) standards. PV installers are working more closely with licensed electricians and electrical contractors who are familiar with existing local codes and installation practices. PV manufacturers, utilities, balance of systems manufacturers, and standards representatives have come together to address safety and code related issues for future PV installations. This paper addresses why compliance with the accepted codes and standards is needed and how it is being achieved.

  10. ARC Code TI: Optimal Alarm System Design and Implementation

    Data.gov (United States)

    National Aeronautics and Space Administration — An optimal alarm system can robustly predict a level-crossing event that is specified over a fixed prediction horizon. The code contained in this package provides...

  11. Code-modulated interferometric imaging system using phased arrays

    Science.gov (United States)

    Chauhan, Vikas; Greene, Kevin; Floyd, Brian

    2016-05-01

    Millimeter-wave (mm-wave) imaging provides compelling capabilities for security screening, navigation, and biomedical applications. Traditional scanned or focal-plane mm-wave imagers are bulky and costly. In contrast, phased-array hardware developed for mass-market wireless communications and automotive radar promises to be extremely low cost. In this work, we present techniques which can allow low-cost phased-array receivers to be reconfigured or re-purposed as interferometric imagers, removing the need for custom hardware and thereby reducing cost. Since traditional phased arrays power combine incoming signals prior to digitization, orthogonal code-modulation is applied to each incoming signal using phase shifters within each front-end and two-bit codes. These code-modulated signals can then be combined and processed coherently through a shared hardware path. Once digitized, visibility functions can be recovered through squaring and code-demultiplexing operations. Provided that codes are selected such that the product of two orthogonal codes is a third unique and orthogonal code, it is possible to demultiplex complex visibility functions directly. As such, the proposed system modulates incoming signals but demodulates desired correlations. In this work, we present the operation of the system, a validation of its operation using behavioral models of a traditional phased array, and a benchmarking of the code-modulated interferometer against traditional interferometer and focal-plane arrays.
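
    The code-and-square recovery of a correlation can be demonstrated with ±1 Walsh codes. The codes, the constant real-valued antenna signals, and the scaling below are simplifying assumptions; the point is only that the product code c1·c2 demultiplexes the cross term after square-law detection.

      import numpy as np

      # Two antenna signals are multiplied by orthogonal +/-1 codes, power combined,
      # squared, and the cross term (the correlation/visibility) is recovered by
      # demultiplexing with the product code.  Signals are taken constant over one
      # code period for brevity.

      c1 = np.array([1, -1, 1, -1])      # code on antenna 1
      c2 = np.array([1, 1, -1, -1])      # code on antenna 2
      c3 = c1 * c2                       # product of two Walsh codes is a third Walsh code

      s1, s2 = 0.7, -1.3                 # antenna signals during this code period
      combined = s1 * c1 + s2 * c2       # power-combined after per-front-end code modulation
      squared = combined ** 2            # square-law detection after the shared hardware path

      visibility = np.mean(squared * c3) / 2.0     # demultiplex the cross term with c3
      print(visibility, s1 * s2)                   # both equal s1*s2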

  12. Study on New Concatenated Code in WDM Optical Transmission Systems

    Institute of Scientific and Technical Information of China (English)

    YUAN Jian-guo; JIANG Ze; MAO You-ju; YE Wen-wei

    2007-01-01

    A new concatenated code, RS(255,239)+BCH(2040,1930), suitable for WDM optical transmission systems is proposed. The simulation results show that this new concatenated code, compared with the RS(255,239)+CSOC(k0/n0=6/7, J=8) code in ITU-T G.975.1, has lower redundancy and better error-correction performance; furthermore, its net coding gain (NCG) is respectively 0.46 dB and 0.43 dB more than that of the RS(255,239)+CSOC(k0/n0=6/7, J=8) code and the BCH(3860,3824)+BCH(2040,1930) code in ITU-T G.975.1 at the third iteration for a bit error rate (BER) of 10^-12. Therefore, the new super forward error correction (Super-FEC) concatenated code is well suited to ultra-long-haul, ultra-large-capacity and ultra-high-speed WDM optical communication systems.

  13. Differences among Monte Carlo codes in the calculations of voxel S values for radionuclide targeted therapy and analysis of their impact on absorbed dose evaluations

    Energy Technology Data Exchange (ETDEWEB)

    Pacilio, M.; Lanconelli, N.; Lo Meo, S.; Betti, M.; Montani, L.; Torres Aroche, L. A.; Coca Perez, M. A. [Department of Medical Physics, Azienda Ospedaliera S. Camillo Forlanini, Piazza Forlanini 1, Rome 00151 (Italy); Department of Physics, Alma Mater Studiorum University of Bologna, Viale Berti-Pichat 6/2, Bologna 40127 (Italy); Department of Medical Physics, Azienda Ospedaliera S. Camillo Forlanini, Piazza Forlanini 1, Rome 00151 (Italy); Department of Medical Physics, Azienda Ospedaliera Sant' Andrea, Via di Grotarossa 1035, Rome 00189 (Italy); Department of Medical Physics, Center for Clinical Researches, Calle 34 North 4501, Havana 11300 (Cuba)

    2009-05-15

    Several updated Monte Carlo (MC) codes are available to perform calculations of voxel S values for radionuclide targeted therapy. The aim of this work is to analyze the differences in the calculations obtained by different MC codes and their impact on absorbed dose evaluations performed by voxel dosimetry. Voxel S values for monoenergetic sources (electrons and photons) and different radionuclides ({sup 90}Y, {sup 131}I, and {sup 188}Re) were calculated. Simulations were performed in soft tissue. Three general-purpose MC codes were employed for simulating radiation transport: MCNP4C, EGSnrc, and GEANT4. The data published by the MIRD Committee in Pamphlet No. 17, obtained with the EGS4 MC code, were also included in the comparisons. The impact of the differences (in terms of voxel S values) among the MC codes was also studied by convolution calculations of the absorbed dose in a volume of interest. For uniform activity distribution of a given radionuclide, dose calculations were performed on spherical and elliptical volumes, varying the mass from 1 to 500 g. For simulations with monochromatic sources, differences for self-irradiation voxel S values were mostly confined within 10% for both photons and electrons, but with electron energy less than 500 keV, the voxel S values referred to the first neighbor voxels showed large differences (up to 130%, with respect to EGSnrc) among the updated MC codes. For radionuclide simulations, noticeable differences arose in voxel S values, especially in the bremsstrahlung tails, or when a high contribution from electrons with energy of less than 500 keV is involved. In particular, for {sup 90}Y the updated codes showed a remarkable divergence in the bremsstrahlung region (up to about 90% in terms of voxel S values) with respect to the EGS4 code. Further, variations were observed up to about 30%, for small source-target voxel distances, when low-energy electrons cover an important part of the emission spectrum of the radionuclide
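
    The convolution step used in this kind of comparison, turning voxel S values into an absorbed-dose map, can be sketched as follows. The activity map and the kernel values are invented placeholders; real voxel S values would come from the MC codes compared in the study.

      import numpy as np
      from scipy.ndimage import convolve

      # Sketch of voxel dosimetry by convolution: the absorbed-dose map is obtained by
      # convolving the cumulated-activity map with a voxel S-value kernel.

      activity = np.zeros((9, 9, 9))
      activity[4, 4, 4] = 1.0e6                     # cumulated activity in one voxel (Bq s)

      s_kernel = np.zeros((3, 3, 3))                # voxel S values (Gy per Bq s), toy numbers
      s_kernel[1, 1, 1] = 1.0e-9                    # self-irradiation S value
      for dx, dy, dz in [(0, 1, 1), (2, 1, 1), (1, 0, 1), (1, 2, 1), (1, 1, 0), (1, 1, 2)]:
          s_kernel[dx, dy, dz] = 1.0e-10            # first-neighbour S values

      dose = convolve(activity, s_kernel, mode='constant', cval=0.0)
      print(dose[4, 4, 4], dose[5, 4, 4])           # self-dose and first-neighbour dose (Gy)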

  14. Quasi-Monte Carlo methods for lattice systems. A first look

    Energy Technology Data Exchange (ETDEWEB)

    Jansen, K. [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Cyprus Univ., Nicosia (Cyprus). Dept. of Physics; Leovey, H.; Griewank, A. [Humboldt-Universitaet, Berlin (Germany). Inst. fuer Mathematik; Nube, A. [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Humboldt-Universitaet, Berlin (Germany). Inst. fuer Physik; Mueller-Preussker, M. [Humboldt-Universitaet, Berlin (Germany). Inst. fuer Physik

    2013-02-15

    We investigate the applicability of Quasi-Monte Carlo methods to Euclidean lattice systems for quantum mechanics in order to improve the asymptotic error behavior of observables for such theories. In most cases the error of an observable calculated by averaging over random observations generated from an ordinary Markov chain Monte Carlo simulation behaves like N{sup -1/2}, where N is the number of observations. By means of Quasi-Monte Carlo methods it is possible to improve this behavior for certain problems up to N{sup -1}. We adapted and applied this approach to simple systems like the quantum harmonic and anharmonic oscillator and verified an improved error scaling.
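
    The error-scaling contrast can be illustrated on a one-dimensional toy integral. The van der Corput sequence and the integrand below are illustrative choices, not the lattice path integrals of the paper.

      import random

      # Estimate E[x^2] for x uniform on [0, 1) (exact value 1/3) with pseudo-random
      # sampling and with a deterministic van der Corput low-discrepancy sequence,
      # to show the roughly N^(-1/2) versus close-to-N^(-1) error behaviour.

      def van_der_corput(n, base=2):
          """n-th element of the base-b van der Corput sequence."""
          q, bk = 0.0, 1.0 / base
          while n > 0:
              n, r = divmod(n, base)
              q += r * bk
              bk /= base
          return q

      random.seed(0)
      for N in (10**2, 10**3, 10**4, 10**5):
          mc = sum(random.random() ** 2 for _ in range(N)) / N
          qmc = sum(van_der_corput(i) ** 2 for i in range(1, N + 1)) / N
          print(f"N = {N:>6}:  MC error {abs(mc - 1/3):.2e},  QMC error {abs(qmc - 1/3):.2e}")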

  15. Clinical implementation of the Peregrine Monte Carlo dose calculations system for photon beam therapy

    Energy Technology Data Exchange (ETDEWEB)

    Albright, N; Bergstrom, P M; Daly, T P; Descalle, M; Garrett, D; House, R K; Knapp, D K; May, S; Patterson, R W; Siantar, C L; Verhey, L; Walling, R S; Welczorek, D

    1999-07-01

    PEREGRINE is a 3D Monte Carlo dose calculation system designed to serve as a dose calculation engine for clinical radiation therapy treatment planning systems. Taking advantage of recent advances in low-cost computer hardware, modern multiprocessor architectures and optimized Monte Carlo transport algorithms, PEREGRINE performs mm-resolution Monte Carlo calculations in times that are reasonable for clinical use. PEREGRINE has been developed to simulate radiation therapy for several source types, including photons, electrons, neutrons and protons, for both teletherapy and brachytherapy. However, the work described in this paper is limited to linear accelerator-based megavoltage photon therapy. Here we assess the accuracy, reliability, and added value of 3D Monte Carlo transport for photon therapy treatment planning. Comparisons with clinical measurements in homogeneous and heterogeneous phantoms demonstrate PEREGRINE's accuracy. Studies with variable tissue composition demonstrate the importance of material assignment on the overall dose distribution. Detailed analysis of Monte Carlo results provides new information for radiation research by expanding the set of observables.

  16. JEMs and incompatible occupational coding systems: Effect of manual and automatic recoding of job codes on exposure assignment

    NARCIS (Netherlands)

    Koeman, T.; Offermans, N.S.M.; Christopher-De Vries, Y.; Slottje, P.; Brandt, P.A. van den; Goldbohm, R.A.; Kromhout, H.; Vermeulen, R.

    2013-01-01

    Background: In epidemiological studies, occupational exposure estimates are often assigned through linkage of job histories to job-exposure matrices (JEMs). However, available JEMs may have a coding system incompatible with the coding system used to code the job histories, necessitating a translatio

  17. Comparison of Fuel Temperature Coefficients of PWR UO{sub 2} Fuel from Monte Carlo Codes (MCNP6.1 and KENO6)

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Kyung-O; Roh, Gyuhong; Lee, Byungchul [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2015-10-15

    As a result, there was a difference of about 300-400 pcm between the keff values at each enrichment, due to the differences in the codes and nuclear data used in the evaluations. The FTC became less negative as the uranium enrichment increased, following an asymptotic curve. However, additional study is needed to investigate what causes differences of more than two standard deviations (2σ) among the FTCs in the partial-enrichment region. The interaction probability of an incident neutron with the nuclear fuel depends on the relative velocity between the neutron and the target nuclei. The Fuel Temperature Coefficient (FTC) is defined as the change of the Doppler effect with respect to the change in fuel temperature, with all other conditions such as moderator temperature and moderator density held fixed. In this study, the FTCs for UO{sub 2} fuel were evaluated using the MCNP6.1 and KENO6 codes, both based on the Monte Carlo method. In addition, the latest neutron cross sections (ENDF/B-VI and VII) were applied to analyze the effect of these data on the evaluation of the FTC, and the nuclear data used in the MCNP calculations were generated with the makxsf code. An evaluation of the Doppler effect and FTC for the UO{sub 2} fuel widely used in PWRs was conducted using the MCNP6.1 and KENO6 codes. ENDF/B-VI and VII were also applied to analyze what effect these data have on those evaluations. All cross sections needed for the MCNP calculations were produced using the makxsf code. The calculation models used in the evaluations were based on a typical PWR UO{sub 2} lattice.
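
    For reference, the step from two multiplication factors to an FTC value is a one-liner. The sketch below uses made-up k-eff values and temperatures, not results from MCNP6.1 or KENO6.

      # Turn two k-eff values, computed at two fuel temperatures with everything else
      # held fixed, into a fuel temperature coefficient (FTC).  Numbers are invented.

      def reactivity(k):
          return (k - 1.0) / k                 # reactivity in absolute units

      def ftc_pcm_per_K(k_cold, k_hot, t_cold, t_hot):
          """FTC in pcm/K from k-eff at two fuel temperatures."""
          return (reactivity(k_hot) - reactivity(k_cold)) / (t_hot - t_cold) * 1.0e5

      print(ftc_pcm_per_K(k_cold=1.18000, k_hot=1.16900, t_cold=600.0, t_hot=900.0))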

  18. Treatment of patient-dependent beam modifiers in photon treatments by the Monte Carlo dose calculation code PEREGRINE

    Energy Technology Data Exchange (ETDEWEB)

    Schach von Wittenau, A.E.; Cox, L.J.; Bergstrom, P.M. Jr.; Hornstein, S.M. [Lawrence Livermore National Lab., CA (United States); Mohan, R.; Libby, B.; Wu, Q. [Medical Coll. of Virginia, Richmond, VA (United States); Lovelock, D.M.J. [Memorial Sloan-Kettering Cancer Center, New York, NY (United States)

    1997-03-01

    The goal of the PEREGRINE Monte Carlo Dose Calculation Project is to deliver a Monte Carlo package that is both accurate and sufficiently fast for routine clinical use. One of the operational requirements for photon-treatment plans is a fast, accurate method of describing the photon phase-space distribution at the surface of the patient. The open-field case is computationally the most tractable; we know, a priori, for a given machine and energy, the locations and compositions of the relevant accelerator components (i.e., target, primary collimator, flattening filter, and monitor chamber). Therefore, we can precalculate and store the expected photon distributions. For any open-field treatment plan, we then evaluate these existing photon phase-space distributions at the patient's surface, and pass the obtained photons to the dose calculation routines within PEREGRINE. We neglect any effect of the intervening air column, including attenuation of the photons and production of contaminant electrons. In principle, for treatment plans requiring jaws, blocks, and wedges, we could precalculate and store photon phase-space distributions for various combinations of field sizes and wedges. This has the disadvantage that we would have to anticipate those combinations and that subsequently PEREGRINE would not be able to treat other plans. Therefore, PEREGRINE tracks photons through the patient-dependent beam modifiers. The geometric and physics methods used to do this are described here. 4 refs., 8 figs.

  19. Development and implementation in the Monte Carlo code PENELOPE of a new virtual source model for radiotherapy photon beams and portal image calculation

    Science.gov (United States)

    Chabert, I.; Barat, E.; Dautremer, T.; Montagu, T.; Agelou, M.; Croc de Suray, A.; Garcia-Hernandez, J. C.; Gempp, S.; Benkreira, M.; de Carlan, L.; Lazaro, D.

    2016-07-01

    This work aims at developing a generic virtual source model (VSM) preserving all existing correlations between variables stored in a Monte Carlo pre-computed phase space (PS) file, for dose calculation and high-resolution portal image prediction. The reference PS file was calculated using the PENELOPE code, after the flattening filter (FF) of an Elekta Synergy 6 MV photon beam. Each particle was represented in a mobile coordinate system by its radial position (r_s) in the PS plane, its energy (E), and its polar and azimuthal angles (φ_d and θ_d), describing the particle deviation compared to its initial direction after bremsstrahlung, and the deviation orientation. Three sub-sources were created by sorting out particles according to their last interaction location (target, primary collimator or FF). For each sub-source, 4D correlated histograms were built by storing the E, r_s, φ_d and θ_d values. Five different adaptive binning schemes were studied to construct the 4D histograms of the VSMs, to ensure efficient histogram handling as well as an accurate reproduction of the details of the E, r_s, φ_d and θ_d distributions. The five resulting VSMs were then implemented in PENELOPE. Their accuracy was first assessed in the PS plane, by comparing the E, r_s, φ_d and θ_d distributions with those obtained from the reference PS file. Second, dose distributions computed in water, using the VSMs and the reference PS file located below the FF, and also after collimation in both water and a heterogeneous phantom, were compared using a 1.5%-0 mm and a 2%-0 mm global gamma index, respectively. Finally, portal images were calculated without and with phantoms in the beam. The model was then evaluated using a 1%-0 mm global gamma index. The performance of a mono-source VSM was also investigated and led, as with the multi-source model, to excellent results when combined with an adaptive binning scheme.
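
    The idea of drawing correlated particle variables from a binned representation can be sketched as follows. The synthetic phase-space sample, the fixed bin counts, and the uniform draw inside the selected bin are assumptions made for illustration; they stand in for the adaptive binning schemes studied in the paper.

      import numpy as np

      # Draw correlated particle variables (E, r_s, phi_d, theta_d) from a 4D histogram,
      # as a virtual source model might do.  The histogram is filled from a synthetic
      # correlated sample; all distributions below are invented.

      rng = np.random.default_rng(3)
      r = rng.uniform(0.0, 10.0, 50000)                               # radial position
      E = np.clip(6.0 - 0.3 * r + rng.normal(0.0, 0.5, r.size), 0.1, 6.0)  # energy, anti-correlated with r
      phi = rng.normal(0.0, 0.05, r.size)                             # polar deviation
      theta = rng.uniform(0.0, 2 * np.pi, r.size)                     # azimuthal deviation

      counts, edges = np.histogramdd((E, r, phi, theta), bins=(16, 16, 8, 8))
      prob = counts.ravel() / counts.sum()

      def sample_particle():
          """Pick a 4D bin with probability proportional to its content, then draw
          uniformly inside that bin, preserving the stored correlations bin-wise."""
          flat = rng.choice(prob.size, p=prob)
          idx = np.unravel_index(flat, counts.shape)
          return tuple(rng.uniform(edges[d][i], edges[d][i + 1]) for d, i in enumerate(idx))

      print(sample_particle())     # one (E, r_s, phi_d, theta_d) draw from the model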

  20. Development and implementation in the Monte Carlo code PENELOPE of a new virtual source model for radiotherapy photon beams and portal image calculation.

    Science.gov (United States)

    Chabert, I; Barat, E; Dautremer, T; Montagu, T; Agelou, M; Croc de Suray, A; Garcia-Hernandez, J C; Gempp, S; Benkreira, M; de Carlan, L; Lazaro, D

    2016-07-21

    This work aims at developing a generic virtual source model (VSM) preserving all existing correlations between variables stored in a Monte Carlo pre-computed phase space (PS) file, for dose calculation and high-resolution portal image prediction. The reference PS file was calculated using the PENELOPE code, after the flattening filter (FF) of an Elekta Synergy 6 MV photon beam. Each particle was represented in a mobile coordinate system by its radial position (r_s) in the PS plane, its energy (E), and its polar and azimuthal angles (φ_d and θ_d), describing the particle deviation compared to its initial direction after bremsstrahlung, and the deviation orientation. Three sub-sources were created by sorting out particles according to their last interaction location (target, primary collimator or FF). For each sub-source, 4D correlated histograms were built by storing the E, r_s, φ_d and θ_d values. Five different adaptive binning schemes were studied to construct the 4D histograms of the VSMs, to ensure efficient histogram handling as well as an accurate reproduction of the details of the E, r_s, φ_d and θ_d distributions. The five resulting VSMs were then implemented in PENELOPE. Their accuracy was first assessed in the PS plane, by comparing the E, r_s, φ_d and θ_d distributions with those obtained from the reference PS file. Second, dose distributions computed in water, using the VSMs and the reference PS file located below the FF, and also after collimation in both water and a heterogeneous phantom, were compared using a 1.5%-0 mm and a 2%-0 mm global gamma index, respectively. Finally, portal images were calculated without and with phantoms in the beam. The model was then evaluated using a 1%-0 mm global gamma index. The performance of a mono-source VSM was also investigated and led, as with the multi-source model, to excellent results when combined with an adaptive binning scheme.

  1. On the Performance of Code Acquisition in MIMO CDMA Systems

    Science.gov (United States)

    Kim, Sangchoon; An, Jinyoung

    This letter investigates the effects of using multiple transmit antennas on code acquisition for preamble search in the CDMA uplink when MIMO is used for signal transmission and reception. The performance of an ML code acquisition technique in the presence of a MIMO channel is analyzed by considering the detection and miss probabilities. The acquisition performance is numerically evaluated on a frequency-selective fading channel. It is found that, at low thresholds, the code acquisition scheme for a SIMO system performs better than in the MIMO case in terms of detection performance and mean acquisition time (MAT).

  2. JPEG2000 COMPRESSION CODING USING HUMAN VISUAL SYSTEM MODEL

    Institute of Scientific and Technical Information of China (English)

    Xiao Jiang; Wu Chengke

    2005-01-01

    In order to apply the Human Visual System (HVS) model to the JPEG2000 standard, several implementation alternatives are discussed and a new scheme of visual optimization is introduced that modifies the slope of the rate-distortion curve. The novelty is that visual weighting is not applied by lifting the coefficients in the wavelet domain, but is instead achieved through code-stream organization. The scheme retains all the features of Embedded Block Coding with Optimized Truncation (EBCOT), such as resolution-progressive ordering, good robustness against error-bit spread, and compatibility with lossless compression. Performing better than other methods, it keeps the shortest standard codestream and decompression time and supports Visual Progressive (VIP) coding.

  3. Nonterminals and codings in defining variations of OL-systems

    DEFF Research Database (Denmark)

    Skyum, Sven

    1974-01-01

    The use of nonterminals versus the use of codings in variations of OL-systems is studied. It is shown that the use of nonterminals produces a comparatively low generative capacity in deterministic systems while it produces a comparatively high generative capacity in nondeterministic systems....... Finally it is proved that the family of context-free languages is contained in the family generated by codings on propagating OL-systems with a finite set of axioms, which was one of the open problems in [10]. All the results in this paper can be found in [71] and [72]....

  4. An extensive Markov system for ECG exact coding.

    Science.gov (United States)

    Tai, S C

    1995-02-01

    In this paper, an extensive Markov process, which considers both the coding redundancy and the intersample redundancy, is presented to measure the entropy value of an ECG signal more accurately. It utilizes the intersample correlations by predicting the incoming n samples based on the previous m samples which constitute an extensive Markov process state. Theories of the extensive Markov process and conventional n repeated applications of m-th order Markov process are studied first in this paper. After that, they are realized for ECG exact coding. Results show that a better performance can be achieved by our system. The average code length for the extensive Markov system on the second difference signals was 2.512 b/sample, while the average Huffman code length for the second difference signals was 3.326 b/sample.
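
    A memoryless baseline helps put such numbers in context: differencing an ECG-like signal already lowers its zeroth-order entropy, and exploiting intersample correlations, as the extensive Markov model does, lowers the achievable code length further. The signal below is synthetic and the calculation is only the memoryless entropy, not the paper's model.

      import math
      from collections import Counter

      # Compare the zeroth-order entropy (bits/sample) of a synthetic ECG-like signal
      # and of its second difference.

      signal = [round(100 * math.sin(0.05 * n) + 20 * math.sin(0.3 * n)) for n in range(5000)]
      second_diff = [signal[n] - 2 * signal[n - 1] + signal[n - 2] for n in range(2, len(signal))]

      def entropy_bits_per_sample(samples):
          counts = Counter(samples)
          total = len(samples)
          return -sum(c / total * math.log2(c / total) for c in counts.values())

      print("raw signal       :", round(entropy_bits_per_sample(signal), 3), "bits/sample")
      print("second difference:", round(entropy_bits_per_sample(second_diff), 3), "bits/sample")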

  5. A review of the use and potential of the GATE Monte Carlo simulation code for radiation therapy and dosimetry applications.

    Science.gov (United States)

    Sarrut, David; Bardiès, Manuel; Boussion, Nicolas; Freud, Nicolas; Jan, Sébastien; Létang, Jean-Michel; Loudos, George; Maigne, Lydia; Marcatili, Sara; Mauxion, Thibault; Papadimitroulas, Panagiotis; Perrot, Yann; Pietrzyk, Uwe; Robert, Charlotte; Schaart, Dennis R; Visvikis, Dimitris; Buvat, Irène

    2014-06-01

    In this paper, the authors review the applicability of the open-source GATE Monte Carlo simulation platform based on the GEANT4 toolkit for radiation therapy and dosimetry applications. The many applications of GATE for state-of-the-art radiotherapy simulations are described including external beam radiotherapy, brachytherapy, intraoperative radiotherapy, hadrontherapy, molecular radiotherapy, and in vivo dose monitoring. Investigations that have been performed using GEANT4 only are also mentioned to illustrate the potential of GATE. The very practical feature of GATE making it easy to model both a treatment and an imaging acquisition within the same framework is emphasized. The computational times associated with several applications are provided to illustrate the practical feasibility of the simulations using current computing facilities.

  6. A review of the use and potential of the GATE Monte Carlo simulation code for radiation therapy and dosimetry applications

    Energy Technology Data Exchange (ETDEWEB)

    Sarrut, David, E-mail: david.sarrut@creatis.insa-lyon.fr [Université de Lyon, CREATIS, CNRS UMR5220, Inserm U1044, INSA-Lyon (France); Université Lyon 1 (France); Centre Léon Bérard (France); Bardiès, Manuel; Marcatili, Sara; Mauxion, Thibault [Inserm, UMR1037 CRCT, F-31000 Toulouse, France and Université Toulouse III-Paul Sabatier, UMR1037 CRCT, F-31000 Toulouse (France); Boussion, Nicolas [INSERM, UMR 1101, LaTIM, CHU Morvan, 29609 Brest (France); Freud, Nicolas; Létang, Jean-Michel [Université de Lyon, CREATIS, CNRS UMR5220, Inserm U1044, INSA-Lyon, Université Lyon 1, Centre Léon Bérard, 69008 Lyon (France); Jan, Sébastien [CEA/DSV/I2BM/SHFJ, Orsay 91401 (France); Loudos, George [Department of Medical Instruments Technology, Technological Educational Institute of Athens, Athens 12210 (Greece); Maigne, Lydia; Perrot, Yann [UMR 6533 CNRS/IN2P3, Université Blaise Pascal, 63171 Aubière (France); Papadimitroulas, Panagiotis [Department of Biomedical Engineering, Technological Educational Institute of Athens, 12210, Athens (Greece); Pietrzyk, Uwe [Institut für Neurowissenschaften und Medizin, Forschungszentrum Jülich GmbH, 52425 Jülich, Germany and Fachbereich für Mathematik und Naturwissenschaften, Bergische Universität Wuppertal, 42097 Wuppertal (Germany); Robert, Charlotte [IMNC, UMR 8165 CNRS, Universités Paris 7 et Paris 11, Orsay 91406 (France); and others

    2014-06-15

    In this paper, the authors review the applicability of the open-source GATE Monte Carlo simulation platform based on the GEANT4 toolkit for radiation therapy and dosimetry applications. The many applications of GATE for state-of-the-art radiotherapy simulations are described including external beam radiotherapy, brachytherapy, intraoperative radiotherapy, hadrontherapy, molecular radiotherapy, and in vivo dose monitoring. Investigations that have been performed using GEANT4 only are also mentioned to illustrate the potential of GATE. The very practical feature of GATE making it easy to model both a treatment and an imaging acquisition within the same framework is emphasized. The computational times associated with several applications are provided to illustrate the practical feasibility of the simulations using current computing facilities.

  7. Physics study of microbeam radiation therapy with PSI-version of Monte Carlo code GEANT as a new computational tool

    CERN Document Server

    Stepanek, J; Laissue, J A; Lyubimova, N; Di Michiel, F; Slatkin, D N

    2000-01-01

    Microbeam radiation therapy (MRT) is a currently experimental method of radiotherapy which is mediated by an array of parallel microbeams of synchrotron-wiggler-generated X-rays. Suitably selected, nominally supralethal doses of X-rays delivered to parallel microslices of tumor-bearing tissues in rats can be either palliative or curative while causing little or no serious damage to contiguous normal tissues. Although the pathogenesis of MRT-mediated tumor regression is not understood, as in all radiotherapy such understanding will be based ultimately on our understanding of the relationships among the following three factors: (1) microdosimetry, (2) damage to normal tissues, and (3) therapeutic efficacy. Although physical microdosimetry is feasible, published information on MRT microdosimetry to date is computational. This report describes Monte Carlo-based computational MRT microdosimetry using photon and/or electron scattering and photoionization cross-section data in the 1 eV through 100 GeV range distrib...

  8. Advances in conformal radiotherapy using Monte Carlo Code to design new IMRT and IORT accelerators and interpret CT numbers

    CERN Document Server

    Wysocka-Rabin, A

    2013-01-01

    The introductory chapter of this monograph, which follows this Preface, provides an overview of radiotherapy and treatment planning. The main chapters that follow describe in detail three significant aspects of radiotherapy on which the author has focused her research efforts. Chapter 2 presents studies the author worked on at the German National Cancer Institute (DKFZ) in Heidelberg. These studies applied the Monte Carlo technique to investigate the feasibility of performing Intensity Modulated Radiotherapy (IMRT) by scanning with a narrow photon beam. This approach represents an alternative to techniques that generate beam modulation by absorption, such as MLC, individually-manufactured compensators, and special tomotherapy modulators. The technical realization of this concept required investigation of the influence of various design parameters on the final small photon beam. The photon beam to be scanned should have a diameter of approximately 5 mm at Source Surface Distance (SSD) distance, and the penumbr...

  9. Computer program uses Monte Carlo techniques for statistical system performance analysis

    Science.gov (United States)

    Wohl, D. P.

    1967-01-01

    Computer program with Monte Carlo sampling techniques determines the effect of a component part of a unit upon the overall system performance. It utilizes the full statistics of the disturbances and misalignments of each component to provide unbiased results through simulated random sampling.
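
    The approach lends itself to a very small sketch: sample each component's disturbance or misalignment from its own distribution, evaluate a system-level figure of merit for every draw, and read the statistics off the sample. The response function and the tolerance values below are invented for illustration.

      import random, statistics

      # Monte Carlo tolerance analysis: propagate component-level errors through a
      # hypothetical system response and summarize the resulting performance spread.

      random.seed(42)

      def system_response(gain_error, offset, misalignment_deg):
          """Hypothetical figure of merit combining three component errors."""
          return (1.0 + gain_error) * (1.0 - 0.01 * misalignment_deg ** 2) + offset

      samples = [system_response(random.gauss(0.0, 0.02),
                                 random.uniform(-0.05, 0.05),
                                 random.gauss(0.0, 0.5))
                 for _ in range(100000)]

      print("mean response :", round(statistics.mean(samples), 4))
      print("std deviation :", round(statistics.stdev(samples), 4))
      print("P(response < 0.95):", sum(s < 0.95 for s in samples) / len(samples))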

  10. Source Code Analysis Laboratory (SCALe) for Energy Delivery Systems

    Science.gov (United States)

    2010-12-01

    ...well as the first-ever PLC (programmable logic controller) rootkit hiding the STL code. It also included zero-day vulnerabilities, spread via USB... multiple technologies, including those listed above: the X Window System, Motif, Digital Video Broadcasting Multimedia Home Platform, Secure Elec... The report's contents include an introduction, software security, SCALe, conformance assessment, CERT secure coding standards, and automated analysis tools.

  11. The FORTRAN static source code analyzer program (SAP) system description

    Science.gov (United States)

    Decker, W.; Taylor, W.; Merwarth, P.; Oneill, M.; Goorevich, C.; Waligora, S.

    1982-01-01

    A source code analyzer program (SAP) designed to assist personnel in conducting studies of FORTRAN programs is described. The SAP scans FORTRAN source code and produces reports that present statistics and measures of the statements and structures that make up a module. The processing performed by SAP, and the routines, COMMON blocks, and files used by SAP, are described. The system generation procedure for SAP is also presented.

  12. Modular ORIGEN-S for multi-physics code systems

    Energy Technology Data Exchange (ETDEWEB)

    Yesilyurt, Gokhan; Clarno, Kevin T.; Gauld, Ian C., E-mail: yesilyurtg@ornl.gov, E-mail: clarnokt@ornl.gov, E-mail: gauldi@ornl.gov [Oak Ridge National Laboratory, TN (United States); Galloway, Jack, E-mail: jack@galloways.net [Los Alamos National Laboratory, Los Alamos, NM (United States)

    2011-07-01

    The ORIGEN-S code in the SCALE 6.0 nuclear analysis code suite is a well-validated tool to calculate the time-dependent concentrations of nuclides due to isotopic depletion, decay, and transmutation for many systems in a wide range of time scales. Application areas include nuclear reactor and spent fuel storage analyses, burnup credit evaluations, decay heat calculations, and environmental assessments. Although simple to use within the SCALE 6.0 code system, especially with the ORIGEN-ARP graphical user interface, it is generally complex to use as a component within an externally developed code suite because of its tight coupling within the infrastructure of the larger SCALE 6.0 system. The ORIGEN2 code, which has been widely integrated within other simulation suites, is no longer maintained by Oak Ridge National Laboratory (ORNL), has obsolete data, and has a relatively small validation database. Therefore, a modular version of the SCALE/ORIGEN-S code was developed to simplify its integration with other software packages to allow multi-physics nuclear code systems to easily incorporate the well-validated isotopic depletion, decay, and transmutation capability to perform realistic nuclear reactor and fuel simulations. SCALE/ORIGEN-S was extensively restructured to develop a modular version that allows direct access to the matrix solvers embedded in the code. Problem initialization and the solver were segregated to provide a simple application program interface and fewer input/output operations for the multi-physics nuclear code systems. Furthermore, new interfaces were implemented to access and modify the ORIGEN-S input variables and nuclear cross-section data through external drivers. Three example drivers were implemented, in the C, C++, and Fortran 90 programming languages, to demonstrate the modular use of the new capability. This modular version of SCALE/ORIGEN-S has been embedded within several multi-physics software development projects at ORNL, including

  13. Continuous-Energy Adjoint Flux and Perturbation Calculation using the Iterated Fission Probability Method in Monte Carlo Code TRIPOLI-4® and Underlying Applications

    Science.gov (United States)

    Truchet, G.; Leconte, P.; Peneliau, Y.; Santamarina, A.; Malvagi, F.

    2014-06-01

    Pile-oscillation experiments are performed in the MINERVE reactor at CEA Cadarache to improve nuclear data accuracy. In order to precisely calculate the small reactivity variations measured in these experiments, a reference calculation needs to be achieved. This calculation may be accomplished using the continuous-energy Monte Carlo code TRIPOLI-4® by using the eigenvalue difference method. This "direct" method has shown limitations in the evaluation of very small reactivity effects because it needs to reach a very small variance associated with the reactivity in both states. To answer this problem, it has been decided to implement the exact perturbation theory in TRIPOLI-4® and, consequently, to calculate a continuous-energy adjoint flux. The Iterated Fission Probability (IFP) method was chosen because it has shown great results in some other Monte Carlo codes. The IFP method uses a forward calculation to compute the adjoint flux, and consequently, it does not rely on complex code modifications but on the physical definition of the adjoint flux as a phase-space neutron importance. In the first part of this paper, the IFP method implemented in TRIPOLI-4® is described. To illustrate the efficiency of the method, several adjoint fluxes are calculated and compared with their equivalents obtained by the deterministic code APOLLO-2. The new implementation can calculate the angular adjoint flux. In the second part, a procedure to carry out an exact perturbation calculation is described. A single-cell benchmark has been used to test the accuracy of the method, compared with the "direct" estimation of the perturbation. Once again the method based on the IFP shows good agreement, for a calculation time far shorter than that of the "direct" method. The main advantage of the method is that the relative accuracy of the reactivity variation does not depend on the magnitude of the variation itself, which allows us to calculate very small reactivity perturbations with high precision. Other applications of
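
    The limitation of the "direct" route can be seen with a back-of-the-envelope error propagation: a reactivity difference formed from two independently converged k-eff values carries both statistical uncertainties, which quickly dominate when the effect is only a few pcm. The k-eff values and uncertainties below are arbitrary numbers chosen for illustration.

      import math

      # Reactivity variation from two independent k-eff estimates, with first-order
      # propagation of their Monte Carlo uncertainties.

      def delta_rho_pcm(k_ref, k_pert):
          return (1.0 / k_ref - 1.0 / k_pert) * 1.0e5

      def sigma_delta_rho_pcm(k_ref, sig_ref, k_pert, sig_pert):
          # the two k-eff uncertainties combine in quadrature
          return math.hypot(sig_ref / k_ref ** 2, sig_pert / k_pert ** 2) * 1.0e5

      k1, s1 = 1.00000, 5.0e-5      # reference state and its 1-sigma MC uncertainty
      k2, s2 = 1.00010, 5.0e-5      # perturbed state: roughly a 10 pcm effect

      print("delta-rho =", round(delta_rho_pcm(k1, k2), 1), "pcm")
      print("1-sigma   =", round(sigma_delta_rho_pcm(k1, s1, k2, s2), 1), "pcm")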

  14. The Premar Code for the Monte Carlo Simulation of Radiation Transport In the Atmosphere; Il codice PREMAR per la simulazione Montecarlo del trasporto della radiazione dell'atmosfera

    Energy Technology Data Exchange (ETDEWEB)

    Cupini, E. [ENEA, Centro Ricerche 'Ezio Clementel', Bologna (Italy). Dipt. Innovazione]; Borgia, M.G. [ENEA, Centro Ricerche 'Ezio Clementel', Bologna (Italy). Dipt. Energia]; Premuda, M. [Consiglio Nazionale delle Ricerche, Bologna (Italy). Ist. FISBAT]

    1997-03-01

    The Monte Carlo code PREMAR is described, which allows the user to simulate radiation transport in the atmosphere in the ultraviolet-infrared frequency interval. A plane multilayer geometry is at present foreseen by the code, with an albedo option at the lower boundary surface. For a given monochromatic point source, the main quantities computed by the code are the spatial distributions of absorption by aerosols and molecules, together with the related atmospheric transmittances. Moreover, simulations of Lidar experiments are foreseen by the code, the source and telescope fields of view being assigned. To build up the appropriate probability distributions, an input data library is read by the code. For this purpose, the radiance-transmittance code LOWTRAN-7 has been conveniently adapted as a source of the library, so as to exploit its wealth of information for a large variety of atmospheric simulations. Results of applications of the PREMAR code are finally presented, with special reference to simulations of Lidar-system and radiometer experiments carried out at the Brasimone ENEA Centre by the Environment Department.

  15. 3D neutronic codes coupled with thermal-hydraulic system codes for PWR, BWR and VVER reactors

    Energy Technology Data Exchange (ETDEWEB)

    Langenbuch, S.; Velkov, K. [GRS, Garching (Germany); Lizorkin, M. [Kurchatov-Institute, Moscow (Russian Federation)] [and others

    1997-07-01

    This paper describes the objectives of code development for coupling 3D neutronics codes with thermal-hydraulic system codes. The present status of coupling ATHLET with three 3D neutronics codes for VVER- and LWR-reactors is presented. After describing the basic features of the 3D neutronic codes BIPR-8 from Kurchatov-Institute, DYN3D from Research Center Rossendorf and QUABOX/CUBBOX from GRS, first applications of coupled codes for different transient and accident scenarios are presented. The need of further investigations is discussed.

  16. Methods and computer codes for nuclear systems calculations

    Indian Academy of Sciences (India)

    B P Kochurov; A P Knyazev; A Yu Kwaretzkheli

    2007-02-01

    Some numerical methods for reactor cells, sub-critical systems and 3D models of nuclear reactors are presented. The methods are developed for steady-state and space–time calculations. The computer code TRIFON solves the space–energy problem in (, ) systems of finite height and calculates heterogeneous few-group matrix parameters of reactor cells. These parameters are used as input data for the computer code SHERHAN, which solves the 3D heterogeneous reactor equation for steady states and simulates 3D space–time neutron processes. A modification of TRIFON was developed for the simulation of space–time processes in sub-critical systems with external sources. An option of the SHERHAN code for systems with external sources is under development.

  17. Fault Risk Assessment of Underwater Vehicle Steering System Based on Virtual Prototyping and Monte Carlo Simulation

    Directory of Open Access Journals (Sweden)

    He Deyu

    2016-09-01

    Full Text Available Assessing the risks of steering system faults in underwater vehicles is a systematic human-machine-environment (HME) safety problem that involves faults in the steering system itself, the driver’s human reliability (HR) and various environmental conditions. This paper proposes a fault risk assessment method for an underwater vehicle steering system based on virtual prototyping and Monte Carlo simulation. A virtual steering system prototype was established and validated to rectify a lack of historic fault data. Fault injection and simulation were conducted to acquire fault simulation data. A Monte Carlo simulation that integrates the randomness and uncertainty of the human operator, the machine and the environment was adopted to obtain a probabilistic risk indicator. To verify the proposed method, a case of stuck rudder fault (SRF) risk assessment was studied. This method may provide a novel solution for fault risk assessment of a vehicle or other general HME systems.

  18. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation: Control modules C4, C6

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-03-01

    This Manual represents Revision 5 of the user documentation for the modular code system referred to as SCALE. The history of the SCALE code system dates back to 1969 when the current Computational Physics and Engineering Division at Oak Ridge National Laboratory (ORNL) began providing the transportation package certification staff at the U. S. Atomic Energy Commission with computational support in the use of the new KENO code for performing criticality safety assessments with the statistical Monte Carlo method. From 1969 to 1976 the certification staff relied on the ORNL staff to assist them in the correct use of codes and data for criticality, shielding, and heat transfer analyses of transportation packages. However, the certification staff learned that, with only occasional use of the codes, it was difficult to become proficient in performing the calculations often needed for an independent safety review. Thus, shortly after the move of the certification staff to the U.S. Nuclear Regulatory Commission (NRC), the NRC staff proposed the development of an easy-to-use analysis system that provided the technical capabilities of the individual modules with which they were familiar. With this proposal, the concept of the Standardized Computer Analyses for Licensing Evaluation (SCALE) code system was born. This volume is part of the manual related to the control modules for the newest updated version of this computational package.

  19. Acceptance and implementation of a system of planning computerized based on Monte Carlo; Aceptacion y puesta en marcha de un sistema de planificacion comutarizada basado en Monte Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Lopez-Tarjuelo, J.; Garcia-Molla, R.; Suan-Senabre, X. J.; Quiros-Higueras, J. Q.; Santos-Serra, A.; Marco-Blancas, N.; Calzada-Feliu, S.

    2013-07-01

    The acceptance for clinical use of the Monaco computerized planning system has been carried out. The system is based on a virtual model of the energy yield of the head of the linear electron accelerator and performs the dose calculation with an X-ray algorithm (XVMC) based on Monte Carlo. (Author)

  20. Code conversion for system design and safety analysis of NSSS

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Hae Cho; Kim, Young Tae; Choi, Young Gil; Kim, Hee Kyung [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1996-01-01

    This report describes the overall project work related to the conversion, installation and validation of computer codes used in NSSS design and safety analysis of nuclear power plants. Domain/OS computer codes for system safety analysis were installed and validated on the Apollo DN10000, and the Apollo versions were then converted and installed on the HP9000/700 series with appropriate validation. In addition, COOLII and COAST, which are Cyber-version computer codes, were converted into Apollo DN10000 and HP9000/700 versions and installed with validation. This report details the whole process of computer code conversion and installation, as well as the software verification and validation results, which are attached to this report. 12 refs., 8 figs. (author)

  1. Physical-layer network coding in coherent optical OFDM systems.

    Science.gov (United States)

    Guan, Xun; Chan, Chun-Kit

    2015-04-20

    We present the first experimental demonstration and characterization of the application of optical physical-layer network coding in coherent optical OFDM systems. It combines two optical OFDM frames to share the same link so as to enhance system throughput, while individual OFDM frames can be recovered with digital signal processing at the destined node.
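
    As a purely conceptual illustration of the physical-layer network coding principle mentioned above (not the coherent optical DSP of the paper), the relay maps the superimposed symbols from the two end nodes onto the bitwise XOR of their frames, and each destination recovers the other node's frame by XOR-ing the broadcast with its own data. A noise-free, BPSK-style sketch in Python:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two end nodes A and B each send one frame of bits simultaneously to the relay.
bits_a = rng.integers(0, 2, 16)
bits_b = rng.integers(0, 2, 16)

# BPSK mapping: bit 0 -> +1, bit 1 -> -1.  The relay observes the superposition.
sym_a = 1 - 2 * bits_a
sym_b = 1 - 2 * bits_b
superposed = sym_a + sym_b            # values in {-2, 0, +2} in this noise-free toy case

# PLNC mapping at the relay: a zero-valued superposition means the two bits differ,
# so the relay can form XOR(bits_a, bits_b) without decoding either frame separately.
xor_at_relay = (superposed == 0).astype(int)

# Each destination XORs the broadcast network-coded frame with its own bits
# to recover the other node's frame.
recovered_b_at_a = xor_at_relay ^ bits_a
recovered_a_at_b = xor_at_relay ^ bits_b

assert np.array_equal(recovered_b_at_a, bits_b)
assert np.array_equal(recovered_a_at_b, bits_a)
print("PLNC toy example: both frames recovered")
```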

  2. The CMS Monte Carlo Production System: Development and Design

    Energy Technology Data Exchange (ETDEWEB)

    Evans, D. [Fermi National Accelerator Laboratory, Batavia, IL (United States)], E-mail: evansde@fnal.gov; Fanfani, A. [Universita degli Studi di Bologna and INFN Sezione di Bologna, Bologna (Italy); Kavka, C. [INFN Sezione di Trieste, Trieste (Italy); Lingen, F. van [California Institute of Technology, Pasadena, CA (United States); Eulisse, G. [Northeastern University, Boston, MA (United States); Bacchi, W.; Codispoti, G. [Universita degli Studi di Bologna and INFN Sezione di Bologna, Bologna (Italy); Mason, D. [Fermi National Accelerator Laboratory, Batavia, IL (United States); De Filippis, N. [INFN Sezione di Bari, Bari (Italy); Hernandez, J.M. [Centro de Investigaciones Energeticas, Medioambientales y Tecnologicas, Madrid (Spain); Elmer, P. [Princeton University, Princeton, NJ (United States)

    2008-03-15

    The CMS production system has undergone a major architectural upgrade from its predecessor, with the goal of reducing the operational manpower needed and preparing for the large scale production required by the CMS physics plan. The new production system is a tiered architecture that facilitates robust and distributed production request processing and takes advantage of the multiple Grid and farm resources available to the CMS experiment.

  3. The CMS Monte Carlo Production System Development and Design

    CERN Document Server

    Evans, D; Kavka, C; Van Lingen, F; Eulisse, G; Bacchi, W; Codispoti, G; Mason, D; De Filippis, N; Hernandez J M; Elmer, P

    2008-01-01

    The CMS production system has undergone a major architectural upgrade from its predecessor, with the goal of reducing the operational manpower needed and preparing for the large scale production required by the CMS physics plan. The new production system is a tiered architecture that facilitates robust and distributed production request processing and takes advantage of the multiple Grid and farm resources available to the CMS experiment.

  4. Internal dosimetry with the Monte Carlo code GATE: validation using the ICRP/ICRU female reference computational model

    Science.gov (United States)

    Villoing, Daphnée; Marcatili, Sara; Garcia, Marie-Paule; Bardiès, Manuel

    2017-03-01

    The purpose of this work was to validate GATE-based clinical scale absorbed dose calculations in nuclear medicine dosimetry. GATE (version 6.2) and MCNPX (version 2.7.a) were used to derive dosimetric parameters (absorbed fractions, specific absorbed fractions and S-values) for the reference female computational model proposed by the International Commission on Radiological Protection in ICRP report 110. Monoenergetic photons and electrons (from 50 keV to 2 MeV) and four isotopes currently used in nuclear medicine (fluorine-18, lutetium-177, iodine-131 and yttrium-90) were investigated. Absorbed fractions, specific absorbed fractions and S-values were generated with GATE and MCNPX for 12 regions of interest in the ICRP 110 female computational model, thereby leading to 144 source/target pair configurations. Relative differences between GATE and MCNPX obtained in specific configurations (self-irradiation or cross-irradiation) are presented. Relative differences in absorbed fractions, specific absorbed fractions or S-values are below 10%, and in most cases less than 5%. Dosimetric results generated with GATE for the 12 volumes of interest are available as supplemental data. GATE can be safely used for radiopharmaceutical dosimetry at the clinical scale. This makes GATE a viable option for Monte Carlo modelling of both imaging and absorbed dose in nuclear medicine.

  5. AEOLUS: A MARKOV CHAIN MONTE CARLO CODE FOR MAPPING ULTRACOOL ATMOSPHERES. AN APPLICATION ON JUPITER AND BROWN DWARF HST LIGHT CURVES

    Energy Technology Data Exchange (ETDEWEB)

    Karalidi, Theodora; Apai, Dániel; Schneider, Glenn; Hanson, Jake R. [Steward Observatory, Department of Astronomy, University of Arizona, 933 N. Cherry Avenue, Tucson, AZ 85721 (United States); Pasachoff, Jay M., E-mail: tkaralidi@email.arizona.edu [Hopkins Observatory, Williams College, 33 Lab Campus Drive, Williamstown, MA 01267 (United States)

    2015-11-20

    Deducing the cloud cover and its temporal evolution from the observed planetary spectra and phase curves can give us major insight into the atmospheric dynamics. In this paper, we present Aeolus, a Markov chain Monte Carlo code that maps the structure of brown dwarf and other ultracool atmospheres. We validated Aeolus on a set of unique Jupiter Hubble Space Telescope (HST) light curves. Aeolus accurately retrieves the properties of the major features of the Jovian atmosphere, such as the Great Red Spot and a major 5 μm hot spot. Aeolus is the first mapping code validated on actual observations of a giant planet over a full rotational period. For this study, we applied Aeolus to J- and H-band HST light curves of 2MASS J21392676+0220226 and 2MASS J0136565+093347. Aeolus retrieves three spots at the top of the atmosphere (per observational wavelength) of these two brown dwarfs, with a surface coverage of 21% ± 3% and 20.3% ± 1.5%, respectively. The Jupiter HST light curves will be publicly available via ADS/VIZIR.

  6. Icarus: A 2D direct simulation Monte Carlo (DSMC) code for parallel computers. User`s manual - V.3.0

    Energy Technology Data Exchange (ETDEWEB)

    Bartel, T.; Plimpton, S.; Johannes, J.; Payne, J.

    1996-10-01

    Icarus is a 2D Direct Simulation Monte Carlo (DSMC) code which has been optimized for the parallel computing environment. The code is based on the DSMC method of Bird and models from free-molecular to continuum flowfields in either cartesian (x, y) or axisymmetric (z, r) coordinates. Computational particles, representing a given number of molecules or atoms, are tracked as they have collisions with other particles or surfaces. Multiple species, internal energy modes (rotation and vibration), chemistry, and ion transport are modelled. A new trace species methodology for collisions and chemistry is used to obtain statistics for small species concentrations. Gas phase chemistry is modelled using steric factors derived from Arrhenius reaction rates. Surface chemistry is modelled with surface reaction probabilities. The electron number density is either a fixed external generated field or determined using a local charge neutrality assumption. Ion chemistry is modelled with electron impact chemistry rates and charge exchange reactions. Coulomb collision cross-sections are used instead of Variable Hard Sphere values for ion-ion interactions. The electrostatic fields can either be externally input or internally generated using a Langmuir-Tonks model. The Icarus software package includes the grid generation, parallel processor decomposition, postprocessing, and restart software. The commercial graphics package, Tecplot, is used for graphics display. The majority of the software packages are written in standard Fortran.

  7. Non-thermodynamic approach to including bombardment-induced post-cascade redistribution of point defects in dynamic Monte Carlo code

    Energy Technology Data Exchange (ETDEWEB)

    Ignatova, V.A. E-mail: velislav@uia.ua.ac.be; Chakarov, I.R.; Katardjiev, I.V

    2003-04-01

    The redistribution of the elements as a result of atomic relocations produced by the ions and the recoils due to the ballistic and transport processes is investigated by making use of a dynamic Monte Carlo code. Phenomena, such as radiation-enhanced diffusion (RED) and bombardment-induced segregation (BIS) triggered by the ion bombardment may also contribute to the migration of atoms within the target. In order to include both RED and BIS in the code, we suggest an approach which is considered as an extension of the binary collision approximation, i.e. it takes place 'simultaneously' with the cascade and acts as a correction to the particle redistribution for low energies. Both RED and BIS models are based on the common approach to treat the transport processes as a result of a random migration of point defects (vacancies and interstitials) according to a probability given by a pre-defined Gaussian. The models are tested and the influence of the diffusion and segregation is illustrated in the cases of 12 keV {sup 121}Sb{sup +} implantation at low fluence in SiO{sub 2}/Si substrate and of self-sputtering of Ga{sup +} ions during profiling of SiO{sub 2}/Si interfaces.

  8. Non-thermodynamic approach to including bombardment-induced post-cascade redistribution of point defects in dynamic Monte Carlo code

    CERN Document Server

    Ignatova, V A; Katardjiev, I V

    2003-01-01

    The redistribution of the elements as a result of atomic relocations produced by the ions and the recoils due to the ballistic and transport processes is investigated by making use of a dynamic Monte Carlo code. Phenomena, such as radiation-enhanced diffusion (RED) and bombardment-induced segregation (BIS) triggered by the ion bombardment may also contribute to the migration of atoms within the target. In order to include both RED and BIS in the code, we suggest an approach which is considered as an extension of the binary collision approximation, i.e. it takes place 'simultaneously' with the cascade and acts as a correction to the particle redistribution for low energies. Both RED and BIS models are based on the common approach to treat the transport processes as a result of a random migration of point defects (vacancies and interstitials) according to a probability given by a pre-defined Gaussian. The models are tested and the influence of the diffusion and segregation is illustrated in the cases of 12 keV ...

  9. Investigation of behavior of scintillator detector of Alborz observatory array using Monte Carlo method with Geant4 code

    Directory of Open Access Journals (Sweden)

    M. Abbasian Motlagh

    2014-04-01

    Full Text Available Because of their appropriate temporal resolution, scintillator detectors are used in the Alborz observatory. In this work, the behavior of the scintillation detectors for the passage of electrons with different energies and directions was studied using the simulation code GEANT4. Pulse shapes of the scintillation light and such characteristics as the total number of photons and the rise and fall times of the optical pulses were computed for the passage of electrons with energies of 10, 100 and 1000 MeV. Variations of the scintillation pulse characteristics with the incident angle and location of the electrons were also investigated

  10. Development and Implementation of Photonuclear Cross-Section Data for Mutually Coupled Neutron-Photon Transport Calculations in the Monte Carlo N-Particle (MCNP) Radiation Transport Code

    Energy Technology Data Exchange (ETDEWEB)

    White, Morgan C. [Univ. of Florida, Gainesville, FL (United States)

    2000-07-01

    The fundamental motivation for the research presented in this dissertation was the need to develop a more accurate prediction method for the characterization of mixed radiation fields around medical electron accelerators (MEAs). Specifically, a model is developed for the simulation of neutron and other particle production from photonuclear reactions and incorporated in the Monte Carlo N-Particle (MCNP) radiation transport code. This extension of the capability within the MCNP code provides for a more accurate assessment of the mixed radiation fields. The Nuclear Theory and Applications group of the Los Alamos National Laboratory has recently provided first-of-a-kind evaluated photonuclear data for a select group of isotopes. These data provide the reaction probabilities as functions of incident photon energy, with angular and energy distribution information for all reaction products. The availability of these data is the cornerstone of the new methodology for state-of-the-art mutually coupled photon-neutron transport simulations. The dissertation includes details of the model development and implementation necessary to use the new photonuclear data within MCNP simulations. A new data format has been developed to include tabular photonuclear data. Data are processed from the Evaluated Nuclear Data Format (ENDF) to the new class ''u'' A Compact ENDF (ACE) format using a standalone processing code. MCNP modifications have been completed to enable Monte Carlo sampling of photonuclear reactions. Note that both neutron and gamma production are included in the present model. The new capability has been subjected to extensive verification and validation (V&V) testing. Verification testing has established the expected basic functionality. Two validation projects were undertaken. First, comparisons were made to benchmark data from the literature. These calculations demonstrate the accuracy of the new data and transport routines to better than 25 percent. Second

  11. A Markov Chain Monte Carlo Based Method for System Identification

    Energy Technology Data Exchange (ETDEWEB)

    Glaser, R E; Lee, C L; Nitao, J J; Hanley, W G

    2002-10-22

    This paper describes a novel methodology for the identification of mechanical systems and structures from vibration response measurements. It combines prior information, observational data and predictive finite element models to produce configurations and system parameter values that are most consistent with the available data and model. Bayesian inference and a Metropolis simulation algorithm form the basis for this approach. The resulting process enables the estimation of distributions of both individual parameters and system-wide states. Attractive features of this approach include its ability to: (1) provide quantitative measures of the uncertainty of a generated estimate; (2) function effectively when exposed to degraded conditions including: noisy data, incomplete data sets and model misspecification; (3) allow alternative estimates to be produced and compared, and (4) incrementally update initial estimates and analysis as more data becomes available. A series of test cases based on a simple fixed-free cantilever beam is presented. These results demonstrate that the algorithm is able to identify the system, based on the stiffness matrix, given applied force and resultant nodal displacements. Moreover, it effectively identifies locations on the beam where damage (represented by a change in elastic modulus) was specified.
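
    The following is a minimal sketch of the Metropolis-type estimation described above, applied to a deliberately simplified single-parameter problem (identifying a spring stiffness from noisy static displacements). The forward model, prior bounds and noise level are illustrative assumptions, not the authors' finite element setup:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical "measurements": static displacements u = F / k for known forces F,
# generated with a true stiffness k_true and additive Gaussian noise.
k_true, sigma = 2.0e4, 1.0e-4
forces = np.array([10.0, 20.0, 30.0, 40.0])
data = forces / k_true + rng.normal(0.0, sigma, forces.size)

def log_posterior(k):
    """Gaussian likelihood plus a flat prior on a physically plausible interval."""
    if not (1.0e3 < k < 1.0e5):
        return -np.inf
    residual = data - forces / k
    return -0.5 * np.sum((residual / sigma) ** 2)

# Random-walk Metropolis sampling of the stiffness.
n_steps, step = 20_000, 500.0
k = 1.0e4                      # initial guess
samples = np.empty(n_steps)
for i in range(n_steps):
    k_prop = k + rng.normal(0.0, step)
    if np.log(rng.random()) < log_posterior(k_prop) - log_posterior(k):
        k = k_prop
    samples[i] = k

burn = samples[n_steps // 2:]   # discard the first half as burn-in
print(f"posterior mean k = {burn.mean():.1f}, std = {burn.std():.1f}")
```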

  12. MONTE CARLO METHOD AND APPLICATION IN @RISK SIMULATION SYSTEM

    Directory of Open Access Journals (Sweden)

    Gabriela Ižaríková

    2015-12-01

    Full Text Available The article is an example of using the @Risk simulation software, designed for simulation in a Microsoft Excel spreadsheet, to demonstrate a universal method of solving problems. The simulation is experimenting with computer models based on the real production process in order to optimize the production processes or the system. The simulation model allows performing a number of experiments, analysing them, evaluating, optimizing and afterwards applying the results to the real system. A simulation model in general represents the modelled system by means of mathematical formulations and logical relations. In the model it is possible to distinguish controlled inputs (for instance investment costs) and random inputs (for instance demand), which are transformed by the model into outputs (for instance the mean value of profit). In a simulation experiment the controlled inputs are chosen at the beginning and the random (stochastic) inputs are generated randomly. Simulation belongs to the quantitative tools which can be used as a support for decision making.
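
    As a small illustration of the controlled-input/random-input scheme described above, the spreadsheet-style Monte Carlo experiment can be reproduced in a few lines of Python; the investment figure and demand distribution below are hypothetical, not taken from the article:

```python
import numpy as np

rng = np.random.default_rng(42)

# Controlled inputs: fixed investment and unit economics.
investment = 50_000.0
unit_price, unit_cost = 25.0, 17.0

# Random input: demand, drawn from an assumed distribution.
n_trials = 100_000
demand = rng.normal(loc=8_000, scale=1_500, size=n_trials).clip(min=0)

# Output: profit for each simulated scenario.
profit = demand * (unit_price - unit_cost) - investment

print(f"mean profit  : {profit.mean():10.0f}")
print(f"std of profit: {profit.std():10.0f}")
print(f"P(loss)      : {np.mean(profit < 0):10.3f}")
```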

  13. Construction of a computational exposure model for dosimetric calculations using the EGS4 Monte Carlo code and voxel phantoms; Construcao de um modelo computacional de exposicao para calculos dosimetricos utilizando o codigo Monte Carlo EGS4 e fantomas de voxels

    Energy Technology Data Exchange (ETDEWEB)

    Vieira, Jose Wilson

    2004-07-15

    The MAX phantom has been developed from existing segmented images of a male adult body, in order to achieve a representation as close as possible to the anatomical properties of the reference adult male specified by the ICRP. In computational dosimetry, MAX can simulate the geometry of a human body under exposure to ionizing radiations, internal or external, with the objective of calculating the equivalent dose in organs and tissues for occupational, medical or environmental purposes of the radiation protection. This study presents a methodology used to build a new computational exposure model MAX/EGS4: the geometric construction of the phantom; the development of the algorithm of one-directional, divergent, and isotropic radioactive sources; new methods for calculating the equivalent dose in the red bone marrow and in the skin, and the coupling of the MAX phantom with the EGS4 Monte Carlo code. Finally, some results of radiation protection, in the form of conversion coefficients between equivalent dose (or effective dose) and free air-kerma for external photon irradiation are presented and discussed. Comparing the results presented with similar data from other human phantoms it is possible to conclude that the coupling MAX/EGS4 is satisfactory for the calculation of the equivalent dose in radiation protection. (author)

  14. Validation of the coupling of mesh models to GEANT4 Monte Carlo code for simulation of internal sources of photons; Validacao do acoplamento de modelos mesh ao codigo Monte Carlo GEANT4 para simulacao de fontes de fotons internas

    Energy Technology Data Exchange (ETDEWEB)

    Caribe, Paulo Rauli Rafeson Vasconcelos, E-mail: raulycaribe@hotmail.com [Universidade Federal Rural de Pernambuco (UFRPE), Recife, PE (Brazil). Fac. de Fisica; Cassola, Vagner Ferreira; Kramer, Richard; Khoury, Helen Jamil [Universidade Federal de Pernambuco (UFPE), Recife, PE (Brazil). Departamento de Energia Nuclear

    2013-07-01

    The use of three-dimensional models described by polygonal meshes in numerical dosimetry enables more accurate modeling of complex objects than the use of simple solids. The objectives of this work were to validate the coupling of mesh models to the Monte Carlo code GEANT4 and to evaluate the influence of the number of vertices in the simulations used to obtain absorbed fractions of energy (AFEs). Validation of the coupling was performed for internal sources of photons with energies between 10 keV and 1 MeV, for spherical geometries described by GEANT4 and for three-dimensional models with different numbers of vertices and triangular or quadrilateral faces modeled using the Blender program. As a result it was found that there were no significant differences between the AFEs for objects described by mesh models and objects described using solid volumes of GEANT4. Provided that the shape and the volume are maintained, decreasing the number of vertices used to describe an object does not significantly influence the dosimetric data, but it significantly decreases the time required to perform the dosimetric calculations, especially for energies below 100 keV.

  15. Comparison of a 3-D multi-group SN particle transport code with Monte Carlo for intracavitary brachytherapy of the cervix uteri.

    Science.gov (United States)

    Gifford, Kent A; Wareing, Todd A; Failla, Gregory; Horton, John L; Eifel, Patricia J; Mourtada, Firas

    2009-12-03

    A patient dose distribution was calculated by a 3D multi-group SN particle transport code for intracavitary brachytherapy of the cervix uteri and compared to previously published Monte Carlo results. A Cs-137 LDR intracavitary brachytherapy CT data set was chosen from our clinical database. MCNPX, version 2.5.c, was used to calculate the dose distribution. A 3D multi-group SN particle transport code, Attila version 6.1.1, was used to simulate the same patient. Each patient applicator was built in SolidWorks, a mechanical design package, and then assembled with a coordinate transformation and rotation for the patient. The SolidWorks-exported applicator geometry was imported into Attila for calculation. Dose matrices were overlaid on the patient CT data set. Dose volume histograms and point doses were compared. The MCNPX calculation required 14.8 hours, whereas the Attila calculation required 22.2 minutes on a 1.8 GHz AMD Opteron CPU. Agreement between the Attila and MCNPX dose calculations at the ICRU 38 points was within +/- 3%. Calculated doses to the 2 cc and 5 cc volumes of highest dose differed by not more than +/- 1.1% between the two codes. Dose and DVH overlays agreed well qualitatively. Attila can calculate dose accurately and efficiently for this Cs-137 CT-based patient geometry. Our data showed that a three-group cross-section set is adequate for Cs-137 computations. Future work is aimed at implementing an optimized version of Attila for radiotherapy calculations.

  16. Production of energetic light fragments in extensions of the CEM and LAQGSM event generators of the Monte Carlo transport code MCNP6

    Science.gov (United States)

    Mashnik, Stepan G.; Kerby, Leslie M.; Gudima, Konstantin K.; Sierk, Arnold J.; Bull, Jeffrey S.; James, Michael R.

    2017-03-01

    We extend the cascade-exciton model (CEM), and the Los Alamos version of the quark-gluon string model (LAQGSM), event generators of the Monte Carlo N-Particle transport code version 6 (MCNP6), to describe production of energetic light fragments (LF) heavier than 4He from various nuclear reactions induced by particles and nuclei at energies up to about 1 TeV/nucleon. In these models, energetic LF can be produced via Fermi breakup, preequilibrium emission, and coalescence of cascade particles. Initially, we study several variations of the Fermi breakup model and choose the best option for these models. Then, we extend the modified exciton model (MEM) used by these codes to account for a possibility of multiple emission of up to 66 types of particles and LF (up to 28Mg) at the preequilibrium stage of reactions. Then, we expand the coalescence model to allow coalescence of LF from nucleons emitted at the intranuclear cascade stage of reactions and from lighter clusters, up to fragments with mass numbers A ≤ 7 in the case of CEM, and A ≤ 12 in the case of LAQGSM. Next, we modify MCNP6 to allow calculating and outputting spectra of LF and heavier products with arbitrary mass and charge numbers. The improved version of CEM is implemented into MCNP6. Finally, we test the improved versions of CEM, LAQGSM, and MCNP6 on a variety of measured nuclear reactions. The modified codes give an improved description of energetic LF from particle- and nucleus-induced reactions, showing good agreement with a variety of available experimental data. They have an improved predictive power compared to the previous versions and can be used as reliable tools in simulating applications involving such types of reactions.

  17. Rateless Space Time Block Code for Massive MIMO Systems

    Directory of Open Access Journals (Sweden)

    Ali H. Alqahtani

    2014-01-01

    Full Text Available This paper presents a rateless space time block code (RSTBC) for massive MIMO systems. The paper illustrates the basis of rateless space time code deployments in massive MIMO transmissions over wireless erasure channels. In such channels, data may be lost or may not be decodable at the receiver due to a variety of factors such as channel fading, interference, or antenna element failure. We show that RSTBC guarantees the reliability of the system in such cases, even when the data loss rate is 25% or more. In such a highly lossy channel, conventional fixed-rate codes fail to perform well, particularly when channel state information is not available at the transmitter. Simulation results are provided to demonstrate the BER performance and the spectral efficiency of the proposed scheme.

  18. Monte Carlo Simulation of Magnetic System in the Tsallis Statistics

    OpenAIRE

    1999-01-01

    We apply the Broad Histogram Method to an Ising system in the context of the recently reformulated Generalized Thermostatistics, and we claim it to be a very efficient simulation tool for this non-extensive statistics. Results are obtained for the nearest-neighbour version of the Ising model for a range of values of the $q$ parameter of Generalized Thermostatistics. We found evidence that the 2D-Ising model does not undergo phase transitions at finite temperatures except for the extensive ...

  19. Monte Carlo analysis of accelerator-driven systems studies on spallation neutron yield and energy gain

    CERN Document Server

    Hashemi-Nezhad, S R; Westmeier, W; Bamblevski, V P; Krivopustov, M I; Kulakov, B A; Sosnin, A N; Wan, J S; Odoj, R

    2001-01-01

    The neutron yield in the interaction of protons with lead and uranium targets has been studied using the LAHET code system. The dependence of the neutron multiplicity on target dimensions and proton energy has been calculated and the dependence of the energy amplification on the proton energy has been investigated in an accelerator-driven system of a given effective multiplication coefficient. Some of the results are compared with experimental findings and with similar calculations by the DCM/CEM code of Dubna and the FLUKA code system used in CERN. (14 refs).

  20. Dose estimation in space using the Particle and Heavy-Ion Transport code System (PHITS)

    Energy Technology Data Exchange (ETDEWEB)

    Gustafsson, Katarina

    2009-06-15

    The radiation risks in space are well known, but work still needs to be done in order to fully understand the radiation effects on humans and how to minimize the risks, especially now that activity in space is increasing, with plans for missions to the Moon and Mars. One goal is to develop transport codes that can estimate the radiation environment and its effects. These would be useful tools for reducing the radiation effects when designing and planning space missions. The Particle and Heavy-Ion Transport code System, PHITS, is a three-dimensional Monte Carlo code with great possibilities for performing radiation transport calculations and estimating radiation exposure quantities such as absorbed dose, equivalent dose and dose equivalent. Therefore a benchmarking against experiments performed on the ISS was done, and the influence of different materials on the shielding was also estimated. The simulated results already agree reasonably with the measurements, but can most likely be significantly improved when more realistic shielding geometries are used. This indicates that PHITS is a useful tool for estimating radiation risks for humans in space and for designing the shielding of spacecraft

  1. Modeling and commissioning of a Clinac 600 CD by Monte Carlo method using the BEAMnrc and DOSXYZnrc codes

    Energy Technology Data Exchange (ETDEWEB)

    Junior, Reginaldo G., E-mail: reginaldo.junior@ifmg.edu.br [Instituto Federal de Minas Gerais (IFMG), Formiga, MG (Brazil). Departamento de Engenharia Eletrica; Oliveira, Arno H. de; Sousa, Romulo V., E-mail: arnoheeren@gmail.com, E-mail: romuloverdolin@yahoo.com.br [Universidade Federal de Minas Gerais (UFMG), Belo Horizonte, MG (Brazil). Departamento de Engenharia Nuclear; Mourao, Arnaldo P., E-mail: apratabhz@gmail.com [Centro Federal de Educacao Tecnologica de Minas Gerais, Belo Horizonte, MG (Brazil)

    2015-07-01

    This paper reports the modeling of a Clinac 600 CD linear accelerator with the BEAMnrc application, derived from the EGSnrc radiation transport code, indicating relevant details of a modeling process that traditionally involves difficulties. The accelerator was commissioned by comparing experimental dosimetric data with the computational data obtained with the DOSXYZnrc application. The quantities compared in the dosimetry process were field profiles and percentage depth dose curves obtained in a cubic water phantom of 30 cm edge. In all comparisons made, the computational data showed satisfactory precision, and discrepancies with the experimental data did not exceed 3%, proving the effectiveness of the model. Both the accelerator model and the computational dosimetry methodology revealed the need for adjustments that will probably allow more accurate data to be obtained than in the simulations presented here. These adjustments are mainly associated with improving the resolution of the field profiles, the voxelization of the phantom and the optimization of the computing time. (author)

  2. Design of Light Multi-layered Shields for Use in Diagnostic Radiology and Nuclear Medicine via MCNP5 Monte Carlo Code

    Directory of Open Access Journals (Sweden)

    Mehdi Zehtabian

    2015-09-01

    Full Text Available Introduction Lead-based shields are the most widely used attenuators in X-ray and gamma ray fields. The heavy weight, toxicity and corrosion of lead have led researchers towards the development of non-lead shields. Materials and Methods The purpose of this study was to design multi-layered shields for protection against X-rays and gamma rays in diagnostic radiology and nuclear medicine. In this study, cubic slabs composed of several materials with high atomic numbers, i.e., lead, barium, bismuth, gadolinium, tin and tungsten, were simulated using the MCNP5 Monte Carlo code. Cubic slabs (30×30×0.05 cm3) were simulated at a 50 cm distance from the point photon source. The X-ray spectra of 80 kVp and 120 kVp were obtained using IPEM Report 78. The photon flux following the use of each shield was obtained inside cubic tally cells (1×1×0.5 cm3) at a 5 cm distance from the shields. The photon attenuation properties of multi-layered shields (i.e., two, three, four and five layers) composed of non-lead materials were also obtained via Monte Carlo simulations. Results Among the different shield designs proposed in this study, the three-layered shield composed of tungsten, bismuth and gadolinium showed the most significant attenuation properties in radiology, with acceptable shielding at the 140 keV energy used in nuclear medicine. Conclusion According to the results, materials with k-edges equal to energies common in diagnostic radiology can be proper substitutes for lead shields.
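
    Before running a full Monte Carlo simulation, the relative merit of a candidate layer stack can be gauged with the narrow-beam exponential attenuation law. The sketch below uses hypothetical attenuation coefficients and thicknesses and neglects the build-up and scatter that the MCNP5 simulations account for:

```python
from math import exp

# Hypothetical linear attenuation coefficients (1/cm) at a single photon energy
# and layer thicknesses (cm); real values depend on the energy and material data.
layers = [
    ("tungsten",   32.0, 0.02),
    ("bismuth",    25.0, 0.02),
    ("gadolinium", 18.0, 0.01),
]

# Narrow-beam transmission through the stack: T = exp(-sum(mu_i * t_i)).
optical_thickness = sum(mu * t for _, mu, t in layers)
transmission = exp(-optical_thickness)

print(f"total mu*t      : {optical_thickness:.3f}")
print(f"transmitted frac: {transmission:.3%}")
print(f"attenuated frac : {1 - transmission:.3%}")
```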

  3. Revised SWAT. The integrated burnup calculation code system

    Energy Technology Data Exchange (ETDEWEB)

    Suyama, Kenya; Mochizuki, Hiroki [Department of Fuel Cycle Safety Research, Nuclear Safety Research Center, Tokai Research Establishment, Japan Atomic Energy Research Institute, Tokai, Ibaraki (Japan); Kiyosumi, Takehide [The Japan Research Institute, Ltd., Tokyo (Japan)

    2000-07-01

    SWAT is an integrated burnup code system developed for the analysis of post-irradiation examinations, the transmutation of radioactive waste, and burnup credit problems. This report gives an outline and a user's manual of the revised SWAT. The revised SWAT includes expanded functions, support for additional machines, and corrections of several bugs reported by users of the previous SWAT. (author)

  4. Efficient Implementation of the Barnes-Hut Octree Algorithm for Monte Carlo Simulations of Charged Systems

    CERN Document Server

    Gan, Zecheng

    2013-01-01

    Computer simulation with Monte Carlo is an important tool to investigate the function and equilibrium properties of many biological and soft-matter systems soluble in solvents. The appropriate treatment of long-range electrostatic interactions is essential for these charged systems, but remains a challenging problem for large-scale simulations. We have developed an efficient Barnes-Hut treecode algorithm for electrostatic evaluation in Monte Carlo simulations of Coulomb many-body systems. The algorithm is based on a divide-and-conquer strategy and a fast update of the octree data structure in each trial move through a local adjustment procedure. We test the accuracy of the tree algorithm, and use it in computer simulations of the electric double layer near a spherical interface. It has been shown that the computational cost of the Monte Carlo method with treecode acceleration scales as $\log N$ in each move. For a typical system with ten thousand particles, by using the new algorithm, the speed has b...

  5. Coarse-grained stochastic processes and Monte Carlo simulations in lattice systems

    CERN Document Server

    Katsoulakis, M A; Vlachos, D G

    2003-01-01

    In this paper we present a new class of coarse-grained stochastic processes and Monte Carlo simulations, derived directly from microscopic lattice systems and describing mesoscopic length scales. As our primary example, we mainly focus on a microscopic spin-flip model for the adsorption and desorption of molecules between a surface adjacent to a gas phase, although a similar analysis carries over to other processes. The new model can capture large scale structures, while retaining microscopic information on intermolecular forces and particle fluctuations. The requirement of detailed balance is utilized as a systematic design principle to guarantee correct noise fluctuations for the coarse-grained model. We carry out a rigorous asymptotic analysis of the new system using techniques from large deviations and present detailed numerical comparisons of coarse-grained and microscopic Monte Carlo simulations. The coarse-grained stochastic algorithms provide large computational savings without increasing programming ...

  6. Exploring Monte Carlo methods

    CERN Document Server

    Dunn, William L

    2012-01-01

    Exploring Monte Carlo Methods is a basic text that describes the numerical methods that have come to be known as "Monte Carlo." The book treats the subject generically through the first eight chapters and, thus, should be of use to anyone who wants to learn to use Monte Carlo. The next two chapters focus on applications in nuclear engineering, which are illustrative of uses in other fields. Five appendices are included, which provide useful information on probability distributions, general-purpose Monte Carlo codes for radiation transport, and other matters. The famous "Buffon's needle proble

  7. RUN DMC: An efficient, parallel code for analyzing Radial Velocity Observations using N-body Integrations and Differential Evolution Markov chain Monte Carlo

    CERN Document Server

    Nelson, Benjamin E; Payne, Matthew J

    2013-01-01

    In the 20+ years of Doppler observations of stars, scientists have uncovered a diverse population of extrasolar multi-planet systems. A common technique for characterizing the orbital elements of these planets is Markov chain Monte Carlo (MCMC), using a Keplerian model with random walk proposals and paired with the Metropolis-Hastings algorithm. For approximately a couple of dozen planetary systems with Doppler observations, there are strong planet-planet interactions due to the system being in or near a mean-motion resonance (MMR). An N-body model is often required to accurately describe these systems. Further computational difficulties arise from exploring a high-dimensional parameter space ($\\sim$7 x number of planets) that can have complex parameter correlations. To surmount these challenges, we introduce a differential evolution MCMC (DEMCMC) applied to radial velocity data while incorporating self-consistent N-body integrations. Our Radial velocity Using N-body DEMCMC (RUN DMC) algorithm improves upon t...
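
    The ingredient that distinguishes this sampler from plain random-walk MCMC is the differential evolution proposal, which builds each step from the difference of two other chains' states. A generic sketch of that proposal loop, run here on a toy Gaussian target instead of an N-body radial-velocity likelihood, is:

```python
import numpy as np

rng = np.random.default_rng(3)

def log_target(theta):
    """Toy 2-D Gaussian stand-in for the radial-velocity likelihood."""
    return -0.5 * np.sum(theta ** 2)

n_chains, n_dim, n_steps = 8, 2, 5_000
gamma = 2.38 / np.sqrt(2 * n_dim)          # usual DE-MC scaling factor
chains = rng.normal(size=(n_chains, n_dim))
logp = np.array([log_target(c) for c in chains])

for _ in range(n_steps):
    for i in range(n_chains):
        # Pick two distinct other chains and propose a step along their difference.
        a, b = rng.choice([j for j in range(n_chains) if j != i], 2, replace=False)
        prop = chains[i] + gamma * (chains[a] - chains[b]) \
               + rng.normal(0.0, 1e-4, n_dim)          # small jitter term
        logp_prop = log_target(prop)
        if np.log(rng.random()) < logp_prop - logp[i]:
            chains[i], logp[i] = prop, logp_prop

print("posterior mean ~", chains.mean(axis=0))
```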

  8. Evaluation of the material assignment method used by a Monte Carlo treatment planning system.

    Science.gov (United States)

    Isambert, A; Brualla, L; Lefkopoulos, D

    2009-12-01

    An evaluation of the conversion process from Hounsfield units (HU) to material composition in computerised tomography (CT) images, employed by the Monte Carlo based treatment planning system ISOgray (DOSIsoft), is presented. A boundary in the HU for the material conversion between "air" and "lung" materials was determined based on a study using 22 patients. The dosimetric consequence of the new boundary was quantitatively evaluated for a lung patient plan.
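
    Conceptually, the HU-to-material conversion evaluated in this work amounts to a set of interval thresholds applied voxel by voxel; the bin boundaries in the sketch below are hypothetical placeholders, not the values determined for ISOgray:

```python
import numpy as np

# Hypothetical HU boundaries (the paper's point is precisely that the air/lung
# boundary must be chosen carefully; these numbers are placeholders only).
MATERIAL_BINS = [
    (-1024, -950, "air"),
    (-950,  -200, "lung"),
    (-200,   100, "soft tissue"),
    (100,   3000, "bone"),
]

def assign_material(hu):
    """Return the material label for a single Hounsfield-unit value."""
    for low, high, name in MATERIAL_BINS:
        if low <= hu < high:
            return name
    return "undefined"

ct_slice = np.array([[-1000, -600], [40, 400]])
labels = np.vectorize(assign_material)(ct_slice)
print(labels)
```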

  9. Calculation of electron dose to target cells in a complex environment by Monte Carlo code ''CELLDOSE''

    Energy Technology Data Exchange (ETDEWEB)

    Hindie, Elif; Moretti, Jean-Luc [Hopital Saint-Louis, Service de Medecine Nucleaire, Paris (France)]|[Universite Paris 7, Imagerie Moleculaire Diagnostique et Ciblage Therapeutique, Paris (France); Champion, Christophe [Universite Paul Verlaine, Laboratoire de Physique Moleculaire et des Collisions, Metz Institut de Physique, Metz (France); Zanotti-Fregonara, Paolo; Ravasi, Laura [Commissariat a l' Energie Atomique, DSV/I2BM/SHFJ/LIME, Orsay (France); Rubello, Domenico [Instituto Oncologico Veneto (IOV) - IRCCS, Department of Nuclear Medicine - PET Centre, Rovigo (Italy); Colas-Linhart, Nicole [Faculte de Medecine Xavier Bichat, Laboratoire de Biophysique, Paris (France)

    2009-01-15

    We used the Monte Carlo code ''CELLDOSE'' to assess the dose received by specific target cells from electron emissions in a complex environment. {sup 131}I in a simulated thyroid was used as a model. Thyroid follicles were represented by 170 {mu}m diameter spherical units made of a lumen of 150 {mu}m diameter containing colloidal matter and a peripheral layer of 10 {mu}m thick thyroid cells. Neighbouring follicles are 4 {mu}m apart. {sup 131}I was assumed to be homogeneously distributed in the lumen and absent in the cells. We first assessed the electron dose distribution in a single follicle. Then, we expanded the simulation by progressively adding neighbouring layers of follicles, so as to reassess the electron dose to this single follicle with the contribution of the added layers included. The electron dose gradient around a point source showed that the {sup 131}I electron dose is close to zero beyond 2,100 {mu}m. Therefore, we studied all contributions to the central follicle deriving from follicles within 12 orders of neighbourhood (15,624 follicles surrounding the central follicle). The dose to the colloid of the single follicle was twice as high as the dose to the thyroid cells. Even when all neighbours were taken into account, the dose in the central follicle remained heterogeneous. For a 1-Gy average dose to tissue, the dose to the colloidal matter was 1.168 Gy, the dose to the thyroid cells was 0.982 Gy, and the dose to the inter-follicular tissue was 0.895 Gy. Analysis of the different contributions to the thyroid cell dose showed that 17.3% of the dose derived from the colloidal matter of their own follicle, while the remaining 82.7% was delivered by the surrounding follicles. On the basis of these data, it is shown that when different follicles contain different concentrations of {sup 131}I, the impact in terms of cell dose heterogeneity can be important. By means of {sup 131}I in the thyroid as a theoretical model, we showed how a Monte Carlo code can be used to map

  10. Monte Carlo determination of the conversion coefficients Hp(3)/Ka in a right cylinder phantom with 'PENELOPE' code. Comparison with 'MCNP' simulations.

    Science.gov (United States)

    Daures, J; Gouriou, J; Bordy, J M

    2011-03-01

    This work has been performed within the frame of the European Union ORAMED project (Optimisation of RAdiation protection for MEDical staff). The main goal of the project is to improve standards of protection for medical staff in procedures resulting in potentially high exposures and to develop methodologies for better assessing, and for reducing, exposures to medical staff. Work Package WP2 is involved in the development of practical eye-lens dosimetry in interventional radiology. This study is complementary to the part of the ENEA report concerning the calculation, with the MCNP-4C code, of the conversion factors related to the operational quantity H(p)(3). In this study, a set of energy- and angular-dependent conversion coefficients (H(p)(3)/K(a)), in the newly proposed square cylindrical phantom made of ICRU tissue, have been calculated with the Monte Carlo codes PENELOPE and MCNP5. The H(p)(3) values have been determined in terms of absorbed dose, according to the definition of this quantity, and also with the kerma approximation as formerly reported in ICRU reports. At low photon energies (up to 1 MeV), the results obtained with the two methods are consistent. Nevertheless, large differences appear at higher energies. This is mainly due to the lack of electronic equilibrium, especially for small angles of incidence. The values of the conversion coefficients obtained with the MCNP-4C code published by ENEA agree well with the kerma approximation calculations obtained with PENELOPE. We also performed the same calculations with the code MCNP5 with two types of tallies: F6 for the kerma approximation and *F8 for estimating the absorbed dose, which is, as is known, due to secondary electrons. The PENELOPE and MCNP5 results agree for the kerma approximation and for the absorbed dose calculation of H(p)(3) and prove that, for photon energies larger than 1 MeV, the transport of the secondary electrons has to be taken into account.

  11. Calculation of absorbed doses in sphere volumes around the Mammosite using the Monte Carlo simulation code MCNPX; Calculo de dosis absorbida en volumenes esfericos alrededor del Mammosite utilizando el codigo de simulacion Monte Carlo MCNPX

    Energy Technology Data Exchange (ETDEWEB)

    Rojas C, E. L. [ININ, Carretera Mexico-Toluca s/n, Ocoyoacac 52750, Estado de Mexico (Mexico)

    2008-07-01

    The objective of this study is to investigate the changes observed in the absorbed doses in mammary gland tissue when it is irradiated with a high-dose-rate device known as the Mammosite and materials different from the tissue that constitutes the mammary gland are introduced. The device and the mammary gland are modeled with the MCNPX code (2005 version), and the absorbed doses in the tissue are calculated when small volumes of air or calcium are introduced into the system. (Author)

  12. Comparison of depth-dose distributions of proton therapeutic beams calculated by means of logical detectors and ionization chamber modeled in Monte Carlo codes

    Science.gov (United States)

    Pietrzak, Robert; Konefał, Adam; Sokół, Maria; Orlef, Andrzej

    2016-08-01

    The success of proton therapy depends strongly on the precision of treatment planning. Dose distribution in biological tissue may be obtained from Monte Carlo simulations using various scientific codes, making it possible to perform very accurate calculations. However, there are many factors affecting the accuracy of modeling. One of them is the structure of the objects, called bins, registering a dose. In this work the influence of the bin structure on the dose distributions was examined. The MCNPX code calculations of the Bragg curve for a 60 MeV proton beam were done in two ways: using simple logical detectors, i.e. volumes defined in water, and using a precise model of the ionization chamber used in clinical dosimetry. The results of the simulations were verified experimentally in a water phantom with a Marcus ionization chamber. The average local difference between the relative doses measured in the water phantom and those calculated by means of the logical detectors was 1.4% over the first 25 mm, whereas over the full depth range this difference was 1.6%, for a maximum uncertainty in the calculations of less than 2.4% and a maximum measuring error of 1%. In the case of the relative doses calculated with the ionization chamber model this average difference was somewhat greater, being 2.3% at depths up to 25 mm and 2.4% over the full range of depths, for a maximum uncertainty in the calculations of 3%. In the dose calculations the ionization chamber model does not offer any additional advantages over the logical detectors. The results provided by both models are similar and in good agreement with the measurements; however, the logical detector approach is a more time-effective method.

  13. Reliability Assessment of Active Distribution System Using Monte Carlo Simulation Method

    Directory of Open Access Journals (Sweden)

    Shaoyun Ge

    2014-01-01

    Full Text Available In this paper we treat the reliability assessment of an active distribution system at low and high DG penetration levels using the Monte Carlo simulation method. The problem is formulated as a two-case program: a low-penetration simulation and a high-penetration simulation. The load-shedding strategy and the simulation process for each FMEA step are introduced in detail. Results indicate that the integration of DG can improve the reliability of the system if the system is operated actively.

  14. Comparing Subspace Methods for Closed Loop Subspace System Identification by Monte Carlo Simulations

    Directory of Open Access Journals (Sweden)

    David Di Ruscio

    2009-10-01

    Full Text Available A novel, promising bootstrap subspace system identification algorithm for both open and closed loop systems is presented. An outline of the SSARX algorithm by Jansson (2003) is given and a modified SSARX algorithm is presented. Some methods from the literature that are consistent for closed loop subspace system identification are discussed and compared to a recently published subspace algorithm that works for both open and closed loop data, i.e., the DSR_e algorithm, as well as to the bootstrap method. Experimental comparisons are performed by Monte Carlo simulations.

  15. Improving the efficiency of Monte Carlo simulations of systems that undergo temperature-driven phase transitions

    Science.gov (United States)

    Velazquez, L.; Castro-Palacio, J. C.

    2013-07-01

    Recently, Velazquez and Curilef proposed a methodology to extend Monte Carlo algorithms based on a canonical ensemble which aims to overcome slow sampling problems associated with temperature-driven discontinuous phase transitions. We show in this work that Monte Carlo algorithms extended with this methodology also exhibit a remarkable efficiency near a critical point. Our study is performed for the particular case of a two-dimensional four-state Potts model on a square lattice with periodic boundary conditions. This analysis reveals that the extended version of Metropolis importance sampling is more efficient than the usual Swendsen-Wang and Wolff cluster algorithms. These results demonstrate the effectiveness of this methodology to improve the efficiency of MC simulations of systems that undergo any type of temperature-driven phase transition.
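
    For orientation, the baseline against which the extended algorithm is compared is ordinary single-spin Metropolis sampling of the q-state Potts model. A minimal sketch for q = 4 on a periodic square lattice follows (this is the standard algorithm, not the extended canonical-ensemble method of Velazquez and Curilef):

```python
import numpy as np

rng = np.random.default_rng(7)

# 4-state Potts model with J = 1; T_c = 1/ln(1 + sqrt(q)) ~ 0.91, so beta ~ 1.10
q, L, beta = 4, 16, 1.0 / 0.91
spins = rng.integers(0, q, size=(L, L))

def local_energy(s, i, j):
    """Local bond energy of site (i, j): E_i = -sum over nearest neighbours of delta(s_i, s_nn)."""
    nbrs = [s[(i + 1) % L, j], s[(i - 1) % L, j], s[i, (j + 1) % L], s[i, (j - 1) % L]]
    return -sum(1 for n in nbrs if n == s[i, j])

for sweep in range(200):
    for _ in range(L * L):
        i, j = rng.integers(0, L, 2)
        old = spins[i, j]
        new = (old + rng.integers(1, q)) % q        # propose a different state
        e_old = local_energy(spins, i, j)
        spins[i, j] = new
        e_new = local_energy(spins, i, j)
        if rng.random() >= np.exp(-beta * (e_new - e_old)):
            spins[i, j] = old                       # reject: restore the old state

# Each bond is counted twice when summing local energies, hence the factor 0.5.
energy = 0.5 * sum(local_energy(spins, i, j) for i in range(L) for j in range(L)) / (L * L)
print("energy per site:", energy)
```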

  16. Monte Carlo calculation for the development of a BNCT neutron source (1eV-10KeV) using MCNP code.

    Science.gov (United States)

    El Moussaoui, F; El Bardouni, T; Azahra, M; Kamili, A; Boukhal, H

    2008-09-01

    Different materials have been studied in order to produce an epithermal neutron beam between 1 eV and 10 keV, which is extensively used to irradiate patients with brain tumors such as GBM. For this purpose, we have studied three different neutron moderators (H(2)O, D(2)O and BeO) and their combinations, four reflectors (Al(2)O(3), C, Bi, and Pb) and two filters (Cd and Bi). The results of the calculation showed that the best assembly configuration corresponds to the combination of the three moderators H(2)O, BeO and D(2)O together with an Al(2)O(3) reflector and the two filters Cd+Bi, which optimizes the epithermal neutron fraction of the spectrum to 72% and minimizes the thermal neutron fraction to 4%, and thus can be used to treat deep brain tumors. The calculations have been performed by means of the Monte Carlo N-Particle code (MCNP 5C). Our results strongly encourage further study of the irradiation of the head with epithermal neutron fields.
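
    The figures of merit quoted above (the epithermal fraction and the residual thermal fraction) are obtained from the simulated flux spectrum by integrating over energy bands. A minimal post-processing sketch on a hypothetical binned spectrum:

```python
import numpy as np

# Hypothetical energy band edges (eV) and band-integrated fluxes from a tally;
# the numbers are placeholders, not results from the paper.
edges = np.array([1e-3, 1.0, 1e4, 1e7])          # thermal | epithermal | fast
flux = np.array([4.0, 72.0, 24.0])               # arbitrary units per band

total = flux.sum()
thermal_frac = flux[0] / total        # E < 1 eV
epithermal_frac = flux[1] / total     # 1 eV <= E < 10 keV (BNCT band of interest)
fast_frac = flux[2] / total           # E >= 10 keV

print(f"thermal   : {thermal_frac:.1%}")
print(f"epithermal: {epithermal_frac:.1%}")
print(f"fast      : {fast_frac:.1%}")
```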

  17. Polynomial system solving for decoding linear codes and algebraic cryptanalysis

    OpenAIRE

    2009-01-01

    This thesis is devoted to applying symbolic methods to the problems of decoding linear codes and of algebraic cryptanalysis. The paradigm we employ here is as follows. We reformulate the initial problem in terms of systems of polynomial equations over a finite field. The solution(s) of such systems should yield a way to solve the initial problem. Our main tools for handling polynomials and polynomial systems in such a paradigm are the technique of Gröbner bases and normal form reductions. The ...
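
    As a toy illustration of this paradigm (reformulate as polynomials, then reduce with Gröbner bases), the snippet below computes a lexicographic Gröbner basis of a small system over GF(2) with SymPy. It only shows the mechanics, not a realistic decoding or cryptanalytic instance, and assumes SymPy's `groebner` with its `modulus` option:

```python
from sympy import groebner, symbols

x, y, z = symbols("x y z")

# A small polynomial system over GF(2); in decoding/cryptanalysis applications the
# equations would encode parity checks or cipher relations, plus the field
# equations v**2 - v = 0 that force Boolean solutions.
system = [
    x * y + z,
    y * z + x + 1,
    x**2 - x, y**2 - y, z**2 - z,      # field equations for GF(2)
]

G = groebner(system, x, y, z, modulus=2, order="lex")
print(G)
# For a zero-dimensional system such as this one, the lex basis supports
# back-substitution to enumerate the candidate codeword/key bits.
```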

  18. Photovoltaic Power Systems and the National Electrical Code: Suggested Practices

    Energy Technology Data Exchange (ETDEWEB)

    None

    2002-02-01

    This guide provides information on how the National Electrical Code (NEC) applies to photovoltaic systems. The guide is not intended to supplant or replace the NEC; it paraphrases the NEC where it pertains to photovoltaic systems and should be used with the full text of the NEC. Users of this guide should be thoroughly familiar with the NEC and know the engineering principles and hazards associated with electrical and photovoltaic power systems. The information in this guide is the best available at the time of publication and is believed to be technically accurate; it will be updated frequently.

  19. Photovoltaic power systems and the National Electrical Code: Suggested practices

    Energy Technology Data Exchange (ETDEWEB)

    Wiles, J. [New Mexico State Univ., Las Cruces, NM (United States). Southwest Technology Development Inst.

    1996-12-01

    This guide provides information on how the National Electrical Code (NEC) applies to photovoltaic systems. The guide is not intended to supplant or replace the NEC; it paraphrases the NEC where it pertains to photovoltaic systems and should be used with the full text of the NEC. Users of this guide should be thoroughly familiar with the NEC and know the engineering principles and hazards associated with electrical and photovoltaic power systems. The information in this guide is the best available at the time of publication and is believed to be technically accurate; it will be updated frequently. Application of this information and results obtained are the responsibility of the user.

  20. Channel estimation for physical layer network coding systems

    CERN Document Server

    Gao, Feifei; Wang, Gongpu

    2014-01-01

    This SpringerBrief presents channel estimation strategies for physical layer network coding (PLNC) systems. Along with a review of PLNC architectures, this brief examines new challenges brought by the special structure of bi-directional two-hop transmissions that are different from the traditional point-to-point systems and unidirectional relay systems. The authors discuss the channel estimation strategies over typical fading scenarios, including frequency flat fading, frequency selective fading and time selective fading, as well as future research directions. Chapters explore the performa

  1. Dose perturbation in the presence of metallic implants: treatment planning system versus Monte Carlo simulations

    Science.gov (United States)

    Wieslander, Elinore; Knöös, Tommy

    2003-10-01

    An increasing number of patients receiving radiation therapy have metallic implants such as hip prostheses. Therefore, beams are normally set up to avoid irradiation through the implant; however, this cannot always be accomplished. In such situations, knowledge of the accuracy of the used treatment planning system (TPS) is required. Two algorithms, the pencil beam (PB) and the collapsed cone (CC), are implemented in the studied TPS. Comparisons are made with Monte Carlo simulations for 6 and 18 MV. The studied materials are steel, CoCrMo, Orthinox® (a stainless steel alloy and registered trademark of Stryker Corporation), TiAlV and Ti. Monte Carlo simulated depth dose curves and dose profiles are compared to CC and PB calculated data. The CC algorithm shows overall a better agreement with Monte Carlo than the PB algorithm. Thus, it is recommended to use the CC algorithm to get the most accurate dose calculation both for the planning target volume and for tissues adjacent to the implants when beams are set up to pass through implants.

  2. Research of Wavelet Based Multicarrier Modulation System with Near Shannon Limited Codes

    Institute of Scientific and Technical Information of China (English)

    ZHANG Haixia; YUAN Dongfeng; ZHAO Feng

    2005-01-01

    In this paper, using turbo codes and low-density parity-check (LDPC) codes as the channel coding schemes, wavelet based multicarrier modulation (WMCM) systems are proposed and investigated for different transmission scenarios. The bit error rate (BER) performance of these two near-Shannon-limit codes is simulated and compared for various code parameters. Simulation results show that turbo coded WMCM (TCWMCM) performs better than LDPC coded WMCM (LDPC-CWMCM) on both AWGN and Rayleigh fading channels when the two kinds of codes use the same code parameters.

  3. Efficiency of rejection-free dynamic Monte Carlo methods for homogeneous spin models, hard disk systems, and hard sphere systems.

    Science.gov (United States)

    Watanabe, Hiroshi; Yukawa, Satoshi; Novotny, M A; Ito, Nobuyasu

    2006-08-01

    We construct asymptotic arguments for the relative efficiency of rejection-free Monte Carlo (MC) methods compared to the standard MC method. We find that the efficiency is proportional to exp(const·β) in the Ising model, β^(1/2) in the classical XY model, and β in the classical Heisenberg spin system, with inverse temperature β, regardless of the dimension. The efficiency in hard-particle systems is also obtained, and found to be proportional to (ρ_cp - ρ)^(-d), with closest packing density ρ_cp, density ρ, and dimension d of the system. We construct and implement a rejection-free Monte Carlo method for the hard-disk system. The rejection-free method has a greater computational efficiency at high densities, and the density dependence of the efficiency is as predicted by our arguments.
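
    To illustrate the rejection-free idea discussed above, the Python sketch below implements a single n-fold-way (BKL) move for a 2D Ising model: every spin is assigned its Metropolis flip rate, one spin is selected with probability proportional to that rate, and the clock advances by an exponentially distributed waiting time. Rates are rebuilt from scratch at every call purely for clarity; the hard-disk construction of the record is not reproduced, and all parameters are illustrative.

      import numpy as np

      def rejection_free_step(spins, beta, rng):
          """One rejection-free (n-fold-way) move for a 2D Ising model, E = -sum over bonds of s_i s_j.

          Returns the time increment associated with the move.  Rates are rebuilt from
          scratch every call for clarity; production codes update spin classes incrementally."""
          # Local field from the four periodic neighbours of every site.
          h = (np.roll(spins, 1, 0) + np.roll(spins, -1, 0) +
               np.roll(spins, 1, 1) + np.roll(spins, -1, 1))
          dE = 2.0 * spins * h                          # energy change if each spin were flipped
          rates = np.minimum(1.0, np.exp(-beta * dE))   # Metropolis flip rates
          total = rates.sum()
          # Pick one spin with probability proportional to its flip rate, then flip it.
          flat = rng.choice(rates.size, p=(rates / total).ravel())
          i, j = np.unravel_index(flat, rates.shape)
          spins[i, j] *= -1
          return -np.log(rng.random()) / total          # waiting time of the accepted move

      rng = np.random.default_rng(1)
      spins = rng.choice([-1, 1], size=(16, 16))
      elapsed = sum(rejection_free_step(spins, beta=0.6, rng=rng) for _ in range(1000))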

  4. Stellarator-specific developments for the systems code PROCESS

    Energy Technology Data Exchange (ETDEWEB)

    Warmer, Felix; Beidler, Craig; Dinklage, Andreas; Feng, Yuehe; Geiger, Joachim; Schauer, Felix; Turkin, Yuriy; Wolf, Robert; Xanthopoulos, Pavlos [Max-Planck-Institut fuer Plasmaphysik, Wendelsteinstrasse 1, D-17491 Greifswald (Germany); Knight, Peter; Ward, David [Culham Centre for Fusion Energy, Abingdon, Oxfordshire, OX14 3DB (United Kingdom)

    2014-07-01

    The ultimate goal of fusion research is to demonstrate the feasibility of economic production of electricity. The most promising concepts to achieve this by magnetic confinement are the Tokamak and the Stellarator. Systems codes are used to study the general properties of a fusion power plant. Built in a modular way, systems codes describe the physical and technical properties of the power plant components. For the Helical Advanced Stellarator (HELIAS) concept, modules have been developed in the frame of the existing Tokamak systems code PROCESS. These include: a geometry model based on Fourier coefficients which represents the complex 3-D plasma shape, a divertor model which assumes diffusive cross-field transport and high radiation at the X-point, a coil model which uses a scaling based on the HELIAS design, and a transport model which either employs empirical confinement time scalings or sophisticated 1-D collisional and turbulent transport calculations. This approach aims at a direct comparison between Tokamak and Stellarator power plant designs.

  5. The ICPC coding system in pharmacy : developing a subset, ICPC-Ph

    NARCIS (Netherlands)

    van Mil, JWF; Brenninkmeijer, R; Tromp, TFJ

    1998-01-01

    The ICPC system is a coding system developed for general medical practice, to be able to code the GP-patient encounters and other actions. Some of the codes can be easily used by community pharmacists to code complaints and diseases in pharmaceutical care practice. We developed a subset of the ICPC

  6. Advanced coding techniques for few mode transmission systems.

    Science.gov (United States)

    Okonkwo, Chigo; van Uden, Roy; Chen, Haoshuo; de Waardt, Huug; Koonen, Ton

    2015-01-26

    We experimentally verify the advantage of employing advanced coding schemes such as space-time coding and 4-dimensional modulation formats to enhance the transmission performance of a 3-mode transmission system. The performance gains of space-time block codes for extending the optical signal-to-noise ratio tolerance in multiple-input multiple-output optical coherent spatial division multiplexing transmission systems, with respect to single-mode transmission performance, are evaluated. By exploiting the spatial diversity that few-mode fibers offer, significant OSNR gains of 3.2, 4.1, 4.9, and 6.8 dB at the hard-decision forward error correction limit are demonstrated for DP-QPSK, 8, 16 and 32 QAM, respectively, with respect to single-mode fiber back-to-back performance. Furthermore, by employing 4D constellations, 6 × 28 Gbaud 128 set-partitioned quadrature amplitude modulation is shown to outperform conventional 8 QAM transmission performance, whilst carrying an additional 0.5 bit/symbol.

  7. Research and implementation of flexible coding system oriented multi-view

    Institute of Scientific and Technical Information of China (English)

    ZHANG Xuhui; ZHANG Xu; NING Ruxin

    2007-01-01

    Starting from the requirements of a product data management (PDM) system for a flexible coding system, the principle of a multi-view-oriented flexible coding system is analyzed. The generation and use of codes should be associated with the context of the object. The architecture of the multi-view-oriented flexible coding system is studied and the implementation class diagram of the system is designed. The system supports the definition of five types of code segments, provides tools for flexibly defining coding rules, and drives the automatic generation of object codes in different views (contexts). Based on these characteristics of the system, the coding of parts is taken as an example to validate and illustrate the flexible coding process.

  8. CARMEN: a Monte Carlo planning system based on linear programming from direct apertures

    Energy Technology Data Exchange (ETDEWEB)

    Ureba, A.; Pereira-Barbeiro, A. R.; Jimenez-Ortega, E.; Baeza, J. A.; Salguero, F. J.; Leal, A.

    2013-07-01

    The use of Monte Carlo (MC) simulation has been shown to improve the accuracy of dose calculation compared to the analytical algorithms implemented in commercial treatment planning systems, especially in the non-standard situations typical of complex techniques such as IMRT and VMAT. Our treatment planning system, called CARMEN, is based on full simulation of both the beam transport in the accelerator head and in the patient, and is designed for efficient operation in terms of the accuracy of the estimates and the required computation times. (Author)

  9. [Behavior ethogram and PAE coding system of Cervus nippon sichuanicus].

    Science.gov (United States)

    Qi, Wen-Hua; Yue, Bi-Song; Ning, Ji-Zu; Jiang, Xue-Mei; Quan, Qiu-Mei; Guo, Yan-Shu; Mi, Jun; Zuo, Lin; Xiong, Yuan-Qing

    2010-02-01

    A monthly five-day observation period at 06:00-18:00 from March to November 2007 was used to record the behavioral processes, contents, and results, and the surrounding habitats of Sichuan sika deer (Cervus nippon sichuanicus) in Donglie, Chonger, and Reer villages of Tiebu Natural Reserve of Sichuan Province. The behavioral ethogram, the vigilance behavior ethogram and the corresponding PAE (posture, act, and environment) coding system of the Sichuan sika deer were established, filling a gap in the PAE coding of ungulate vigilance behaviors. A total of 11 postures, 83 acts, and 136 behaviors were recorded and distinguished, and the relative frequency of each behavior in relation to gender, age, and season was described. Compared with other ungulates, the behavioral repertoire of Sichuan sika deer was largely similar to that of other cervids.

  10. Medium-rate speech coding simulator for mobile satellite systems

    Science.gov (United States)

    Copperi, Maurizio; Perosino, F.; Rusina, F.; Albertengo, G.; Biglieri, E.

    1986-01-01

    Channel modeling and error protection schemes for speech coding are described. A residual excited linear predictive (RELP) coder for bit rates of 4.8, 7.2, and 9.6 kbit/sec is outlined. The coder at 9.6 kbit/sec incorporates a number of channel error protection techniques, such as bit interleaving, error correction codes, and parameter repetition. Results of formal subjective experiments (DRT and DAM tests) under various channel conditions reveal that the proposed coder outperforms conventional LPC-10 vocoders by two subjective categories, thus confirming the suitability of the RELP coder at 9.6 kbit/sec for good-quality speech transmission in mobile satellite systems.

  11. Security Concerns and Countermeasures in Network Coding Based Communications Systems

    DEFF Research Database (Denmark)

    Talooki, Vahid; Bassoli, Riccardo; Roetter, Daniel Enrique Lucani

    2015-01-01

    This survey paper shows the state of the art in security mechanisms, where a deep review of the current research and the status of this topic is carried out. We start by introducing network coding and its variety of applications in enhancing current traditional networks. In particular, we analyze two key protocol types, namely, state-aware and stateless protocols, specifying the benefits and disadvantages of each of them. We also present the key security assumptions of network coding (NC) systems as well as a detailed analysis of the security goals and threats, both passive and active. This paper also presents a detailed taxonomy and a timeline of the different NC security mechanisms and schemes reported in the literature. Currently proposed security mechanisms and schemes for NC are then classified. Finally, a timeline of these mechanisms and schemes is presented.

  12. Analysis of the track- and dose-averaged LET and LET spectra in proton therapy using the GEANT4 Monte Carlo code

    Energy Technology Data Exchange (ETDEWEB)

    Guan, Fada; Peeler, Christopher; Taleei, Reza; Randeniya, Sharmalee; Ge, Shuaiping; Mirkovic, Dragan; Mohan, Radhe; Titt, Uwe, E-mail: UTitt@mdanderson.org [Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Boulevard, Houston, Texas 77030 (United States); Bronk, Lawrence [Department of Experimental Radiation Oncology, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Boulevard, Houston, Texas 77030 (United States); Geng, Changran [Department of Nuclear Science and Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China and Department of Radiation Oncology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts 02114 (United States); Grosshans, David [Department of Experimental Radiation Oncology, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Boulevard, Houston, Texas 77030 and Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Boulevard, Houston, Texas 77030 (United States)

    2015-11-15

    Purpose: The motivation of this study was to find and eliminate the cause of errors in dose-averaged linear energy transfer (LET) calculations from therapeutic protons in small targets, such as biological cell layers, calculated using the GEANT4 Monte Carlo code. Furthermore, the purpose was also to provide a recommendation to select an appropriate LET quantity from GEANT4 simulations to correlate with biological effectiveness of therapeutic protons. Methods: The authors developed a particle tracking step based strategy to calculate the average LET quantities (track-averaged LET, LET_t, and dose-averaged LET, LET_d) using GEANT4 for different tracking step size limits. A step size limit refers to the maximally allowable tracking step length. The authors investigated how the tracking step size limit influenced the calculated LET_t and LET_d of protons with six different step limits ranging from 1 to 500 μm in a water phantom irradiated by a 79.7-MeV clinical proton beam. In addition, the authors analyzed the detailed stochastic energy deposition information including fluence spectra and dose spectra of the energy-deposition-per-step of protons. As a reference, the authors also calculated the averaged LET and analyzed the LET spectra combining the Monte Carlo method and the deterministic method. Relative biological effectiveness (RBE) calculations were performed to illustrate the impact of different LET calculation methods on the RBE-weighted dose. Results: Simulation results showed that the step limit effect was small for LET_t but significant for LET_d. This resulted from differences in the energy-deposition-per-step between the fluence spectra and dose spectra at different depths in the phantom. Using the Monte Carlo particle tracking method in GEANT4 can result in incorrect LET_d calculation results in the dose plateau region for small step limits. The erroneous LET_d results can be attributed to the algorithm to

  13. Advanced Error-Control Coding Methods Enhance Reliability of Transmission and Storage Data Systems

    Directory of Open Access Journals (Sweden)

    K. Vlcek

    2003-04-01

    Iterative coding systems are currently being proposed and accepted for many future systems such as next-generation wireless transmission and storage systems. The text gives an overview of the state of the art in iteratively decoded FEC (forward error-correction) error-control systems. Such systems can typically achieve capacity to within a fraction of a dB at unprecedentedly low complexities. Using a single code requires very long code words, and consequently a very complex coding system. One way around the problem of achieving very low error probabilities is the application of turbo coding (TC). A general model of a concatenated coding system is shown, and an algorithm for turbo codes is given in this paper.

  14. Comparative Neutronics Analysis of DIMPLE S06 Criticality Benchmark with Contemporary Reactor Core Analysis Computer Code Systems

    Directory of Open Access Journals (Sweden)

    Wonkyeong Kim

    2015-01-01

    A high-leakage core has been known to be a challenging problem not only for the two-step homogenization approach but also for direct heterogeneous approaches. In this paper the DIMPLE S06 core, which is a small high-leakage core, has been analyzed by a direct heterogeneous modeling approach and by a two-step homogenization modeling approach, using contemporary code systems developed for reactor core analysis. The focus of this work is a comprehensive comparative analysis of the conventional approaches and codes with a small core design, the DIMPLE S06 critical experiment. The calculation procedure for the two approaches is explicitly presented in this paper. The comprehensive comparative analysis covers the key neutronics parameters: multiplication factor and assembly power distribution. Comparison of the two-group homogenized cross sections from the lattice physics codes shows that the generated transport cross sections differ significantly depending on the transport approximation used to treat the anisotropic scattering effect. The necessity of assembly discontinuity factors (ADFs) to correct the discontinuity at the assembly interfaces is clearly demonstrated by the flux distributions and the results of the two-step approach. Finally, the two approaches give consistent results for all codes, while the comparison with the reference solution generated by MCNP shows significant errors except for the other Monte Carlo code, SERPENT2.

  15. Performance of Superposition Coded Broadcast/Unicast Service Overlay System

    Science.gov (United States)

    Yoon, Seokhyun; Kim, Donghee

    The system-level performance of a superposition coded broadcast/unicast service overlay system is considered. A cellular network providing unicast service only is an interference-limited system, in which increasing the transmission power does not help improve the network throughput, especially when the frequency reuse factor is close to 1. In such cases, the amount of power that does not contribute to improving the throughput can be considered “unused.” This situation motivates us to use the unused power for broadcast services, which can be efficiently provided in OFDM-based single frequency networks as in digital multimedia broadcast systems. In this paper, we investigate the performance of such a broadcast/unicast overlay system in which a single-frequency broadcast service is superimposed on a unicast cellular service. Alternative service multiplexing using FDM/TDM is also considered for comparison.

  16. Hierarchical sparse coding in the sensory system of Caenorhabditis elegans.

    Science.gov (United States)

    Zaslaver, Alon; Liani, Idan; Shtangel, Oshrat; Ginzburg, Shira; Yee, Lisa; Sternberg, Paul W

    2015-01-27

    Animals with compact sensory systems face an encoding problem in which a small number of sensory neurons are required to encode information about the surrounding complex environment. Using Caenorhabditis elegans worms as a model, we ask how chemical stimuli are encoded by a small and highly connected sensory system. We first generated a comprehensive library of transgenic worms where each animal expresses a genetically encoded calcium indicator in individual sensory neurons. This library includes the vast majority of the sensory system in C. elegans. Imaging from individual sensory neurons while subjecting the worms to various stimuli allowed us to compile a comprehensive functional map of the sensory system at single neuron resolution. The functional map reveals that despite the dense wiring, chemosensory neurons represent the environment using sparse codes. Moreover, although anatomically closely connected, chemo- and mechano-sensory neurons are functionally segregated. In addition, the code is hierarchical, where few neurons participate in encoding multiple cues, whereas other sensory neurons are stimulus specific. This encoding strategy may have evolved to mitigate the constraints of a compact sensory system.

  17. Finite Size Effect in Path Integral Monte Carlo Simulations of 4He Systems

    Institute of Scientific and Technical Information of China (English)

    ZHAO Xing-Wen; CHENG Xin-Lu

    2008-01-01

    Path integral Monte Carlo (PIMC) simulations are a powerful computational method to study interacting quantum systems at finite temperatures. In this work, PIMC has been applied to study the finite size effect of the simulated systems of 4He. We determine the energy as a function of temperature at saturated-vapor-pressure (SVP) conditions in the temperature range T ∈ [1.0 K, 4.0 K], and the equation of state (EOS) in the ground state, for systems consisting of 32, 64 and 128 4He atoms, respectively. We find that the energy at SVP is influenced significantly by the size of the simulated system in the temperature range T ∈ [2.1 K, 3.0 K], and that the larger the system, the better the agreement with the experimental values; the EOS, in contrast, appears to be insensitive to the system size.

  18. PARALLELIZATION AND PERFECTION OF MCNP MONTE CARLO PARTICLE TRANSPORT CODE IN MPI

    Institute of Scientific and Technical Information of China (English)

    邓力; 刘杰; 张文勇

    2003-01-01

    The particle transport Monte Carlo code MCNP was parallelized with MPI (Message Passing Interface) in 1999. However, because a leap-type random number generator was adopted, some differences existed between the parallel and the serial results. The same results have now been achieved by using segmented random number sequences. For the applied problem, the speedup is nearly linear, reaching 53 on 64 processors, with a parallel efficiency of up to 83%.

  19. Event-chain Monte Carlo algorithms for hard-sphere systems.

    Science.gov (United States)

    Bernard, Etienne P; Krauth, Werner; Wilson, David B

    2009-11-01

    In this paper we present the event-chain algorithms, which are fast Markov-chain Monte Carlo methods for hard spheres and related systems. In a single move of these rejection-free methods, an arbitrarily long chain of particles is displaced, and long-range coherent motion can be induced. Numerical simulations show that event-chain algorithms clearly outperform the conventional Metropolis method. Irreversible versions of the algorithms, which violate detailed balance, improve the speed of the method even further. We also compare our method with a recent implementation of the molecular-dynamics algorithm.
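
    The hard-disk implementation benchmarked in this record is two-dimensional; as a much-reduced illustration of the lifting idea, the Python sketch below performs event-chain moves for hard rods on a one-dimensional ring, where a fixed total displacement is carried forward and handed over to whichever rod is hit. The density, chain length and rod size are illustrative only.

      import numpy as np

      def event_chain_move(x, sigma, box, ell, rng):
          """One event-chain move for hard rods of length sigma on a ring of length box.

          x must be cyclically ordered (rods cannot pass each other).  A total displacement
          ell is carried in the +x direction; on contact it is transferred ("lifted") to the
          rod that was hit."""
          n = len(x)
          i = rng.integers(n)                     # start the chain on a random rod
          remaining = ell
          while remaining > 0.0:
              j = (i + 1) % n
              gap = (x[j] - x[i]) % box - sigma   # free space before rod i touches rod j
              step = min(gap, remaining)
              x[i] = (x[i] + step) % box
              remaining -= step
              i = j                               # lift: the hit rod carries the rest of the chain

      rng = np.random.default_rng(2)
      n, sigma, box = 50, 1.0, 100.0
      x = np.sort(rng.random(n)) * (box - n * sigma) + np.arange(n) * sigma  # valid start
      for _ in range(10000):
          event_chain_move(x, sigma, box, ell=2.5, rng=rng)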

  20. Algorithm and application of Monte Carlo simulation for multi-dispersive copolymerization system

    Institute of Scientific and Technical Information of China (English)

    凌君; 沈之荃; 陈万里

    2002-01-01

    A Monte Carlo algorithm has been established for a multi-dispersive copolymerization system, based on the experimental data of copolymer molecular weight and dispersion obtained via GPC measurement. The program simulates the insertion of every monomer unit and records the structure and microscopic sequence of every chain of various lengths. It has been applied successfully to the ring-opening copolymerization of 2,2-dimethyltrimethylene carbonate (DTC) with ε-caprolactone (ε-CL). The simulation coincides with the experimental results and provides microscopic data such as triad fractions and lengths of homopolymer segments, which are difficult to obtain by experiments. The algorithm also presents a uniform framework for copolymerization studies under other, more complicated mechanisms.
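
    For illustration, the short Python sketch below grows copolymer chains one monomer at a time under the terminal (Mayo-Lewis) model and tallies triad fractions, which is the kind of microscopic sequence information mentioned above. The reactivity ratios, feed composition and chain lengths are invented for the example and are not the DTC/ε-CL parameters of this record.

      import numpy as np
      from collections import Counter

      def grow_chain(length, f_A, r_A, r_B, rng):
          """Grow one copolymer chain under the terminal model.

          f_A is the mole fraction of monomer A in the feed (assumed constant);
          r_A, r_B are reactivity ratios.  Returns the chain as a string of 'A'/'B'."""
          f_B = 1.0 - f_A
          chain = ['A' if rng.random() < f_A else 'B']
          for _ in range(length - 1):
              if chain[-1] == 'A':
                  # Probability that a chain ending in A adds another A.
                  p_AA = r_A * f_A / (r_A * f_A + f_B)
                  chain.append('A' if rng.random() < p_AA else 'B')
              else:
                  p_BB = r_B * f_B / (r_B * f_B + f_A)
                  chain.append('B' if rng.random() < p_BB else 'A')
          return ''.join(chain)

      rng = np.random.default_rng(3)
      chains = [grow_chain(200, f_A=0.5, r_A=0.8, r_B=1.2, rng=rng) for _ in range(500)]
      triads = Counter(c[i:i + 3] for c in chains for i in range(len(c) - 2))
      total = sum(triads.values())
      print({t: round(n / total, 3) for t, n in sorted(triads.items())})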

  1. Performance of a space-time block coded code division multiple access system over Nakagami-m fading channels

    Science.gov (United States)

    Yu, Xiangbin; Dong, Tao; Xu, Dazhuan; Bi, Guangguo

    2010-09-01

    By introducing an orthogonal space-time coding scheme, multiuser code division multiple access (CDMA) systems with different space time codes are given, and corresponding system performance is investigated over a Nakagami-m fading channel. A low-complexity multiuser receiver scheme is developed for space-time block coded CDMA (STBC-CDMA) systems. The scheme can make full use of the complex orthogonality of space-time block coding to simplify the high decoding complexity of the existing scheme. Compared to the existing scheme with exponential decoding complexity, it has linear decoding complexity. Based on the performance analysis and mathematical calculation, the average bit error rate (BER) of the system is derived in detail for integer m and non-integer m, respectively. As a result, a tight closed-form BER expression is obtained for STBC-CDMA with an orthogonal spreading code, and an approximate closed-form BER expression is attained for STBC-CDMA with a quasi-orthogonal spreading code. Simulation results show that the proposed scheme can achieve almost the same performance as the existing scheme with low complexity. Moreover, the simulation results for average BER are consistent with the theoretical analysis.
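
    As a stripped-down illustration of the orthogonal combining that the low-complexity receiver above exploits, the Python sketch below simulates a plain two-transmit-antenna Alamouti space-time block code with BPSK over flat Rayleigh fading and estimates the bit error rate. The CDMA spreading and the Nakagami-m channel of the record are not modeled, and the SNR and block count are illustrative.

      import numpy as np

      rng = np.random.default_rng(4)
      n_pairs, snr_db = 200000, 10.0
      es = 1.0                                   # total transmit energy per symbol period
      n0 = es / 10 ** (snr_db / 10)

      # BPSK symbol pairs (s1, s2); each antenna radiates half the power.
      s1 = rng.choice([-1.0, 1.0], n_pairs)
      s2 = rng.choice([-1.0, 1.0], n_pairs)

      # Flat Rayleigh channel, constant over the two symbol periods of one block.
      h1 = (rng.standard_normal(n_pairs) + 1j * rng.standard_normal(n_pairs)) / np.sqrt(2)
      h2 = (rng.standard_normal(n_pairs) + 1j * rng.standard_normal(n_pairs)) / np.sqrt(2)
      noise = lambda: np.sqrt(n0 / 2) * (rng.standard_normal(n_pairs)
                                         + 1j * rng.standard_normal(n_pairs))

      # Alamouti transmission: period 1 sends (s1, s2), period 2 sends (-s2*, s1*).
      r1 = np.sqrt(es / 2) * (h1 * s1 + h2 * s2) + noise()
      r2 = np.sqrt(es / 2) * (-h1 * np.conj(s2) + h2 * np.conj(s1)) + noise()

      # Orthogonal linear combining decouples the two symbols.
      y1 = np.conj(h1) * r1 + h2 * np.conj(r2)
      y2 = np.conj(h2) * r1 - h1 * np.conj(r2)

      ber = 0.5 * (np.mean(np.sign(y1.real) != s1) + np.mean(np.sign(y2.real) != s2))
      print(f"BER at {snr_db} dB: {ber:.2e}")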

  2. Validation of a commercial TPS based on the VMC(++) Monte Carlo code for electron beams: commissioning and dosimetric comparison with EGSnrc in homogeneous and heterogeneous phantoms.

    Science.gov (United States)

    Ferretti, A; Martignano, A; Simonato, F; Paiusco, M

    2014-02-01

    The aim of the present work was the validation of the VMC(++) Monte Carlo (MC) engine implemented in the Oncentra Masterplan (OMTPS) and used to calculate the dose distribution produced by the electron beams (energy 5-12 MeV) generated by the linear accelerator (linac) Primus (Siemens), shaped by a digital variable applicator (DEVA). The BEAMnrc/DOSXYZnrc (EGSnrc package) MC model of the linac head was used as a benchmark. Commissioning results for both MC codes were evaluated by means of 1D Gamma Analysis (2%, 2 mm), calculated with a home-made Matlab (The MathWorks) program, comparing the calculations with the measured profiles. The results of the commissioning of OMTPS were good [average gamma index (γ) > 97%]; some mismatches were found with large beams (size ≥ 15 cm). The optimization of the BEAMnrc model required increasing the beam exit window to match the calculated and measured profiles (final average γ > 98%). Then OMTPS dose distribution maps were compared with DOSXYZnrc with a 2D Gamma Analysis (3%, 3 mm), in 3 virtual water phantoms: (a) with an air step, (b) with an air insert, and (c) with a bone insert. The OMTPS and EGSnrc dose distributions with the air-water step phantom were in very high agreement (γ ∼ 99%), while for heterogeneous phantoms there were differences of about 9% in the air insert and of about 10-15% in the bone region. This is due to the Masterplan implementation of VMC(++) which reports the dose as "dose to water", instead of "dose to medium".
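
    The 1D gamma analysis mentioned above can be reproduced in a few lines; the Python sketch below computes a global gamma index with 2%/2 mm criteria for a toy pair of profiles. It is a generic textbook implementation under the stated assumptions (global normalization to the reference maximum, identical position grids), not the authors' Matlab program.

      import numpy as np

      def gamma_1d(x_ref, d_ref, x_eval, d_eval, dose_crit=0.02, dist_crit=2.0):
          """Global 1D gamma index of an evaluated profile against a reference profile.

          dose_crit is a fraction of the reference maximum, dist_crit is in the same
          units as the position axes (e.g. mm).  Returns one gamma value per reference point."""
          d_norm = dose_crit * d_ref.max()
          gam = np.empty_like(d_ref, dtype=float)
          for k, (x0, d0) in enumerate(zip(x_ref, d_ref)):
              dist2 = ((x_eval - x0) / dist_crit) ** 2
              dose2 = ((d_eval - d0) / d_norm) ** 2
              gam[k] = np.sqrt(np.min(dist2 + dose2))
          return gam

      # Toy example: a slightly shifted Gaussian profile against its reference.
      x = np.linspace(-30.0, 30.0, 601)                  # mm, 0.1 mm spacing
      ref = 100.0 * np.exp(-x ** 2 / (2 * 8.0 ** 2))
      ev = 100.5 * np.exp(-(x - 0.5) ** 2 / (2 * 8.0 ** 2))
      g = gamma_1d(x, ref, x, ev)
      print(f"gamma pass rate (2%/2 mm): {100.0 * np.mean(g <= 1.0):.1f}%")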

  3. Output factor comparison of Monte Carlo and measurement for Varian TrueBeam 6 MV and 10 MV flattening filter-free stereotactic radiosurgery system.

    Science.gov (United States)

    Cheng, Jason Y; Ning, Holly; Arora, Barbara C; Zhuge, Ying; Miller, Robert W

    2016-05-08

    The dose measurements of small field sizes, such as the conical collimators used in stereotactic radiosurgery (SRS), are a significant challenge due to many factors including source occlusion, detector size limitation, and lack of lateral electronic equilibrium. One useful tool in dealing with the small field effect is Monte Carlo (MC) simulation. In this study, we report a comparison of Monte Carlo simulations and measurements of output factors for the Varian SRS system with conical collimators for energies of 6 MV flattening filter-free (6 MV) and 10 MV flattening filter-free (10 MV) on the TrueBeam accelerator. Monte Carlo simulations of Varian's SRS system for 6 MV and 10 MV photon energies with cone sizes of 17.5 mm, 15.0 mm, 12.5 mm, 10.0 mm, 7.5 mm, 5.0 mm, and 4.0 mm were performed using EGSnrc (release V4 2.4.0) codes. Varian's version-2 phase-space files for 6 MV and 10 MV of the TrueBeam accelerator were utilized in the Monte Carlo simulations. Two small diode detectors, the Edge (Sun Nuclear) and the Small Field Detector (SFD) (IBA Dosimetry), were used to measure the output factors. Significant errors may result if detector correction factors are not applied to small field dosimetric measurements. Although the machine-specific k_{Qclin,Qmsr}^{fclin,fmsr} correction factors for the diode detectors were not available in this study, correction factors were applied utilizing published studies conducted under similar conditions. For cone diameters greater than or equal to 12.5 mm, the differences between output factors for the Edge detector, SFD detector, and MC simulations are within 3.0% for both energies. For cone diameters below 12.5 mm, the output factor differences exhibit greater variations.

  4. Keno-Nr a Monte Carlo Code Simulating the Californium -252-SOURCE-DRIVEN Noise Analysis Experimental Method for Determining Subcriticality

    Science.gov (United States)

    Ficaro, Edward Patrick

    The ^252Cf-source-driven noise analysis (CSDNA) method requires the measurement of the cross power spectral density (CPSD) G_23(ω) between a pair of neutron detectors (subscripts 2 and 3) located in or near the fissile assembly, and the CPSDs G_12(ω) and G_13(ω) between the neutron detectors and an ionization chamber 1 containing ^252Cf, also located in or near the fissile assembly. The key advantage of this method is that the subcriticality of the assembly can be obtained from the ratio of spectral densities, G_12*(ω)G_13(ω) / [G_11(ω)G_23(ω)], using a point kinetic model formulation which is independent of the detector's properties and a reference measurement. The multigroup Monte Carlo code KENO-NR was developed to eliminate the dependence of the measurement on the point kinetic formulation. This code utilizes time-dependent, analog neutron tracking to simulate the experimental method, in addition to the underlying nuclear physics, as closely as possible. From a direct comparison of simulated and measured data, the calculational model and cross sections are validated for the calculation, and KENO-NR can then be rerun to provide a distributed-source k_eff calculation. Depending on the fissile assembly, a few hours to a couple of days of computation time are needed for a typical simulation executed on a desktop workstation. In this work, KENO-NR demonstrated the ability to accurately estimate the measured ratio of spectral densities from experiments using capture detectors performed on uranium metal cylinders, a cylindrical tank filled with aqueous uranyl nitrate, and arrays of safe storage bottles filled with uranyl nitrate. Good agreement was also seen between simulated and measured values of the prompt neutron decay constant from the fitted CPSDs. Poor agreement was seen between simulated and measured results using composite ^6Li-glass-plastic scintillators at large subcriticalities for the tank of
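
    As a pointer to how the spectral-density ratio quoted above is formed in practice, the Python sketch below estimates G_12*(ω)G_13(ω)/[G_11(ω)G_23(ω)] from three detector time series with SciPy's Welch-based estimators. The surrogate signals, sampling rate and segment length are invented for illustration; they stand in for the ionization-chamber and neutron-detector waveforms, not for KENO-NR output.

      import numpy as np
      from scipy.signal import csd, welch

      def spectral_ratio(det1, det2, det3, fs, nperseg=4096):
          """Ratio G12*(w) G13(w) / [G11(w) G23(w)] from three detector time series.

          det1 plays the role of the 252Cf ionization-chamber signal, det2 and det3 the
          neutron detectors; fs is the sampling frequency in Hz."""
          f, g12 = csd(det1, det2, fs=fs, nperseg=nperseg)
          _, g13 = csd(det1, det3, fs=fs, nperseg=nperseg)
          _, g23 = csd(det2, det3, fs=fs, nperseg=nperseg)
          _, g11 = welch(det1, fs=fs, nperseg=nperseg)
          return f, np.conj(g12) * g13 / (g11 * g23)

      # Illustrative correlated noise standing in for detector count-rate signals.
      rng = np.random.default_rng(5)
      common = rng.standard_normal(2 ** 16)
      d1 = common + 0.5 * rng.standard_normal(common.size)
      d2 = np.roll(common, 3) + 0.5 * rng.standard_normal(common.size)
      d3 = np.roll(common, 5) + 0.5 * rng.standard_normal(common.size)
      freq, ratio = spectral_ratio(d1, d2, d3, fs=1.0e6)
      print(ratio[:5])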

  5. Comparison of depth-dose distributions of proton therapeutic beams calculated by means of logical detectors and ionization chamber modeled in Monte Carlo codes

    Energy Technology Data Exchange (ETDEWEB)

    Pietrzak, Robert [Department of Nuclear Physics and Its Applications, Institute of Physics, University of Silesia, Katowice (Poland); Konefał, Adam, E-mail: adam.konefal@us.edu.pl [Department of Nuclear Physics and Its Applications, Institute of Physics, University of Silesia, Katowice (Poland); Sokół, Maria; Orlef, Andrzej [Department of Medical Physics, Maria Sklodowska-Curie Memorial Cancer Center, Institute of Oncology, Gliwice (Poland)

    2016-08-01

    The success of proton therapy depends strongly on the precision of treatment planning. Dose distribution in biological tissue may be obtained from Monte Carlo simulations using various scientific codes, making it possible to perform very accurate calculations. However, there are many factors affecting the accuracy of modeling. One of them is the structure of the objects, called bins, registering the dose. In this work the influence of the bin structure on the dose distributions was examined. The MCNPX code calculations of the Bragg curve for a 60 MeV proton beam were done in two ways: using simple logical detectors, i.e. volumes defined in water, and using a precise model of the ionization chamber used in clinical dosimetry. The results of the simulations were verified experimentally in a water phantom with a Markus ionization chamber. The average local dose difference between the measured relative doses in the water phantom and those calculated by means of the logical detectors was 1.4% over the first 25 mm, whereas over the full depth range this difference was 1.6%, with a maximum uncertainty in the calculations of less than 2.4% and a maximum measuring error of 1%. In the case of the relative doses calculated with the ionization chamber model this average difference was somewhat greater, being 2.3% at depths up to 25 mm and 2.4% over the full range of depths, with a maximum uncertainty in the calculations of 3%. In the dose calculations the ionization chamber model does not offer any additional advantages over the logical detectors. The results provided by both models are similar and in good agreement with the measurements; however, the logical detector approach is a more time-effective method. - Highlights: • Influence of the bin structure on the proton dose distributions was examined for the MC simulations. • The considered relative proton dose distributions in water correspond to the clinical application. • MC simulations performed with the logical detectors and the

  6. Development of Burnup Calculation Function in Reactor Monte Carlo Code RMC

    Institute of Scientific and Technical Information of China (English)

    佘顶; 王侃; 余纲林

    2012-01-01

    This paper presents the burnup calculation capability of RMC, a new Monte Carlo (MC) neutron transport code for reactor physics analysis developed by the Reactor Engineering Analysis Laboratory (REAL) of the Department of Engineering Physics, Tsinghua University, China. Unlike most existing MC depletion codes, which couple the depletion module explicitly, RMC incorporates ORIGEN 2.1 in an implicit (internally coupled) way, which is more flexible and efficient than external coupling. Different burnup step strategies, including the middle-of-step approximation and the predictor-corrector method, are adopted by RMC to assure accuracy at large burnup step sizes. RMC employs a spectrum-based method for tallying one-group cross sections, which considerably saves computational time with negligible loss of accuracy. According to the validation results for a series of benchmarks and test cases, including a PWR pin cell, a BWR assembly and a fast reactor, the burnup function of RMC performs well in both accuracy and efficiency.
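
    To make the predictor-corrector idea concrete, the Python sketch below advances a toy three-nuclide depletion chain dN/dt = A(φ)N over one step: it depletes with beginning-of-step data, re-evaluates the flux at the predicted end-of-step composition, depletes again, and averages the two solutions. The cross sections, decay constants and flux normalization are invented for the example; RMC builds the matrix from ORIGEN data and its own Monte Carlo flux tallies.

      import numpy as np
      from scipy.linalg import expm

      def burnup_matrix(flux):
          """Toy depletion matrix dN/dt = A N for a 3-nuclide chain (capture + decay).

          Coefficients are illustrative only; real codes build A from evaluated
          one-group cross sections and decay data."""
          sig_c = np.array([5.0e-24, 80.0e-24, 2.0e-24])   # capture cross sections (cm^2)
          lam = np.array([0.0, 3.0e-6, 1.0e-7])            # decay constants (1/s)
          A = np.diag(-(sig_c * flux + lam))
          A[1, 0] = sig_c[0] * flux                        # nuclide 0 captures into nuclide 1
          A[2, 1] = sig_c[1] * flux + lam[1]               # nuclide 1 feeds nuclide 2
          return A

      def predictor_corrector_step(n0, flux_of_n, dt):
          """One predictor-corrector burnup step: deplete with beginning-of-step (BOS) data,
          re-evaluate the flux at the predicted end-of-step (EOS) state, deplete again,
          and average the two end-of-step solutions."""
          a_bos = burnup_matrix(flux_of_n(n0))
          n_pred = expm(a_bos * dt) @ n0                   # predictor (BOS data)
          a_eos = burnup_matrix(flux_of_n(n_pred))
          n_corr = expm(a_eos * dt) @ n0                   # corrector (EOS data)
          return 0.5 * (n_pred + n_corr)

      # Crude constant-reaction-rate stand-in for the transport-calculated flux (assumption).
      flux = lambda n: 3.0e14 * 1.0e24 / n[0]
      n = np.array([1.0e24, 0.0, 0.0])
      for _ in range(10):
          n = predictor_corrector_step(n, flux, dt=30 * 86400.0)   # ten 30-day steps
      print(n)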

  7. New Parallel Interference Cancellation for Convolutionally Coded CDMA Systems

    Institute of Scientific and Technical Information of China (English)

    Xu Guo-xiong; Gan Liang-cai; Huang Tian-xi

    2004-01-01

    Based on the BCJR algorithm proposed by Bahl et al. and linear soft-decision feedback, a reduced-complexity parallel interference cancellation (simplified PIC) scheme for convolutionally coded DS-CDMA systems is proposed. Computer simulations compare the simplified PIC with the exact PIC. They show that the simplified PIC can achieve performance close to the exact PIC if the mean values of the coded symbols are computed linearly from the sum of the initial a priori log-likelihood ratio (LLR) and the updated a priori LLR, while a significant performance loss occurs if the mean values of the coded symbols are computed linearly from the updated a priori LLR only. We also compare the simplified PIC with the matched-filter (MF) receiver and conventional PICs. The simulation results show that the simplified PIC clearly outperforms the MF receiver and conventional PICs; at a signal-to-noise ratio (SNR) of 7 dB, for example, the bit error rate is about 10^-4 for the simplified PIC, far below that of the matched-filter receiver and the conventional PIC.
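
    As a stripped-down illustration of parallel interference cancellation, the Python sketch below runs hard-decision PIC stages for a synchronous CDMA uplink with random spreading sequences: matched-filter outputs are formed, the cross-correlation-weighted interference from the current bit estimates is subtracted for all users in parallel, and the decisions are refreshed. The convolutional coding, BCJR decoding and soft LLR feedback of the proposed scheme are not modeled, and all parameters are illustrative.

      import numpy as np

      rng = np.random.default_rng(6)
      n_users, spread_len, snr_db = 8, 32, 7.0

      S = rng.choice([-1.0, 1.0], size=(spread_len, n_users)) / np.sqrt(spread_len)  # spreading codes
      b = rng.choice([-1.0, 1.0], n_users)                                           # BPSK bits
      sigma = np.sqrt(0.5 / 10 ** (snr_db / 10))
      r = S @ b + sigma * rng.standard_normal(spread_len)                            # received chips

      y = S.T @ r                     # matched-filter (conventional) outputs
      R = S.T @ S                     # code cross-correlation matrix
      b_hat = np.sign(y)              # initial hard decisions

      for stage in range(3):          # parallel interference cancellation stages
          interference = (R - np.diag(np.diag(R))) @ b_hat
          b_hat = np.sign(y - interference)

      print("matched filter errors:", int(np.sum(np.sign(y) != b)))
      print("PIC errors:           ", int(np.sum(b_hat != b)))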

  8. System code improvements for modelling passive safety systems and their validation

    Energy Technology Data Exchange (ETDEWEB)

    Buchholz, Sebastian; Cron, Daniel von der; Schaffrath, Andreas [Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS) gGmbH, Garching (Germany)

    2016-11-15

    GRS has been developing the system code ATHLET for many years. Because ATHLET, among other codes, is widely used in nuclear licensing and supervisory procedures, it has to represent the current state of science and technology. New reactor concepts such as Generation III+ and IV reactors and SMRs make intensive use of passive safety systems. The simulation of passive safety systems with the GRS system code ATHLET is still a big challenge because of non-defined operating points and self-adjusting operating conditions. Additionally, the driving forces of passive safety systems are smaller, and uncertainties of parameters have a larger impact than for active systems. This paper addresses the code validation and qualification work on ATHLET using the example of slightly inclined horizontal heat exchangers, which are used, e.g., as emergency condensers (e.g. in the KERENA and the CAREM) or as heat exchangers in the passive auxiliary feedwater systems (PAFS) of the APR+.

  9. Code Based Analysis for Object-Oriented Systems

    Institute of Scientific and Technical Information of China (English)

    Swapan Bhattacharya; Ananya Kanjilal

    2006-01-01

    The basic features of object-oriented software make it difficult to apply traditional testing methods to object-oriented systems. The Control Flow Graph (CFG) is a well-known model used for identification of independent paths in procedural software. This paper highlights the problem of constructing CFGs in object-oriented systems and proposes a new model named the Extended Control Flow Graph (ECFG) for code-based analysis of object-oriented (OO) software. The ECFG is a layered CFG where nodes refer to methods rather than statements. A new metric, Extended Cyclomatic Complexity (E-CC), is developed which is analogous to McCabe's Cyclomatic Complexity (CC) and refers to the number of independent execution paths within the OO software. The different ways in which the CFGs of individual methods are connected in an ECFG are presented and formulas for E-CC in these different cases are proposed. Finally, we consider an example in Java and, based on its ECFG, apply these cases to arrive at the E-CC of the total system, and propose a methodology for calculating the basis set, i.e., the set of independent paths for the OO system, which will help in the creation of test cases for code testing.
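
    The ECFG and E-CC definitions are specific to this paper, but E-CC is stated to be analogous to McCabe's cyclomatic complexity, which for a control flow graph is CC = E - N + 2P (edges, nodes, connected components). The Python sketch below applies that standard formula to a small hypothetical control flow graph; the graph and its node names are invented for illustration.

      def cyclomatic_complexity(edges, nodes=None, components=1):
          """McCabe cyclomatic complexity CC = E - N + 2P for a control flow graph.

          edges is an iterable of (src, dst) pairs; P is the number of connected
          components (one per analysed method or program)."""
          edges = list(edges)
          if nodes is None:
              nodes = {n for e in edges for n in e}
          return len(edges) - len(nodes) + 2 * components

      # CFG of a method with one if/else and one loop: CC = 3,
      # i.e. three independent execution paths to cover in testing.
      cfg = [("entry", "if"), ("if", "then"), ("if", "else"), ("then", "loop"),
             ("else", "loop"), ("loop", "body"), ("body", "loop"), ("loop", "exit")]
      print(cyclomatic_complexity(cfg))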

  10. An engineering code to analyze hypersonic thermal management systems

    Science.gov (United States)

    Vangriethuysen, Valerie J.; Wallace, Clark E.

    1993-01-01

    Thermal loads on current and future aircraft are increasing and as a result are stressing the energy collection, control, and dissipation capabilities of current thermal management systems and technology. The thermal loads for hypersonic vehicles will be no exception. In fact, with their projected high heat loads and fluxes, hypersonic vehicles are a prime example of systems that will require thermal management systems (TMS) that have been optimized and integrated with the entire vehicle to the maximum extent possible during the initial design stages. This will be necessary not only to meet operational requirements, but also to fulfill weight and performance constraints in order for the vehicle to take off and complete its mission successfully. To meet this challenge, the TMS can no longer be two or more entirely independent systems, nor can thermal management be an afterthought in the design process, the typical pervasive approach in the past. Instead, a TMS that is integrated throughout the entire vehicle and subsequently optimized will be required. To accomplish this, a method that iteratively optimizes the TMS throughout the vehicle will not only be highly desirable, but advantageous in order to reduce the man-hours normally required to conduct the necessary tradeoff studies and comparisons. A thermal management engineering computer code that is under development and being managed at Wright Laboratory, Wright-Patterson AFB, is discussed. The primary goal of the code is to aid in the development of a hypersonic vehicle TMS that has been optimized and integrated on a total vehicle basis.

  11. Blind receiver for OFDM systems via sequential Monte Carlo in factor graphs

    Institute of Scientific and Technical Information of China (English)

    CHEN Rong; ZHANG Hai-bin; XU You-yun; LIU Xin-zhao

    2007-01-01

    Estimation and detection algorithms for orthogonal frequency division multiplexing (OFDM) systems can be developed based on the sum-product algorithm, which operates by message passing in factor graphs. In this paper, we apply sampling (Monte Carlo) methods to factor graphs, so that the integrals in the sum-product algorithm can be approximated by sums, which results in complexity reduction. A blind receiver for OFDM systems can be derived via sequential Monte Carlo (SMC) in factor graphs, and the previous SMC blind receiver can be regarded as a special case of the sum-product algorithm with sampling. The previous SMC blind receiver for OFDM systems needs to generate samples of the channel vector, assuming the channel has an a priori Gaussian distribution. In the newly built blind receiver, we generate samples of the virtual pilots instead of the channel vector, since the channel vector can be easily computed from the virtual pilots. As the size of the virtual-pilot space is much smaller than that of the channel vector space, only a small number of samples are necessary, and the blind detection is much simpler. Furthermore, only one pilot tone is needed to resolve phase ambiguity, and differential encoding is no longer used. Finally, the results of computer simulations demonstrate that the proposal performs well while providing significant complexity reduction.

  12. Validation of MTF measurement for CBCT system using Monte Carlo simulations

    Science.gov (United States)

    Hao, Ting; Gao, Feng; Zhao, Huijuan; Zhou, Zhongxing

    2016-03-01

    To evaluate the spatial resolution performance of a cone beam computed tomography (CBCT) system, accurate measurement of the modulation transfer function (MTF) is required. This accuracy depends on the MTF measurement method and the CBCT reconstruction algorithm. In this work, the accuracy of MTF measurement of a CBCT system using a wire phantom is validated by Monte Carlo simulation. The Monte Carlo simulation software tool BEAMnrc/EGSnrc was employed to model X-ray radiation beams and transport. Tungsten wires were simulated with different diameters and radial distances from the axis of rotation. We adopted the filtered back projection technique to reconstruct images from a 360° acquisition. The MTFs for four reconstruction kernels were measured from the corresponding reconstructed wire images; the Ram-Lak kernel increased the MTF relative to the cosine, Hamming and Hann kernels. The results demonstrated that the MTF degrades radially away from the axis of rotation. This study suggests that an increase in the MTF of the CBCT system is possible by optimizing scanning settings and reconstruction parameters.
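
    For context, the MTF of such a system is commonly obtained as the normalized Fourier magnitude of a line spread function extracted across the reconstructed wire image. The Python sketch below does this for a synthetic Gaussian line spread profile; the pixel size and blur width are invented, and corrections such as deconvolving the finite wire diameter, which a rigorous wire-based measurement requires, are omitted.

      import numpy as np

      def mtf_from_lsf(lsf, pixel_mm):
          """MTF as the normalised magnitude of the Fourier transform of a line spread function.

          lsf: 1D profile across the wire image (background subtracted);
          pixel_mm: pixel spacing in mm.  Returns spatial frequencies (lp/mm) and MTF values."""
          lsf = lsf - lsf.min()
          lsf = lsf / lsf.sum()                       # normalise so that MTF(0) = 1
          mtf = np.abs(np.fft.rfft(lsf))
          freqs = np.fft.rfftfreq(lsf.size, d=pixel_mm)
          return freqs, mtf / mtf[0]

      # Toy line spread function: a Gaussian blur of width 0.4 mm sampled at 0.1 mm.
      pixel = 0.1
      x = np.arange(-64, 64) * pixel
      lsf = np.exp(-x ** 2 / (2 * 0.4 ** 2))
      f, mtf = mtf_from_lsf(lsf, pixel)
      f50 = f[np.argmax(mtf < 0.5)]
      print(f"50% MTF at about {f50:.2f} lp/mm")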

  13. Sample Duplication Method for Monte Carlo Simulation of Large Reaction-Diffusion System

    Institute of Scientific and Technical Information of China (English)

    张红东; 陆建明; 杨玉良

    1994-01-01

    A sample duplication method for the Monte Carlo simulation of large reaction-diffusion systems is proposed in this paper. It is proved that the sample duplication method effectively raises the efficiency and statistical precision of the simulation without changing the kinetic behaviour of the reaction-diffusion system or the critical condition for the bifurcation of the steady states. The method has been applied to the simulation of the spatial and temporal dissipative structure of the Brusselator under the Dirichlet boundary condition. The results presented in this paper definitely show that the sample duplication method provides a very efficient way to solve the master equation of a large reaction-diffusion system. For the case of a two-dimensional system, it is found that the computation time is reduced by at least two orders of magnitude compared to the algorithm reported in the literature.

  14. A Quantum Monte Carlo Study of mono(benzene)TM and bis(benzene)TM Systems

    CERN Document Server

    Bennett, M Chandler; Mitas, Lubos

    2016-01-01

    We present a study of mono(benzene)TM and bis(benzene)TM systems, where TM={Mo,W}. We calculate the binding energies by quantum Monte Carlo (QMC) approaches and compare the results with other methods and available experiments. The orbitals for the determinantal part of each trial wave function were generated from several types of DFT in order to optimize for fixed-node errors. We estimate and compare the size of the fixed-node errors for both the Mo and W systems with regard to the electron density and degree of localization in these systems. For the W systems we provide benchmarking results of the binding energies, given that experimental data is not available.

  15. A quantum Monte Carlo study of mono(benzene) TM and bis(benzene) TM systems

    Science.gov (United States)

    Bennett, M. Chandler; Kulahlioglu, A. H.; Mitas, L.

    2017-01-01

    We present a study of mono(benzene) TM and bis(benzene) TM systems, where TM = {Mo, W}. We calculate the binding energies by quantum Monte Carlo (QMC) approaches and compare the results with other methods and available experiments. The orbitals for the determinantal part of each trial wave function were generated from several types of DFT functionals in order to optimize for fixed-node errors. We estimate and compare the size of the fixed-node errors for both the Mo and W systems with regard to the electron density and degree of localization in these systems. For the W systems we provide benchmarking results of the binding energies, given that experimental data is not available.

  16. Principle of Line Configuration and Monte-Carlo Simulation for Shared Multi-Channel System

    Institute of Scientific and Technical Information of China (English)

    MIAO Changyun; DAI Jufeng; BAI Zhihui

    2005-01-01

    Based on the steady-state solution of a finite-state birth and death process, the principle of line configuration for a shared multi-channel system is analyzed. A call congestion ratio equation and a channel utilization ratio equation are deduced, and a visualized data analysis is presented. The analysis indicates that, when calculated with the proposed equations, the overestimation of the call congestion ratio and the channel utilization ratio can be corrected, and thereby about 20% of the channel cost can be saved in a small system. With MATLAB programming, line configuration methods are provided. In order to show the dynamic operation of the system in a general and intuitive way, and to analyze, promote and improve it, the system is simulated using an M/M/n/n/m queuing model and the Monte Carlo method. In addition, the simulation validates the correctness of the theoretical analysis and of the optimized configuration method.
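
    As an illustration of the kind of Monte Carlo check described above, the Python sketch below simulates a finite-source loss system of the M/M/n/n/m type: idle sources generate calls with exponential inter-arrival times, calls hold a channel for an exponential duration, and attempts that find all channels busy are cleared. It reports the empirical call congestion ratio and channel utilization; the arrival and service rates, source count and channel count are invented for the example.

      import numpy as np

      def simulate_mmnnm(n_channels, m_sources, lam, mu, n_events, rng):
          """Monte Carlo simulation of an M/M/n/n/m loss system (finite source, lost calls cleared).

          Each idle source generates a call attempt after Exp(lam); holding times are Exp(mu).
          Returns the call congestion ratio and the mean channel utilisation."""
          busy = 0                       # number of occupied channels
          t, busy_time = 0.0, 0.0
          offered = blocked = 0
          for _ in range(n_events):
              rate_arrival = (m_sources - busy) * lam
              rate_departure = busy * mu
              total = rate_arrival + rate_departure
              dt = rng.exponential(1.0 / total)
              busy_time += busy * dt
              t += dt
              if rng.random() < rate_arrival / total:      # next event is a call attempt
                  offered += 1
                  if busy < n_channels:
                      busy += 1
                  else:
                      blocked += 1                         # all channels busy: call is lost
              else:                                        # next event is a call completion
                  busy -= 1
          return blocked / offered, busy_time / (t * n_channels)

      rng = np.random.default_rng(7)
      congestion, utilisation = simulate_mmnnm(n_channels=8, m_sources=30, lam=0.05, mu=0.2,
                                               n_events=200000, rng=rng)
      print(f"call congestion ratio: {congestion:.3f}, channel utilisation: {utilisation:.3f}")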

  17. Monte Carlo transition probabilities

    OpenAIRE

    Lucy, L. B.

    2001-01-01

    Transition probabilities governing the interaction of energy packets and matter are derived that allow Monte Carlo NLTE transfer codes to be constructed without simplifying the treatment of line formation. These probabilities are such that the Monte Carlo calculation asymptotically recovers the local emissivity of a gas in statistical equilibrium. Numerical experiments with one-point statistical equilibrium problems for Fe II and Hydrogen confirm this asymptotic behaviour. In addition, the re...

  18. Calculation of the X-Ray Spectrum of a Mammography System with Various Voltages and Different Anode-Filter Combinations Using MCNP Code

    OpenAIRE

    Lida Gholamkar; Mahdi Sadeghi; Ali Asghar Mowlavi; Mitra Athari

    2016-01-01

    Introduction: One of the best methods in the diagnosis and control of breast cancer is mammography. The importance of mammography is directly related to its value in the detection of breast cancer in the early stages, which leads to a more effective treatment. The purpose of this article was to calculate the X-ray spectrum in a mammography system with Monte Carlo codes, including MCNPX and MCNP5. Materials and Methods: The device, simulated using the MCNP code, was a Planmed Nuance digital mammog...

  19. EquiFACS: The Equine Facial Action Coding System.

    Directory of Open Access Journals (Sweden)

    Jen Wathan

    Although previous studies of horses have investigated their facial expressions in specific contexts, e.g. pain, until now there has been no methodology available that documents all the possible facial movements of the horse and provides a way to record all potential facial configurations. This is essential for an objective description of horse facial expressions across a range of contexts that reflect different emotional states. Facial Action Coding Systems (FACS) provide a systematic methodology of identifying and coding facial expressions on the basis of underlying facial musculature and muscle movement. FACS are anatomically based and document all possible facial movements rather than a configuration of movements associated with a particular situation. Consequently, FACS can be applied as a tool for a wide range of research questions. We developed FACS for the domestic horse (Equus caballus) through anatomical investigation of the underlying musculature and subsequent analysis of naturally occurring behaviour captured on high quality video. Discrete facial movements were identified and described in terms of the underlying muscle contractions, in correspondence with previous FACS systems. The reliability of others to be able to learn this system (EquiFACS) and consistently code behavioural sequences was high, and this included people with no previous experience of horses. A wide range of facial movements were identified, including many that are also seen in primates and other domestic animals (dogs and cats). EquiFACS provides a method that can now be used to document the facial movements associated with different social contexts and thus to address questions relevant to understanding social cognition and comparative psychology, as well as informing current veterinary and animal welfare practices.

  20. EquiFACS: The Equine Facial Action Coding System.

    Science.gov (United States)

    Wathan, Jen; Burrows, Anne M; Waller, Bridget M; McComb, Karen

    2015-01-01

    Although previous studies of horses have investigated their facial expressions in specific contexts, e.g. pain, until now there has been no methodology available that documents all the possible facial movements of the horse and provides a way to record all potential facial configurations. This is essential for an objective description of horse facial expressions across a range of contexts that reflect different emotional states. Facial Action Coding Systems (FACS) provide a systematic methodology of identifying and coding facial expressions on the basis of underlying facial musculature and muscle movement. FACS are anatomically based and document all possible facial movements rather than a configuration of movements associated with a particular situation. Consequently, FACS can be applied as a tool for a wide range of research questions. We developed FACS for the domestic horse (Equus caballus) through anatomical investigation of the underlying musculature and subsequent analysis of naturally occurring behaviour captured on high quality video. Discrete facial movements were identified and described in terms of the underlying muscle contractions, in correspondence with previous FACS systems. The reliability of others to be able to learn this system (EquiFACS) and consistently code behavioural sequences was high--and this included people with no previous experience of horses. A wide range of facial movements were identified, including many that are also seen in primates and other domestic animals (dogs and cats). EquiFACS provides a method that can now be used to document the facial movements associated with different social contexts and thus to address questions relevant to understanding social cognition and comparative psychology, as well as informing current veterinary and animal welfare practices.

  1. Monte Carlo-based treatment planning system calculation engine for microbeam radiation therapy

    Energy Technology Data Exchange (ETDEWEB)

    Martinez-Rovira, I.; Sempau, J.; Prezado, Y. [Institut de Tecniques Energetiques, Universitat Politecnica de Catalunya, Diagonal 647, Barcelona E-08028 (Spain) and ID17 Biomedical Beamline, European Synchrotron Radiation Facility (ESRF), 6 rue Jules Horowitz B.P. 220, F-38043 Grenoble Cedex (France); Institut de Tecniques Energetiques, Universitat Politecnica de Catalunya, Diagonal 647, Barcelona E-08028 (Spain); Laboratoire Imagerie et modelisation en neurobiologie et cancerologie, UMR8165, Centre National de la Recherche Scientifique (CNRS), Universites Paris 7 et Paris 11, Bat 440., 15 rue Georges Clemenceau, F-91406 Orsay Cedex (France)

    2012-05-15

    Purpose: Microbeam radiation therapy (MRT) is a synchrotron radiotherapy technique that explores the limits of the dose-volume effect. Preclinical studies have shown that MRT irradiations (arrays of 25-75 μm-wide microbeams spaced by 200-400 μm) are able to eradicate highly aggressive animal tumor models while healthy tissue is preserved. These promising results have provided the basis for the forthcoming clinical trials at the ID17 Biomedical Beamline of the European Synchrotron Radiation Facility (ESRF). The first step includes irradiation of pets (cats and dogs) as a milestone before treatment of human patients. Within this context, accurate dose calculations are required. The distinct features of both beam generation and irradiation geometry in MRT with respect to conventional techniques require the development of a specific MRT treatment planning system (TPS). In particular, a Monte Carlo (MC)-based calculation engine for the MRT TPS has been developed in this work. Experimental verification in heterogeneous phantoms and optimization of the computation time have also been performed. Methods: The penelope/penEasy MC code was used to compute dose distributions from a realistic beam source model. Experimental verification was carried out by means of radiochromic films placed within heterogeneous slab-phantoms. Once validation was completed, dose computations in a virtual model of a patient, reconstructed from computed tomography (CT) images, were performed. To this end, decoupling of the CT image voxel grid (a few cubic millimeter volume) to the dose bin grid, which has micrometer dimensions in the transversal direction of the microbeams, was performed. Optimization of the simulation parameters, the use of variance-reduction (VR) techniques, and other methods, such as the parallelization of the simulations, were applied in order to speed up the dose computation. Results: Good agreement between MC simulations and experimental results was achieved, even at

  2. Electronic health record standards, coding systems, frameworks, and infrastructures

    CERN Document Server

    Sinha, Pradeep K; Bendale, Prashant; Mantri, Manisha; Dande, Atreya

    2013-01-01

    Discover How Electronic Health Records Are Built to Drive the Next Generation of Healthcare Delivery. The increased role of IT in the healthcare sector has led to the coining of a new phrase, "health informatics," which deals with the use of IT for better healthcare services. Health informatics applications often involve maintaining the health records of individuals, in digital form, which is referred to as an Electronic Health Record (EHR). Building and implementing an EHR infrastructure requires an understanding of healthcare standards, coding systems, and frameworks. This book provides an

  3. Vision Aided Inertial Navigation System Augmented with a Coded Aperture

    Science.gov (United States)

    2011-03-24

    A green filter is added to the front of the lens to allow only green light to pass through the lens system, thereby reducing chromatic aberration. The apertures consist of chrome... coded aperture configurations.

  4. A LONE code for the sparse control of quantum systems

    Science.gov (United States)

    Ciaramella, G.; Borzì, A.

    2016-03-01

    In many applications with quantum spin systems, control functions with a sparse and pulse-shaped structure are often required. These controls can be obtained by solving quantum optimal control problems with L1-penalized cost functionals. In this paper, the MATLAB package LONE is presented, aimed at solving L1-penalized optimal control problems governed by unitary-operator quantum spin models. This package implements a new strategy that includes a globalized semi-smooth Krylov-Newton scheme and a continuation procedure. Results of numerical experiments demonstrate the ability of the LONE code to compute accurate sparse optimal control solutions.

  5. Nonterminals, homomorphisms and codings in different variations of OL-systems. II. Nondeterministic systems

    DEFF Research Database (Denmark)

    Nielsen, Mogens; Rozenberg, Grzegorz; Salomaa, Arto

    1974-01-01

    Continuing the work begun in Part I of this paper, we consider now variations of nondeterministic OL-systems. The present Part II of the paper contains a systematic classification of the effect of nonterminals, codings, weak codings, nonerasing homomorphisms and homomorphisms for all basic variat...

  6. 76 FR 4113 - Federal Procurement Data System Product Service Code Manual Update

    Science.gov (United States)

    2011-01-24

    ... ADMINISTRATION Federal Procurement Data System Product Service Code Manual Update AGENCY: Office of... the Products and Services Code (PSC) Manual, which provides codes to describe products, services, and... pat.brooks@gsa.gov . SUPPLEMENTARY INFORMATION: The Products and Services Code (PSC) Manual...

  7. [Data coding in the Israeli healthcare system - do choices provide the answers to our system's needs?].

    Science.gov (United States)

    Zelingher, Julian; Ash, Nachman

    2013-05-01

    The Israeli healthcare system has undergone major processes for the adoption of health information technologies (HIT), and enjoys high levels of utilization in hospital and ambulatory care. Coding is an essential infrastructure component of HIT, and its purpose is to represent data in a simplified and common format, enhancing its manipulation by digital systems. Proper coding of data enables efficient identification, storage, retrieval and communication of data. Utilization of uniform coding systems by different organizations enables data interoperability between them, facilitating communication and integrating data elements originating in different information systems from various organizations. Current needs in Israel for health data coding include recording and reporting of diagnoses for hospitalized patients, outpatients and visitors of the Emergency Department, coding of procedures and operations, coding of pathology findings, reporting of discharge diagnoses and causes of death, billing codes, organizational data warehouses and national registries. New national projects for clinical data integration, obligatory reporting of quality indicators and new Ministry of Health (MOH) requirements for HIT necessitate a high level of interoperability that can be achieved only through the adoption of uniform coding. Additional pressures were introduced by the USA decision to stop the maintenance of the ICD-9-CM codes that are also used by Israeli healthcare, and the adoption of ICD-10-CM and ICD-10-PCS as the main coding system for billing purposes. The USA has also mandated utilization of SNOMED-CT as the coding terminology for the Electronic Health Record problem list, and for reporting quality indicators to the CMS. Hence, the Israeli MOH has recently decided that discharge diagnoses will be reported using ICD-10-CM codes, and SNOMED-CT will be used to code the clinical information in the EHR. We reviewed the characteristics, strengths and weaknesses of these two coding

  8. Monte Carlo calculations on the magnetization profile and domain wall structure in bulk systems and nanoconstricitons

    Energy Technology Data Exchange (ETDEWEB)

    Serena, P. A. [Instituto de Ciencias de Materiales de Madrid, Madrid (Spain); Costa-Kraemer, J. L. [Instituto de Microelectronica de Madrid, Madrid (Spain)

    2001-03-01

    A Monte Carlo algorithm suitable for studying systems described by an anisotropic Heisenberg Hamiltonian is presented. This technique has been tested successfully with 3D and 2D systems, illustrating how magnetic properties depend on the dimensionality and the coordination number. We have found that the magnetic properties of constrictions differ from those appearing in bulk. In particular, spin fluctuations are considerably larger than those calculated for bulk materials. In addition, domain walls are strongly modified when a constriction is present, with a decrease of the domain-wall width. This decrease is explained in terms of previous theoretical works.
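
    The record does not include code; the following minimal Python sketch shows a single-spin-flip Metropolis Monte Carlo sweep for a small 2D anisotropic (easy-axis) Heisenberg model of the kind described above. The Hamiltonian form, lattice size and parameter values are illustrative assumptions, not the authors' implementation.

        # Metropolis sketch for a 2D anisotropic Heisenberg model with
        # H = -J * sum_<ij> S_i . S_j - K * sum_i (S_i^z)^2   (values illustrative)
        import numpy as np

        L, J, K, T = 16, 1.0, 0.1, 1.0          # lattice size, exchange, anisotropy, temperature
        rng = np.random.default_rng(0)

        def random_spin():
            v = rng.normal(size=3)
            return v / np.linalg.norm(v)

        spins = np.array([[random_spin() for _ in range(L)] for _ in range(L)])

        def local_energy(s, i, j, lattice):
            nb = (lattice[(i + 1) % L, j] + lattice[(i - 1) % L, j] +
                  lattice[i, (j + 1) % L] + lattice[i, (j - 1) % L])
            return -J * np.dot(s, nb) - K * s[2] ** 2

        def sweep(lattice):
            for _ in range(L * L):
                i, j = rng.integers(L, size=2)
                new = random_spin()
                dE = local_energy(new, i, j, lattice) - local_energy(lattice[i, j], i, j, lattice)
                if dE <= 0 or rng.random() < np.exp(-dE / T):
                    lattice[i, j] = new

        for step in range(100):
            sweep(spins)
        print("mean z-magnetisation:", spins[..., 2].mean())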

  9. Estimating statistical uncertainty of Monte Carlo efficiency-gain in the context of a correlated sampling Monte Carlo code for brachytherapy treatment planning with non-normal dose distribution.

    Science.gov (United States)

    Mukhopadhyay, Nitai D; Sampson, Andrew J; Deniz, Daniel; Alm Carlsson, Gudrun; Williamson, Jeffrey; Malusek, Alexandr

    2012-01-01

    Correlated sampling Monte Carlo methods can shorten computing times in brachytherapy treatment planning. Monte Carlo efficiency is typically estimated via efficiency gain, defined as the reduction in computing time by correlated sampling relative to conventional Monte Carlo methods when equal statistical uncertainties have been achieved. The determination of the efficiency gain uncertainty arising from random effects, however, is not a straightforward task, especially when the error distribution is non-normal. The purpose of this study is to evaluate the applicability of the F distribution and standardized uncertainty propagation methods (widely used in metrology to estimate uncertainty of physical measurements) for predicting confidence intervals about efficiency gain estimates derived from single Monte Carlo runs using fixed-collision correlated sampling in a simplified brachytherapy geometry. A bootstrap based algorithm was used to simulate the probability distribution of the efficiency gain estimates and the shortest 95% confidence interval was estimated from this distribution. It was found that the corresponding relative uncertainty was as large as 37% for this particular problem. The uncertainty propagation framework predicted confidence intervals reasonably well; however its main disadvantage was that uncertainties of input quantities had to be calculated in a separate run via a Monte Carlo method. The F distribution noticeably underestimated the confidence interval. These discrepancies were influenced by several photons with large statistical weights which made extremely large contributions to the scored absorbed dose difference. The mechanism of acquiring high statistical weights in the fixed-collision correlated sampling method was explained and a mitigation strategy was proposed.
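
    A hedged Python sketch of the bootstrap procedure described above: the shortest 95% confidence interval of an efficiency gain, here expressed as a simple variance ratio, is estimated from bootstrap replicates. The synthetic per-history scores and the variance-ratio definition of gain are stand-ins for the actual correlated-sampling output.

        import numpy as np

        rng = np.random.default_rng(1)
        conventional = rng.lognormal(mean=0.0, sigma=1.0, size=5000)   # stand-in scores
        correlated = rng.lognormal(mean=0.0, sigma=0.4, size=5000)

        def efficiency_gain(a, b):
            return np.var(a, ddof=1) / np.var(b, ddof=1)

        boot = np.array([
            efficiency_gain(rng.choice(conventional, conventional.size, replace=True),
                            rng.choice(correlated, correlated.size, replace=True))
            for _ in range(2000)
        ])
        boot.sort()

        # Shortest interval containing 95% of the bootstrap replicates
        k = int(np.ceil(0.95 * boot.size))
        widths = boot[k - 1:] - boot[:boot.size - k + 1]
        lo = int(np.argmin(widths))
        print("gain estimate:", efficiency_gain(conventional, correlated))
        print("shortest 95% CI:", (boot[lo], boot[lo + k - 1]))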

  10. Estimating statistical uncertainty of Monte Carlo efficiency-gain in the context of a correlated sampling Monte Carlo code for brachytherapy treatment planning with non-normal dose distribution

    Energy Technology Data Exchange (ETDEWEB)

    Mukhopadhyay, Nitai D. [Department of Biostatistics, Virginia Commonwealth University, Richmond, VA 23298 (United States); Sampson, Andrew J. [Department of Radiation Oncology, Virginia Commonwealth University, Richmond, VA 23298 (United States); Deniz, Daniel; Alm Carlsson, Gudrun [Department of Radiation Physics, Faculty of Health Sciences, Linkoeping University, SE 581 85 (Sweden); Williamson, Jeffrey [Department of Radiation Oncology, Virginia Commonwealth University, Richmond, VA 23298 (United States); Malusek, Alexandr, E-mail: malusek@ujf.cas.cz [Department of Radiation Physics, Faculty of Health Sciences, Linkoeping University, SE 581 85 (Sweden); Department of Radiation Dosimetry, Nuclear Physics Institute AS CR v.v.i., Na Truhlarce 39/64, 180 86 Prague (Czech Republic)

    2012-01-15

    Correlated sampling Monte Carlo methods can shorten computing times in brachytherapy treatment planning. Monte Carlo efficiency is typically estimated via efficiency gain, defined as the reduction in computing time by correlated sampling relative to conventional Monte Carlo methods when equal statistical uncertainties have been achieved. The determination of the efficiency gain uncertainty arising from random effects, however, is not a straightforward task, especially when the error distribution is non-normal. The purpose of this study is to evaluate the applicability of the F distribution and standardized uncertainty propagation methods (widely used in metrology to estimate uncertainty of physical measurements) for predicting confidence intervals about efficiency gain estimates derived from single Monte Carlo runs using fixed-collision correlated sampling in a simplified brachytherapy geometry. A bootstrap based algorithm was used to simulate the probability distribution of the efficiency gain estimates and the shortest 95% confidence interval was estimated from this distribution. It was found that the corresponding relative uncertainty was as large as 37% for this particular problem. The uncertainty propagation framework predicted confidence intervals reasonably well; however its main disadvantage was that uncertainties of input quantities had to be calculated in a separate run via a Monte Carlo method. The F distribution noticeably underestimated the confidence interval. These discrepancies were influenced by several photons with large statistical weights which made extremely large contributions to the scored absorbed dose difference. The mechanism of acquiring high statistical weights in the fixed-collision correlated sampling method was explained and a mitigation strategy was proposed.

  11. Techno-economic and Monte Carlo probabilistic analysis of microalgae biofuel production system.

    Science.gov (United States)

    Batan, Liaw Y; Graff, Gregory D; Bradley, Thomas H

    2016-11-01

    This study focuses on characterizing the technical and economic feasibility of an enclosed photobioreactor microalgae system with an annual production of 37.85 million liters (10 million gallons) of biofuel. The analysis characterizes and breaks down the capital investment, the operating costs, and the production cost per unit of algal diesel. The economic modelling shows total production costs of algal raw oil and diesel of $3.46 and $3.69 per liter, respectively. Additionally, the effect of co-product credits and their impact on the economic performance of the algal-to-biofuel system are discussed. The Monte Carlo methodology is used to address price and cost projections and to simulate scenarios with probabilities of financial performance and profits of the analyzed model. Different markets for the allocation of co-products are shown to shift significantly the economic viability of the algal biofuel system.
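
    A minimal Python sketch of the Monte Carlo step described above: uncertain cost inputs are drawn from assumed distributions to build the distribution of the production cost per litre. All distributions, amortisation assumptions and figures other than the 37.85 million litre output are illustrative, not the study's data.

        import numpy as np

        rng = np.random.default_rng(2)
        n = 100_000

        # Uncertain inputs drawn from assumed (illustrative) distributions
        capital_cost = rng.triangular(80e6, 100e6, 130e6, n)     # USD, amortised below
        operating_cost = rng.normal(25e6, 3e6, n)                # USD per year
        coproduct_credit = rng.uniform(2e6, 8e6, n)              # USD per year
        annual_output_l = 37.85e6                                # litres per year
        amortisation_years = 20

        cost_per_litre = (capital_cost / amortisation_years
                          + operating_cost - coproduct_credit) / annual_output_l

        print("median cost per litre (USD):", round(float(np.median(cost_per_litre)), 2))
        print("90% interval (USD):", np.round(np.percentile(cost_per_litre, [5, 95]), 2))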

  12. 42 CFR 405.512 - Carriers' procedural terminology and coding systems.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 2 2010-10-01 2010-10-01 false Carriers' procedural terminology and coding systems... Determining Reasonable Charges § 405.512 Carriers' procedural terminology and coding systems. (a) General. Procedural terminology and coding systems are designed to provide physicians and third party payers with...

  13. Source Code Verification for Embedded Systems using Prolog

    Directory of Open Access Journals (Sweden)

    Frank Flederer

    2017-01-01

    Full Text Available System-relevant embedded software needs to be reliable and, therefore, well tested, especially for aerospace systems. A common technique to verify programs is the analysis of their abstract syntax tree (AST). Tree structures can be elegantly analyzed with the logic programming language Prolog. Moreover, Prolog offers further advantages for a thorough analysis: on the one hand, it natively provides versatile options to efficiently process tree or graph data structures; on the other hand, Prolog's non-determinism and backtracking ease testing of different variations of the program flow with little effort. A rule-based approach with Prolog allows the verification goals to be characterized in a concise and declarative way. In this paper, we describe our approach to verifying the source code of a flash file system with the help of Prolog. The flash file system is written in C++ and has been developed particularly for use in satellites. We transform a given abstract syntax tree of C++ source code into Prolog facts and derive the call graph and the execution sequence (tree), which are then further tested against the verification goals. The different program-flow branches due to control structures are derived by backtracking as subtrees of the full execution sequence. Finally, these subtrees are verified in Prolog. We illustrate our approach with a case study, where we search for incorrect applications of semaphores in embedded software using the real-time operating system RODOS. We rely on computation tree logic (CTL) and have designed an embedded domain specific language (DSL) in Prolog to express the verification goals.

  14. Monte Carlo filters for identification of nonlinear structural dynamical systems

    Indian Academy of Sciences (India)

    C S Manohar; D Roy

    2006-08-01

    The problem of identification of parameters of nonlinear structures using dynamic state estimation techniques is considered. The process equations are derived based on principles of mechanics and are augmented by mathematical models that relate a set of noisy observations to state variables of the system. The set of structural parameters to be identified is declared as an additional set of state variables. Both the process equation and the measurement equations are taken to be nonlinear in the state variables and contaminated by additive and (or) multiplicative Gaussian white noise processes. The problem of determining the posterior probability density function of the state variables conditioned on all available information is considered. The utility of three recursive Monte Carlo simulation-based filters, namely, a probability density function-based Monte Carlo filter, a Bayesian bootstrap filter and a filter based on sequential importance sampling, to solve this problem is explored. The state equations are discretized using certain variations of stochastic Taylor expansions enabling the incorporation of a class of non-smooth functions within the process equations. Illustrative examples on identification of the nonlinear stiffness parameter of a Duffing oscillator and the friction parameter in a Coulomb oscillator are presented.
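
    As a concrete illustration of the Bayesian bootstrap filter mentioned above, the following Python sketch estimates the cubic stiffness of a Duffing oscillator from noisy displacement data by augmenting the state with the unknown parameter. Dynamics, noise levels, priors and particle counts are illustrative assumptions rather than the authors' settings.

        # Bootstrap (sequential importance resampling) particle filter sketch:
        # estimate the cubic stiffness "alpha" of a Duffing oscillator by
        # appending the unknown parameter to the state vector.
        import numpy as np

        rng = np.random.default_rng(3)
        dt, steps, n_part = 0.01, 2000, 500
        alpha_true, beta, delta = 5.0, 1.0, 0.2   # cubic stiffness, linear stiffness, damping

        def step(x, v, alpha):
            a = -delta * v - beta * x - alpha * x ** 3
            return x + v * dt, v + a * dt

        # Synthetic "measurements" of the displacement
        x, v, ys = 1.0, 0.0, []
        for _ in range(steps):
            x, v = step(x, v, alpha_true)
            ys.append(x + rng.normal(0, 0.05))

        # Particles: columns [x, v, alpha]; alpha gets a small random walk so it can adapt
        p = np.column_stack([rng.normal(1, 0.5, n_part),
                             rng.normal(0, 0.5, n_part),
                             rng.uniform(0, 10, n_part)])
        for y in ys:
            p[:, 0], p[:, 1] = step(p[:, 0], p[:, 1], p[:, 2])
            p[:, 0] += rng.normal(0, 0.01, n_part)          # process noise
            p[:, 2] += rng.normal(0, 0.01, n_part)          # parameter random walk
            w = np.exp(-0.5 * ((y - p[:, 0]) / 0.05) ** 2) + 1e-300
            w /= w.sum()
            p = p[rng.choice(n_part, n_part, p=w)]          # multinomial resampling

        print("estimated alpha:", p[:, 2].mean(), "(true:", alpha_true, ")")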

  15. Quantum Monte Carlo of atomic and molecular systems with heavy elements

    Science.gov (United States)

    Mitas, Lubos; Kulahlioglu, Adem; Melton, Cody; Bennett, Chandler

    2015-03-01

    We carry out quantum Monte Carlo calculations of atomic and molecular systems with several heavy atoms such as Mo, W and Bi. In particular, we compare the correlation energies with those of their lighter counterparts in the same column of the periodic table in order to reveal trends with regard to the atomic number Z. One of the observations is that the correlation energy for the isoelectronic valence space/states mildly decreases with increasing Z. A similar observation applies to the fixed-node errors, thus supporting our recent observation that the fixed-node error increases with electronic density for the same (or similar) complexity of the wave function and bonding. In addition, for Bi systems we study the impact of the spin-orbit interaction on the electronic structure, in particular on the binding, correlation and excitation energies.

  16. Simulation of Cone Beam CT System Based on Monte Carlo Method

    CERN Document Server

    Wang, Yu; Cao, Ruifen; Hu, Liqin; Li, Bingbing

    2014-01-01

    Adaptive Radiation Therapy (ART) was developed based on Image-Guided Radiation Therapy (IGRT) and is the trend in photon radiation therapy. To make better use of Cone Beam CT (CBCT) images for ART, a CBCT system model was established based on a Monte Carlo program and validated against measurement. The BEAMnrc program was adopted to model the kV x-ray tube. Both ISOURCE-13 and ISOURCE-24 were chosen to simulate the path of the beam particles. The measured Percentage Depth Dose (PDD) and lateral dose profiles under 1 cm of water were compared with the dose calculated by the DOSXYZnrc program. The calculated PDD agreed to better than 1% within a depth of 10 cm. More than 85% of the points of the calculated lateral dose profiles were within 2%. The validated CBCT system model helps to improve CBCT image quality for dose verification in ART and to assess the concomitant dose risk of CBCT imaging.

  17. Miming the cancer-immune system competition by kinetic Monte Carlo simulations

    Science.gov (United States)

    Bianca, Carlo; Lemarchand, Annie

    2016-10-01

    In order to mimic the interactions between cancer and the immune system at cell scale, we propose a minimal model of cell interactions that is similar to a chemical mechanism including autocatalytic steps. The cells are supposed to bear a quantity called activity that may increase during the interactions. The fluctuations of cell activity are controlled by a so-called thermostat. We develop a kinetic Monte Carlo algorithm to simulate the cell interactions and thermalization of cell activity. The model is able to reproduce the well-known behavior of tumors treated by immunotherapy: the first apparent elimination of the tumor by the immune system is followed by a long equilibrium period and the final escape of cancer from immunosurveillance.
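
    A hedged Python sketch of a kinetic Monte Carlo (Gillespie-type) simulation of a minimal tumour-immune reaction scheme with autocatalytic-like steps, in the spirit of the model described above. The reactions, rate constants and initial counts are illustrative, and the thermostatted cell-activity variable of the published model is not reproduced.

        # Gillespie-style kinetic Monte Carlo for a minimal tumour-immune scheme:
        #   T -> 2T         (tumour growth, autocatalytic-like)
        #   T + I -> I      (killing of tumour cells by immune cells)
        #   T + I -> T + 2I (immune stimulation)
        #   I -> 0          (immune cell death)
        import numpy as np

        rng = np.random.default_rng(4)
        T, I, t = 50, 10, 0.0
        k_grow, k_kill, k_stim, k_death = 0.3, 0.01, 0.005, 0.2   # illustrative rates

        while t < 50 and T > 0:
            rates = np.array([k_grow * T, k_kill * T * I, k_stim * T * I, k_death * I])
            total = rates.sum()
            if total == 0:
                break
            t += rng.exponential(1.0 / total)            # time to the next event
            event = rng.choice(4, p=rates / total)       # which reaction fires
            if event == 0:
                T += 1
            elif event == 1:
                T -= 1
            elif event == 2:
                I += 1
            else:
                I -= 1

        print("final time, tumour cells, immune cells:", round(t, 2), T, I)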

  18. Iterative reconstruction using a Monte Carlo based system transfer matrix for dedicated breast positron emission tomography.

    Science.gov (United States)

    Saha, Krishnendu; Straus, Kenneth J; Chen, Yu; Glick, Stephen J

    2014-08-28

    To maximize sensitivity, it is desirable that ring Positron Emission Tomography (PET) systems dedicated to imaging the breast have a small bore. Unfortunately, due to parallax error this causes substantial degradation in spatial resolution for objects near the periphery of the breast. In this work, a framework for computing and incorporating an accurate system matrix into iterative reconstruction is presented in an effort to reduce spatial resolution degradation towards the periphery of the breast. The GATE Monte Carlo simulation software was utilized to accurately model the system matrix for a breast PET system. A strategy for increasing the count statistics in the system matrix computation and for reducing the system element storage space was used by calculating only a subset of matrix elements and then estimating the rest of the elements by using the geometric symmetry of the cylindrical scanner. To implement this strategy, polar voxel basis functions were used to represent the object, resulting in a block-circulant system matrix. Simulation studies using a breast PET scanner model with ring geometry demonstrated improved contrast at a 45% lower noise level and a 1.5- to 3-fold improvement in resolution compared with MLEM reconstruction using a simple line-integral model. The GATE-based system matrix reconstruction technique promises to improve resolution and noise performance and to reduce image distortion at the FOV periphery compared with line-integral-based system matrix reconstruction.
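
    To make the reconstruction step concrete, the following Python sketch applies the MLEM update with a toy dense system matrix; the GATE-derived matrix, polar voxel basis and block-circulant storage of the actual work are not reproduced, and all arrays are random stand-ins.

        # MLEM update with a toy system matrix A (detector bins x voxels):
        #   x_{k+1} = x_k / (A^T 1) * A^T ( y / (A x_k) )
        import numpy as np

        rng = np.random.default_rng(5)
        n_bins, n_vox = 200, 100
        A = rng.random((n_bins, n_vox)) * (rng.random((n_bins, n_vox)) < 0.1)  # sparse-ish
        x_true = rng.random(n_vox)
        y = rng.poisson(A @ x_true * 50)                   # noisy projection data

        x = np.ones(n_vox)
        sens = A.T @ np.ones(n_bins)                       # sensitivity image A^T 1
        for _ in range(50):
            proj = A @ x
            proj[proj == 0] = 1e-12                        # avoid division by zero
            x *= (A.T @ (y / proj)) / np.maximum(sens, 1e-12)

        print("correlation with truth:", np.corrcoef(x, x_true)[0, 1])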

  19. LOW RATE SPACE-TIME TRELLIS CODES IN POWER LIMITED WIRELESS COMMUNICATION SYSTEMS

    Institute of Scientific and Technical Information of China (English)

    Wu Gang; Chen Ming; Wang Haifeng; Cheng Shixin

    2002-01-01

    Space-time trellis codes can achieve the best tradeoff among bandwidth efficiency, diversity gain, constellation size and trellis complexity. In this paper, some optimum low-rate space-time trellis codes are proposed. Performance analysis and simulation show that the low-rate space-time trellis codes outperform space-time block codes concatenated with a convolutional code at the same bandwidth efficiency, and are more suitable for power-limited wireless communication systems.

  20. Analysis the Performance of Coded WSK-DWDM Transmission System

    Directory of Open Access Journals (Sweden)

    Bobby Barua

    2012-12-01

    Full Text Available Dense Wavelength Division Multiplexing (DWDM) refers to systems with more than eight active wavelengths per fiber. High data rates as well as long spans between amplifiers in a chain require high optical power per channel to satisfy the signal-to-noise ratio (SNR) requirements. In DWDM systems with long repeater-less spans, the simultaneous requirements of high launched power and low-dispersion fibers lead to the generation of new waves by four-wave mixing (FWM), which degrades the performance of a multi-channel transmission system. Several methods have been proposed to mitigate the effect of FWM crosstalk, but these works consider only the binary WSK scheme, although M-ary WSK (M>2) schemes have higher spectral efficiency than the binary WSK system. Moreover, the BER performance of an M-ary WDM system is not satisfactory in the presence of FWM. Therefore, in this paper we include the effect of FWM on the performance of an M-ary WDM system and try to mitigate the effect by employing an energy-efficient convolutional code in a normal dispersive fiber as well as in a dispersion-shifted fiber (DSF).

  1. Evaluation and implementation of QR Code Identity Tag system for Healthcare in Turkey

    OpenAIRE

    UZUN, Vassilya; BILGIN, Sami

    2016-01-01

    For this study, we designed a QR Code Identity Tag system to integrate into the Turkish healthcare system. This system provides QR code-based medical identification alerts and an in-hospital patient identification system. Every member of the medical system is assigned a unique QR Code Tag; to facilitate medical identification alerts, the QR Code Identity Tag can be worn as a bracelet or necklace or carried as an ID card. Patients must always possess the QR Code Identity bracelets within hospi...

  2. PERFORMANCE EVALUATION OF LOW DENSITY PARITY CHECK CODES FOR DIGITAL RADIO MONDIALE (DRM) SYSTEM

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    In the Digital Radio Mondiale (DRM) system, achieving good audio quality becomes a challenge due to its limited bandwidth of 9 or 10 kHz and the very bad fading channels. Therefore, DRM needs highly efficient channel coding schemes. This paper proposes schemes which use Low-Density Parity-Check (LDPC) coded Bit-Interleaved Coded Modulation (BICM) for the implementation of DRM systems. Simulation results show that the proposed system is more efficient than the Rate Compatible Punctured Convolutional (RCPC) coded DRM system on various broadcast channels, and may be recommended as a coding technology for Digital Amplitude Modulation Broadcasting (DAMB) systems of China.

  3. Performance Evaluation of HARQ Technique with UMTS Turbo Code

    Directory of Open Access Journals (Sweden)

    S. S. Brkić

    2011-11-01

    Full Text Available The hybrid automatic repeat request (HARQ) technique is an error control principle which combines an error-correcting code and an automatic repeat request (ARQ) procedure within the same transmission system. In this paper, using a Monte Carlo simulation process, the characteristics of the HARQ technique are determined for the case of the Universal Mobile Telecommunication System (UMTS) turbo code.
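
    A minimal Python sketch of the kind of Monte Carlo link simulation described above, with a fixed per-attempt frame-error probability standing in for an actual UMTS turbo encoder/decoder chain; the retry limit and error probability are illustrative assumptions.

        # Monte Carlo sketch of a simple HARQ link: each frame is retransmitted
        # until decoding succeeds or the retry limit is reached.
        import numpy as np

        rng = np.random.default_rng(6)
        frames, max_tx, p_frame_error = 100_000, 4, 0.3

        tx_counts, failures = [], 0
        for _ in range(frames):
            for attempt in range(1, max_tx + 1):
                if rng.random() > p_frame_error:           # decoding success
                    tx_counts.append(attempt)
                    break
            else:
                failures += 1
                tx_counts.append(max_tx)

        print("mean transmissions per frame:", np.mean(tx_counts))
        print("residual frame error rate:", failures / frames)
        print("throughput (info frames per transmission):", (frames - failures) / sum(tx_counts))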

  4. Development of multi-physics code systems based on the reactor dynamics code DYN3D

    Energy Technology Data Exchange (ETDEWEB)

    Kliem, Soeren; Gommlich, Andre; Grahn, Alexander; Rohde, Ulrich [Helmholtz-Zentrum Dresden-Rossendorf e.V., Dresden (Germany); Schuetze, Jochen [ANSYS Germany GmbH, Darmstadt (Germany); Frank, Thomas [ANSYS Germany GmbH, Otterfing (Germany); Gomez Torres, Armando M.; Sanchez Espinoza, Victor Hugo [Karlsruher Institut fuer Technologie (KIT), Eggenstein-Leopoldshafen (Germany)

    2011-07-15

    The reactor dynamics code DYN3D has been coupled with the CFD code ANSYS CFX and the 3D thermal hydraulic core model FLICA4. In the coupling with ANSYS CFX, DYN3D calculates the neutron kinetics and the fuel behavior including the heat transfer to the coolant. The physical data interface between the codes is the volumetric heat release rate into the coolant. In the coupling with FLICA4 only the neutron kinetics module of DYN3D is used. Fluid dynamics and related transport phenomena in the reactor's coolant and the fuel behavior are calculated by FLICA4. The correctness of the coupling of DYN3D with both thermal hydraulic codes was verified by the calculation of different test problems. These test problems were set up in such a way that comparison with the DYN3D stand-alone code was possible. This included steady-state and transient calculations of a mini-core consisting of nine real-size PWR fuel assemblies with ANSYS CFX/DYN3D, as well as mini-core and full-core steady-state calculations using FLICA4/DYN3D. (orig.)

  5. Verification of ARES transport code system with TAKEDA benchmarks

    Science.gov (United States)

    Zhang, Liang; Zhang, Bin; Zhang, Penghe; Chen, Mengteng; Zhao, Jingchang; Zhang, Shun; Chen, Yixue

    2015-10-01

    Neutron transport modeling and simulation are central to many areas of nuclear technology, including reactor core analysis, radiation shielding and radiation detection. In this paper the series of TAKEDA benchmarks is modeled to verify the criticality calculation capability of ARES, a discrete ordinates neutral particle transport code system. The SALOME platform is coupled with ARES to provide geometry modeling and mesh generation functions. The Koch-Baker-Alcouffe parallel sweep algorithm is applied to accelerate the traditional transport calculation process. The results show that the eigenvalues calculated by ARES are in excellent agreement with the reference values presented in NEACRP-L-330, with a difference of less than 30 pcm except for the first case of model 3. Additionally, ARES provides accurate flux distributions compared to reference values, with a deviation of less than 2% for region-averaged fluxes in all cases. All of this confirms the feasibility of the ARES-SALOME coupling and demonstrates that ARES performs well in criticality calculations.

  6. Hybrid Compton camera/coded aperture imaging system

    Science.gov (United States)

    Mihailescu, Lucian [Livermore, CA; Vetter, Kai M [Alameda, CA

    2012-04-10

    A system in one embodiment includes an array of radiation detectors; and an array of imagers positioned behind the array of detectors relative to an expected trajectory of incoming radiation. A method in another embodiment includes detecting incoming radiation with an array of radiation detectors; detecting the incoming radiation with an array of imagers positioned behind the array of detectors relative to a trajectory of the incoming radiation; and performing at least one of Compton imaging using at least the imagers and coded aperture imaging using at least the imagers. A method in yet another embodiment includes detecting incoming radiation with an array of imagers positioned behind an array of detectors relative to a trajectory of the incoming radiation; and performing Compton imaging using at least the imagers.

  7. Biometric iris image acquisition system with wavefront coding technology

    Science.gov (United States)

    Hsieh, Sheng-Hsun; Yang, Hsi-Wen; Huang, Shao-Hung; Li, Yung-Hui; Tien, Chung-Hao

    2013-09-01

    Biometric signatures for identity recognition have been practiced for centuries. The personal attributes used for a biometric identification system can be classified into two areas: one is based on physiological attributes, such as DNA, facial features, retinal vasculature, fingerprint, hand geometry and iris texture; the other depends on individual behavioral attributes, such as signature, keystroke, voice and gait style. Among these features, iris recognition is one of the most attractive approaches due to its randomness, texture stability over a lifetime, high entropy density and non-invasive acquisition. While the performance of iris recognition on high-quality images is well investigated, few studies have addressed how iris recognition performs on non-ideal image data, especially when the data are acquired in challenging conditions, such as a long working distance, dynamic movement of subjects or uncontrolled illumination. There are three main contributions in this paper. Firstly, the optical system parameters, such as magnification and field of view, were optimally designed through first-order optics. Secondly, the irradiance constraints were derived from the optical conservation theorem. Through the relationship between the subject and the detector, we could estimate the limit on the working distance when the camera lens and CCD sensor are known. The working distance is set to 3 m in our system, with a pupil diameter of 86 mm and a CCD irradiance of 0.3 mW/cm2. Finally, we employed a hybrid scheme combining eye tracking with a pan-and-tilt system, wavefront coding technology, filter optimization and post-signal recognition to implement a robust iris recognition system in dynamic operation. The blurred image was restored to ensure recognition accuracy over a 3 m working distance with 400 mm focal length and F/6.3 optics. The simulation result as well as experiment validates the proposed code

  8. Validation and simulation of a regulated survey system through Monte Carlo techniques

    Directory of Open Access Journals (Sweden)

    Asier Lacasta Soto

    2015-07-01

    Full Text Available Channel flow covers long distances and exhibits variable temporal behaviour. It is usually regulated by hydraulic elements such as lateral gates to provide a correct water supply. The dynamics of this kind of flow is governed by a system of partial differential equations named the shallow water model, which has to be complemented with a simplified formulation for the gates. The full set of equations forms a non-linear system that can only be solved numerically. Here, an explicit upwind numerical scheme in finite volumes able to solve all types of flow regimes is used. The formulation of the hydraulic structures (lateral gates) introduces parameters with some uncertainty. Hence, these parameters are calibrated with a Monte Carlo algorithm, obtaining associated coefficients for each gate. They are then checked using real cases provided by the monitoring equipment of the Pina de Ebro channel located in Zaragoza.
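
    A hedged Python sketch of the Monte Carlo calibration idea: candidate discharge coefficients are sampled for each lateral gate and scored against observed discharges using a simple free-flow sluice-gate law that stands in for the shallow-water solver. Gate dimensions, heads and observations are invented for illustration.

        import numpy as np

        rng = np.random.default_rng(7)
        observed_q = np.array([1.8, 2.6, 1.1])      # "measured" gate discharges (m^3/s)
        openings = np.array([0.4, 0.6, 0.25])       # gate openings (m)
        head = np.array([1.2, 1.3, 1.1])            # head difference over each gate (m)
        width = 1.5                                 # gate width (m), illustrative

        def gate_discharge(cd, a, h):
            # Free-flow sluice-gate law: Q = Cd * b * a * sqrt(2 g h)
            return cd * width * a * np.sqrt(2 * 9.81 * h)

        best_cd, best_err = None, np.inf
        for _ in range(20_000):
            cd = rng.uniform(0.4, 0.9, size=3)       # one candidate coefficient per gate
            err = np.sum((gate_discharge(cd, openings, head) - observed_q) ** 2)
            if err < best_err:
                best_cd, best_err = cd, err

        print("calibrated coefficients:", np.round(best_cd, 3), "RSS:", round(best_err, 4))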

  9. Algorithm and application of Monte Carlo simulation for multi-dispersive copolymerization system

    Institute of Scientific and Technical Information of China (English)

    凌君; 沈之荃; 陈万里

    2002-01-01

    A Monte Carlo algorithm has been established for a multi-dispersive copolymerization system, based on experimental data of copolymer molecular weight and dispersion from GPC measurements. The program simulates the insertion of every monomer unit and records the structure and microscopic sequence of every chain of various lengths. It has been applied successfully to the ring-opening copolymerization of 2,2-dimethyltrimethylene carbonate (DTC) with δ-caprolactone (δ-CL). The simulation coincides with the experimental results and provides microscopic data on triad fractions, lengths of homopolymer segments, etc., which are difficult to obtain by experiment. The algorithm also provides a uniform framework for copolymerization studies under other, more complicated mechanisms.
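
    A minimal Python sketch of Monte Carlo chain growth for a two-monomer copolymerization under the terminal model, recording chain sequences and triad fractions as described above. The reactivity ratios, feed composition and chain lengths are illustrative assumptions, not the DTC/CL data.

        # Monte Carlo copolymer chain growth under the terminal model: the
        # probability of adding monomer A or B depends on the chain-end unit.
        import numpy as np
        from collections import Counter

        rng = np.random.default_rng(8)
        rA, rB = 2.0, 0.5          # reactivity ratios (illustrative)
        fA = 0.5                   # mole fraction of monomer A in the feed
        chain_len, n_chains = 500, 200

        def add_prob(end):
            # P(add A | chain end) from the terminal model with a constant feed
            x = fA / (1 - fA)
            return (rA * x) / (rA * x + 1) if end == "A" else x / (x + rB)

        triads = Counter()
        for _ in range(n_chains):
            chain = ["A" if rng.random() < fA else "B"]
            for _ in range(chain_len - 1):
                chain.append("A" if rng.random() < add_prob(chain[-1]) else "B")
            for i in range(len(chain) - 2):
                triads["".join(chain[i:i + 3])] += 1

        total = sum(triads.values())
        for t in sorted(triads):
            print(t, round(triads[t] / total, 3))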

  10. Monte Carlo Studies for the Calibration System of the GERDA Experiment

    CERN Document Server

    Baudis, Laura; Froborg, Francis; Tarka, Michal

    2013-01-01

    The GERmanium Detector Array, GERDA, searches for neutrinoless double beta decay in Ge-76 using bare high-purity germanium detectors submerged in liquid argon. For the calibration of these detectors, gamma-emitting sources have to be lowered from their parking position on top of the cryostat over more than five meters down to the germanium crystals. With the help of Monte Carlo simulations, the relevant parameters of the calibration system were determined. It was found that three Th-228 sources with an activity of 20 kBq each, at two different vertical positions, will be necessary to reach sufficient statistics in all detectors in less than four hours of calibration time. These sources will contribute to the background of the experiment with a total of (1.07 ± 0.04(stat) +0.13/-0.19(sys)) × 10^{-4} cts/(keV kg yr) when shielded from below with 6 cm of tantalum in the parking position.

  11. Monte Carlo simulation of glandular dose in a dedicated breast CT system

    Institute of Scientific and Technical Information of China (English)

    TANG Xiao; WEI Long; ZHAO Wei; WANG Yan-Fang; SHU Hang; SUN Cui-Li; WEI Cun-Feng; CAO Da-Quan; QUE Jie-Min; SHI Rong-Jian

    2012-01-01

    A dedicated breast CT system (DBCT) is a new method for breast cancer detection proposed in recent years. In this paper, the glandular dose in the DBCT is simulated using the Monte Carlo method. The phantom shape is a half ellipsoid, and a series of phantoms with different sizes, shapes and compositions was constructed. In order to optimize the spectra, monoenergetic X-ray beams of 5-80 keV were used in the simulation. The dose distribution of a breast phantom was studied: a higher-energy beam generated a more uniform distribution, and the outer parts received more dose than the inner parts. For polyenergetic spectra, four spectra with Al filters of different thicknesses were simulated, and the polyenergetic glandular dose was calculated as a spectrum-weighted combination of the monoenergetic doses.
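
    The spectral weighting step can be sketched in a few lines of Python: the polyenergetic glandular dose is obtained as a spectrum-weighted combination of precomputed monoenergetic doses. The spectrum shape and monoenergetic dose values below are placeholders, not the simulated DBCT data.

        import numpy as np

        energies_keV = np.arange(10, 85, 5)                        # simulated monoenergetic beams
        mono_dose = 1e-3 * np.sqrt(energies_keV)                   # stand-in dose per unit fluence
        spectrum = np.exp(-0.5 * ((energies_keV - 45) / 12) ** 2)  # stand-in filtered spectrum
        spectrum /= spectrum.sum()                                 # normalise the weights

        poly_dose = np.sum(spectrum * mono_dose)                   # weighted combination
        print("polyenergetic glandular dose (arbitrary units):", poly_dose)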

  12. A simple model of optimal population coding for sensory systems.

    Science.gov (United States)

    Doi, Eizaburo; Lewicki, Michael S

    2014-08-01

    A fundamental task of a sensory system is to infer information about the environment. It has long been suggested that an important goal of the first stage of this process is to encode the raw sensory signal efficiently by reducing its redundancy in the neural representation. Some redundancy, however, would be expected because it can provide robustness to noise inherent in the system. Encoding the raw sensory signal itself is also problematic, because it contains distortion and noise. The optimal solution would be constrained further by limited biological resources. Here, we analyze a simple theoretical model that incorporates these key aspects of sensory coding, and apply it to conditions in the retina. The model specifies the optimal way to incorporate redundancy in a population of noisy neurons, while also optimally compensating for sensory distortion and noise. Importantly, it allows an arbitrary input-to-output cell ratio between sensory units (photoreceptors) and encoding units (retinal ganglion cells), providing predictions of retinal codes at different eccentricities. Compared to earlier models based on redundancy reduction, the proposed model conveys more information about the original signal. Interestingly, redundancy reduction can be near-optimal when the number of encoding units is limited, such as in the peripheral retina. We show that there exist multiple, equally-optimal solutions whose receptive field structure and organization vary significantly. Among these, the one which maximizes the spatial locality of the computation, but not the sparsity of either synaptic weights or neural responses, is consistent with known basic properties of retinal receptive fields. The model further predicts that receptive field structure changes less with light adaptation at higher input-to-output cell ratios, such as in the periphery.

  13. Reevaluation of JACS code system benchmark analyses of the heterogeneous system. Fuel rods in U+Pu nitric acid solution system

    Energy Technology Data Exchange (ETDEWEB)

    Takada, Tomoyuki; Miyoshi, Yoshinori; Katakura, Jun-ichi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    2003-03-01

    In order to evaluate the accuracy of criticality calculations performed with the combination of the multi-group constant library MGCL and the 3-dimensional Monte Carlo code KENO-IV, both part of the criticality safety evaluation code system JACS, benchmark calculations were carried out from 1980 to 1982. In some of these cases, the neutron multiplication factor calculated for heterogeneous systems was less than 0.95. In this report, the heterogeneous U+Pu nitric acid solution systems containing neutron poison described in JAERI-M 9859 are recalculated in order to identify the cause. The present study has shown that the k{sub eff} value less than 0.95 given in JAERI-M 9859 is caused by the fact that the water reflector below a cylindrical container was not taken into consideration in the KENO-IV calculation model. By taking the water reflector into account, the KENO-IV calculation gives a k{sub eff} value greater than 0.95 and good agreement with the experiment. (author)

  14. Novel BCH Code Design for Mitigation of Phase Noise Induced Cycle Slips in DQPSK Systems

    DEFF Research Database (Denmark)

    Leong, M. Y.; Larsen, Knud J.; Jacobsen, G.

    2014-01-01

    We show that by proper code design, phase noise induced cycle slips causing an error floor can be mitigated for 28 Gbaud DQPSK systems. The performance of BCH codes is investigated in terms of the required overhead.

  15. Variable weight Khazani-Syed code using hybrid fixed-dynamic technique for optical code division multiple access system

    Science.gov (United States)

    Anas, Siti Barirah Ahmad; Seyedzadeh, Saleh; Mokhtar, Makhfudzah; Sahbudin, Ratna Kalos Zakiah

    2016-10-01

    Future Internet consists of a wide spectrum of applications with different bit rates and quality of service (QoS) requirements. Prioritizing the services is essential to ensure that the delivery of information is at its best. Existing technologies have demonstrated how service differentiation techniques can be implemented in optical networks using data link and network layer operations. However, a physical layer approach can further improve system performance at a prescribed received signal quality by applying control at the bit level. This paper proposes a coding algorithm to support optical domain service differentiation using spectral amplitude coding techniques within an optical code division multiple access (OCDMA) scenario. A particular user or service has a varying weight applied to obtain the desired signal quality. The properties of the new code are compared with other OCDMA codes proposed for service differentiation. In addition, a mathematical model is developed for performance evaluation of the proposed code using two different detection techniques, namely direct decoding and complementary subtraction.

  16. A study on the interlink of CANDU safety analysis codes with development of GUI system

    Energy Technology Data Exchange (ETDEWEB)

    Lee, J. J.; Jeo, Y. J.; Park, Q. C. [Seoul National Univ., Seoul (Korea, Republic of); Kim, H. T.; Min, B. J. [KAERI, Taejon (Korea, Republic of)

    2003-10-01

    In order to improve the CANDU safety analysis code system, the interlink of the containment analysis code PRESCON2 with the system thermal-hydraulics analysis code CATHENA has been implemented, together with the development of a GUI system. Before the GUI development, we partly modified the two codes to optimize them for the PC environment. The interlink of the two codes is achieved by introducing three interlinking variables: mass flux, mixture enthalpy, and mixture specific volume. To guarantee the robustness of the codes, the two codes are externally linked by using the GUI system. The GUI system provides many user-friendly functions and will be improved step by step. This study is expected to improve the safety assessment system and technology for CANDU NPPs.

  17. PERFORMANCE ANALYSIS OF CHANNEL ESTIMATION FOR LDPC-CODED OFDM SYSTEM IN MULTIPATH FADING CHANNEL

    Institute of Scientific and Technical Information of China (English)

    Zhu Qi; Li Hao; Feng Guangzeng

    2006-01-01

    In this paper, channel estimation techniques for Orthogonal Frequency Division Multiplexing (OFDM) systems based on pilot arrangement are studied, and Low Density Parity Check (LDPC) codes are applied to the IEEE 802.16a system with OFDM modulation. First, the influence of channel estimation schemes on the LDPC-coded OFDM system is investigated in static and multipath fading channels. According to the different propagation environments in the 802.16a system, a dynamic channel estimation scheme is proposed. A good irregular LDPC code is designed with a code rate of 1/2 and a code length of 1200. Simulation results show that the performance of the LDPC-coded OFDM system proposed in this paper is better than that of the convolutional Turbo coded OFDM system proposed in the IEEE 802.16a standard.

  18. MCMini: Monte Carlo on GPGPU

    Energy Technology Data Exchange (ETDEWEB)

    Marcus, Ryan C. [Los Alamos National Laboratory

    2012-07-25

    MCMini is a proof of concept that demonstrates the possibility of Monte Carlo neutron transport using OpenCL, with a focus on performance. This implementation, written in C, shows that tracing particles and calculating reactions on a 3D mesh can be done in a highly scalable fashion. These results demonstrate a potential path forward for MCNP or other Monte Carlo codes.

  19. Inhomogeneity effect in Varian Trilogy Clinac iX 10 MV photon beam using EGSnrc and Geant4 code system

    Science.gov (United States)

    Yani, S.; Rhani, M. F.; Haryanto, F.; Arif, I.

    2016-08-01

    Treatment fields include tissues other than water-equivalent tissue (soft tissue, bones, lungs, etc.). The resulting inhomogeneity effect can be investigated by Monte Carlo (MC) simulation. MC simulation of the radiation transport in an absorbing medium is the most accurate method for dose calculation in radiotherapy. The aim of this work is to evaluate the effect of an inhomogeneous phantom on dose calculations in photon beam radiotherapy obtained with different MC codes. The MC code systems EGSnrc and Geant4 were used in this study. The inhomogeneity phantom measures 39.5 × 30.5 × 30 cm3 and is made of 4 material slabs (12.5 cm water, 10 cm aluminium, 5 cm lung and 12.5 cm water). Simulations were performed for a field size of 4 × 4 cm2 at SSD 100 cm. The spectral distribution of the Varian Trilogy Clinac iX 10 MV beam was used. Percentage depth dose (PDD) and dose profiles were investigated in this research. The effects of inhomogeneities on radiation dose distributions depend on the amount, density and atomic number of the inhomogeneity, as well as on the quality of the photon beam. Good agreement between the dose distributions from the EGSnrc and Geant4 code systems in the inhomogeneous phantom was observed, with dose differences of around 5% and 7% for depth doses and dose profiles, respectively.
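
    A hedged Python sketch of the comparison itself: two percentage-depth-dose curves are normalised to their maxima and their local percent difference is reported. The synthetic curves merely stand in for EGSnrc and Geant4 output.

        import numpy as np

        depth_cm = np.linspace(0, 30, 121)

        def fake_pdd(noise_seed):
            rng = np.random.default_rng(noise_seed)
            build_up = 1 - np.exp(-depth_cm / 1.5)           # crude build-up region
            attenuation = np.exp(-0.045 * depth_cm)          # crude exponential fall-off
            return build_up * attenuation * (1 + rng.normal(0, 0.01, depth_cm.size))

        pdd_a = fake_pdd(0)     # stands in for the EGSnrc curve
        pdd_b = fake_pdd(1)     # stands in for the Geant4 curve

        pdd_a = 100 * pdd_a / pdd_a.max()
        pdd_b = 100 * pdd_b / pdd_b.max()
        diff = 100 * (pdd_a - pdd_b) / np.maximum(pdd_b, 1e-9)   # local percent difference

        print("max |difference| beyond build-up (%):", round(np.abs(diff[depth_cm > 3]).max(), 2))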

  20. Efficient implementation of the Monte Carlo method for lattice gauge theory calculations on the floating point systems FPS-164

    Energy Technology Data Exchange (ETDEWEB)

    Moriarty, K.J.M. (Royal Holloway Coll., Englefield Green (UK). Dept. of Mathematics); Blackshaw, J.E. (Floating Point Systems UK Ltd., Bracknell)

    1983-04-01

    The computer program calculates the average action per plaquette for SU(6)/Z/sub 6/ lattice gauge theory. By considering quantum field theory on a space-time lattice, the ultraviolet divergences of the theory are regulated through the finite lattice spacing. The continuum theory results can be obtained by a renormalization group procedure. Making use of the FPS Mathematics Library (MATHLIB), we are able to generate an efficient code for the Monte Carlo algorithm for lattice gauge theory calculations which compares favourably with the performance of the CDC 7600.

  1. Multilevel LDPC Codes Design for Multimedia Communication CDMA System

    Directory of Open Access Journals (Sweden)

    Hou Jia

    2004-01-01

    Full Text Available We design multilevel coding (MLC) with a semi-bit-interleaved coded modulation (BICM) scheme based on low density parity check (LDPC) codes. Different from traditional designs, we join MLC and BICM together by using Gray mapping, which is suitable for transmitting data over several equivalent channels with different code rates. To perform well at a signal-to-noise ratio (SNR) very close to the capacity of the additive white Gaussian noise (AWGN) channel, a random regular LDPC code and a simple semialgebra LDPC (SA-LDPC) code are discussed in MLC with parallel independent decoding (PID). The numerical results demonstrate that the proposed scheme could achieve both power and bandwidth efficiency.

  2. Coding of object location in the vibrissal thalamocortical system.

    Science.gov (United States)

    Yu, Chunxiu; Horev, Guy; Rubin, Naama; Derdikman, Dori; Haidarliu, Sebastian; Ahissar, Ehud

    2015-03-01

    In whisking rodents, object location is encoded at the receptor level by a combination of motor and sensory related signals. Recoding of the encoded signals can result in various forms of internal representations. Here, we examined the coding schemes occurring at the first forebrain level that receives inputs necessary for generating such internal representations--the thalamocortical network. Single units were recorded in 8 thalamic and cortical stations in artificially whisking anesthetized rats. Neuronal representations of object location generated across these stations and expressed in response latency and magnitude were classified based on graded and binary coding schemes. Both graded and binary coding schemes occurred across the entire thalamocortical network, with a general tendency of graded-to-binary transformation from thalamus to cortex. Overall, 63% of the neurons of the thalamocortical network coded object position in their firing. Thalamocortical responses exhibited a slow dynamics during which the amount of coded information increased across 4-5 whisking cycles and then stabilized. Taken together, the results indicate that the thalamocortical network contains dynamic mechanisms that can converge over time on multiple coding schemes of object location, schemes which essentially transform temporal coding to rate coding and gradual to labeled-line coding.

  3. Evaluation and implementation of QR Code Identity Tag system for Healthcare in Turkey.

    Science.gov (United States)

    Uzun, Vassilya; Bilgin, Sami

    2016-01-01

    For this study, we designed a QR Code Identity Tag system to integrate into the Turkish healthcare system. This system provides QR code-based medical identification alerts and an in-hospital patient identification system. Every member of the medical system is assigned a unique QR Code Tag; to facilitate medical identification alerts, the QR Code Identity Tag can be worn as a bracelet or necklace or carried as an ID card. Patients must always possess the QR Code Identity bracelets within hospital grounds. These QR code bracelets link to the QR Code Identity website, where detailed information is stored; a smartphone or standalone QR code scanner can be used to scan the code. The design of this system allows authorized personnel (e.g., paramedics, firefighters, or police) to access more detailed patient information than the average smartphone user: emergency service professionals are authorized to access patient medical histories to improve the accuracy of medical treatment. In Istanbul, we tested the self-designed system with 174 participants. To analyze the QR Code Identity Tag system's usability, the participants completed the System Usability Scale questionnaire after using the system.

  4. A Secure Code-Based Authentication Scheme for RFID Systems

    Directory of Open Access Journals (Sweden)

    Noureddine Chikouche

    2015-08-01

    Full Text Available Two essential problems still face Radio Frequency Identification (RFID) systems: security and the limitation of resources. Li et al. proposed a mutual authentication scheme for RFID systems in 2014, based on the Quasi Cyclic-Moderate Density Parity Check (QC-MDPC) McEliece cryptosystem, which is designed to reduce key sizes. In this paper, we find that this scheme provides neither untraceability nor forward secrecy. Furthermore, we propose an improved version of this scheme to eliminate the existing vulnerabilities of the studied scheme. It is based on the QC-MDPC McEliece cryptosystem with the plaintext padded by a random bit-string. Our work also includes a security comparison between our improved scheme and different code-based RFID authentication schemes. We prove the secrecy and mutual authentication properties with the AVISPA (Automated Validation of Internet Security Protocols and Applications) tools. Concerning performance, our scheme is suitable for low-cost tags with resource limitations.

  5. SEMI-BLIND CHANNEL ESTIMATION OF MULTIPLE-INPUT/MULTIPLE-OUTPUT SYSTEMS BASED ON MARKOV CHAIN MONTE CARLO METHODS

    Institute of Scientific and Technical Information of China (English)

    Jiang Wei; Xiang Haige

    2004-01-01

    This paper addresses the issue of channel estimation in a Multiple-Input/Multiple-Output (MIMO) system. A Markov Chain Monte Carlo (MCMC) method is employed to jointly estimate the Channel State Information (CSI) and the transmitted signals. The derived algorithms work well at low Signal-to-Noise Ratio (SNR). Simulation results are presented to demonstrate their effectiveness.

  6. ARCHER, a New Monte Carlo Software Tool for Emerging Heterogeneous Computing Environments

    Science.gov (United States)

    Xu, X. George; Liu, Tianyu; Su, Lin; Du, Xining; Riblett, Matthew; Ji, Wei; Gu, Deyang; Carothers, Christopher D.; Shephard, Mark S.; Brown, Forrest B.; Kalra, Mannudeep K.; Liu, Bob

    2014-06-01

    The Monte Carlo radiation transport community faces a number of challenges associated with peta- and exa-scale computing systems that rely increasingly on heterogeneous architectures involving hardware accelerators such as GPUs. Existing Monte Carlo codes and methods must be strategically upgraded to meet emerging hardware and software needs. In this paper, we describe the development of a software package called ARCHER (Accelerated Radiation-transport Computations in Heterogeneous EnviRonments), which is designed as a versatile testbed for future Monte Carlo codes. Preliminary results from five projects in nuclear engineering and medical physics are presented.

  7. SAN CARLOS APACHE PAPERS.

    Science.gov (United States)

    ROESSEL, ROBERT A., JR.

    THE FIRST SECTION OF THIS BOOK COVERS THE HISTORICAL AND CULTURAL BACKGROUND OF THE SAN CARLOS APACHE INDIANS, AS WELL AS AN HISTORICAL SKETCH OF THE DEVELOPMENT OF THEIR FORMAL EDUCATIONAL SYSTEM. THE SECOND SECTION IS DEVOTED TO THE PROBLEMS OF TEACHERS OF THE INDIAN CHILDREN IN GLOBE AND SAN CARLOS, ARIZONA. IT IS DIVIDED INTO THREE PARTS--(1)…

  8. Determination of phase equilibria in confined systems by open pore cell Monte Carlo method.

    Science.gov (United States)

    Miyahara, Minoru T; Tanaka, Hideki

    2013-02-28

    We present a modification of the molecular dynamics simulation method with a unit pore cell with imaginary gas phase [M. Miyahara, T. Yoshioka, and M. Okazaki, J. Chem. Phys. 106, 8124 (1997)] designed for determination of phase equilibria in nanopores. This new method is based on a Monte Carlo technique and it combines the pore cell, opened to the imaginary gas phase (open pore cell), with a gas cell to measure the equilibrium chemical potential of the confined system. The most striking feature of our new method is that the confined system is steadily led to a thermodynamically stable state by forming concave menisci in the open pore cell. This feature of the open pore cell makes it possible to obtain the equilibrium chemical potential with only a single simulation run, unlike existing simulation methods, which need a number of additional runs. We apply the method to evaluate the equilibrium chemical potentials of confined nitrogen in carbon slit pores and silica cylindrical pores at 77 K, and show that the results are in good agreement with those obtained by two conventional thermodynamic integration methods. Moreover, we also show that the proposed method can be particularly useful for determining vapor-liquid and vapor-solid coexistence curves and the triple point of the confined system.

  9. TASS/SMR Code Topical Report for SMART Plant, Vol. I: Code Structure, System Models, and Solution Methods

    Energy Technology Data Exchange (ETDEWEB)

    Chung, Young Jong; Kim, Soo Hyoung; Kim, See Darl (and others)

    2008-10-15

    The TASS/SMR code has been developed with domestic technologies for the safety analysis of the SMART plant, which is an integral-type pressurized water reactor. It can be applied to the analysis of design basis accidents of the SMART plant, including non-LOCA events and LOCA (loss of coolant accident). The TASS/SMR code can be applied to any plant regardless of the structural characteristics of the reactor since the code solves the same governing equations for both the primary and secondary systems. The code has been developed to meet the requirements of a safety analysis code. This report describes the overall structure of TASS/SMR, input processing, and the processes of steady-state and transient calculations. In addition, the basic differential equations, finite difference equations, state relationships, and constitutive models are described in the report. First, the conservation equations, the discretization process for numerical analysis, and the search method for state relationships are described. Then, a core power model, heat transfer models, physical models for various components, and control and trip models are explained.

  10. Computer code system for the R and D of nuclear fuel cycle with fast reactor. 5. Development and application of reactor analysis code system

    Energy Technology Data Exchange (ETDEWEB)

    Yokoyama, Kenji; Hazama, Taira; Chiba, Go; Ohki, Shigeo; Ishikawa, Makoto [Japan Nuclear Cycle Development Inst., Oarai, Ibaraki (Japan). Oarai Engineering Center

    2002-12-01

    In the core design of fast reactors (FRs), it is very important to improve the prediction accuracy of the nuclear characteristics for both reducing cost and ensuring reliability of FR plants. A nuclear reactor analysis code system for FRs has been developed by the Japan Nuclear Cycle Development Institute (JNC). This paper describes the outline of the calculation models and methods in the system consisting of several analysis codes, such as the cell calculation code CASUP, the core calculation code TRITAC and the sensitivity analysis code SAGEP. Some examples of verification results and improvement of the design accuracy are also introduced based on the measurement data from critical assemblies, e.g., the JUPITER experiment (USA/Japan), FCA (Japan), MASURCA (France), and BFS (Russia). Furthermore, application fields and future plans, such as the development of new generation nuclear constants and applications to MA·FP transmutation, are described. (author)

  11. Environmental performance of green building code and certification systems.

    Science.gov (United States)

    Suh, Sangwon; Tomar, Shivira; Leighton, Matthew; Kneifel, Joshua

    2014-01-01

    We examined the potential life-cycle environmental impact reduction of three green building code and certification (GBCC) systems: LEED, ASHRAE 189.1, and IgCC. A recently completed whole-building life cycle assessment (LCA) database from NIST was applied to a prototype building model specification by NREL. EPA's TRACI 2.0 was used for life cycle impact assessment (LCIA). The results showed that the baseline building model generates about 18 thousand metric tons of CO2-equivalent greenhouse gases (GHGs) and consumes 6 terajoules (TJ) of primary energy and 328 million liters of water over its life cycle. Overall, GBCC-compliant building models generated 0% to 25% less environmental impact than the baseline case (average 14% reduction). The largest reductions were associated with acidification (25%), human health-respiratory (24%), and global warming (GW) (22%), while no reductions were observed for ozone layer depletion (OD) and land use (LU). The performances of the three GBCC-compliant building models, measured in life-cycle impact reduction, were comparable. A sensitivity analysis showed that the comparative results were reasonably robust, although some results were relatively sensitive to behavioral parameters, including employee transportation and purchased electricity during the occupancy phase (average sensitivity coefficients 0.26-0.29).

  12. Error correction coding for frequency-hopping multiple-access spread spectrum communication systems

    Science.gov (United States)

    Healy, T. J.

    1982-01-01

    A communication system which would effect channel coding for frequency-hopped multiple-access is described. It is shown that in theory coding can increase the spectrum utilization efficiency of a system with mutual interference to 100 percent. Various coding strategies are discussed and some initial comparisons are given. Some of the problems associated with implementing the type of system described here are discussed.

  13. Monte Carlo simulations of morphological transitions in PbTe/CdTe immiscible material systems

    Science.gov (United States)

    Mińkowski, Marcin; Załuska-Kotur, Magdalena A.; Turski, Łukasz A.; Karczewski, Grzegorz

    2016-09-01

    The crystal growth of the immiscible PbTe/CdTe multilayer system is analyzed as an example of a self-organizing process. The immiscibility of the constituents leads to the observed morphological transformations, such as the anisotropy-driven formation of quantum dots and nanowires, and to phase separation at the highest temperatures. The proposed model incorporates bulk and surface diffusion together with an anisotropic mobility of the material components. We analyze its properties by kinetic Monte Carlo simulations and show that it is able to reproduce all of the structures observed experimentally during PbTe/CdTe growth. We show that all of the dynamical processes studied play an important role in the creation of zero-, one-, two-, and, finally, three-dimensional structures. The shape of the grown structures differs for relatively thick multilayers, where bulk diffusion cooperates with the anisotropic mobility, compared with annealed structures, for which only isotropic bulk diffusion governs the process; it differs again for thin multilayers, where surface diffusion is the most decisive factor. We compare our results with the experimentally grown systems and show that the proposed model explains the diversity of observed structures.

  14. Kinetic Monte Carlo and cellular particle dynamics simulations of multicellular systems

    Science.gov (United States)

    Flenner, Elijah; Janosi, Lorant; Barz, Bogdan; Neagu, Adrian; Forgacs, Gabor; Kosztin, Ioan

    2012-03-01

    Computer modeling of multicellular systems has been a valuable tool for interpreting and guiding in vitro experiments relevant to embryonic morphogenesis, tumor growth, angiogenesis and, lately, structure formation following the printing of cell aggregates as bioink particles. Here we formulate two computer simulation methods: (1) a kinetic Monte Carlo (KMC) and (2) a cellular particle dynamics (CPD) method, which are capable of describing and predicting the shape evolution in time of three-dimensional multicellular systems during their biomechanical relaxation. Our work is motivated by the need of developing quantitative methods for optimizing postprinting structure formation in bioprinting-assisted tissue engineering. The KMC and CPD model parameters are determined and calibrated by using an original computational-theoretical-experimental framework applied to the fusion of two spherical cell aggregates. The two methods are used to predict the (1) formation of a toroidal structure through fusion of spherical aggregates and (2) cell sorting within an aggregate formed by two types of cells with different adhesivities.
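
    As a rough illustration of the kinetic Monte Carlo idea that models of this kind build on (not the authors' implementation), the sketch below selects one event with probability proportional to its rate and advances time by an exponentially distributed increment; the event list and rate values are purely hypothetical.

```python
import math
import random

def kmc_step(rates, rng=random):
    """One kinetic Monte Carlo (Gillespie-type) step.

    rates: list of non-negative event rates.
    Returns (chosen_event_index, time_increment).
    """
    total = sum(rates)
    if total <= 0.0:
        raise ValueError("no executable events")
    # Pick an event with probability rate_i / total.
    r = rng.random() * total
    acc = 0.0
    for event, rate in enumerate(rates):
        acc += rate
        if r < acc:
            break
    # Advance the clock by an exponentially distributed waiting time.
    dt = -math.log(1.0 - rng.random()) / total
    return event, dt

# Hypothetical example: three competing events (e.g. two hop directions and a detachment).
rates = [1.0, 0.4, 0.05]
t = 0.0
for _ in range(5):
    event, dt = kmc_step(rates)
    t += dt
    print(f"t = {t:.3f}, executed event {event}")
```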

  15. Improvements in the simulation of the efficiency of a HPGe detector with Monte Carlo code MCNP5; Mejoras en la simulacion de la eficiencia de un detector HPGe con el codigo Monte Carlo MCNP5

    Energy Technology Data Exchange (ETDEWEB)

    Gallardo, S.; Querol, A.; Rodenas, J.; Verdu, G.

    2014-07-01

    In this paper we propose a simulation model using the MCNP5 code and a mesh tally to improve the simulated efficiency of the detector over the energy range from 50 to 2000 keV. The mesh is built with the FMESH tally of MCNP5, which allows cells of a few microns. The photon and electron fluxes are calculated in the different cells of the mesh, which is superimposed on the detector geometry. The variation of the efficiency (related to the variation of the energy deposited in the active volume) is then analyzed. (Author)

  16. On-the-fly nuclear data processing methods for Monte Carlo simulations of fast spectrum systems

    Energy Technology Data Exchange (ETDEWEB)

    Walsh, Jon [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-08-31

    The presentation summarizes work performed over summer 2015 related to Monte Carlo simulations. A flexible probability table interpolation scheme has been implemented and tested with results comparing favorably to the continuous phase-space on-the-fly approach.
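
    As a sketch of how a probability-table lookup with interpolation can work in principle (the actual scheme summarized in this record is not reproduced here), the code below stores, for each energy grid point, a cumulative table of cross-section bands and samples a cross section at an arbitrary energy by stochastically choosing one of the two bracketing grid points with probability given by the interpolation weight. All table values are illustrative placeholders.

```python
import bisect
import random

# Illustrative probability tables: for each grid energy, a list of
# (cumulative_probability, cross_section) band pairs.
energies = [1.0e3, 2.0e3, 5.0e3]          # eV, hypothetical grid
tables = [
    [(0.5, 10.0), (0.8, 30.0), (1.0, 90.0)],
    [(0.5, 12.0), (0.8, 28.0), (1.0, 80.0)],
    [(0.5, 15.0), (0.8, 25.0), (1.0, 60.0)],
]

def sample_xs(energy, rng=random):
    """Sample a cross section (barns) at `energy` from the probability tables."""
    i = bisect.bisect_right(energies, energy) - 1
    i = max(0, min(i, len(energies) - 2))
    # Linear interpolation weight between the two bracketing grid points.
    w = (energy - energies[i]) / (energies[i + 1] - energies[i])
    table = tables[i + 1] if rng.random() < w else tables[i]
    # Sample a band from the chosen table using its cumulative probabilities.
    xi = rng.random()
    for cum_prob, xs in table:
        if xi <= cum_prob:
            return xs
    return table[-1][1]

print(sample_xs(3.0e3))
```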

  17. 48 CFR 19.303 - Determining North American Industry Classification System (NAICS) codes and size standards.

    Science.gov (United States)

    2010-10-01

    48 CFR 19.303, Federal Acquisition Regulations System (2010-10-01 edition): Determining North American Industry Classification System (NAICS) codes and size standards. (a) The contracting officer shall determine...

  18. The Virtual Monte Carlo

    CERN Document Server

    Hrivnacova, I; Berejnov, V V; Brun, R; Carminati, F; Fassò, A; Futo, E; Gheata, A; Caballero, I G; Morsch, Andreas

    2003-01-01

    The concept of Virtual Monte Carlo (VMC) has been developed by the ALICE Software Project to allow different Monte Carlo simulation programs to run without changing the user code, such as the geometry definition, the detector response simulation or input and output formats. Recently, the VMC classes have been integrated into the ROOT framework, and the other relevant packages have been separated from the AliRoot framework and can be used individually by any other HEP project. The general concept of the VMC and its set of base classes provided in ROOT will be presented. Existing implementations for Geant3, Geant4 and FLUKA and simple examples of usage will be described.

  19. Application of voxelised numerical phantoms linked to the M.C.N.P. Monte Carlo code to the realistic measurement in vivo of actinides in the lungs and contaminated wounds; Application des fantomes numeriques voxelises associes au code Monte Carlo MCNP a la mesure in vivo realiste des actinides dans les poumons et les plaies contaminees

    Energy Technology Data Exchange (ETDEWEB)

    Noelle, P

    2006-12-15

    In vivo lung counting, one of the preferred methods for monitoring people exposed to the risk of actinide inhalation, is nevertheless limited by the use of physical calibration phantoms which, for technical reasons, can only provide a rough representation of human tissue. A new approach to in vivo measurements has been developed to take advantage of advances in medical imaging and computing; this consists of numerical phantoms based on computed tomography (CT) or magnetic resonance imaging (MRI), combined with Monte Carlo computing techniques. Following the laboratory implementation of this innovative method in dedicated software called O.E.D.I.P.E., the main thrust of this thesis was to provide answers to the following question: what do numerical phantoms and new techniques like O.E.D.I.P.E. contribute to improving the calibration of low-energy in vivo counting systems? After some developments of the O.E.D.I.P.E. interface, the numerical method was validated for systems composed of four germanium detectors, the most widespread configuration in radiobioassay laboratories (a good match was found, with less than 10% variation). This study represents the first step towards a person-specific numerical calibration of counting systems, which will improve assessment of the activity retained. A second stage, focusing on an exhaustive evaluation of the uncertainties encountered in in vivo lung counting, was possible thanks to the approach offered by the previously validated O.E.D.I.P.E. software. It was shown that the uncertainties suggested by experiments in a previous study were underestimated, notably those due to morphological differences between the physical phantom and the measured person. Some improvements in the measurement procedure were then proposed, particularly new biometric equations specific to French measurement configurations that allow a more sensible choice of the calibration phantom, directly assessing the thickness of the torso plate to be added to the Livermore phantom.

  20. Combining Total Monte Carlo and Benchmarks for Nuclear Data Uncertainty Propagation on a Lead Fast Reactor's Safety Parameters

    OpenAIRE

    Alhassan, Erwin; Sjöstrand, Henrik; Duan, Junfeng; Gustavsson, Cecilia; Koning, Arjan; Pomp, Stephan; Rochman, Dimitri; Österlund, Michael

    2014-01-01

    Analyses are carried out to assess the impact of nuclear data uncertainties on some reactor safety parameters for the European Lead Cooled Training Reactor (ELECTRA) using the Total Monte Carlo method. A large number of Pu-239 random ENDF-format libraries, generated using the TALYS-based system, were processed into ACE format with the NJOY99.336 code and used as input to the Serpent Monte Carlo code to obtain distributions of reactor safety parameters. The distribution in keff obtained was compar...
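
    The Total Monte Carlo workflow amounts to running the same transport calculation many times, once per random nuclear-data library, and examining the spread of the resulting safety parameters. A minimal post-processing sketch (with made-up keff values, not the ELECTRA results) might look like:

```python
import statistics

# Hypothetical keff results, one per random ENDF/ACE library run.
keff_samples = [1.00213, 0.99987, 1.00102, 1.00054, 0.99921, 1.00178]

mean = statistics.mean(keff_samples)
std = statistics.stdev(keff_samples)   # spread attributable to nuclear data
print(f"keff = {mean:.5f} +/- {std:.5f} (nuclear-data uncertainty)")
```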

  1. Analysis of the KUCA MEU experiments using the ANL code system

    Energy Technology Data Exchange (ETDEWEB)

    Shiroya, S.; Hayashi, M.; Kanda, K.; Shibata, T.; Woodruff, W.L.; Matos, J.E.

    1982-01-01

    This paper provides some preliminary results on the analysis of the KUCA critical experiments using the ANL code system. Since this system was employed in the earlier neutronics calculations for the KUHFR, it is important to assess its capabilities for the KUHFR. The KUHFR has a unique core configuration which is difficult to model precisely with current diffusion theory codes. This paper also provides some results from a finite-element diffusion code (2D-FEM-KUR), which was developed in a cooperative research program between KURRI and JAERI. This code provides the capability to mock up a complex core configuration such as that of the KUHFR. Using the same group constants generated by the EPRI-CELL code, the results of the 2D-FEM-KUR code are compared with those of the finite-difference diffusion code DIF3D (2D), which is mainly employed in this analysis.

  2. Code Optimization in FORM

    CERN Document Server

    Kuipers, J; Vermaseren, J A M

    2013-01-01

    We describe the implementation of output code optimization in the open source computer algebra system FORM. This implementation is based on recently discovered techniques of Monte Carlo tree search to find efficient multivariate Horner schemes, in combination with other optimization algorithms, such as common subexpression elimination. For systems for which no specific knowledge is provided it performs significantly better than other methods we could compare with. Because the method has a number of free parameters, we also show some methods by which to tune them to different types of problems.
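
    To illustrate what a Horner scheme buys (this is a generic univariate example, not FORM's multivariate Monte Carlo tree search), compare the naive evaluation of a polynomial with its Horner form, which needs only one multiplication and one addition per coefficient:

```python
def horner(coeffs, x):
    """Evaluate c0 + c1*x + c2*x**2 + ... using Horner's scheme."""
    result = 0.0
    for c in reversed(coeffs):
        result = result * x + c
    return result

coeffs = [3.0, -2.0, 0.0, 5.0]        # 3 - 2x + 5x^3
assert horner(coeffs, 2.0) == 3.0 - 2.0 * 2.0 + 5.0 * 2.0 ** 3
print(horner(coeffs, 2.0))            # 39.0
```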

  3. Evaluation of the thermal neutron flux in the core of IPEN/MB-01 reactor using the code Monte Carlo (MCNP)

    Energy Technology Data Exchange (ETDEWEB)

    Salome, Jean A.D.; Cardoso, Fabiano; Faria, Rochkhudson B.; Pereira, Claubia, E-mail: jadsalome@yahoo.com.br, E-mail: fabinuclear@yahoo.com.br, E-mail: rockdefaria@yahoo.com.br, E-mail: claubia@nuclear.ufmg.br [Universidade Federal de Minas Gerais (UFMG), Belo Horizonte, MG (Brazil). Departamento de Engenharia Nuclear

    2015-07-01

    The IPEN/MB-01 reactor, located in the city of Sao Paulo, Brazil, reached its first criticality in 1988. The reactor is characterized by a low output power of only 100 W, since its purpose is to produce knowledge about nuclear power plants on a smaller geometric scale without requiring an extremely complex cooling system. Facilities such as this are very useful because they meet the demands of nuclear engineering for the neutronic parameters needed in the design of large nuclear plants through relatively simple and inexpensive methods. In this paper, the MCNP5 code is used to calculate the thermal neutron flux in the core of the IPEN/MB-01 reactor. For this purpose, an experiment from the LEU-COMP-THERM-077 benchmark, which represents the standard rectangular configuration of the IPEN/MB-01 reactor, is used. The thermal neutron flux is calculated at axial planes of different heights; axial profiles of the thermal neutron flux are then constructed and compared to previously published experimental results. The experimental values used as reference refer to a cylindrical configuration of the reactor core. Finally, the pertinence and relevance of the results are checked. This work is expected to produce more knowledge about the dynamics of the neutron flux in the core of the IPEN/MB-01 reactor. (author)

  4. A spectral analysis of the domain decomposed Monte Carlo method for linear systems

    Energy Technology Data Exchange (ETDEWEB)

    Slattery, Stuart R., E-mail: slatterysr@ornl.gov [Oak Ridge National Laboratory, 1 Bethel Valley Road, Oak Ridge, TN 37831 (United States); Evans, Thomas M., E-mail: evanstm@ornl.gov [Oak Ridge National Laboratory, 1 Bethel Valley Road, Oak Ridge, TN 37831 (United States); Wilson, Paul P.H., E-mail: wilsonp@engr.wisc.edu [University of Wisconsin - Madison, 1500 Engineering Dr., Madison, WI 53706 (United States)

    2015-12-15

    The domain decomposed behavior of the adjoint Neumann-Ulam Monte Carlo method for solving linear systems is analyzed using the spectral properties of the linear operator. Relationships for the average length of the adjoint random walks, a measure of convergence speed and serial performance, are made with respect to the eigenvalues of the linear operator. In addition, relationships for the effective optical thickness of a domain in the decomposition are presented based on the spectral analysis and diffusion theory. Using the effective optical thickness, the Wigner rational approximation and the mean chord approximation are applied to estimate the leakage fraction of random walks from a domain in the decomposition as a measure of parallel performance and potential communication costs. The one-speed, two-dimensional neutron diffusion equation is used as a model problem in numerical experiments to test the models for symmetric operators with spectral qualities similar to light water reactor problems. In general, the derived approximations show good agreement with random walk lengths and leakage fractions computed by the numerical experiments.
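
    As a rough sketch of the underlying technique (the forward Neumann-Ulam estimator rather than the adjoint variant analyzed in the paper), the solution of x = Hx + b can be estimated by random walks whose weights accumulate the terms of the Neumann series; the small test matrix below is hypothetical and chosen so that the series converges.

```python
import random
import numpy as np

def neumann_ulam_solve(H, b, n_walks=20000, p_absorb=0.3, rng=random):
    """Estimate x solving x = H x + b by forward Neumann-Ulam random walks.

    Requires the spectral radius of H to be < 1 so the Neumann series converges.
    """
    n = len(b)
    x = np.zeros(n)
    for i in range(n):
        est = 0.0
        for _ in range(n_walks):
            state, weight = i, 1.0
            est += weight * b[state]          # k = 0 term of the Neumann series
            while rng.random() > p_absorb:    # continue the walk
                # Choose the next state uniformly; correct the weight accordingly.
                nxt = rng.randrange(n)
                weight *= H[state, nxt] / ((1.0 - p_absorb) / n)
                state = nxt
                est += weight * b[state]      # adds an unbiased H^k b contribution
        x[i] = est / n_walks
    return x

# Hypothetical test system with spectral radius well below one.
H = np.array([[0.0, 0.2, 0.1],
              [0.1, 0.0, 0.2],
              [0.2, 0.1, 0.0]])
b = np.array([1.0, 2.0, 3.0])
x_mc = neumann_ulam_solve(H, b)
x_ref = np.linalg.solve(np.eye(3) - H, b)
print(x_mc, x_ref)   # the Monte Carlo estimate should approach the direct solution
```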

  5. Feature Article: Understanding strongly correlated many-body systems with quantum Monte Carlo simulations

    Science.gov (United States)

    Lavalle, Catia; Rigol, Marcos; Muramatsu, Alejandro

    2005-08-01

    The cover picture of the current issue, taken from the Feature Article [1], depicts the evolution of the local density (a) and its quantum fluctuations (b) in trapped fermions on one-dimensional optical lattices. As the number of fermions in the trap is increased, figure (a) shows the formation of a Mott-insulating plateau (local density equal to one), whereas the quantum fluctuations - see figure (b) - are strongly suppressed, but nonzero. For a larger number of fermions new insulating plateaus appear (this time with local density equal to two), but no density fluctuations. Regions with non-constant density are metallic and exhibit large quantum fluctuations of the density. The first author Catia Lavalle is a postdoc at the University of Stuttgart. She works in the field of strongly correlated quantum systems by means of quantum Monte Carlo (QMC) methods. While working on her PhD thesis at the University of Stuttgart, she developed a new QMC technique that allows the study of dynamical properties of the t-J model.

  6. Phase equilibria of the system methane-ethane from temperature scaling Gibbs Ensemble Monte Carlo simulation

    Science.gov (United States)

    Zhang, Zhigang; Duan, Zhenhao

    2002-10-01

    A new technique of temperature scaling method combined with the conventional Gibbs Ensemble Monte Carlo simulation was used to study liquid-vapor phase equilibria of the methane-ethane (CH4-C2H6) system. With this efficient method, a new set of united-atom Lennard-Jones potential parameters for pure C2H6 was found to be more accurate than those of previous models in the prediction of phase equilibria. Using the optimized potentials for liquid simulations (OPLS) potential for CH4 and the potential of this study for C2H6, together with a simple mixing rule, we simulated the equilibrium compositions and densities of the CH4-C2H6 mixtures with accuracy close to experiments. The simulated data are supplements to experiments, and may cover a larger temperature-pressure-composition space than experiments. Compared with some well-established equations of state such as the Peng-Robinson equation of state (PR-EOS), the simulated results are found to be closer to experiments, at least in some temperature and pressure ranges.
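
    A minimal sketch of the kind of interaction model involved — a united-atom Lennard-Jones potential with Lorentz-Berthelot combining rules for the unlike pair — is shown below; the parameter values are illustrative placeholders, not the optimized set reported in the paper.

```python
import math

# Illustrative united-atom LJ parameters (epsilon in K, sigma in angstrom);
# placeholders only, not the values fitted in the study.
params = {
    "CH4":  {"eps": 148.0, "sigma": 3.73},
    "C2H6": {"eps": 98.0,  "sigma": 3.75},
}

def lj_pair(r, eps, sigma):
    """Lennard-Jones pair energy u(r) = 4*eps*[(sigma/r)^12 - (sigma/r)^6]."""
    x6 = (sigma / r) ** 6
    return 4.0 * eps * (x6 * x6 - x6)

def mixed(a, b):
    """Lorentz-Berthelot combining rules for the unlike pair."""
    eps = math.sqrt(params[a]["eps"] * params[b]["eps"])
    sigma = 0.5 * (params[a]["sigma"] + params[b]["sigma"])
    return eps, sigma

eps_mix, sig_mix = mixed("CH4", "C2H6")
print(lj_pair(4.0, eps_mix, sig_mix))   # CH4-C2H6 pair energy (K) at r = 4 angstrom
```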

  7. High-Speed Turbo-TCM-Coded Orthogonal Frequency-Division Multiplexing Ultra-Wideband Systems

    Directory of Open Access Journals (Sweden)

    Wang Yanxia

    2006-01-01

    One of the UWB proposals in the IEEE P802.15 WPAN project is to use a multiband orthogonal frequency-division multiplexing (OFDM) system and punctured convolutional codes for UWB channels supporting a data rate up to 480 Mbps. In this paper, we improve the proposed system using turbo TCM with a QAM constellation for higher data rate transmission. We construct a punctured parity-concatenated trellis code, in which a TCM code is used as the inner code and a simple parity-check code is employed as the outer code. The result shows that the system can offer a much higher spectral efficiency, for example, 1.2 Gbps, which is 2.5 times higher than that of the proposed system. We identify several essential requirements to achieve the high rate transmission, for example, frequency and time diversity and multilevel error protection. Results are confirmed by density evolution.

  8. Metropolis Methods for Quantum Monte Carlo Simulations

    OpenAIRE

    Ceperley, D. M.

    2003-01-01

    Since its first description fifty years ago, the Metropolis Monte Carlo method has been used in a variety of different ways for the simulation of continuum quantum many-body systems. This paper will consider some of the generalizations of the Metropolis algorithm employed in quantum Monte Carlo: variational Monte Carlo, dynamical methods for projector Monte Carlo (i.e., diffusion Monte Carlo with rejection), multilevel sampling in path integral Monte Carlo, the sampling of permutations, ...
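
    For readers unfamiliar with the basic algorithm these variants build on, here is a minimal Metropolis sampler for a one-dimensional Boltzmann-like distribution; it is a textbook illustration, not any of the quantum Monte Carlo variants discussed in the paper.

```python
import math
import random

def metropolis(energy, x0, n_steps, step=0.5, beta=1.0, rng=random):
    """Sample the distribution proportional to exp(-beta * energy(x))."""
    x, e = x0, energy(x0)
    samples = []
    for _ in range(n_steps):
        x_new = x + rng.uniform(-step, step)       # symmetric proposal
        e_new = energy(x_new)
        # Accept with probability min(1, exp(-beta * (e_new - e))).
        if e_new <= e or rng.random() < math.exp(-beta * (e_new - e)):
            x, e = x_new, e_new
        samples.append(x)
    return samples

samples = metropolis(lambda x: 0.5 * x * x, x0=0.0, n_steps=50000)
mean_x2 = sum(s * s for s in samples) / len(samples)
print(mean_x2)   # should approach <x^2> = 1/beta = 1 for this harmonic "energy"
```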

  9. The research of breakdown structure and coding system for construction project

    Institute of Scientific and Technical Information of China (English)

    丁大勇; 金维兴; 李培

    2004-01-01

    Whether the breakdown structure and coding system of a construction project are reasonable or not determines to a large degree the performance level of the entire project management. Based on a discussion of the design of typical international coding standard systems, we analyze in detail the similarities and differences of two decomposition methods, classified by type of work and by construction elements. We then deduce the relation between the project breakdown structure (PBS) and the work breakdown structure (WBS). At the same time, we construct a comprehensive construction project breakdown system, including an element code and a type-of-work code, and give a schematic presentation of the implementation of the system's functions.

  10. Validation of a Monte Carlo simulation of the Philips Allegro/GEMINI PET systems using GATE

    Energy Technology Data Exchange (ETDEWEB)

    Lamare, F; Turzo, A; Bizais, Y; Rest, C Cheze Le; Visvikis, D [U650 INSERM, Laboratoire du Traitement de l' information medicale (LaTIM), CHU Morvan, Universite de Bretagne Occidentale, Brest, 29609 (France)

    2006-02-21

    A newly developed simulation toolkit, GATE (Geant4 Application for Tomographic Emission), was used to develop a Monte Carlo simulation of a fully three-dimensional (3D) clinical PET scanner. The Philips Allegro/GEMINI PET systems were simulated in order to (a) allow a detailed study of the parameters affecting the system's performance under various imaging conditions, (b) study the optimization and quantitative accuracy of emission acquisition protocols for dynamic and static imaging, and (c) further validate the potential of GATE for the simulation of clinical PET systems. A model of the detection system and its geometry was developed. The accuracy of the developed detection model was tested through the comparison of simulated and measured results obtained with the Allegro/GEMINI systems for a number of NEMA NU2-2001 performance protocols including spatial resolution, sensitivity and scatter fraction. In addition, an approximate model of the system's dead time at the level of detected single events and coincidences was developed in an attempt to simulate the count rate related performance characteristics of the scanner. The developed dead-time model was assessed under different imaging conditions using the count rate loss and noise equivalent count rates performance protocols of standard and modified NEMA NU2-2001 (whole body imaging conditions) and NEMA NU2-1994 (brain imaging conditions) comparing simulated with experimental measurements obtained with the Allegro/GEMINI PET systems. Finally, a reconstructed image quality protocol was used to assess the overall performance of the developed model. An agreement of <3% was obtained in scatter fraction, with a difference between 4% and 10% in the true and random coincidence count rates respectively, throughout a range of activity concentrations and under various imaging conditions, resulting in <8% differences between simulated and measured noise equivalent count rates performance. Finally, the image

  11. Channel coding for underwater acoustic single-carrier CDMA communication system

    Science.gov (United States)

    Liu, Lanjun; Zhang, Yonglei; Zhang, Pengcheng; Zhou, Lin; Niu, Jiong

    2017-01-01

    CDMA is an effective multiple access protocol for underwater acoustic networks, and channel coding can effectively reduce the bit error rate (BER) of an underwater acoustic communication system. To meet the requirements of underwater acoustic mobile networks based on CDMA, an underwater acoustic single-carrier CDMA communication system (UWA/SCCDMA) based on the direct-sequence spread spectrum is proposed, and its channel coding scheme is studied for convolutional, RA, Turbo and LDPC coding, respectively. The implementation steps of the Viterbi algorithm for convolutional coding, the BP and minimum-sum algorithms for RA coding, the Log-MAP and SOVA algorithms for Turbo coding, and the sum-product algorithm for LDPC coding are given. A UWA/SCCDMA simulation system based on Matlab is designed. Simulation results show that the UWA/SCCDMA systems based on RA, Turbo and LDPC coding all perform well, with a communication BER below 10^-6 in an underwater acoustic channel at low signal-to-noise ratios (SNR) from -12 dB to -10 dB, about two orders of magnitude lower than that of convolutional coding. The system based on Turbo coding with the Log-MAP algorithm has the best performance.
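
    As a generic illustration of how such BER comparisons are typically obtained by Monte Carlo simulation (this is not the authors' UWA/SCCDMA simulator; it uses BPSK over AWGN and a simple repetition code as a stand-in for the coded schemes), see the sketch below.

```python
import random

def simulate_ber(snr_db, n_bits=100000, repeat=3, rng=random):
    """Monte Carlo BER of BPSK over AWGN with a rate-1/repeat repetition code."""
    # Interpret snr_db as Es/N0 per transmitted symbol (unit-energy BPSK symbols).
    snr_lin = 10.0 ** (snr_db / 10.0)
    sigma = (1.0 / (2.0 * snr_lin)) ** 0.5
    errors = 0
    for _ in range(n_bits):
        bit = rng.randint(0, 1)
        tx = 1.0 if bit else -1.0
        # Transmit `repeat` copies, add Gaussian noise, and decode by summing
        # the soft values and taking the sign (majority vote for hard decisions).
        soft = sum(tx + rng.gauss(0.0, sigma) for _ in range(repeat))
        decoded = 1 if soft > 0 else 0
        errors += decoded != bit
    return errors / n_bits

for snr in (0, 2, 4):
    print(snr, "dB  uncoded:", simulate_ber(snr, repeat=1),
          " repetition:", simulate_ber(snr, repeat=3))
```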

  12. Rotated Walsh-Hadamard Spreading with Robust Channel Estimation for a Coded MC-CDMA System

    Directory of Open Access Journals (Sweden)

    Raulefs Ronald

    2004-01-01

    We investigate rotated Walsh-Hadamard spreading matrices for a broadband MC-CDMA system with robust channel estimation in the synchronous downlink. The similarities between rotated spreading and signal space diversity are outlined. In a multiuser MC-CDMA system, possible performance improvements are based on the chosen detector, the channel code, and its Hamming distance. By applying rotated spreading in comparison to a standard Walsh-Hadamard spreading code, a higher throughput can be achieved. As combining the channel code and the spreading code forms a concatenated code, the overall minimum Hamming distance of the concatenated code increases. This asymptotically results in an improvement of the bit error rate for high signal-to-noise ratio. Higher convolutional channel code rates are mostly generated by puncturing good low-rate channel codes. The overall Hamming distance decreases significantly for the punctured channel codes. Higher channel code rates are favorable for MC-CDMA, as MC-CDMA utilizes diversity more efficiently compared to pure OFDMA. The application of rotated spreading in an MC-CDMA system allows exploiting diversity even further. We demonstrate that the rotated spreading gain is still present for a robust pilot-aided channel estimator. In a well-designed system, rotated spreading extends the performance by using a maximum likelihood detector with robust channel estimation at the receiver by about 1 dB.
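
    To make the spreading operation concrete, the sketch below builds a Walsh-Hadamard matrix by the Sylvester construction and applies a simple orthogonal rotation to it before spreading a block of user symbols; the rotation angle and block size are arbitrary illustrative choices, not the matrices evaluated in the paper.

```python
import numpy as np

def walsh_hadamard(n):
    """Sylvester construction of an n x n Walsh-Hadamard matrix (n a power of 2)."""
    h = np.array([[1.0]])
    while h.shape[0] < n:
        h = np.block([[h, h], [h, -h]])
    return h / np.sqrt(n)                 # orthonormal columns

def rotation(n, theta):
    """A simple orthogonal rotation: Givens rotations on successive index pairs."""
    r = np.eye(n)
    c, s = np.cos(theta), np.sin(theta)
    for i in range(0, n - 1, 2):
        r[i, i], r[i, i + 1] = c, -s
        r[i + 1, i], r[i + 1, i + 1] = s, c
    return r

n = 8
spreading = rotation(n, np.pi / 8) @ walsh_hadamard(n)   # rotated spreading matrix
symbols = np.random.choice([-1.0, 1.0], size=n)          # one symbol per user
chips = spreading @ symbols                              # spread chip sequence
despread = spreading.T @ chips                           # ideal despreading
print(np.allclose(despread, symbols))                    # True: matrix is orthogonal
```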

  13. Verification and Validation of Monte Carlo n-Particle Code 6 (MCNP6) with Neutron Protection Factor Measurements of an Iron Box

    Science.gov (United States)

    2014-03-27

    ... widely utilized radiation transport code is MCNP. First created at Los Alamos National Laboratory (LANL) in 1957, the code simulated neutral ... explanation of the current capabilities of MCNP will occur within the next chapter of this document; however, it is important to note that MCNP ...

  14. EELQMS - the European quality management system for engine lubricants - the ATC Code of Practice; EELQMS - Das Europaeische Qualitaets-Management System fuer Motorenoele - der ATC Code of Practice

    Energy Technology Data Exchange (ETDEWEB)

    Raddatz, J.H.; Eberan-Eberhorst, C.G.A. von

    1998-01-01

    In 1995 the ATC developed a Code of Practice which, in conjunction with the ATIEL Code of Practice, represents the basis for the European Engine Lubricant Quality Management System (EELQMS). Compliance with the requirements of this system is a prerequisite for performance claims made by engine oil marketers regarding the European ACEA Engine Oil Sequences. TAD, the German section of the Technical Committee of Petroleum Additive Manufacturers in Europe (ATC), has prepared this presentation in order to promote the dialogue between the industries concerned and to provide information on EELQMS and the ATC Code of Practice to a broader audience. Key elements of the paper are: - What is EELQMS? - How does EELQMS work? - What is the role of the ATC Code of Practice in EELQMS? - What are the most important rules of the ATC Code of Practice? - What benefits do EELQMS and the ATC Code of Practice offer to the end-user? - What is the current status of EELQMS? We hope that this presentation will help to promote a better understanding and acceptance of EELQMS on a broad basis. (orig.)

  15. Validation of system codes for plant application on selected experiments

    Energy Technology Data Exchange (ETDEWEB)

    Koch, Marco K.; Risken, Tobias; Agethen, Kathrin; Bratfisch, Christoph [Bochum Univ. (Germany). Reactor Simulation and Safety Group

    2016-05-15

    For decades, the Reactor Simulation and Safety Group at Ruhr-Universitaet Bochum (RUB) contributes to nuclear safety by computer code validation and model development for nuclear safety analysis. Severe accident analysis codes are relevant tools for the understanding and the development of accident management measures. The accidents in the plants Three Mile Island (USA) in 1979 and Fukushima Daiichi (Japan) in 2011 influenced these research activities significantly due to the observed phenomena, such as molten core concrete interaction and hydrogen combustion. This paper gives a brief outline of recent research activities at RUB in the named fields, contributing to code preparation for plant applications. Simulations of the molten core concrete interaction tests CCI-2 and CCI-3 with ASTEC and the hydrogen combustion test Ix9 with COCOSYS are presented exemplarily. Additionally, the application on plants is demonstrated on chosen results of preliminary Fukushima calculations.

  16. LDPC concatenated space-time block coded system in multipath fading environment: Analysis and evaluation

    Directory of Open Access Journals (Sweden)

    Surbhi Sharma

    2011-06-01

    Irregular low-density parity-check (LDPC) codes have been found to show exceptionally good performance for single antenna systems over a wide class of channels. In this paper, the performance of LDPC codes with multiple antenna systems is investigated in flat Rayleigh and Rician fading channels for different modulation schemes. The focus of attention is mainly on the concatenation of irregular LDPC codes with complex orthogonal space-time codes. Iterative decoding is carried out with a density evolution method that sets a threshold above which the code performs well. For the proposed concatenated system, the simulation results show that the QAM technique achieves coding gains of 8.8 dB and 3.2 dB over the QPSK technique in Rician (LOS) and Rayleigh (NLOS) faded environments, respectively.

  17. SEACC: the systems engineering and analysis computer code for small wind systems

    Energy Technology Data Exchange (ETDEWEB)

    Tu, P.K.C.; Kertesz, V.

    1983-03-01

    The systems engineering and analysis (SEA) computer program (code) evaluates complete horizontal-axis SWECS performance. Rotor power output as a function of wind speed and energy production in various wind regions are predicted by the code. Efficiencies of components such as the gearbox, electric generators, rectifiers, electronic inverters, and batteries can be included in the evaluation process to reflect the complete system performance. Parametric studies can be carried out for blade design characteristics such as airfoil series, taper rate, twist angle and pitch setting, and for geometry such as rotor radius, hub radius, number of blades, coning angle, rotor rpm, etc. Design tradeoffs can also be performed to optimize system configurations for constant-rpm, constant tip-speed-ratio, and rpm-specific rotors. SWECS energy supply as compared to the load demand for each hour of the day and during each season of the year can be assessed by the code if the diurnal wind and load distributions are known. Also available during each run of the code is blade aerodynamic loading information.
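
    The rotor power calculation at the heart of such a code follows the standard relation P = 1/2 * rho * A * Cp * v^3; a minimal sketch with hypothetical turbine numbers (not SEACC's models) is:

```python
import math

def rotor_power(wind_speed, radius, cp=0.40, rho=1.225):
    """Rotor power in watts: P = 0.5 * rho * A * Cp * v^3."""
    area = math.pi * radius ** 2          # swept rotor area
    return 0.5 * rho * area * cp * wind_speed ** 3

# Hypothetical small wind turbine: 5 m rotor radius, power coefficient Cp = 0.40.
for v in (4.0, 8.0, 12.0):
    print(f"{v:4.1f} m/s -> {rotor_power(v, radius=5.0) / 1000.0:6.1f} kW")
```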

  18. A Monte Carlo Analysis of the Thrust Imbalance for the Space Launch System Booster During Both the Ignition Transient and Steady State Operation

    Science.gov (United States)

    Foster, Winfred A., Jr.; Crowder, Winston; Steadman, Todd E.

    2014-01-01

    This paper presents the results of statistical analyses performed to predict the thrust imbalance between the two solid rocket motor boosters to be used on the Space Launch System (SLS) vehicle. Two legacy internal ballistics codes developed for the Space Shuttle program were coupled with a Monte Carlo analysis code to determine a thrust imbalance envelope for the SLS vehicle based on the performance of 1000 motor pairs. Thirty-three variables that could impact the performance of the motors during the ignition transient and thirty-eight variables that could impact the performance of the motors during steady-state operation were identified and treated as statistical variables for the analyses. The effects of motor-to-motor variation as well as variations between motors of a single pair were included in the analyses. The statistical variations of the variables were defined based on data provided by NASA's Marshall Space Flight Center for the upgraded five-segment booster and from the Space Shuttle booster when appropriate. The results obtained for the statistical envelope are compared with the design specification thrust imbalance limits for the SLS launch vehicle.
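
    A highly simplified sketch of the statistical procedure (sampling paired motors from distributions of performance variables and forming an imbalance envelope from percentiles) is given below; the thrust model, the variables, and their distributions are placeholders, not the legacy ballistics codes or NASA data described in the paper.

```python
import random
import statistics

def motor_thrust(rng):
    """Placeholder thrust model: nominal thrust perturbed by a few random variables."""
    nominal = 16.0e6                      # N, illustrative value only
    burn_rate_var = rng.gauss(0.0, 0.01)  # propellant burn-rate variation
    throat_var = rng.gauss(0.0, 0.005)    # nozzle throat-area variation
    return nominal * (1.0 + burn_rate_var + throat_var)

def imbalance_envelope(n_pairs=1000, seed=1):
    """Sample motor pairs and summarize the absolute thrust imbalance."""
    rng = random.Random(seed)
    imbalances = [abs(motor_thrust(rng) - motor_thrust(rng)) for _ in range(n_pairs)]
    imbalances.sort()
    p997 = imbalances[int(0.997 * n_pairs) - 1]   # ~3-sigma envelope value
    return statistics.mean(imbalances), p997

mean_imb, env = imbalance_envelope()
print(f"mean imbalance {mean_imb / 1e3:.0f} kN, 99.7th percentile {env / 1e3:.0f} kN")
```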

  19. Monte Carlo simulation to positron emitter standardized by means of 4πβ-γ coincidence system--application to 22Na.

    Science.gov (United States)

    Dias, Mauro S; Tongu, Margareth L O; Takeda, Mauro N; Koskinas, Marina F

    2010-01-01

    The present work describes the methodology for predicting the behavior of extrapolation curves obtained in radionuclide standardization by 4πβ-γ coincidence measurements, applied to 22Na, developed at the Laboratório de Metrologia Nuclear of IPEN-CNEN/SP (LMN, Nuclear Metrology Laboratory). The LMN system consists of a proportional counter (PC) in 4π geometry coupled to a single or a pair of NaI(Tl) scintillation crystals. Two standardization techniques were used: the Sum-Peak and the Nuclear-Peak methods. The theoretical response functions of each detector have been calculated using the MCNPX Monte Carlo code. The code ESQUEMA, developed at the LMN, has been used for calculating the extrapolation curve in the 4πβ-γ coincidence experiment. Modifications were performed in order to include response tables for positrons and coincidences with annihilation photons. From the calibration results it was possible to extract both the activity value and the positron emission probability per decay. The latter was compared with results from the literature.

  20. A Coding System for Qualitative Studies of the Information-Seeking Process in Computer Science Research

    Science.gov (United States)

    Moral, Cristian; de Antonio, Angelica; Ferre, Xavier; Lara, Graciela

    2015-01-01

    Introduction: In this article we propose a qualitative analysis tool--a coding system--that can support the formalisation of the information-seeking process in a specific field: research in computer science. Method: In order to elaborate the coding system, we have conducted a set of qualitative studies, more specifically a focus group and some…