Generation of organic scintillators response function for fast neutrons using the Monte Carlo method
International Nuclear Information System (INIS)
Mazzaro, A.C.
1979-01-01
A computer program (DALP), written in Fortran IV-G, has been developed using the Monte Carlo method to simulate the experimental techniques leading to the distribution of pulse heights produced by monoenergetic neutrons reaching an organic scintillator. The pulse height distribution has been calculated for two different systems: 1) monoenergetic neutrons from a point source reaching the flat face of a cylindrical organic scintillator; 2) environmental monoenergetic neutrons randomly reaching either the flat or the curved face of the cylindrical organic scintillator. The program was developed for the NE-213 liquid organic scintillator, but can easily be adapted to any other kind of organic scintillator. With this program one can determine the pulse height distribution for neutron energies ranging from 15 keV to 10 MeV. (Author)
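The heart of such a simulation is sampling the proton recoil energy at each neutron interaction. A minimal sketch of that single step, assuming pure n-p elastic scattering with isotropic center-of-mass kinematics (which makes the lab-frame recoil energy uniform on [0, E_n]) and ignoring everything else a code like DALP must model (carbon interactions, light-output nonlinearity, multiple scattering, detector resolution):

```python
import random

def sample_proton_recoils(e_neutron_mev, n_histories, rng=random.Random(1)):
    """For n-p elastic scattering with isotropic CM kinematics, the proton
    recoil energy is uniform on [0, E_n] in the lab frame."""
    return [rng.uniform(0.0, e_neutron_mev) for _ in range(n_histories)]

# 100k histories of 2.5 MeV neutrons; the ideal recoil spectrum is flat,
# so the sample mean should be close to E_n / 2 = 1.25 MeV.
recoils = sample_proton_recoils(2.5, 100000)
print(sum(recoils) / len(recoils))
```

Converting such a recoil spectrum into a pulse-height distribution is then a matter of folding in the scintillator's light-output function and resolution, which is where the bulk of the real program's physics lives.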
The GENIE neutrino Monte Carlo generator
International Nuclear Information System (INIS)
Andreopoulos, C.; Bell, A.; Bhattacharya, D.; Cavanna, F.; Dobson, J.; Dytman, S.; Gallagher, H.; Guzowski, P.; Hatcher, R.; Kehayias, P.; Meregaglia, A.; Naples, D.; Pearce, G.; Rubbia, A.; Whalley, M.; Yang, T.
2010-01-01
GENIE is a new neutrino event generator for the experimental neutrino physics community. The goal of the project is to develop a 'canonical' neutrino interaction physics Monte Carlo whose validity extends to all nuclear targets and neutrino flavors from MeV to PeV energy scales. Currently, emphasis is on the few-GeV energy range, the challenging boundary between the non-perturbative and perturbative regimes, which is relevant for the current and near future long-baseline precision neutrino experiments using accelerator-made beams. The design of the package addresses many challenges unique to neutrino simulations and supports the full life-cycle of simulation and generator-related analysis tasks. GENIE is a large-scale software system, consisting of ∼120000 lines of C++ code, featuring a modern object-oriented design and extensively validated physics content. The first official physics release of GENIE was made available in August 2007, and at the time of the writing of this article, the latest available version was v2.4.4.
Study on random number generator in Monte Carlo code
International Nuclear Information System (INIS)
Oya, Kentaro; Kitada, Takanori; Tanaka, Shinichi
2011-01-01
A Monte Carlo code uses a sequence of pseudo-random numbers produced by a random number generator (RNG) to simulate particle histories. Every pseudo-random sequence has a period that depends on the generation method, and this period should be long enough that it is not exhausted during a single Monte Carlo calculation; otherwise the correctness of the results, especially their standard deviations, is compromised. The linear congruential generator (LCG) is widely used as a Monte Carlo RNG, but its period is not particularly long given how rapidly the number of histories per calculation has grown with the remarkable enhancement of computer performance. Recently, many kinds of RNG have been developed, some with features superior to those of the LCG. In this study, we investigate an appropriate RNG for a Monte Carlo code as an alternative to the LCG, especially for cases with enormous numbers of histories. It is found that xorshift has desirable features compared with the LCG: a longer period, a comparable speed of random number generation, better randomness, and good applicability to parallel calculation. (author)
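For concreteness, here is one member of the xorshift family in Python: Marsaglia's 32-bit xorshift128, with period 2^128 - 1 (the abstract does not specify which xorshift variant the authors tested, so take this as an illustrative sketch):

```python
def xorshift128(seed=(123456789, 362436069, 521288629, 88675123)):
    """Marsaglia's xorshift128: four 32-bit words of state, period 2**128 - 1.
    Yields uniform floats in [0, 1)."""
    x, y, z, w = seed
    while True:
        t = (x ^ (x << 11)) & 0xFFFFFFFF   # keep t within 32 bits
        x, y, z = y, z, w
        w = (w ^ (w >> 19)) ^ (t ^ (t >> 8))
        yield w / 2**32

gen = xorshift128()
samples = [next(gen) for _ in range(100000)]
print(sum(samples) / len(samples))  # close to 0.5 for a uniform stream
```

The three shift/XOR operations per output are why xorshift's speed is comparable to an LCG, while the 128-bit state gives the much longer period the abstract highlights.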
Response decomposition with Monte Carlo correlated coupling
International Nuclear Information System (INIS)
Ueki, T.; Hoogenboom, J.E.; Kloosterman, J.L.
2001-01-01
Particle histories that contribute to a detector response are categorized according to whether they are fully confined inside a source-detector enclosure or cross and recross the same enclosure. The contribution from the confined histories is expressed using a forward problem with the external boundary condition on the source-detector enclosure. The contribution from the crossing and recrossing histories is expressed as the surface integral at the same enclosure of the product of the directional cosine and the fluxes in the foregoing forward problem and the adjoint problem for the whole spatial domain. The former contribution can be calculated by a standard forward Monte Carlo. The latter contribution can be calculated by correlated coupling of forward and adjoint histories independently of the former contribution. We briefly describe the computational method and discuss its application to perturbation analysis for localized material changes. (orig.)
Response decomposition with Monte Carlo correlated coupling
Energy Technology Data Exchange (ETDEWEB)
Ueki, T.; Hoogenboom, J.E.; Kloosterman, J.L. [Delft Univ. of Technology (Netherlands). Interfaculty Reactor Inst.]
2001-07-01
Particle histories that contribute to a detector response are categorized according to whether they are fully confined inside a source-detector enclosure or cross and recross the same enclosure. The contribution from the confined histories is expressed using a forward problem with the external boundary condition on the source-detector enclosure. The contribution from the crossing and recrossing histories is expressed as the surface integral at the same enclosure of the product of the directional cosine and the fluxes in the foregoing forward problem and the adjoint problem for the whole spatial domain. The former contribution can be calculated by a standard forward Monte Carlo. The latter contribution can be calculated by correlated coupling of forward and adjoint histories independently of the former contribution. We briefly describe the computational method and discuss its application to perturbation analysis for localized material changes. (orig.)
PEPSI — a Monte Carlo generator for polarized leptoproduction
Mankiewicz, L.; Schäfer, A.; Veltri, M.
1992-09-01
We describe PEPSI (Polarized Electron Proton Scattering Interactions), a Monte Carlo program for polarized deep inelastic leptoproduction mediated by electromagnetic interaction, and explain how to use it. The code is a modification of the LEPTO 4.3 Lund Monte Carlo for unpolarized scattering. The hard virtual gamma-parton scattering is generated according to the polarization-dependent QCD cross-section at first order in α_S. PEPSI requires the standard polarization-independent JETSET routines to simulate the fragmentation into final hadrons.
PEPSI - a Monte Carlo generator for polarized leptoproduction
International Nuclear Information System (INIS)
Mankiewicz, L.
1992-01-01
We describe PEPSI (Polarized Electron Proton Scattering Interactions), a Monte Carlo program for polarized deep inelastic leptoproduction mediated by electromagnetic interaction, and explain how to use it. The code is a modification of the Lepto 4.3 Lund Monte Carlo for unpolarized scattering. The hard virtual gamma-parton scattering is generated according to the polarization-dependent QCD cross-section at first order in α_S. PEPSI requires the standard polarization-independent JETSET routines to simulate the fragmentation into final hadrons. (orig.)
The Monte Carlo event generator DPMJET-III
International Nuclear Information System (INIS)
Roesler, S.; Engel, R.
2001-01-01
A new version of the Monte Carlo event generator DPMJET is presented. It is a code system based on the Dual Parton Model and unifies all features of the DTUNUC-2, DPMJET-II and PHOJET1.12 event generators. DPMJET-III allows the simulation of hadron-hadron, hadron-nucleus, nucleus-nucleus, photon-hadron, photon-photon and photon-nucleus interactions from a few GeV up to the highest cosmic ray energies. (orig.)
Weight window/importance generator for Monte Carlo streaming problems
International Nuclear Information System (INIS)
Booth, T.E.
1983-01-01
A Monte Carlo method for solving highly angle dependent streaming problems is described. The method uses a DXTRAN-like angle biasing scheme, a space-angle weight window to reduce weight fluctuations introduced by the angle biasing, and a space-angle importance generator to set parameters for the space-angle weight window. Particle leakage through a doubly-bent duct is calculated to demonstrate the method's use.
A Monte Carlo program for generating hadronic final states
International Nuclear Information System (INIS)
Angelini, L.; Pellicoro, M.; Nitti, L.; Preparata, G.; Valenti, G.
1991-01-01
FIRST is a computer program to generate final states from high energy hadronic interactions using the Monte Carlo technique. It is based on a theoretical model in which the high degree of universality in such interactions is related to the existence of highly excited quark-antiquark bound states, called fire-strings. The program handles the decay of both fire-strings and the unstable particles produced in the intermediate states. (orig.)
Random number generators tested on quantum Monte Carlo simulations.
Hongo, Kenta; Maezono, Ryo; Miura, Kenichi
2010-08-01
We have tested and compared several (pseudo) random number generators (RNGs) applied to a practical application: ground state energy calculations of molecules using variational and diffusion Monte Carlo methods. A new multiple recursive generator with 8th-order recursion (MRG8) and the Mersenne twister generator (MT19937) are tested and compared with the RANLUX generator at its five luxury levels (RANLUX-[0-4]). Both MRG8 and MT19937 are shown to give the same total energy as that evaluated with RANLUX-4 (the highest luxury level) within the statistical error bars, at a lower computational cost for generating the sequence. We also tested the notoriously flawed linear congruential generator (LCG) implementation RANDU for comparison. (c) 2010 Wiley Periodicals, Inc.
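RANDU's flaw is worth seeing explicitly. Because its multiplier is 65539 = 2^16 + 3 and its modulus is 2^31, every output satisfies the exact recurrence x[k+2] = 6·x[k+1] - 9·x[k] (mod 2^31), which confines all consecutive triples to 15 planes in 3D. A few lines verify this:

```python
def randu(seed=1):
    """IBM's RANDU: x <- 65539 * x mod 2**31 (seed should be odd)."""
    x = seed
    while True:
        x = (65539 * x) % 2**31
        yield x

g = randu()
xs = [next(g) for _ in range(1000)]
# (2**16 + 3)**2 = 6*(2**16 + 3) - 9 (mod 2**31), so every triple obeys:
bad = [(xs[i + 2] - 6 * xs[i + 1] + 9 * xs[i]) % 2**31 for i in range(len(xs) - 2)]
print(all(v == 0 for v in bad))  # True
```

This kind of exact lattice structure is precisely what a quantum Monte Carlo energy estimate can silently inherit, which motivates benchmarking against a generator like RANLUX-4.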
Improved diffusion coefficients generated from Monte Carlo codes
International Nuclear Information System (INIS)
Herman, B. R.; Forget, B.; Smith, K.; Aviles, B. N.
2013-01-01
Monte Carlo codes are becoming more widely used for reactor analysis. Some of these applications involve the generation of diffusion theory parameters including macroscopic cross sections and diffusion coefficients. Two approximations used to generate diffusion coefficients are assessed using the Monte Carlo code MC21. The first is the method of homogenization; whether to weight either fine-group transport cross sections or fine-group diffusion coefficients when collapsing to few-group diffusion coefficients. The second is a fundamental approximation made to the energy-dependent P1 equations to derive the energy-dependent diffusion equations. Standard Monte Carlo codes usually generate a flux-weighted transport cross section with no correction to the diffusion approximation. Results indicate that this causes noticeable tilting in reconstructed pin powers in simple test lattices with L2 norm error of 3.6%. This error is reduced significantly to 0.27% when weighting fine-group diffusion coefficients by the flux and applying a correction to the diffusion approximation. Noticeable tilting in reconstructed fluxes and pin powers was reduced when applying these corrections. (authors)
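The homogenization choice the abstract assesses, weighting fine-group diffusion coefficients by the flux when collapsing to few groups, reduces to a weighted average per coarse group. A minimal sketch with invented four-group data (group structure and values are purely illustrative):

```python
def collapse(fine_values, fine_flux, group_map):
    """Flux-weighted collapse of fine-group diffusion coefficients
    (or cross sections) into few groups: D_G = sum(phi_g*D_g)/sum(phi_g)."""
    few = {}
    for g, (v, phi) in enumerate(zip(fine_values, fine_flux)):
        G = group_map[g]
        num, den = few.get(G, (0.0, 0.0))
        few[G] = (num + v * phi, den + phi)
    return [num / den for num, den in (few[G] for G in sorted(few))]

# hypothetical 4 fine groups collapsed into 2 few groups
D_fine = [2.1, 1.8, 0.9, 0.4]   # cm, fine-group diffusion coefficients
flux = [1.0, 3.0, 2.0, 0.5]     # fine-group scalar fluxes
print(collapse(D_fine, flux, group_map=[0, 0, 1, 1]))
```

Whether the quantity being collapsed is the transport cross section or the diffusion coefficient is exactly the first approximation the paper compares; the arithmetic of the collapse itself is the same.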
Direct aperture optimization for IMRT using Monte Carlo generated beamlets
International Nuclear Information System (INIS)
Bergman, Alanah M.; Bush, Karl; Milette, Marie-Pierre; Popescu, I. Antoniu; Otto, Karl; Duzenli, Cheryl
2006-01-01
This work introduces an EGSnrc-based Monte Carlo (MC) beamlet dose distribution matrix into a direct aperture optimization (DAO) algorithm for IMRT inverse planning. The technique is referred to as Monte Carlo-direct aperture optimization (MC-DAO). The goal is to assess whether the combination of accurate Monte Carlo tissue inhomogeneity modeling and DAO inverse planning will improve dose accuracy and treatment efficiency for treatment planning. Several authors have shown that the presence of small fields and/or inhomogeneous materials in IMRT treatment fields can cause dose calculation errors for algorithms that are unable to accurately model electronic disequilibrium. This issue may also affect the IMRT optimization process, because the dose calculation algorithm may not properly model difficult geometries such as targets close to low-density regions (lung, air, etc.). A clinical linear accelerator head is simulated using BEAMnrc (NRC, Canada). A novel in-house algorithm subdivides the resulting phase space into 2.5 × 5.0 mm² beamlets. Each beamlet is projected onto a patient-specific phantom. The beamlet dose contribution to each voxel in a structure-of-interest is calculated using DOSXYZnrc. The multileaf collimator (MLC) leaf positions are linked to the location of the beamlet dose distributions. The MLC shapes are optimized using direct aperture optimization (DAO). A final Monte Carlo calculation with MLC modeling is used to compute the final dose distribution. Monte Carlo simulation can generate accurate beamlet dose distributions for traditionally difficult-to-calculate geometries, particularly for small fields crossing regions of tissue inhomogeneity. The introduction of DAO results in an additional improvement by increasing the treatment delivery efficiency: for the examples presented in this paper, the reduction in the total number of monitor units to deliver is ∼33% compared with fluence-based optimization methods.
Academic Training: Monte Carlo generators for the LHC
Françoise Benz
2005-01-01
2004-2005 ACADEMIC TRAINING PROGRAMME LECTURE SERIES 4, 5, 6, 7 April from 11.00 to 12.00 hrs - Main Auditorium, bldg. 500 Monte Carlo generators for the LHC T. SJOSTRAND / CERN-PH, Lund Univ. SE Event generators today are indispensable as tools for the modelling of complex physics processes, that jointly lead to the production of hundreds of particles per event at LHC energies. Generators are used to set detector requirements, to formulate analysis strategies, or to calculate acceptance corrections. These lectures describe the physics that goes into the construction of an event generator, such as hard processes, initial- and final-state radiation, multiple interactions and beam remnants, hadronization and decays, and how these pieces come together. The current main generators are introduced, and are used to illustrate uncertainties in the physics modelling. Some trends for the future are outlined.
Studying the information content of TMDs using Monte Carlo generators
Energy Technology Data Exchange (ETDEWEB)
Avakian, H. [Thomas Jefferson National Accelerator Facility (TJNAF), Newport News, VA (United States); Matevosyan, H. [The Univ. of Adelaide, Adelaide (Australia); Pasquini, B. [Univ. of Pavia, Pavia (Italy); Schweitzer, P. [Univ. of Connecticut, Storrs, CT (United States)
2015-02-05
Theoretical advances in studies of the nucleon structure have been spurred by recent measurements of spin and/or azimuthal asymmetries worldwide. One of the main challenges still remaining is the extraction of the parton distribution functions, generalized to describe transverse momentum and spatial distributions of partons from these observables with no or minimal model dependence. In this topical review we present the latest developments in the field with emphasis on requirements for Monte Carlo event generators, indispensable for studies of the complex 3D nucleon structure, and discuss examples of possible applications.
Neutron monitor generated data distributions in quantum variational Monte Carlo
Kussainov, A. S.; Pya, N.
2016-08-01
We have assessed the potential of neutron monitor hardware as a random number generator for normal and uniform distributions. Data tables from acquisition channels with no extreme changes in signal level were chosen as the retrospective model. The stochastic component was extracted by fitting the raw data with splines and then subtracting the fit. Scaling the extracted data to zero mean and unit variance is sufficient to obtain a stable standard normal random variate, and the distributions under consideration pass all available normality tests. Inverse transform sampling is suggested as the source of uniform random numbers. The variational Monte Carlo method for the quantum harmonic oscillator was used to test the quality of our random numbers. If the data delivery rate is important and the conventional one-minute-resolution neutron count is insufficient, one can always settle for an efficient seed generator feeding a faster algorithmic random number generator, or create a buffer.
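The last step of the pipeline described above, turning standard-normal variates into uniforms, is the probability integral transform: u = Φ(z), where Φ is the standard normal CDF. A sketch (using Gaussian pseudo-random numbers as a stand-in for the detrended, rescaled neutron-count data):

```python
import math
import random

def normal_to_uniform(z_values):
    """Map standard-normal variates to U(0,1) via the normal CDF
    Phi(z) = (1 + erf(z / sqrt(2))) / 2 (probability integral transform)."""
    return [0.5 * (1.0 + math.erf(z / math.sqrt(2.0))) for z in z_values]

rng = random.Random(42)
z = [rng.gauss(0.0, 1.0) for _ in range(50000)]  # stand-in for detector data
u = normal_to_uniform(z)
print(min(u) >= 0.0, max(u) <= 1.0)
```

If the input really is standard normal, the output is exactly uniform on (0, 1); departures from uniformity here are a direct diagnostic of residual structure in the detrended counts.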
Monte Carlo generation of dosimetric parameters for eye plaque dosimetry
International Nuclear Information System (INIS)
Cutajar, D.L.; Green, J.A.; Guatelli, S.; Rosenfeld, A.B.
2010-01-01
Full text: The Centre for Medical Radiation Physics has undertaken the development of a quality assurance tool, using silicon pixelated detectors, for the calibration of eye plaques prior to insertion. Dosimetric software to correlate the measured and predicted dose rates has been constructed. The dosimetric parameters within the software, for both I-125 and Ru-106 based eye plaques, were optimised using the Geant4 Monte Carlo toolkit. Methods: For I-125 based plaques, a novel application was developed to generate TG-43 parameters for any seed input. TG-43 parameters were generated for an Oncura model 6711 seed, with data points every millimetre up to 25 mm in the radial direction and every 5 degrees in polar angle, and correlated with published data. For Ru-106 based plaques, an application was developed to generate dose rates about a Bebig model CCD plaque. Toroids were used to score the deposited dose, taking advantage of the cylindrical symmetry of the plaque, with radii in millimetre increments up to 25 mm and depths from the plaque surface in millimetre increments up to 25 mm. Results: The TG-43 parameters generated for the 6711 seed correlate well with published TG-43 data at the given intervals, with the radial dose function within 3% and the anisotropy function within 5% for angles greater than 30 degrees. The Ru-106 plaque data correlated well with the Bebig protocol of measurement. Conclusion: Geant4 is a useful Monte Carlo tool for the generation of dosimetric data for eye plaque dosimetry, which may improve the quality assurance of eye plaque treatment. (author)
Testing random number generators for Monte Carlo applications
International Nuclear Information System (INIS)
Sim, L.H.
1992-01-01
Central to any system for modelling radiation transport phenomena using Monte Carlo techniques is the method by which pseudo random numbers are generated. This method is commonly referred to as the Random Number Generator (RNG). It is usually a computer implemented mathematical algorithm which produces a series of numbers uniformly distributed on the interval [0,1]. If this series satisfies certain statistical tests for randomness, then for practical purposes the pseudo random numbers in the series can be considered to be random. Tests of this nature are important not only for new RNGs but also to test the implementation of known RNG algorithms in different computer environments. Six RNGs have been tested using six statistical tests and one visual test. The statistical tests are the moments, frequency (digit and number), serial, gap, and poker tests. The visual test is a simple two dimensional ordered pair display. In addition the RNGs have been tested in a specific Monte Carlo application. This type of test is often overlooked, however it is important that in addition to satisfactory performance in statistical tests, the RNG be able to perform effectively in the applications of interest. The RNGs tested here are based on a variety of algorithms, including multiplicative and linear congruential, lagged Fibonacci, and combination arithmetic and lagged Fibonacci. The effect of the Bays-Durham shuffling algorithm on the output of a known bad RNG has also been investigated. 18 refs., 11 tabs., 4 figs.
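The Bays-Durham shuffle mentioned above decorrelates a stream by buffering recent outputs and letting the stream itself pick which buffered value to emit next. One common formulation (table size and base generator here are illustrative):

```python
import random

def bays_durham(raw, table_size=32):
    """Bays-Durham shuffle: buffer table_size outputs of the raw stream and
    use the previously emitted value to choose which buffered value comes
    next, breaking up serial correlations in the raw sequence."""
    table = [next(raw) for _ in range(table_size)]
    y = next(raw)
    while True:
        j = int(y * table_size)            # index chosen by previous output
        y, table[j] = table[j], next(raw)  # emit table[j], refill the slot
        yield y

# stand-in base stream (in practice this would be the "known bad" RNG)
base = iter(lambda: random.random(), None)
shuffled = bays_durham(base)
vals = [next(shuffled) for _ in range(10000)]
```

The shuffle permutes the order of outputs without changing their marginal distribution, so it can mask serial defects (like RANDU's planes) but cannot repair a short period.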
Foam A General Purpose Cellular Monte Carlo Event Generator
Jadach, Stanislaw
2003-01-01
A general purpose, self-adapting Monte Carlo (MC) event generator (simulator) is described. The high efficiency of the MC, that is, a small maximum weight or variance of the MC weight, is achieved by dividing the integration domain into small cells. The cells can be n-dimensional simplices, hyperrectangles, or Cartesian products of them. The grid of cells, called a 'foam', is produced by binary splitting of the cells. The choice of the next cell to be divided and the position/direction of the division hyperplane is driven by an algorithm which optimizes the ratio of the maximum weight to the average weight or (optionally) the total variance. The algorithm is able to deal, in principle, with an arbitrary pattern of singularities in the distribution. Like any MC generator, it can also be used for MC integration. With a typical personal computer CPU, the program is able to perform adaptive integration/simulation at a relatively small number of dimensions (≤ 16). With the continu...
Monte-Carlo event generation for the LHC
Siegert, Frank
This thesis discusses recent developments for the simulation of particle physics in the light of the start-up of the Large Hadron Collider. Simulation programs for fully exclusive events, dubbed Monte-Carlo event generators, are improved in areas related to the perturbative as well as non-perturbative regions of strong interactions. A short introduction to the main principles of event generation is given to serve as a basis for the following discussion. An existing algorithm for the correction of parton-shower emissions with the help of exact tree-level matrix elements is revisited and significantly improved as attested by first results. In a next step, an automated implementation of the POWHEG method is presented. It allows for the combination of parton showers with full next-to-leading order QCD calculations and has been tested in several processes. These two methods are then combined into a more powerful framework which allows to correct a parton shower with full next-to-leading order matrix elements and h...
PEPSI: a Monte Carlo generator for polarized leptoproduction
International Nuclear Information System (INIS)
Mankiewicz, L.
1992-01-01
We describe PEPSI (Polarized Electron Proton Scattering Interactions) a Monte Carlo program for the polarized deep inelastic leptoproduction mediated by electromagnetic interaction. The code is a modification of the LEPTO 4.3 Lund Monte Carlo for unpolarized scattering and requires the standard polarization-independent JETSET routines to perform fragmentation into final hadrons. (orig.)
Monte Carlos of the new generation: status and progress
International Nuclear Information System (INIS)
Frixione, Stefano
2005-01-01
Standard parton shower Monte Carlos are designed to give reliable descriptions of low-p_T physics. In the very high-energy regime of modern colliders, this may lead to largely incorrect predictions of the basic reaction processes. This has motivated recent theoretical efforts aimed at improving Monte Carlos through the inclusion of matrix elements computed beyond the leading order in QCD. I briefly review the progress made, and discuss bottom production at the Tevatron.
Dielectric response of periodic systems from quantum Monte Carlo calculations.
Umari, P; Willamson, A J; Galli, Giulia; Marzari, Nicola
2005-11-11
We present a novel approach that allows us to calculate the dielectric response of periodic systems in the quantum Monte Carlo formalism. We employ a many-body generalization for the electric-enthalpy functional, where the coupling with the field is expressed via the Berry-phase formulation for the macroscopic polarization. A self-consistent local Hamiltonian then determines the ground-state wave function, allowing for accurate diffusion quantum Monte Carlo calculations where the polarization's fixed point is estimated from the average on an iterative sequence, sampled via forward walking. This approach has been validated for the case of an isolated hydrogen atom and then applied to a periodic system, to calculate the dielectric susceptibility of molecular-hydrogen chains. The results found are in excellent agreement with the best estimates obtained from the extrapolation of quantum-chemistry calculations.
Shielding evaluation of neutron generator hall by Monte Carlo simulations
Energy Technology Data Exchange (ETDEWEB)
Pujala, U.; Selvakumaran, T.S.; Baskaran, R.; Venkatraman, B. [Radiological Safety Division, Indira Gandhi Center for Atomic Research, Kalpakkam (India); Thilagam, L.; Mohapatra, D.K., E-mail: swathythila2@yahoo.com [Safety Research Institute, Atomic Energy Regulatory Board, Kalpakkam (India)
2017-04-01
A shielded hall was constructed to accommodate a D-D, D-T or D-Be based pulsed neutron generator (NG) with a 4π yield of 10^9 n/s. The neutron shield design of the facility was optimized using the NCRP-51 methodology such that the total dose rates outside the hall are well below the regulatory limit for the full-occupancy criterion (1 μSv/h). However, the total dose rates at the roof top, the cooling room trench exit and the labyrinth exit were found to be above this limit for the optimized design. Hence, additional neutron shielding arrangements were proposed for the cooling room trench and labyrinth exits, and the roof top was made inaccessible. The present study evaluates the neutron and associated capture gamma transport through the bulk shields for the complete geometry and materials of the NG hall using the Monte Carlo (MC) codes MCNP and FLUKA. The neutron source terms of the D-D, D-T and D-Be reactions are considered in the simulations. The effect of the proposed additional shielding has been demonstrated through simulations carried out, with the additional shielding in place, for the D-Be neutron source term. The results of the MC simulations using the two different codes are found to be consistent with each other for the neutron dose rate estimates; however, deviations of up to 28% are noted between the two codes at a few locations for the capture gamma dose rate estimates. Overall, the dose rates estimated by the MC simulations including the additional shields show that all locations surrounding the hall satisfy the full-occupancy criterion for all three source types. Additionally, the dose rates due to direct transmission of primary neutrons estimated by FLUKA are compared with the values calculated using the formula given in NCRP-51, and the two deviate by up to 50% from each other. The details of the MC simulations and the NCRP-51 methodology for the estimation of the primary neutron dose rate, along with the results, are presented in this paper. (author)
Monte-Carlo approach to the generation of adversary paths
International Nuclear Information System (INIS)
1977-01-01
This paper considers the definition of a threat as the sequence of events that might lead to adversary success. A nuclear facility is characterized as a weighted, labeled, directed graph, with critical adversary paths. A discrete-event, Monte-Carlo simulation model is used to estimate the probability of the critical paths. The model was tested for hypothetical facilities, with promising results
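The discrete-event estimate described above amounts to Bernoulli sampling of independent edge successes along a candidate path through the facility graph. A toy sketch (the path structure and edge probabilities here are invented for illustration):

```python
import random

def estimate_path_probability(path_edges, n_trials=100000, rng=random.Random(0)):
    """Monte Carlo estimate of the probability that an adversary completes a
    path whose edges succeed independently with the given probabilities."""
    successes = 0
    for _ in range(n_trials):
        if all(rng.random() < p for p in path_edges):
            successes += 1
    return successes / n_trials

# hypothetical critical path: perimeter fence -> door -> vault
p_hat = estimate_path_probability([0.9, 0.5, 0.3])
print(p_hat)  # analytic value is 0.9 * 0.5 * 0.3 = 0.135
```

For a single path the analytic product is of course available directly; the Monte Carlo approach earns its keep when success probabilities depend on time, detection, and response events along a weighted directed graph, as in the model described above.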
Generation of Random Numbers and Parallel Random Number Streams for Monte Carlo Simulations
Directory of Open Access Journals (Sweden)
L. Yu. Barash
2012-01-01
Modern methods and libraries for high-quality pseudorandom number generation, and for the generation of parallel random number streams for Monte Carlo simulations, are considered. The probability equidistribution property, and the parameter ranges for which the property holds at dimensions up to the logarithm of the mesh size, are considered for Multiple Recursive Generators.
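One standard way to derive parallel streams, applicable to the linear generators discussed above, is skip-ahead (block splitting): jumping a generator forward by a huge fixed stride so that each worker owns a disjoint block of the sequence. A sketch for an LCG, using Knuth's MMIX multiplier and increment as stand-in constants:

```python
def lcg_jump(x, k, a=6364136223846793005, c=1442695040888963407, m=2**64):
    """Jump the LCG x -> (a*x + c) % m ahead by k steps in O(log k) time.
    Works by repeated squaring of the affine map: composing y -> a*y + c
    with itself gives y -> a**2*y + c*(a + 1), and so on."""
    ak, ck = 1, 0                  # accumulated map: y -> ak*y + ck
    while k:
        if k & 1:
            ak, ck = (a * ak) % m, (a * ck + c) % m
        a, c = (a * a) % m, (a * c + c) % m   # square the one-step map
        k >>= 1
    return (ak * x + ck) % m

# give each of 4 hypothetical workers a substream 10**12 steps apart
seeds = [lcg_jump(12345, w * 10**12) for w in range(4)]
print(seeds)
```

Because the stride is applied by closed-form composition rather than by stepping, workers can be seeded in microseconds even for astronomically large block sizes; the same idea generalizes to the matrix powers used to jump multiple recursive generators.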
Pseudo-random number generators for Monte Carlo simulations on ATI Graphics Processing Units
Demchik, Vadim
2011-03-01
Basic uniform pseudo-random number generators are implemented on ATI Graphics Processing Units (GPU). The performance results of the implemented generators (multiplicative linear congruential (GGL), XOR-shift (XOR128), RANECU, RANMAR, RANLUX and Mersenne Twister (MT19937)) on CPU and GPU are discussed. The obtained speed-up factor is hundreds of times in comparison with the CPU. The RANLUX generator is found to be the most appropriate for use on GPUs in Monte Carlo simulations. A brief review of the pseudo-random number generators used in modern software packages for Monte Carlo simulations in high-energy physics is also presented.
International Nuclear Information System (INIS)
Iga, Y.; Hamatsu, R.; Yamazaki, S.
1988-01-01
The Monte Carlo event generator for high energy hadron-nucleus (h-A) collisions has been developed which is based on the multi-chain model. The concept of formation zone and the cascade interactions of secondary particles are properly taken into account in this Monte Carlo code. Comparing the results of this code with experimental data, the importance of intranuclear cascade interactions becomes very clear. (orig.)
A Monte-Carlo method for ex-core neutron response
International Nuclear Information System (INIS)
Gamino, R.G.; Ward, J.T.; Hughes, J.C.
1997-10-01
A Monte Carlo neutron transport kernel capability, primarily for ex-core neutron response, is described. The capability consists of the generation of a set of response kernels which represent the neutron transport from the core to a specific ex-core volume. This is accomplished by tagging individual neutron histories at their initial source sites and tracking them throughout the problem geometry, tallying those that interact in the geometric regions of interest. These transport kernels can subsequently be combined with any number of core power distributions to determine the detector response for a variety of reactor conditions. Thus, the transport kernels are analogous to an integrated adjoint response. Examples of pressure vessel response and ex-core neutron detector response are provided to illustrate the method.
Monte Carlo modeling of the Fastscan whole body counter response
International Nuclear Information System (INIS)
Graham, H.R.; Waller, E.J.
2015-01-01
Monte Carlo N-Particle (MCNP) was used to build a model of the Fastscan whole body counter for the purpose of calibration. Two models were made: one for the Pickering Nuclear Site and one for the Darlington Nuclear Site. Once these models were benchmarked and found to be in good agreement, simulations were run to study the effect that different-sized phantoms had on the detected response; the shielding effect of torso fat was found to be non-negligible. Simulations of a source positioned externally on the anterior or posterior of a person were also conducted, to determine a ratio that could be used to decide whether a source is externally or internally placed. (author)
The GENIE Neutrino Monte Carlo Generator: Physics and User Manual
Energy Technology Data Exchange (ETDEWEB)
Andreopoulos, Costas [Univ. of Liverpool (United Kingdom). Dept. of Physics; Science and Technology Facilities Council (STFC), Oxford (United Kingdom). Rutherford Appleton Lab. (RAL). Particle Physics Dept.; Barry, Christopher [Univ. of Liverpool (United Kingdom). Dept. of Physics; Dytman, Steve [Univ. of Pittsburgh, PA (United States). Dept. of Physics and Astronomy; Gallagher, Hugh [Tufts Univ., Medford, MA (United States). Dept. of Physics and Astronomy; Golan, Tomasz [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States); Univ. of Rochester, NY (United States). Dept. of Physics and Astronomy; Hatcher, Robert [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States); Perdue, Gabriel [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States); Yarba, Julia [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States)
2015-10-20
GENIE is a suite of products for the experimental neutrino physics community. This suite includes i) a modern software framework for implementing neutrino event generators, a state-of-the-art comprehensive physics model and tools to support neutrino interaction simulation for realistic experimental setups (the Generator product), ii) extensive archives of neutrino, charged-lepton and hadron scattering data and software to produce a comprehensive set of data/MC comparisons (the Comparisons product), and iii) a generator tuning framework and fitting applications (the Tuning product). This book provides the definitive guide for the GENIE Generator: it presents the software architecture and a detailed description of its physics model and official tunes. In addition, it provides a rich set of data/MC comparisons that characterise the physics performance of GENIE. Detailed step-by-step instructions on how to install and configure the Generator, run its applications and analyze its outputs are also included.
International Nuclear Information System (INIS)
Vandenberghe, Stefaan; Staelens, Steven; Byrne, Charles L; Soares, Edward J; Lemahieu, Ignace; Glick, Stephen J
2006-01-01
In discrete detector PET, natural pixels are image basis functions calculated from the response of detector pairs. By using reconstruction with natural pixel basis functions, the discretization of the object into a predefined grid can be avoided. Here, we propose to use generalized natural pixel reconstruction. With this approach, the basis functions are not the detector sensitivity functions, as in the natural pixel case, but uniform parallel strips. The backprojection of the strip coefficients yields the reconstructed image. This paper proposes an easy and efficient way to generate the matrix M directly by Monte Carlo simulation. Elements of the generalized natural pixel system matrix are formed by calculating the intersection of a parallel strip with the detector sensitivity function. These generalized natural pixels are easier to use than conventional natural pixels because the final step from the solution to a square-pixel representation is done by simple backprojection. Due to rotational symmetry in the PET scanner, the matrix M is block circulant and only the first block row needs to be stored. Data were generated using a fast Monte Carlo simulator based on ray tracing. The proposed method was compared to a list-mode MLEM algorithm, which used ray tracing for forward and backprojection. Comparison of the algorithms with different phantoms showed that an improved resolution can be obtained using generalized natural pixel reconstruction with accurate system modelling. In addition, at the same resolution a lower noise level is present in this reconstruction. A numerical observer study showed that the proposed method exhibited increased performance compared to a standard list-mode EM algorithm. In another study, more realistic data were generated using the GATE Monte Carlo simulator. For these data, a more uniform contrast recovery and a better contrast-to-noise performance were observed.
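The MLEM baseline mentioned above uses the standard EM update x ← x / (Aᵀ1) · Aᵀ(y / Ax). A minimal sketch on a toy system matrix follows; the 3-strip, 2-pixel matrix and the data are illustrative stand-ins, not the paper's strip geometry:

```python
import numpy as np

def mlem(A, y, n_iter=200):
    """Basic MLEM: x <- x / (A^T 1) * A^T (y / (A x)), with guards against /0."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])          # sensitivity image A^T 1
    for _ in range(n_iter):
        ratio = y / np.maximum(A @ x, 1e-12)  # measured / forward-projected
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x

# toy system matrix: rows are strips, columns are image pixels (illustrative)
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.5, 0.5]])
x_true = np.array([3.0, 1.0])
y = A @ x_true        # noise-free "measured" strip data
x = mlem(A, y)
```

With consistent noise-free data the iteration converges to the true intensities while keeping every estimate nonnegative, which is the property that makes EM attractive for emission data.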
Application of MCAM in generating Monte Carlo model for ITER port limiter
International Nuclear Information System (INIS)
Lu Lei; Li Ying; Ding Aiping; Zeng Qin; Huang Chenyu; Wu Yican
2007-01-01
On the basis of the pre-processing and conversion functions supplied by MCAM (Monte Carlo Particle Transport Calculation Automatic Modeling System), this paper describes the generation of the ITER port limiter MC (Monte Carlo) calculation model from the CAD engineering model. The result was validated by using the reverse function of MCAM and the MCNP PLOT 2D cross-section drawing program. The successful application of MCAM to the ITER port limiter demonstrates that, compared with the traditional manual modeling method, MCAM can dramatically increase the efficiency and accuracy of generating MC calculation models from CAD engineering models with complex geometry. (authors)
International Nuclear Information System (INIS)
Pazianotto, Mauricio T.; Carlson, Brett V.; Federico, Claudio A.; Gonzalez, Odair L.
2011-01-01
Neutrons generated by the interaction of cosmic rays with the atmosphere make an important contribution to the dose accumulated in electronic circuits and in aircraft crew members at flight altitude. High-energy neutrons are produced in spallation reactions and intranuclear cascade processes by primary cosmic-ray particle interactions with atoms in the atmosphere. These neutrons can produce secondary neutrons and also undergo moderation through atmospheric interactions, resulting in a wide energy spectrum, ranging from thermal energies (0.025 eV) to several hundred MeV. The Long Counter (LC) is a widely used neutron detector designed to measure the directional flux of neutrons with an approximately constant response over a wide energy range (thermal to 20 MeV). Its calibration and the determination of its energy response over the wide energy range of the cosmic-ray-induced neutron spectrum are very difficult due to the lack of facilities with these capabilities. The goal of this study is to assess the response of a Long Counter using the Monte Carlo (MC) computational code MCNPX (Monte Carlo N-Particle eXtended). The dependence of the Long Counter response on the angle of incidence, as well as on the neutron energy, is carefully investigated, compared with experimental data previously obtained with 241Am-Be and 252Cf neutron sources, and extended to the neutron spectrum produced by cosmic rays. (Author)
Radiative corrections and Monte Carlo generators for physics at flavor factories
Directory of Open Access Journals (Sweden)
Montagna Guido
2016-01-01
I review the state of the art of precision calculations and related Monte Carlo generators used in physics at flavor factories. The review describes the tools relevant for the measurement of the hadron production cross section (via radiative return, energy scan and γγ scattering), luminosity monitoring, searches for new physics, and the physics of the τ lepton.
New capabilities for Monte Carlo simulation of deuteron transport and secondary products generation
International Nuclear Information System (INIS)
Sauvan, P.; Sanz, J.; Ogando, F.
2010-01-01
Several important research programs are dedicated to the development of facilities based on deuteron accelerators. In designing these facilities, the definition of a validated computational approach able to simulate deuteron transport and to evaluate deuteron interactions and the production of secondary particles with acceptable precision is a very important issue. Current Monte Carlo codes, such as MCNPX or PHITS, when applied to deuteron transport calculations, use built-in semi-analytical models to describe deuteron interactions. These models are unreliable in predicting the neutrons and photons generated by low-energy deuterons, which are typically present in those facilities. We present a new computational tool, resulting from an extension of the MCNPX code, which significantly improves the treatment of problems in which any secondary product (neutrons, photons, tritons, etc.) generated by low-energy deuteron reactions could play a major role. Firstly, it handles deuteron evaluated data libraries, which allow a better description of low-energy deuteron interactions. Secondly, it includes a variance reduction technique for the production of secondary particles by charged-particle-induced nuclear interactions, which drastically reduces the computing time needed for transport and nuclear response calculations. Verification of the computational tool is successfully achieved. This tool can be very helpful in addressing design issues such as the selection of the dedicated neutron production target and accelerator radioprotection analysis. It can also be helpful for testing the deuteron cross sections under development in the frame of different international nuclear data programs.
Efficient pseudo-random number generation for monte-carlo simulations using graphic processors
Mohanty, Siddhant; Mohanty, A. K.; Carminati, F.
2012-06-01
A hybrid approach based on the combination of three Tausworthe generators and one linear congruential generator for pseudorandom number generation for GPU programming, as suggested in the NVIDIA CUDA library, has been used for Monte Carlo sampling. On each GPU thread, a random seed is generated on the fly in a simple way using a quick-and-dirty algorithm in which the mod operation is not performed explicitly, relying instead on unsigned-integer overflow. Using this hybrid generator, multivariate correlated sampling based on the alias technique has been carried out using both the CUDA and OpenCL languages.
Efficient pseudo-random number generation for Monte-Carlo simulations using graphic processors
International Nuclear Information System (INIS)
Mohanty, Siddhant; Mohanty, A K; Carminati, F
2012-01-01
A hybrid approach based on the combination of three Tausworthe generators and one linear congruential generator for pseudorandom number generation for GPU programming, as suggested in the NVIDIA CUDA library, has been used for Monte Carlo sampling. On each GPU thread, a random seed is generated on the fly in a simple way using a quick-and-dirty algorithm in which the mod operation is not performed explicitly, relying instead on unsigned-integer overflow. Using this hybrid generator, multivariate correlated sampling based on the alias technique has been carried out using both the CUDA and OpenCL languages.
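The three-Tausworthe-plus-LCG combination described above is the well-known "HybridTaus" generator from the NVIDIA GPU literature. A pure-Python sketch is given below; on a GPU each thread would hold its own four-word state, and the explicit 32-bit masks emulate the unsigned-integer overflow the abstract mentions. The seeds shown are illustrative (the Tausworthe components require seeds above 128):

```python
M32 = 0xFFFFFFFF  # emulate 32-bit unsigned arithmetic explicitly

def taus_step(z, s1, s2, s3, m):
    """One Tausworthe step on a 32-bit word."""
    b = (((z << s1) & M32) ^ z) >> s2
    return (((z & m) << s3) & M32) ^ b

def lcg_step(z, a=1664525, c=1013904223):
    """One linear congruential step; overflow supplies the mod 2^32."""
    return (a * z + c) & M32

def hybrid_taus(state):
    """Advance the combined state; return (new_state, uniform in [0, 1))."""
    z1, z2, z3, z4 = state
    z1 = taus_step(z1, 13, 19, 12, 0xFFFFFFFE)
    z2 = taus_step(z2, 2, 25, 4, 0xFFFFFFF8)
    z3 = taus_step(z3, 3, 11, 17, 0xFFFFFFF0)
    z4 = lcg_step(z4)
    return (z1, z2, z3, z4), (z1 ^ z2 ^ z3 ^ z4) / 4294967296.0

# per-thread seeds (illustrative); Tausworthe seeds must exceed 128
state = (129, 12345, 678910, 2**20 + 7)
state, u = hybrid_taus(state)
```

XOR-ing generators with different period structures lengthens the combined period and masks the individual generators' defects, which is why the hybrid is preferred over any one component on its own.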
Ghersi, Dario; Parakh, Abhishek; Mezei, Mihaly
2017-12-05
Four pseudorandom number generators were compared with a physical, quantum-based random number generator using the NIST suite of statistical tests; only the quantum-based random number generator passed all of the tests. We then measured the effect of the five random number generators on various calculated properties in different Markov-chain Monte Carlo simulations. Two types of systems were tested: conformational sampling of a small molecule in aqueous solution and liquid methanol under constant temperature and pressure. The results show that poor-quality pseudorandom number generators produce results that deviate significantly from those obtained with the quantum-based random number generator, particularly in the case of the small molecule in aqueous solution. In contrast, the widely used Mersenne Twister pseudorandom generator and a 64-bit linear congruential generator with a scrambler produce results that are statistically indistinguishable from those obtained with the quantum-based random number generator. © 2017 Wiley Periodicals, Inc.
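The kind of structural defect that makes a poor generator fail such statistical test suites can be shown in a few lines for RANDU, the classic bad linear congruential generator (x ← 65539·x mod 2³¹) — used here only as an illustration, not necessarily one of the four generators tested in the paper. Every three consecutive RANDU outputs satisfy an exact linear relation, so triplets fall on just 15 planes in the unit cube:

```python
def randu_sequence(seed, n):
    """Raw RANDU integers: the notoriously flawed LCG x <- 65539*x mod 2^31."""
    x, out = seed, []
    for _ in range(n):
        x = (65539 * x) % 2**31
        out.append(x)
    return out

xs = randu_sequence(seed=1, n=1000)

# the hidden correlation: x[k+2] - 6*x[k+1] + 9*x[k] == 0 (mod 2^31), exactly,
# because 65539^2 = 6*65539 - 9 (mod 2^31)
defect = all((xs[k + 2] - 6 * xs[k + 1] + 9 * xs[k]) % 2**31 == 0
             for k in range(len(xs) - 2))
```

A generator with this property produces Markov-chain moves whose coordinates are linearly correlated, which is exactly the kind of bias the paper reports propagating into computed physical properties.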
Monte-Carlo Generation of Time Evolving Fission Chains
Energy Technology Data Exchange (ETDEWEB)
Verbeke, Jerome M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Kim, Kenneth S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Prasad, Manoj K. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Snyderman, Neal J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
2013-08-01
About a decade ago, a computer code was written to model neutrons from their “birth” to their final “death” in thermal neutron detectors (^{3}He tubes): SrcSim had enough physics to track neutrons in multiplying systems, appropriately increasing and decreasing the neutron population as they interacted through absorption, fission and leakage. The theory behind the algorithms assumed that all neutrons in a fission chain were produced simultaneously and then diffused to the neutron detectors. For cases where the diffusion times are long compared to the fission chains, SrcSim is very successful. Indeed, it works extraordinarily well for thermal neutron detectors and bare objects, because it takes tens of microseconds for fission neutrons to slow down to thermal energies, where they can be detected, and microseconds are a very long time compared to the lengths of the fission chains. However, this inherent assumption prevents the theory's use in cases where either the fission chains are long compared to the neutron diffusion times (water-cooled nuclear reactors or heavily moderated objects, where the theory starts to fail) or the fission neutrons can be detected shortly after they are produced (fast neutron detectors). For these cases, a new code needs to be written in which the underlying assumption is not made. The purpose of this report is to develop an algorithm to generate the arrival times of neutrons in fast neutron detectors, starting from a neutron source such as a spontaneous fission source (^{252}Cf) or a multiplying source (Pu). This code will be an extension of SrcSim to cases where correlations between neutrons in the detectors are on the same or shorter time scales than the fission chains themselves.
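A minimal sketch of the kind of algorithm the report calls for — following each neutron of a chain individually with an explicit interaction time, rather than assuming the whole chain is emitted instantaneously — is given below. All constants (fission and detection probabilities, multiplicity distribution, mean interaction time) are illustrative placeholders, not evaluated nuclear data:

```python
import random

def fission_chain(rng, p_fission=0.3, nu=(0.2, 0.3, 0.3, 0.2),
                  mean_dt=1.0, p_detect=0.1):
    """Return sorted detection times produced by one source neutron born at t = 0.

    Each live neutron draws an exponential time to its next interaction; the
    interaction is induced fission (emitting k new neutrons with probability
    nu[k]), detection, or loss (absorption/leakage). Subcritical here since
    p_fission * mean(nu) = 0.3 * 1.5 = 0.45 < 1, so chains terminate.
    """
    times, stack = [], [0.0]          # stack holds birth times of live neutrons
    while stack:
        t = stack.pop() + rng.expovariate(1.0 / mean_dt)
        r = rng.random()
        if r < p_fission:
            # induced fission: sample emitted multiplicity k from `nu`
            u, acc = rng.random(), 0.0
            for k, pk in enumerate(nu):
                acc += pk
                if u < acc:
                    stack.extend([t] * k)
                    break
        elif r < p_fission + p_detect:
            times.append(t)           # counted in a fast neutron detector
        # else: absorbed or leaked undetected; history ends
    return sorted(times)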
The CCFM Monte Carlo generator CASCADE Version 2.2.03
International Nuclear Information System (INIS)
Jung, H.; Baranov, S.; Deak, M.; Grebenyuk, A.; Hentschinski, M.; Knutsson, A.; Kraemer, M.; Hautmann, F.; Kutak, K.; Lipatov, A.; Zotov, N.
2010-01-01
Cascade is a full hadron-level Monte Carlo event generator for ep, γp, p anti-p and pp processes, which uses the CCFM evolution equation for the initial-state cascade in a backward evolution approach, supplemented with off-shell matrix elements for the hard scattering. A detailed program description is given, with emphasis on the parameters the user may want to change and the common block variables which completely specify the generated events. (orig.)
The CCFM Monte Carlo Generator CASCADE version 2.2.0
Energy Technology Data Exchange (ETDEWEB)
Jung, H. [DESY, Hamburg (Germany); Antwerp Univ. (Belgium); Baranov, S. [Lebedev Physics Inst. (Russian Federation); Deak, M. [Madrid Univ. (ES). Inst. de Fisica Teorica UAM/CSIC] (and others)
2010-08-15
CASCADE is a full hadron-level Monte Carlo event generator for ep, γp, p anti-p and pp processes, which uses the CCFM evolution equation for the initial-state cascade in a backward evolution approach, supplemented with off-shell matrix elements for the hard scattering. A detailed program description is given, with emphasis on the parameters the user may want to change and the common block variables which completely specify the generated events. (orig.)
Validation of Monte Carlo event generators in the ATLAS Collaboration for LHC Run 2
The ATLAS collaboration
2016-01-01
This note reviews the main steps followed by the ATLAS Collaboration to validate the properties of particle-level simulated events from Monte Carlo event generators, in order to ensure the correctness of all event generator configurations and production samples used in physics analyses. A central validation procedure is adopted which permits continual validation of the functionality and performance of the ATLAS event simulation infrastructure. Revisions and updates of the Monte Carlo event generators are also monitored. The methodology behind the validation, the tools developed for that purpose, and various use cases are presented. The strategy has proven to play an essential role in identifying possible problems or unwanted features within a restricted timescale, verifying their origin, and pointing to possible bug fixes before full-scale processing is initiated.
Generation and Verification of ENDF/B-VII.0 Cross section Libraries for Monte Carlo Calculations
International Nuclear Information System (INIS)
Park, Ho Jin; Kwak, Min Su; Joo, Han Gyu; Kim, Chang Hyo
2007-01-01
For Monte Carlo neutronics calculations, a continuous-energy nuclear data library is needed. It can be generated from various evaluated nuclear data files such as ENDF/B using the ACER routine of the NJOY code, after a series of prior processing steps involving various other NJOY routines. Recently, a utility code named ANJOYMC, which generates the NJOY input decks in an automated mode, became available. The use of this code greatly reduces the user's effort and the possibility of input errors. In December 2006, the initial version of the ENDF/B-VII nuclear data library was released. It was reported that the new data files contain much better data, reducing the errors noted in the previous versions. Thus it is worthwhile to examine the performance of the new data files, particularly using an independent Monte Carlo code, McCARD, and the ANJOYMC utility code. The verification of the newly generated library can be readily performed by analyzing numerous standard criticality benchmark problems.
Monte Carlo simulations of a D-T neutron generator shielding for landmine detection
International Nuclear Information System (INIS)
Reda, A.M.
2011-01-01
Shielding for a D-T sealed neutron generator has been designed using the MCNP5 Monte Carlo radiation transport code. The neutron generator will be used in the field for the detection of explosives, landmines, drugs and other 'threat' materials. The optimization of the detection of buried objects was started by studying the signal-to-noise ratio for different geometric conditions. - Highlights: → A landmine detection system based on fast/slow neutron analysis has been designed. → Shielding for a D-T sealed neutron generator tube has been designed using a Monte Carlo radiation transport code. → Detection of buried objects was studied via the signal-to-noise ratio for different geometric conditions. → The signal-to-background ratio was optimized at one position for all depths.
International Nuclear Information System (INIS)
Valentine, T.E.; Mihalczo, J.T.
1996-01-01
One primary concern in the design of reactor safety systems is the time response of external detectors to changes in the core. This paper describes a way to estimate the time delay between core power production and the external detector response using Monte Carlo calculations, and suggests a technique to measure the time delay. The Monte Carlo code KENO-NR was used to determine the time delay between core power production and the external detector response for a conceptual design of the Advanced Neutron Source (ANS) reactor. The Monte Carlo estimated time delay was determined to be about 10 ms for this conceptual design of the ANS reactor.
Monte Carlo efficiency calibration of a neutron generator-based total-body irradiator
International Nuclear Information System (INIS)
Shypailo, R.J.; Ellis, K.J.
2009-01-01
Many body composition measurement systems are calibrated against a single-sized reference phantom. Prompt-gamma neutron activation (PGNA) provides the only direct measure of total body nitrogen (TBN), an index of the body's lean tissue mass. In PGNA systems, body size influences neutron flux attenuation, induced gamma signal distribution, and counting efficiency. Thus, calibration based on a single-sized phantom could result in inaccurate TBN values. We used Monte Carlo simulations (MCNP-5; Los Alamos National Laboratory) in order to map a system's response to the range of body weights (65-160 kg) and body fat distributions (25-60%) in obese humans. Calibration curves were constructed to derive body-size correction factors relative to a standard reference phantom, providing customized adjustments to account for differences in the body habitus of obese adults. The use of MCNP-generated calibration curves should allow for a better estimate of the true changes in lean tissue mass that may occur during intervention programs focused only on weight loss. (author)
Random number generators for large-scale parallel Monte Carlo simulations on FPGA
Lin, Y.; Wang, F.; Liu, B.
2018-05-01
Through parallelization, field-programmable gate arrays (FPGAs) can achieve unprecedented speeds in large-scale parallel Monte Carlo (LPMC) simulations. FPGAs present both new constraints and new opportunities for the implementation of random number generators (RNGs), which are key elements of any Monte Carlo (MC) simulation system. Using empirical and application-based tests, this study evaluates all four RNGs used in previous FPGA-based MC studies, together with newly proposed FPGA implementations of two well-known high-quality RNGs suitable for LPMC studies on FPGA. One of the newly proposed FPGA implementations, a parallel version of the additive lagged Fibonacci generator (Parallel ALFG), is found to be the best among the evaluated RNGs in fulfilling the needs of LPMC simulations on FPGA.
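The additive lagged Fibonacci generator behind the winning implementation computes x[n] = x[n−j] + x[n−k] (mod 2³²) over a ring buffer of the last k words. The sketch below uses the common lag pair (j, k) = (5, 17); the lags and seeding policy are illustrative, not the paper's. A "parallel" version for FPGA use would give each computing element an independently seeded state buffer, as the two differently seeded instances in the usage hint at:

```python
import random

class ALFG:
    """Additive lagged Fibonacci generator: x[n] = x[n-j] + x[n-k] mod 2^bits."""

    def __init__(self, seed, j=5, k=17, bits=32):
        self.j, self.k = j, k
        self.mask = (1 << bits) - 1
        fill = random.Random(seed)                     # seed the lag table
        self.state = [fill.getrandbits(bits) for _ in range(k)]
        self.state[0] |= 1      # at least one odd word is needed for full period
        self.i = 0              # ring-buffer index of the oldest word, x[n-k]

    def next_word(self):
        i, j, k = self.i, self.j, self.k
        # x[n-j] sits j slots "behind" the write position in the ring buffer
        new = (self.state[(i - j) % k] + self.state[i]) & self.mask
        self.state[i] = new     # overwrite x[n-k] with x[n]
        self.i = (i + 1) % k
        return new
```

The appeal for FPGA is that the update is a single adder plus two buffer reads with no multiplication or division, so one word can be produced per clock cycle.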
Lim, J. T.; Gold, H. J.; Wilkerson, G. G.; Raper, C. D. Jr. (Principal Investigator)
1989-01-01
We describe the application of a strategy for conducting a sensitivity analysis for a complex dynamic model. The procedure involves preliminary screening of parameter sensitivities by numerical estimation of linear sensitivity coefficients, followed by generation of a response surface based on Monte Carlo simulation. Application is to a physiological model of the vegetative growth of soybean plants. The analysis provides insights as to the relative importance of certain physiological processes in controlling plant growth. Advantages and disadvantages of the strategy are discussed.
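The two-stage strategy described above can be sketched in a few lines: first screen parameters with normalized finite-difference sensitivity coefficients, then Monte Carlo sample the parameters and collect the response distribution. The toy model below is an illustrative stand-in for the soybean growth model, not the authors' code:

```python
import random

def linear_sensitivities(model, p0, rel_step=0.01):
    """Dimensionless coefficients (dY/Y)/(dp/p), one per parameter."""
    base = model(p0)
    coeffs = []
    for i in range(len(p0)):
        q = list(p0)
        q[i] = p0[i] * (1.0 + rel_step)        # perturb one parameter at a time
        coeffs.append((model(q) - base) / base / rel_step)
    return coeffs

def monte_carlo_responses(model, p0, spread=0.1, n=2000, seed=7):
    """Sample each parameter uniformly within +/-spread and record the response."""
    rng = random.Random(seed)
    return [model([p * (1.0 + spread * (2.0 * rng.random() - 1.0)) for p in p0])
            for _ in range(n)]

# toy stand-in model: response dominated by p[0], nearly insensitive to p[2]
toy_model = lambda p: p[0] ** 2 * p[1] + 0.01 * p[2]
sens = linear_sensitivities(toy_model, [1.0, 1.0, 1.0])
```

Screening first keeps the Monte Carlo stage affordable: only the parameters with large coefficients need to be varied when building the response surface.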
Energy Technology Data Exchange (ETDEWEB)
Robles Pimentel, Edgar [Instituto de Investigaciones Electricas, Cuernavaca (Mexico); Garcia Hernandez, Javier [Comision Federal de Electricidad, Mexico, D. F. (Mexico)
1997-12-31
In November 1995, the Unit 2 generator at the Ingeniero Carlos Ramirez Ulloa (El Caracol) hydroelectric power station failed. The accident forced a complete overhaul. This paper presents the technical problems faced during the overhaul of the generator and discusses the solutions implemented.
Energy Technology Data Exchange (ETDEWEB)
Robles Pimentel, Edgar [Instituto de Investigaciones Electricas, Cuernavaca (Mexico); Garcia Hernandez, Javier [Comision Federal de Electricidad, Mexico, D. F. (Mexico)
1998-12-31
In November 1995, the Unit 2 generator at the Ingeniero Carlos Ramirez Ulloa (El Caracol) hydroelectric power station failed. The accident forced a complete overhaul. This paper presents the technical problems faced during the overhaul of the generator and discusses the solutions implemented.
International Nuclear Information System (INIS)
Both, J.P.; Nimal, J.C.; Vergnaud, T.
1990-01-01
We discuss an automated biasing procedure for generating the parameters necessary to achieve efficient Monte Carlo biasing in shielding calculations. The biasing techniques considered here are the exponential transform and collision biasing, both derived from the concept of a biased game based on an importance function. We use a simple model of the importance function, with exponential attenuation as the distance to the detector increases. This importance function is generated on a three-dimensional mesh covering the geometry, using graph-theory algorithms. This scheme is currently being implemented in the third version of the neutron and gamma-ray transport code TRIPOLI-3. (author)
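The exponential transform mentioned above can be illustrated in one function: flight paths toward the detector are stretched by sampling distances with a reduced cross section σ*, and the estimate is kept unbiased by the weight w = p(x)/p*(x). The cross-section values in the usage are illustrative:

```python
import math
import random

def biased_flight(sigma, sigma_star, rng):
    """Sample a path length from the biased exponential pdf and return (x, weight).

    True pdf:   p(x)  = sigma * exp(-sigma * x)
    Biased pdf: p*(x) = sigma_star * exp(-sigma_star * x), sigma_star < sigma
    Weight:     w = p(x) / p*(x), restoring an unbiased expectation.
    """
    x = rng.expovariate(sigma_star)          # longer flights than physics alone
    w = (sigma / sigma_star) * math.exp(-(sigma - sigma_star) * x)
    return x, w
```

Any tally scored with weight w has the same expectation as an analog simulation, but far more histories reach deep-penetration regions, which is the point of the transform in shielding problems.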
Study of variants for Monte Carlo generators of τ→3πν decays
Energy Technology Data Exchange (ETDEWEB)
Wąs, Zbigniew; Zaremba, Jakub, E-mail: jakub.zaremba@ifj.edu.pl [Institute of Nuclear Physics, PAN, ul. Radzikowskiego 152, Kraków (Poland)
2015-11-28
Low-energy QCD (below 2 GeV) is a region of resonance dynamics that sometimes lacks a description matching the precision of available experimental data. Hadronic τ decays offer a probe of this energy regime. In general, the predictions for the decays are model dependent, with parameters fitted to experimental results. The parameterizations differ in the assumptions and theoretical requirements they take into account. Both the model distributions and the acquired data samples used for the fits are the results of a complex effort. In this paper, we investigate the main parameterizations of τ decay matrix elements for the one- and three-prong channels of three-pion τ decays. The differences in the analytical forms of the currents and in the resulting distributions used for comparison with the experimental data are studied. We use invariant mass spectra of all possible pion pairs and of the whole three-pion system. Three-dimensional histograms spanning all distinct squared invariant masses are also used to represent the results of models and experimental data. We present distributions from TAUOLA Monte Carlo generation and from a semi-analytical calculation. These are necessary steps in the development of fitting in as model-independent a way as possible, and in the exploration of multi-million-event experimental data samples. This includes the response of the distributions to model variants and/or to the numerical values of the parameters. The interference effects of the currents' parts are also studied. For technical purposes, weighted events are introduced. Even though we focus on 3πν{sub τ} modes, the technical aspects of our study are relevant for all τ decay modes into three hadrons.
Study of variants for Monte Carlo generators of τ → 3πν decays
Energy Technology Data Exchange (ETDEWEB)
Was, Zbigniew; Zaremba, Jakub [PAN, Institute of Nuclear Physics, Krakow (Poland)
2015-11-15
Low-energy QCD (below 2 GeV) is a region of resonance dynamics that sometimes lacks a description matching the precision of available experimental data. Hadronic τ decays offer a probe of this energy regime. In general, the predictions for the decays are model dependent, with parameters fitted to experimental results. The parameterizations differ in the assumptions and theoretical requirements they take into account. Both the model distributions and the acquired data samples used for the fits are the results of a complex effort. In this paper, we investigate the main parameterizations of τ decays. The differences in the analytical forms of the currents and in the resulting distributions used for comparison with the experimental data are studied. We use invariant mass spectra of all possible pion pairs and of the whole three-pion system. Three-dimensional histograms spanning all distinct squared invariant masses are also used to represent the results of models and experimental data. We present distributions from TAUOLA Monte Carlo generation and from a semi-analytical calculation. These are necessary steps in the development of fitting in as model-independent a way as possible, and in the exploration of multi-million-event experimental data samples. This includes the response of the distributions to model variants and/or to the numerical values of the parameters. The interference effects of the currents' parts are also studied. For technical purposes, weighted events are introduced. Even though we focus on 3πν{sub τ} modes, the technical aspects of our study are relevant for all τ decay modes into three hadrons. (orig.)
Development of ANJOYMC Program for Automatic Generation of Monte Carlo Cross Section Libraries
International Nuclear Information System (INIS)
Kim, Kang Seog; Lee, Chung Chan
2007-03-01
The NJOY code, developed at Los Alamos National Laboratory, generates cross section libraries in ACE format for Monte Carlo codes such as MCNP and McCARD by processing evaluated nuclear data in ENDF/B format. It takes a long time to prepare all the NJOY input files for hundreds of nuclides at various temperatures, and the input files can contain errors. In order to solve these problems, the ANJOYMC program has been developed. From a simple user input deck, this program not only generates all the NJOY input files automatically, but also generates a batch file to perform all the NJOY calculations. The ANJOYMC program is written in Fortran 90 and can be executed under the Windows and Linux operating systems on a personal computer. Cross section libraries in ACE format can be generated in a short time and without errors using a simple user input deck.
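The automation idea — expanding a compact user input (nuclide list × temperature list) into one input deck per case plus a batch script — can be sketched as follows. The deck text and file naming below are simplified placeholders for illustration only; they are not the real NJOY card format or the actual ANJOYMC output:

```python
# Placeholder deck template: the real NJOY input consists of numbered cards for
# each module (moder, reconr, broadr, ..., acer); only the expansion logic is
# the point here.
DECK_TEMPLATE = """\
-- NJOY run for {nuclide} (MAT {mat}) at {temp} K -- placeholder cards
-- module sequence: moder / reconr / broadr / heatr / purr / acer
temperature: {temp}
"""

def generate_decks(cases):
    """cases: iterable of (nuclide, mat_number, [temperatures]).

    Returns {filename: deck_text}, one deck per nuclide/temperature pair.
    """
    decks = {}
    for nuclide, mat, temps in cases:
        for temp in temps:
            name = f"{nuclide}_{int(temp)}K.nji"
            decks[name] = DECK_TEMPLATE.format(nuclide=nuclide, mat=mat, temp=temp)
    return decks

def batch_script(decks):
    """One NJOY invocation per generated deck (hypothetical command line)."""
    return "\n".join(f"njoy < {name}" for name in sorted(decks))
```

Generating every deck from one template is what removes the per-file transcription errors the abstract mentions: a card fixed once in the template is fixed for all hundreds of nuclides.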
Energy Technology Data Exchange (ETDEWEB)
Campuzano Martinez, Ignacio Roberto; Gonzalez Vazquez, Alejandro Esteban; Robles Pimentel, Edgar Guillermo; Esparza Saucedo, Marcos; Garcia Martinez, Javier; Sanchez Flores, Ernesto; Martinez Romero, Jose Luis [Instituto de Investigaciones Electricas, Cuernavaca (Mexico)
1999-12-31
The Ingeniero Carlos Ramirez Ulloa hydroelectric power station has three 200 MW generators. The station began commercial operation in 1985. The generators had design problems that were corrected in an overhaul program that began in 1996 with the Unit 2 generator and was completed in 1998 with the Unit 1 generator. This paper presents the relevant aspects of the experience accumulated in the project.
Energy Technology Data Exchange (ETDEWEB)
Campuzano Martinez, Ignacio Roberto; Gonzalez Vazquez, Alejandro Esteban; Robles Pimentel, Edgar Guillermo; Esparza Saucedo, Marcos; Garcia Martinez, Javier; Sanchez Flores, Ernesto; Martinez Romero, Jose Luis [Instituto de Investigaciones Electricas, Cuernavaca (Mexico)
1998-12-31
The Ingeniero Carlos Ramirez Ulloa hydroelectric power station has three 200 MW generators. The station began commercial operation in 1985. The generators had design problems that were corrected in an overhaul program that began in 1996 with the Unit 2 generator and was completed in 1998 with the Unit 1 generator. This paper presents the relevant aspects of the experience accumulated in the project.
Sanattalab, Ehsan; SalmanOgli, Ahmad; Piskin, Erhan
2016-04-01
We investigated how tumor-targeted nanoparticles influence heat generation. We assume that all nanoparticles are fully functionalized and can find the target using active targeting methods. Unlike commonly used methods such as chemotherapy and radiotherapy, the treatment procedure proposed in this study is purely noninvasive, which is considered a significant merit. It is found that the localized heat generation due to targeted nanoparticles is significantly higher than in other areas. By engineering the optical properties of the nanoparticles, including the scattering and absorption coefficients and the asymmetry factor (mean cosine of the scattering angle), the heat generated in the tumor area reaches a critical level that can burn the targeted tumor. The amount of heat generated by inserting smart agents, due to surface plasmon resonance, is remarkably high. The light-matter interactions and the trajectories of incident photons in the targeted tissue are simulated by Mie theory and the Monte Carlo method, respectively. The Monte Carlo method is a statistical technique by which the photon trajectories through the simulation volume can be accurately sampled.
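A heavily simplified version of such photon-trajectory Monte Carlo is the weight-dropping random walk below, reduced to an infinite homogeneous medium so the energy balance can be checked directly. At each interaction a photon packet deposits the absorbed fraction μₐ/μₜ of its weight as heat and continues with the scattered remainder; Russian roulette terminates histories without bias. Real simulations add geometry, Mie-derived phase functions and spatial heat maps; the coefficients here are illustrative:

```python
import random

def deposited_heat(n_photons=2000, mu_a=1.0, mu_s=9.0, seed=1):
    """Total heat (in photon-weight units) deposited by n_photons packets."""
    rng = random.Random(seed)
    mu_t = mu_a + mu_s
    albedo = mu_s / mu_t                 # scattered fraction per interaction
    total = 0.0
    for _ in range(n_photons):
        w = 1.0
        while True:
            total += w * (mu_a / mu_t)   # heat deposited at this interaction
            w *= albedo                  # remainder is scattered onward
            if w < 1e-3:                 # Russian roulette: unbiased termination
                if rng.random() < 0.1:
                    w /= 0.1             # survivor carries boosted weight
                else:
                    break
    return total
```

Because roulette survivors have their weight boosted by exactly the inverse survival probability, the expected deposited heat equals the launched weight, so localized deposition in a spatially resolved version inherits the same unbiasedness.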
Life prediction of steam generator tubing due to stress corrosion crack using Monte Carlo Simulation
International Nuclear Information System (INIS)
Hu Jun; Liu Fei; Cheng Guangxu; Zhang Zaoxiao
2011-01-01
Highlights: → A life prediction model for SG tubing was proposed. → The initial crack length for SCC was determined. → Two failure modes, rupture and leak, were considered. → A probabilistic life prediction code based on the Monte Carlo method was developed. - Abstract: The failure of steam generator tubing is one of the main accidents that seriously affect the availability and safety of a nuclear power plant. In order to estimate the probability of failure, a probabilistic model was established to predict the whole life-span and residual life of steam generator (SG) tubing. The failure mechanism investigated was stress corrosion cracking (SCC) after the generation of one through-wall axial crack. Two failure modes, rupture and leak, based on probabilistic fracture mechanics were considered in the model. It takes into account the variance in tube geometry and material properties, and the variance in residual stresses and operating conditions, all of which govern crack propagation. The model was evaluated numerically using Monte Carlo Simulation (MCS). The plugging criteria were first verified, and then the whole life-span and residual life of the SG tubing were obtained. Finally, a sensitivity analysis was carried out to identify the parameters with the greatest effect on the life of SG tubing. The results will be useful in developing optimum strategies for life-cycle management of the feedwater system in nuclear power plants.
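The kind of sampling loop such a probabilistic life model relies on can be sketched as follows. All crack lengths, growth rates and the plugging limit here are hypothetical placeholders, not the paper's data:

```python
import random

def simulate_tube_life(rng, years=40):
    """One Monte Carlo history of a cracked SG tube (illustrative numbers,
    not plant data). A through-wall axial crack grows at a sampled SCC
    rate; the tube 'ruptures' if the crack reaches the sampled critical
    length before the plugging limit, otherwise it is plugged ('leak')."""
    a = rng.gauss(2.0, 0.3)             # initial crack length, mm
    rate = abs(rng.gauss(0.5, 0.15))    # SCC growth rate, mm/year
    a_crit = rng.gauss(9.0, 1.5)        # critical (rupture) length, mm
    a_plug = 8.0                        # deterministic plugging criterion, mm
    for year in range(1, years + 1):
        a += rate
        if a >= a_crit:
            return "rupture", year
        if a >= a_plug:
            return "leak", year
    return "survive", years

rng = random.Random(1)
outcomes = [simulate_tube_life(rng)[0] for _ in range(10000)]
p_rupture = outcomes.count("rupture") / len(outcomes)
p_leak = outcomes.count("leak") / len(outcomes)
```

Repeating the history many times turns the sampled uncertainties in geometry, material and loading into estimated probabilities for each failure mode, which is the essence of the MCS approach described above.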
Monte Carlo Depletion with Critical Spectrum for Assembly Group Constant Generation
International Nuclear Information System (INIS)
Park, Ho Jin; Joo, Han Gyu; Shim, Hyung Jin; Kim, Chang Hyo
2010-01-01
The conventional two-step procedure has been used in practical nuclear reactor analysis. In this procedure, a deterministic assembly transport code such as HELIOS or CASMO is normally used to generate the multigroup flux distribution employed in few-group cross section generation. Recently, accuracy issues have arisen related to the resonance treatment or the double heterogeneity (DH) treatment for VHTR fuel blocks. In order to mitigate these issues, Monte Carlo (MC) methods can be used as an alternative way to generate few-group cross sections, because the accuracy of MC calculations benefits from their ability to use continuous-energy nuclear data and detailed geometric information. In an earlier work, the conventional methods of obtaining multigroup cross sections and the critical spectrum were implemented in the McCARD Monte Carlo code. However, that work was incomplete in that the critical spectrum was not reflected in the depletion calculation. The purpose of this study is to develop a method to apply the critical spectrum to MC depletion calculations, correcting for the leakage effect, and then to examine the MC-based group constants within the two-step procedure by comparing the two-step solution with the direct whole-core MC depletion result.
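The core condensation step behind few-group constant generation, whether the fine-group flux comes from a deterministic lattice code or an MC tally, is flux weighting. A minimal sketch with made-up numbers:

```python
def collapse_xs(sigma_fine, flux_fine, group_bounds):
    """Flux-weighted condensation of fine-group cross sections:
    sigma_G = sum_g(sigma_g * phi_g) / sum_g(phi_g) over fine groups g in G."""
    few = []
    for lo, hi in group_bounds:
        num = sum(sigma_fine[g] * flux_fine[g] for g in range(lo, hi))
        den = sum(flux_fine[g] for g in range(lo, hi))
        few.append(num / den)
    return few

# Eight fine groups collapsed to two (fast, thermal); illustrative numbers,
# with the flux spectrum standing in for a Monte Carlo flux tally.
sigma = [1.2, 1.5, 2.0, 3.0, 5.0, 9.0, 15.0, 25.0]   # barns
flux = [10.0, 8.0, 6.0, 4.0, 3.0, 2.0, 1.5, 1.0]     # arbitrary units
two_group = collapse_xs(sigma, flux, [(0, 4), (4, 8)])
```

The leakage correction discussed in the abstract amounts to replacing the infinite-lattice flux in this weighting with the critical spectrum.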
International Nuclear Information System (INIS)
El Bitar, Z; Pino, F; Candela, C; Ros, D; Pavía, J; Rannou, F R; Ruibal, A; Aguiar, P
2014-01-01
It is well known that in pinhole SPECT (single-photon emission computed tomography), iterative reconstruction methods including accurate estimations of the system response matrix can lead to submillimeter spatial resolution. There are two approaches for obtaining the system response matrix: those that model the system analytically, possibly including an experimental characterization of the detector response, and those that make use of Monte Carlo simulations. Methods based on analytical approaches are faster and handle statistical noise better than those based on Monte Carlo simulations, but they require tedious experimental measurements of the detector response. One suggested approach for avoiding an experimental characterization, while circumventing the problem of statistical noise introduced by Monte Carlo simulations, is to perform an analytical computation of the system response matrix combined with a Monte Carlo characterization of the detector response. Our findings showed that this approach can achieve high spatial resolution similar to that obtained when the system response matrix computation includes an experimental characterization. Furthermore, we have shown that using simulated detector responses has the advantage of yielding a precise estimate of the shift between the point of entry of the photon beam into the detector and the point of interaction inside the detector. Taking this into account, it was possible to slightly improve the spatial resolution at the edge of the field of view. (paper)
International Nuclear Information System (INIS)
Vithayasrichareon, Peerapat; MacGill, Iain F.
2012-01-01
This paper presents a novel decision-support tool for assessing future generation portfolios in an increasingly uncertain electricity industry. The tool combines optimal generation mix concepts with Monte Carlo simulation and portfolio analysis techniques to determine expected overall industry costs, associated cost uncertainty, and expected CO2 emissions for different generation portfolio mixes. The tool can incorporate complex and correlated probability distributions for estimated future fossil-fuel costs, carbon prices, plant investment costs, and demand, including price elasticity impacts. The intent of this tool is to facilitate risk-weighted generation investment and associated policy decision-making given uncertainties facing the electricity industry. Applications of this tool are demonstrated through a case study of an electricity industry with coal, CCGT, and OCGT facing future uncertainties. Results highlight some significant generation investment challenges, including the impacts of uncertain and correlated carbon and fossil-fuel prices, the role of future demand changes in response to electricity prices, and the impact of construction cost uncertainties on capital-intensive generation. The tool can incorporate virtually any type of input probability distribution, and support sophisticated risk assessments of different portfolios, including downside economic risks. It can also assess portfolios against multi-criterion objectives such as greenhouse emissions as well as overall industry costs. - Highlights: ► Present a decision support tool to assist generation investment and policy making under uncertainty. ► Generation portfolios are assessed based on their expected costs, risks, and CO2 emissions. ► There is a tradeoff among the expected cost, risks, and CO2 emissions of generation portfolios. ► Investment challenges include the economic impact of uncertainties and the effect of price elasticity. ► CO2 emissions reduction depends on the mix of
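The correlated-price sampling at the heart of such a tool can be sketched with a shared random factor driving both gas and carbon prices. All prices, heat rates and emission intensities below are illustrative placeholders, not the paper's inputs:

```python
import random

def sample_cost(rng, mix):
    """One Monte Carlo draw of average industry cost ($/MWh) for a capacity
    mix. Gas and carbon prices share a common random factor, giving the
    positive correlation discussed in the abstract; numbers are made up."""
    shared = rng.gauss(0.0, 1.0)
    gas = 8.0 + 2.0 * (0.7 * shared + 0.3 * rng.gauss(0.0, 1.0))    # $/GJ
    carbon = max(0.0, 25.0 + 10.0 * (0.7 * shared + 0.3 * rng.gauss(0.0, 1.0)))  # $/tCO2
    coal = 3.0 + 0.5 * rng.gauss(0.0, 1.0)                          # $/GJ
    # (heat rate GJ/MWh, fuel price $/GJ, emission intensity tCO2/MWh)
    tech = {"coal": (10.0, coal, 0.90),
            "ccgt": (7.0, gas, 0.37),
            "ocgt": (11.0, gas, 0.58)}
    return sum(mix[t] * (hr * price + co2 * carbon)
               for t, (hr, price, co2) in tech.items())

rng = random.Random(7)
mix = {"coal": 0.5, "ccgt": 0.4, "ocgt": 0.1}
draws = [sample_cost(rng, mix) for _ in range(5000)]
mean = sum(draws) / len(draws)
sd = (sum((x - mean) ** 2 for x in draws) / len(draws)) ** 0.5
```

Sweeping `mix` over candidate portfolios and comparing `(mean, sd)` pairs gives the expected-cost-versus-risk view the tool is built around.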
Bioethanol and power from integrated second generation biomass: A Monte Carlo simulation
International Nuclear Information System (INIS)
Osaki, Márcia R.; Seleghim, Paulo
2017-01-01
Highlights: • The impacts of integrating new sugarcane conversion technologies using bagasse and straw. • Industrial conversion of sugarcane into energy carriers: ethanol and electricity. • A reference sugarcane industrial plant was simulated by the Monte Carlo method. • Simultaneously optimal ethanol production and electricity generation occur at low bagasse-burning rates. - Abstract: The main objective of this work is to assess the impacts of integrating new biomass conversion technologies into an existing sugarcane industrial processing plant in terms of its multi-objective optimal operating conditions. A typical sugarcane mill is identified and a second-generation ethanol production pathway is incorporated, giving the operator the possibility of controlling the ratio between the rates of burning bagasse and straw (sugarcane tops and leaves) and sending them to second-generation processing, in order to achieve optimal ethanol and electricity outputs. A set of equations describing the associated conversion unit operations and chemical reactions is simulated by the Monte Carlo method, and the corresponding operating envelope is constructed and statistically analyzed. These equations permit calculation of ethanol production and electricity generation for a virtually infinite number of scenarios characterized by two controlled variables (burning bagasse and straw mass flow rates) and several uncontrolled variables (biomass composition; cellulose, hemicellulose and lignin yields; fermentation efficiencies; etc.). Results reveal that the input variables have specific statistical characteristics when the corresponding operating states lie near the maximum energy limit (Pareto frontier). For example, since the objectives being optimized are intrinsically antagonistic, i.e. the increase of one dictates the decrease of the other, it is better to convert bagasse to ethanol via the second-generation pathway because of the high energy requirements of its dewatering prior to combustion and low heat
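The operating-envelope construction described in this abstract can be sketched by sampling the two controlled burn fractions together with the uncontrolled yields. All yields and fibre contents below are illustrative assumptions, not the study's data:

```python
import random

def plant_outputs(rng, f_bag_burn, f_str_burn):
    """Ethanol (L) and electricity (kWh) per tonne of cane for one Monte
    Carlo scenario. f_bag_burn / f_str_burn are the controlled fractions of
    bagasse and straw sent to the boiler; the rest goes to second-generation
    hydrolysis. All yield ranges are illustrative, not measured data."""
    bagasse = 280.0 * rng.uniform(0.9, 1.1)   # kg fibre per tonne of cane
    straw = 140.0 * rng.uniform(0.8, 1.2)
    etoh_yield = rng.uniform(0.15, 0.25)      # L ethanol per kg fibre (2G)
    elec_yield = rng.uniform(0.45, 0.55)      # kWh per kg fibre burned
    burned = f_bag_burn * bagasse + f_str_burn * straw
    hydrolysed = (1.0 - f_bag_burn) * bagasse + (1.0 - f_str_burn) * straw
    return etoh_yield * hydrolysed, elec_yield * burned

rng = random.Random(3)
scenarios = [plant_outputs(rng, rng.random(), rng.random()) for _ in range(5000)]
# The upper-right boundary of this cloud approximates the Pareto frontier:
# raising one output necessarily lowers the other.
max_etoh = max(e for e, _ in scenarios)
max_elec = max(p for _, p in scenarios)
```

Plotting the scenario cloud in the (ethanol, electricity) plane and tracing its boundary is how the antagonism between the two objectives becomes visible.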
WINHAC - the Monte Carlo event generator for single W-boson production in hadronic collisions
International Nuclear Information System (INIS)
Placzek, W.; Jadach, P.
2009-01-01
The charged-current Drell-Yan process, i.e. single W-boson production with leptonic decays in hadronic collisions, will play an important role in the experimental programme at the LHC. It will be used for improved measurements of some Standard Model parameters (such as the W-boson mass and widths, etc.), for better determination of the Higgs-boson mass limits, in '' new physics '' searches, as a '' standard candle '' process, etc. In order to achieve all these goals, precise theoretical predictions for this process in terms of a Monte Carlo event generator are indispensable. In this talk the Monte Carlo event generator WINHAC for the charged-current Drell-Yan process will be presented. It features higher-order QED corrections within the exclusive Yennie-Frautschi-Suura exponentiation scheme with the 1 st order electroweak corrections. It is interfaced with PYTHIA for QCD/QED initial-state parton shower as well as hadronization. It includes options for proton-proton, proton-antiproton and nucleus-nucleus collisions. Moreover, it allows for longitudinally and transversely polarized W-boson production. It has been cross-checked numerically to high precision against independent programs/calculations. Some numerical results from WINHAC will also be presented. Finally, interplay between QCD and electroweak effects will briefly be discussed. (author)
Wood gasification energy micro-generation system in Brazil- a Monte Carlo viability simulation
Directory of Open Access Journals (Sweden)
GLAUCIA APARECIDA PRATES
2018-03-01
The penetration of renewable energy into the electricity supply in Brazil is among the highest in the world. Centralized hydroelectric generation is the main source of energy, followed by biomass and wind. Surprisingly, mini- and micro-generation are negligible, with fewer than 2,000 connections to the national grid. In 2015, a new regulatory framework was put in place to change this situation. In the agricultural sector, the framework was complemented by the offer of low-interest loans for in-farm renewable generation. Brazil proposed to more than double its area of planted forests as part of its INDC (Intended Nationally Determined Contribution) to the U.N. Framework Convention on Climate Change (UNFCCC). This is an ambitious target which will be achieved only if forests are attractive to farmers. Therefore, this paper analyses whether planting forests for in-farm energy generation with a woodchip gasifier is economically viable for micro-generation under the new framework, and whether such plantings could be an economic driver for forest plantation. At first, a static case was analyzed with data from Eucalyptus plantations on five farms. Then, a broader analysis was developed with the use of the Monte Carlo technique. Planting short-rotation forests to generate energy could be a viable alternative, and the low-interest loans contribute to that. There are some barriers to such systems, such as the absence of a mature market for small-scale equipment and of a reference network of good practices and examples.
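A Monte Carlo viability study of this kind typically draws project parameters from ranges and reports the fraction of draws with positive NPV. The sketch below uses entirely hypothetical cost and tariff ranges, not the paper's farm data:

```python
import random

def gasifier_npv(rng, years=15, discount=0.085):
    """One Monte Carlo draw of project NPV (thousands of R$) for an in-farm
    woodchip-gasifier micro-generation system. All ranges are illustrative
    assumptions, not data from the study."""
    capex = rng.uniform(180.0, 260.0)        # equipment + installation
    energy = rng.uniform(55.0, 75.0)         # MWh/year delivered
    tariff = rng.uniform(0.55, 0.80)         # thousands of R$ per MWh offset
    opex = rng.uniform(8.0, 14.0)            # yearly O&M + feedstock
    cash = energy * tariff - opex
    # Discounted cash flows over the project life, net of the investment.
    return -capex + sum(cash / (1.0 + discount) ** t for t in range(1, years + 1))

rng = random.Random(11)
npvs = [gasifier_npv(rng) for _ in range(10000)]
p_viable = sum(1 for v in npvs if v > 0.0) / len(npvs)
```

The effect of the low-interest loans can be explored by lowering the discount rate and observing how `p_viable` responds.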
Energy Technology Data Exchange (ETDEWEB)
Tekiner, Hatice [Industrial Engineering, College of Engineering and Natural Sciences, Istanbul Sehir University, 2 Ahmet Bayman Rd, Istanbul (Turkey); Coit, David W. [Department of Industrial and Systems Engineering, Rutgers University, 96 Frelinghuysen Rd., Piscataway, NJ (United States); Felder, Frank A. [Edward J. Bloustein School of Planning and Public Policy, Rutgers University, Piscataway, NJ (United States)
2010-12-15
A new approach to the electricity generation expansion problem is proposed to simultaneously minimize multiple objectives, such as cost and air emissions, including CO2 and NOx, over a long-term planning horizon. In this problem, system expansion decisions are made to select the type of power generation, such as coal, nuclear, wind, etc., where the new generation asset should be located, and in which time period the expansion should take place. We are able to find a Pareto front for the multi-objective generation expansion planning problem that explicitly considers availability of the system components over the planning horizon and operational dispatching decisions. Monte Carlo simulation is used to generate numerous scenarios based on the component availabilities and anticipated demand for energy. The problem is then formulated as a mixed integer linear program, and optimal solutions are found based on the simulated scenarios with a combined objective function considering the multiple problem objectives. The different objectives are combined using dimensionless weights, and a Pareto front can be determined by varying these weights. The mathematical model is demonstrated on an example problem, with interesting results indicating how expansion decisions vary depending on whether minimizing cost or minimizing greenhouse gas emissions and pollutants is given higher priority. (author)
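The scenario-generation step that feeds the mixed integer program can be sketched as sampling per-unit availability profiles. Unit names, capacities, outage rates and transition probabilities below are all hypothetical:

```python
import random

def availability_scenario(rng, units, hours=24):
    """One sampled availability scenario: each unit follows a two-state
    (up/down) hourly Markov chain seeded from its forced-outage rate.
    Capacities, rates and transition probabilities are illustrative."""
    scenario = {}
    for name, (capacity, forced_outage_rate) in units.items():
        up = rng.random() >= forced_outage_rate
        profile = []
        for _ in range(hours):
            if up:
                up = rng.random() >= 0.02   # hourly failure probability
            else:
                up = rng.random() < 0.10    # hourly repair probability
            profile.append(capacity if up else 0.0)
        scenario[name] = profile
    return scenario

rng = random.Random(5)
units = {"coal1": (500.0, 0.08), "nuclear1": (1000.0, 0.05), "wind1": (200.0, 0.15)}
scenarios = [availability_scenario(rng, units) for _ in range(200)]
# The sampled profiles become availability parameters of the dispatch MILP.
avg_first_hour = sum(sum(s[u][0] for u in units) for s in scenarios) / len(scenarios)
```

Each scenario is then passed to the optimization model, so the expansion plan is evaluated against many plausible outage and demand realizations rather than a single deterministic forecast.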
International Nuclear Information System (INIS)
Chakraborty, Brahmananda
2009-01-01
Random numbers play an important role in any Monte Carlo simulation. The accuracy of the results depends on the quality of the sequence of random numbers employed in the simulation: the randomness of the numbers, the uniformity of their distribution, the absence of correlation, and a long period. In a typical Monte Carlo simulation of particle transport in a nuclear reactor core, the history of a particle from its birth in a fission event until its death by absorption or leakage is tracked. The geometry of the core and the surrounding materials are modeled exactly in the simulation. To track a neutron history one needs random numbers for determining the inter-collision distance, the nature of the collision, the direction of the scattered neutron, etc. Neutrons are tracked in batches; in one batch approximately 2000-5000 neutrons are tracked. The statistical accuracy of the results depends on the total number of particles tracked (the number of particles in one batch multiplied by the number of batches). The number of histories to be generated is usually large for a typical radiation transport problem. To track a very large number of histories one needs to generate a long sequence of independent random numbers; in other words, the cycle length of the random number generator (RNG) should exceed the total number of random numbers required for simulating the given transport problem. The word size of the machine generally limits the cycle length: for a binary machine of p bits, the maximum cycle length is 2^p. To achieve a longer cycle length on the same machine one has to use either register arithmetic or bit-manipulation techniques.
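The 2^p cycle-length ceiling is easy to demonstrate with a small linear congruential generator. With a modulus m = 2^8, the Hull-Dobell conditions (c coprime to m; a ≡ 1 mod 4) give the full period of 256, while breaking them shortens the cycle:

```python
def lcg_period(seed, a, c, m):
    """Cycle length of the LCG x -> (a*x + c) mod m, starting from seed."""
    x = (a * seed + c) % m
    n = 1
    while x != seed and n <= m:   # an LCG period can never exceed m
        x = (a * x + c) % m
        n += 1
    return n

# Hull-Dobell conditions hold (c odd, a % 4 == 1): the generator attains
# the full period m = 2**8 = 256, the 2^p ceiling for a p-bit machine.
full = lcg_period(seed=1, a=5, c=3, m=2**8)
# With c even the conditions fail, and from an odd seed only odd states
# are ever visited, so the cycle is strictly shorter than m.
short = lcg_period(seed=1, a=5, c=2, m=2**8)
```

Scaling m up to 2^31 or 2^48 reproduces the periods of classic machine-word LCGs, and shows why multi-word (register arithmetic) state is needed to go beyond 2^p.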
MONTE CARLO SIMULATION MODEL OF ENERGETIC PROTON TRANSPORT THROUGH SELF-GENERATED ALFVEN WAVES
Energy Technology Data Exchange (ETDEWEB)
Afanasiev, A.; Vainio, R., E-mail: alexandr.afanasiev@helsinki.fi [Department of Physics, University of Helsinki (Finland)
2013-08-15
A new Monte Carlo simulation model for the transport of energetic protons through self-generated Alfven waves is presented. The key point of the model is that, unlike previous ones, it employs the full form (i.e., includes the dependence on the pitch-angle cosine) of the resonance condition governing the scattering of particles off Alfven waves, the process that approximates the wave-particle interactions in the framework of quasilinear theory. This allows us to model the wave-particle interactions in weak turbulence more adequately, in particular, to implement anisotropic particle scattering instead of the isotropic scattering on which the previous Monte Carlo models were based. The developed model is applied to study the transport of flare-accelerated protons in an open magnetic flux tube. Simulation results for the transport of monoenergetic protons through the spectrum of Alfven waves reveal that anisotropic scattering leads to spatially more distributed wave growth than isotropic scattering. This result can have important implications for diffusive shock acceleration, e.g., it can affect the scattering mean free path of the accelerated particles in, and the size of, the foreshock region.
On the use of SERPENT Monte Carlo code to generate few group diffusion constants
Energy Technology Data Exchange (ETDEWEB)
Piovezan, Pamela, E-mail: pamela.piovezan@ctmsp.mar.mil.b [Centro Tecnologico da Marinha em Sao Paulo (CTMSP), Sao Paulo, SP (Brazil); Carluccio, Thiago; Domingos, Douglas Borges; Rossi, Pedro Russo; Mura, Luiz Felipe, E-mail: fermium@cietec.org.b, E-mail: thiagoc@ipen.b [Fermium Tecnologia Nuclear, Sao Paulo, SP (Brazil); Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)
2011-07-01
The accuracy of diffusion reactor codes strongly depends on the quality of the group constants processing. For many years, the generation of such constants was based on 1-D infinite-cell transport calculations. Some developments using collision probability methods or the method of characteristics nowadays allow 2-D assembly group constant calculations. However, these 1-D and 2-D codes show some limitations, for example, on complex geometries and in the neighborhood of heavy absorbers. On the other hand, since Monte Carlo (MC) codes provide accurate neutron flux distributions, the possibility of using these solutions to provide group constants to full-core reactor diffusion simulators has recently been investigated, especially for cases in which the geometry and reactor type are beyond the capability of conventional deterministic lattice codes. The two greatest difficulties in the use of MC codes for group constant generation are the computational cost and the methodological incompatibility between analog MC particle transport simulation and deterministic transport methods based on several approximations. The SERPENT code is a 3-D continuous-energy MC transport code with built-in burnup capability that has been specially optimized to generate these group constants. In this work, we present preliminary results of using the SERPENT MC code to generate 3-D two-group diffusion constants for a PWR-like assembly. These constants were used in the CITATION diffusion code to investigate the effects of the MC group constant determination on the diffusion estimate of the neutron multiplication factor. (author)
Generation reliability assessment in oligopoly power market using Monte Carlo simulation
International Nuclear Information System (INIS)
Haroonabadi, H.; Haghifam, M.R.
2007-01-01
This paper addresses issues regarding power generation (HLI) reliability assessment in deregulated power pool markets. Most HLI reliability evaluation methods are based on the loss of load expectation (LOLE) approach, which is among the most suitable indices to describe the level of generation reliability. LOLE refers to the expected time in which load is greater than the amount of available generation. While most reliability assessments deal only with power system constraints, this study considered HLI reliability assessment in an oligopoly power market using Monte Carlo simulation (MCS). It evaluated the sensitivity of the reliability index to different reserve margins and load levels. The reliability index was determined by intersecting the offer and demand curves of the power plants and comparing them to other parameters. The paper describes the fundamentals of an oligopoly power pool market and proposes an algorithm for HLI reliability assessment for such a market. The proposed method was assessed on the IEEE Reliability Test System with satisfactory results. In all cases, generation reliability indices were evaluated with different reserve margins and various load levels. 19 refs., 7 figs., 1 appendix
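The LOLE index itself is straightforward to estimate by Monte Carlo once unit outages are sampled. The sketch below ignores the market (offer/demand curve) layer of the paper and uses a hypothetical fleet and load profile purely to illustrate the index:

```python
import random

def estimate_lole(rng, units, load_curve, trials=5000):
    """LOLE estimate (expected loss-of-load hours per sampled period) by
    Monte Carlo sampling of independent unit outages each hour.
    Unit sizes, outage rates and the load profile are illustrative."""
    loss_hours = 0
    for _ in range(trials):
        for load in load_curve:
            # A unit is available this hour unless its forced outage occurs.
            available = sum(cap for cap, for_rate in units
                            if rng.random() >= for_rate)
            if available < load:
                loss_hours += 1
    return loss_hours / trials

rng = random.Random(9)
units = [(200.0, 0.05)] * 6                    # 1200 MW installed capacity
load_curve = [800.0] * 16 + [1000.0] * 8       # simplified 24-hour profile
lole = estimate_lole(rng, units, load_curve)
```

In the oligopoly setting of the paper, `available` would additionally be capped by the intersection of the generators' offer curves with demand, so that capacity withheld for market reasons also counts toward loss of load.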
New-generation Monte Carlo shell model for the K computer era
International Nuclear Information System (INIS)
Shimizu, Noritaka; Abe, Takashi; Yoshida, Tooru; Otsuka, Takaharu; Tsunoda, Yusuke; Utsuno, Yutaka; Mizusaki, Takahiro; Honma, Michio
2012-01-01
We present a newly enhanced version of the Monte Carlo shell-model (MCSM) method by incorporating the conjugate gradient method and energy-variance extrapolation. This new method enables us to perform large-scale shell-model calculations that the direct diagonalization method cannot reach. This new-generation framework of the MCSM provides us with a powerful tool to perform very advanced large-scale shell-model calculations on current massively parallel computers such as the K computer. We discuss the validity of this method in ab initio calculations of light nuclei, and propose a new method to describe the intrinsic wave function in terms of the shell-model picture. We also apply this new MCSM to the study of neutron-rich Cr and Ni isotopes using conventional shell-model calculations with an inert 40Ca core and discuss how the magicity of N = 28, 40, 50 remains or is broken. (author)
Response matrix Monte Carlo based on a general geometry local calculation for electron transport
International Nuclear Information System (INIS)
Ballinger, C.T.; Rathkopf, J.A.; Martin, W.R.
1991-01-01
A Response Matrix Monte Carlo (RMMC) method has been developed for solving electron transport problems. This method was born of the need for a reliable, computationally efficient transport method for low-energy electrons (below a few hundred keV) in all materials. Today, condensed history methods are used, which reduce the computation time by modeling the combined effect of many collisions but fail at low energy because of the assumptions required to characterize the electron scattering. Analog Monte Carlo simulations are prohibitively expensive since electrons undergo Coulomb scattering with little state change per collision. The RMMC method attempts to combine the accuracy of an analog Monte Carlo simulation with the speed of the condensed history methods. Like condensed history, the RMMC method uses probability distribution functions (PDFs) to describe the energy and direction of the electron after several collisions. However, unlike the condensed history method, the PDFs are based on an analog Monte Carlo simulation over a small region. Condensed history theories require assumptions about the electron scattering to derive the PDFs for direction and energy. Thus the RMMC method samples from PDFs which more accurately represent the electron random walk. Results show good agreement between the RMMC method and analog Monte Carlo. 13 refs., 8 figs
Improv Chat: Second Response Generation for Chatbot
Wei, Furu
2018-01-01
Existing research on response generation for chatbots focuses on \textbf{First Response Generation}, which aims to teach the chatbot to say the first response (e.g. a sentence) appropriate to the conversation context (e.g. the user's query). In this paper, we introduce a new task, \textbf{Second Response Generation}, termed Improv chat, which aims to teach the chatbot to say the second response after saying the first response with respect to the conversation context, so as to lighten the burden ...
MICAP, Ionization Chamber Detector Response by Monte-Carlo
International Nuclear Information System (INIS)
2002-01-01
1 - Description of program or function: MICAP has been developed to determine the response of a gas-filled cavity ionization chamber or other detector type (plastic scintillator, calorimeter) in a mixed neutron and photon radiation environment. In particular, MICAP determines the neutron, photon, and total response of the detector system. The applicability of MICAP encompasses all aspects of mixed-field dosimetry analysis, including detector design, pre-experimental planning and post-experimental analysis. MICAP is a modular code system developed to be general with respect to problem applicability. The transport modules utilize combinatorial geometry to accurately model the source/detector geometry, and use continuous energy and angle cross section and material data to represent the materials for a particular problem. 2 - Method of solution: The calculational scheme used in MICAP follows individual radiation particles incident on the detector wall material. The incident neutrons produce photons and heavy charged particles, and both primary and secondary photons produce electrons and positrons. As these charged particles enter or are produced in the detector material, they lose energy and produce ion pairs until their energy is completely dissipated or until they escape the detector. Ion recombination effects are included along the path of each charged particle rather than applied as an integral correction to the final result. The neutron response is determined from the energy deposition resulting from the transport of the charged particles and recoil heavy ions produced via neutron interactions with the detector materials. The photon response is determined from the transport of both the primary photon radiation incident on the detector and the secondary photons produced via neutron interactions. MICAP yields not only the energy deposition by particle type and the total energy deposited, but also the particular type of reaction, i.e. elastic scattering
International Nuclear Information System (INIS)
Cevallos R, L. E.; Guzman G, K. A.; Gallego, E.; Garcia F, G.; Vega C, H. R.
2017-10-01
The detection of hidden explosive material is very important for national security. Using Monte Carlo methods with the code MCNP6, several proposed configurations of a detection system consisting of a deuterium-deuterium (D-D) generator in conjunction with NaI(Tl) scintillation detectors have been evaluated for intercepting hidden explosives. The response of the system to various explosive samples, such as RDX and ammonium nitrate, is analyzed, these being the main components of homemade and military explosives. The D-D generator produces 2.5 MeV fast neutrons with a maximum yield of 10^10 n/s (DD-110) and is surrounded with high-density polyethylene in order to thermalize the fast neutrons so that they interact with the inspected sample, giving rise to the emission of gamma rays that form a spectrum characteristic of the elements constituting it, making it possible to determine its chemical composition and identify the type of substance. The necessary shielding, with thicknesses of lead and borated polyethylene, is evaluated to estimate the admissible operating dose and to place the system at a suitable point of the Neutron Measurements Laboratory of the Polytechnic University of Madrid where the shielding is optimal. The results show that the functionality of the system is promising in the field of national security for explosives inspection. (Author)
On the use of the Serpent Monte Carlo code for few-group cross section generation
International Nuclear Information System (INIS)
Fridman, E.; Leppaenen, J.
2011-01-01
Research highlights: → The B1 methodology was used for generation of leakage-corrected few-group cross sections in the Serpent Monte Carlo code. → Few-group constants generated by Serpent were compared with those calculated by the Helios deterministic lattice transport code. → A 3D analysis of a PWR core was performed with the nodal diffusion code DYN3D employing two-group cross section sets generated by Serpent and Helios. → Excellent agreement was observed in the results of the 3D core calculations obtained with the Helios- and Serpent-generated cross-section libraries. - Abstract: Serpent is a recently developed 3D continuous-energy Monte Carlo (MC) reactor physics burnup calculation code. Serpent is specifically designed for lattice physics applications, including generation of homogenized few-group constants for full-core simulators. Currently in Serpent, the few-group constants are obtained from infinite-lattice calculations with zero neutron current at the outer boundary. In this study, in order to account for the non-physical infinite-lattice approximation, the B1 methodology, routinely used by deterministic lattice transport codes, was considered for generation of leakage-corrected few-group cross sections in the Serpent code. A preliminary assessment of the applicability of the B1 methodology for generation of few-group constants in the Serpent code was carried out in the following steps. Initially, the two-group constants generated by Serpent were compared with those calculated by the Helios deterministic lattice transport code. Then, a 3D analysis of a Pressurized Water Reactor (PWR) core was performed with the nodal diffusion code DYN3D employing two-group cross section sets generated by Serpent and Helios. At this stage thermal-hydraulic (T-H) feedback was neglected. The DYN3D results were compared with those obtained from 3D full-core Serpent MC calculations. Finally, the full-core DYN3D calculations were repeated taking into account T-H feedback and
Report on the work of the 'Monte Carlo Event Generation' subgroup
International Nuclear Information System (INIS)
Abe, K.
1981-01-01
The work of the Monte Carlo Event Generation subgroup includes the preparation of programs, jet simulation, track generation in chambers, and the pattern recognition and fitting of tracks. Some general results from the jet simulation by Ali et al. are given. The total energy used was 60 GeV, and the top quark mass was assumed to be 25 GeV. The multiplicity of charged particles and photons is shown; the multiplicity increased with the number of jets. The energy spectra and the trajectories of charged particles and photons were obtained. The distribution of the opening angle between any two photons is also presented. The track generation program used is GEANT from CERN, adapted to the KEK computer. Pattern recognition and track fitting are based on the tracking device; the program considered was that of the DELCO group at SLAC. The tracking device consists of a MWPC and a cylindrical drift chamber with wires along the beam direction Z and wires inclined at a stereo angle. Some comments on vertex detectors are given. (Kato, T.)
Estimation of ex-core detector responses by adjoint Monte Carlo
Energy Technology Data Exchange (ETDEWEB)
Hoogenboom, J. E. [Delft Univ. of Technology, Mekelweg 15, 2629 JB Delft (Netherlands)
2006-07-01
Ex-core detector responses can be efficiently calculated by combining an adjoint Monte Carlo calculation with the converged source distribution of a forward Monte Carlo calculation. As the fission source distribution from a Monte Carlo calculation is given only as a collection of discrete space positions, the coupling requires a point flux estimator for each collision in the adjoint calculation. To avoid the infinite-variance problems of the point flux estimator, a next-event finite-variance point flux estimator has been applied, which is an energy-dependent form, for heterogeneous media, of a finite-variance estimator known from the literature. To test the effects of this combined adjoint-forward calculation, a simple geometry of a homogeneous core with a reflector was adopted, with a small detector in the reflector. To demonstrate the potential of the method, the continuous-energy adjoint Monte Carlo technique with anisotropic scattering was implemented with energy-dependent absorption and fission cross sections and a constant scattering cross section. A gain in efficiency over a completely forward calculation of the detector response was obtained, which is strongly dependent on the specific system and especially on the size and position of the ex-core detector and the energy range considered. Further improvements are possible. The method works without problems for small detectors, even for a point detector and a small or even zero energy range. (authors)
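The next-event point-flux estimator the abstract builds on can be illustrated in a much simpler forward setting: score, at every collision, the probability of streaming straight to the detector point. This sketch uses a monoenergetic particle, isotropic scattering and made-up cross sections; it is not the paper's adjoint scheme:

```python
import math
import random

def point_flux(rng, mu_t, albedo, d, histories=20000):
    """Forward Monte Carlo with a next-event point-flux estimator: at each
    collision the particle weight times the uncollided streaming kernel
    exp(-mu_t*r)/(4*pi*r^2) to the detector point is scored. Isotropic
    point source and isotropic scattering in an infinite homogeneous
    medium; parameters are illustrative. (The 1/r^2 singularity of this
    naive estimator is what the finite-variance estimator in the paper
    is designed to tame.)"""
    total = 0.0
    for _ in range(histories):
        # Uncollided (direct) contribution from the source at the origin.
        total += math.exp(-mu_t * d) / (4.0 * math.pi * d * d)
        x = y = z = 0.0
        w = 1.0
        while w > 1e-3:
            # Sample an isotropic direction and an exponential free path.
            cos_t = 2.0 * rng.random() - 1.0
            sin_t = math.sqrt(1.0 - cos_t * cos_t)
            phi = 2.0 * math.pi * rng.random()
            step = -math.log(1.0 - rng.random()) / mu_t
            x += step * sin_t * math.cos(phi)
            y += step * sin_t * math.sin(phi)
            z += step * cos_t
            w *= albedo  # implicit capture: only scattered weight survives
            r = math.sqrt((x - d) ** 2 + y * y + z * z)
            if r > 1e-6:
                total += w * math.exp(-mu_t * r) / (4.0 * math.pi * r * r)
    return total / histories

rng = random.Random(2)
flux = point_flux(rng, mu_t=1.0, albedo=0.5, d=2.0)
# The estimate always exceeds the analytic uncollided flux e^{-2}/(16*pi).
```

In the adjoint formulation of the paper the roles are reversed: pseudo-particles start at the detector, and the estimator couples each adjoint collision to the discrete forward fission source points.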
International Nuclear Information System (INIS)
Jin, Y.; Verghese, K.; Gardner, R.P.
1986-01-01
This paper describes a major part of our efforts to simulate the entire spectral response of the neutron capture prompt gamma-ray analyzer for bulk media (or conveyor belt) samples by the Monte Carlo method. This would allow one to use such a model to augment or, in most cases, essentially replace experiments in the calibration and optimum design of these analyzers. In previous work, we simulated the unscattered gamma-ray intensities, but would like to simulate the entire spectral response as we did for the energy-dispersive X-ray fluorescence analyzers. To accomplish this, one must account for the scattered gamma rays as well as the unscattered, and one must have available the detector response function to translate the incident gamma-ray spectrum calculated by the Monte Carlo simulation into the detected pulse-height spectrum. We recently completed our work on the germanium detector response function, and the present paper describes our efforts to simulate the entire spectral response by using it with Monte Carlo-predicted unscattered and scattered gamma rays.
Response matrix of regular moderator volumes with 3He detector using Monte Carlo methods
International Nuclear Information System (INIS)
Baltazar R, A.; Vega C, H. R.; Ortiz R, J. M.; Solis S, L. O.; Castaneda M, R.; Soto B, T. G.; Medina C, D.
2017-10-01
In the last three decades the use of Monte Carlo methods for the estimation of physical phenomena associated with the interaction of radiation with matter has increased considerably, owing to the increase in computing capabilities and the reduction of computer prices. Monte Carlo methods allow modeling and simulating real systems before their construction, saving time and costs. The interaction mechanisms between neutrons and matter are diverse and range from elastic scattering to nuclear fission; to facilitate neutron detection, it is necessary to moderate the neutrons until they reach thermal equilibrium with the medium at standard conditions of pressure and temperature, a state in which the total cross section of 3He is large. The objective of the present work was to estimate the response matrix of a 3He proportional detector using regular volumes of moderator through Monte Carlo methods. Monoenergetic neutron sources with energies of 10⁻⁹ to 20 MeV and polyethylene moderators of different sizes were used. The calculations were made with the MCNP5 code; the number of histories for each detector-moderator combination was large enough to obtain errors of less than 1.5%. We found that for small moderators the highest response is obtained for lower-energy neutrons; when increasing the moderator dimension, the response decreases for lower-energy neutrons and increases for higher-energy neutrons. The total sum of the responses of each moderator yields a response close to a constant function. (Author)
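The response-matrix idea above can be sketched in a few lines. This is a toy stand-in, assuming a hypothetical Gaussian-in-log-energy response in place of the MCNP5-computed counts; the grids and the `toy_response` shape are illustrative only:

```python
import math

# Illustrative grids; the real work used many more energies and MCNP5 runs.
ENERGIES = [1e-9, 1e-6, 1e-3, 1.0, 20.0]   # source energies (MeV)
MODERATORS = [3.0, 5.0, 8.0, 12.0]          # polyethylene sizes (cm), hypothetical

def toy_response(energy_mev, size_cm):
    """Hypothetical stand-in for a simulated count per unit fluence:
    small moderators respond best to slow neutrons, large ones to fast."""
    log_e = math.log10(energy_mev)
    peak_log_e = -8.0 + size_cm     # response peak shifts up with moderator size
    return math.exp(-0.5 * ((log_e - peak_log_e) / 2.0) ** 2)

def response_matrix():
    """One row per moderator size, one column per source energy."""
    return [[toy_response(e, d) for e in ENERGIES] for d in MODERATORS]

def detector_counts(matrix, fluence_spectrum):
    """Fold an incident spectrum through the matrix (matrix-vector product)."""
    return [sum(r * f for r, f in zip(row, fluence_spectrum)) for row in matrix]
```

Folding a measured fluence spectrum through such a matrix gives the expected count in each detector-moderator combination, which is how a multi-sphere style response matrix is used in practice.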
International Nuclear Information System (INIS)
Tagziria, H.; Tanner, R.J.; Bartlett, D.T.; Thomas, D.J.
2004-01-01
All available measured data for the response characteristics of the Leake counter have been gathered together. These data, augmented by previously unpublished work, have been compared to Monte Carlo simulations of the instrument's response characteristics in the energy range from thermal to 20 MeV. A response function has been derived, which is recommended as the best currently available for the instrument. Folding this function with workplace energy distributions has enabled the impact of this new response function to be assessed. Similar work, which will be published separately, has been carried out for the NM2 and the Studsvik 2202D neutron area survey instruments.
Automated Monte Carlo biasing for photon-generated electrons near surfaces.
Energy Technology Data Exchange (ETDEWEB)
Franke, Brian Claude; Crawford, Martin James; Kensek, Ronald Patrick
2009-09-01
This report describes efforts to automate the biasing of coupled electron-photon Monte Carlo particle transport calculations. The approach was based on weight-windows biasing. Weight-window settings were determined using adjoint-flux Monte Carlo calculations. A variety of algorithms were investigated for adaptivity of the Monte Carlo tallies. Tree data structures were used to investigate spatial partitioning. Functional-expansion tallies were used to investigate higher-order spatial representations.
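Weight-window biasing, the approach referenced above, can be sketched generically: particles whose weight rises above the window are split, and particles below it undergo Russian roulette. This is a minimal illustration, not the code described in the report; the window bounds and the split cap are arbitrary:

```python
import random

def apply_weight_window(weight, w_low, w_high, rng=random):
    """Weight-window check for one particle: split if above the window,
    play Russian roulette if below, otherwise leave unchanged.
    Returns the list of surviving particle weights."""
    if weight > w_high:
        # Split into n copies whose weights fall back inside the window.
        n = min(int(weight / w_high) + 1, 10)
        return [weight / n] * n
    if weight < w_low:
        # Roulette: survive with probability weight / w_survive,
        # so the expected weight is conserved.
        w_survive = 0.5 * (w_low + w_high)
        if rng.random() < weight / w_survive:
            return [w_survive]
        return []
    return [weight]
```

Splitting conserves total weight exactly, while roulette conserves it only in expectation; the adjoint-flux calculations mentioned above are what choose `w_low` and `w_high` per region and energy.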
Monte Carlo simulation of a medical linear accelerator for generation of phase spaces
International Nuclear Information System (INIS)
Oliveira, Alex C.H.; Santana, Marcelo G.; Lima, Fernando R.A.; Vieira, Jose W.
2013-01-01
Radiotherapy uses various techniques and equipment for local treatment of cancer. The equipment most often used in radiotherapy for patient irradiation is the linear accelerator (Linac), which produces beams of X-rays in the range of 5-30 MeV. Among the many algorithms developed over recent years for the evaluation of dose distributions in radiotherapy planning, algorithms based on Monte Carlo (MC) methods have proven to be very promising in terms of accuracy by providing more realistic results. MC methods allow simulating the transport of ionizing radiation in complex configurations, such as detectors, Linacs, phantoms, etc. MC simulations for applications in radiotherapy are divided into two parts. In the first, the simulation of the production of the radiation beam by the Linac is performed and the phase space is generated. The phase space contains information such as energy, position, direction, etc. of millions of particles (photons, electrons, positrons). In the second part, the simulation of the transport of particles (sampled from the phase space) in certain configurations of the irradiation field is performed to assess the dose distribution in the patient (or phantom). The objective of this work is to create a computational model of a 6 MeV Linac using the MC code Geant4 for the generation of phase spaces. From the phase space, information was obtained to assess beam quality (photon and electron spectra and the two-dimensional distribution of energy) and to analyze the physical processes involved in producing the beam. (author)
An object-oriented framework for the hadronic Monte-Carlo event generators
International Nuclear Information System (INIS)
Amelin, N.; Komogorov, M.
1999-01-01
We advocate the development of an object-oriented framework for hadronic Monte-Carlo (MC) event generators. The hadronic MC user and developer requirements are discussed, as well as the hadronic model commonalities. It is argued that developing a framework favours exploiting these commonalities, since common components are stable and need be developed only once. Such a framework can make the user session more convenient and productive, e.g., through easy access to and editing of any model parameter, substitution of model components by alternative components without changing the code, and customized output offering either full information about the history of a generated event or specific information about the reaction final state. Such a framework can indeed increase the productivity of a hadronic model developer, particularly through the formalization of the hadronic model component structure and component collaborations. A framework based on the component approach opens a way to organize a library of hadronic model components, which can be considered a pool of hadronic model building blocks. Basic features, the code structure, and working examples of the first framework version for hadronic MC models, built as a starting point, are briefly explained
International Nuclear Information System (INIS)
Cornejo Diaz, N.; Vergara Gil, A.; Jurado Vargas, M.
2010-01-01
The Monte Carlo method has become a valuable numerical laboratory framework in which to simulate complex physical systems. It is based on the generation of pseudo-random number sequences by numerical algorithms called random generators. In this work we assessed the suitability of different well-known random number generators for the simulation of gamma-ray spectrometry systems during efficiency calibrations. The assessment was carried out in two stages. The generators considered (Delphi's linear congruential, Mersenne Twister, XorShift, multiply-with-carry, universal virtual array, and a non-periodic logistic map based generator) were first evaluated with different empirical statistical tests, including moments, correlations, uniformity, independence of terms and the DIEHARD battery of tests. In a second step, an application-specific test was conducted by implementing the generators in our Monte Carlo program DETEFF and comparing the results obtained with them. The calculations were performed with two different CPUs, for a typical HPGe detector and a water sample in Marinelli geometry, with gamma rays between 59 and 1800 keV. For the non-periodic logistic map based generator, dependence of the most significant bits was evident. This explains the bias, in excess of 5%, of the efficiency values obtained with this generator. The results of the application-specific assessment and the statistical performance of the other algorithms studied indicate their suitability for the Monte Carlo simulation of gamma-ray spectrometry systems for efficiency calculations.
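The first-stage empirical tests mentioned above (moments, correlations, uniformity) can be sketched generically. The snippet below applies a k-th moment check and a lag-1 serial correlation check to Python's built-in Mersenne Twister; it illustrates the kind of test used, not the DETEFF or DIEHARD procedures themselves:

```python
import random

def moment_test(samples, k):
    """Empirical k-th moment; for U(0,1) the expected value is 1/(k+1)."""
    return sum(x ** k for x in samples) / len(samples)

def serial_correlation(samples):
    """Lag-1 sample correlation; near zero for independent draws."""
    mean = sum(samples) / len(samples)
    num = sum((samples[i] - mean) * (samples[i + 1] - mean)
              for i in range(len(samples) - 1))
    den = sum((x - mean) ** 2 for x in samples)
    return num / den

rng = random.Random(12345)
u = [rng.random() for _ in range(100_000)]
```

A generator that passes such low-order checks can still fail in an application, which is exactly why the study pairs them with an application-specific test inside the simulation code itself.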
Díaz, N Cornejo; Gil, A Vergara; Vargas, M Jurado
2010-03-01
The Monte Carlo method has become a valuable numerical laboratory framework in which to simulate complex physical systems. It is based on the generation of pseudo-random number sequences by numerical algorithms called random generators. In this work we assessed the suitability of different well-known random number generators for the simulation of gamma-ray spectrometry systems during efficiency calibrations. The assessment was carried out in two stages. The generators considered (Delphi's linear congruential, Mersenne Twister, XorShift, multiply-with-carry, universal virtual array, and a non-periodic logistic map based generator) were first evaluated with different empirical statistical tests, including moments, correlations, uniformity, independence of terms and the DIEHARD battery of tests. In a second step, an application-specific test was conducted by implementing the generators in our Monte Carlo program DETEFF and comparing the results obtained with them. The calculations were performed with two different CPUs, for a typical HPGe detector and a water sample in Marinelli geometry, with gamma rays between 59 and 1800 keV. For the non-periodic logistic map based generator, dependence of the most significant bits was evident. This explains the bias, in excess of 5%, of the efficiency values obtained with this generator. The results of the application-specific assessment and the statistical performance of the other algorithms studied indicate their suitability for the Monte Carlo simulation of gamma-ray spectrometry systems for efficiency calculations. Copyright 2009 Elsevier Ltd. All rights reserved.
Bergaoui, K; Reguigui, N; Gary, C K; Brown, C; Cremer, J T; Vainionpaa, J H; Piestrup, M A
2014-12-01
An explosive detection system based on a Deuterium-Deuterium (D-D) neutron generator has been simulated using the Monte Carlo N-Particle Transport Code (MCNP5). Nuclear-based explosive detection methods can detect explosives by identifying their elemental components, especially nitrogen. Thermal neutron capture reactions have been used for detecting the prompt gamma emission (10.82 MeV) following radiative neutron capture by ¹⁴N nuclei. The explosive detection system was built around a fully high-voltage-shielded, axial D-D neutron generator with a radio frequency (RF) driven ion source and a nominal yield of about 10¹⁰ fast neutrons per second (E = 2.5 MeV). Polyethylene and paraffin were used as moderators, with borated polyethylene and lead as neutron and gamma-ray shielding, respectively. The shape and thickness of the moderators and shields were optimized to produce the highest thermal neutron flux at the position of the explosive and the minimum total dose at the outer surfaces of the explosive detection system walls. In addition, simulation of the response functions of NaI, BGO, and LaBr3-based γ-ray detectors to different explosives is described. Copyright © 2014 Elsevier Ltd. All rights reserved.
MONTE CARLO CALCULATION OF THE ENERGY RESPONSE OF THE NARF HURST-TYPE FAST- NEUTRON DOSIMETER
Energy Technology Data Exchange (ETDEWEB)
De Vries, T. W.
1963-06-15
The response function for the fast-neutron dosimeter was calculated by the Monte Carlo technique (Code K-52) and compared with a calculation based on the Bragg-Gray principle. The energy deposition spectra so obtained show that the response spectra become softer with increased incident neutron energy above 3 MeV. The K-52 calculated total response is more nearly constant with energy than the Bragg-Gray response. The former increases 70 percent from 1 MeV to 14 MeV while the latter increases 135 percent over this energy range. (auth)
International Nuclear Information System (INIS)
Bacchetta, Alessandro; Jung, Hannes; Kutak, Krzysztof
2010-02-01
A method for tuning parameters in Monte Carlo generators is described and applied to a specific case. The method works in the following way: each observable is generated several times using different values of the parameters to be tuned. The output is then approximated by some analytic form to describe the dependence of the observables on the parameters. This approximation is used to find the values of the parameters that give the best description of the experimental data. This results in significantly faster fitting compared to an approach in which the generator is called iteratively. As an application, we employ this method to fit the parameters of the unintegrated gluon density used in the Cascade Monte Carlo generator, using inclusive deep inelastic data measured by the H1 Collaboration. We discuss the results of the fit, its limitations, and its strong points. (orig.)
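The tuning strategy above (run the generator on a parameter grid once, approximate the observable's parameter dependence analytically, then fit the cheap approximation to data) can be sketched with a toy one-parameter generator; every name and number here is a hypothetical stand-in, not the Cascade setup:

```python
import random

def toy_generator(param, n=20_000, seed=7):
    """Stand-in for an expensive MC generator: returns one observable
    (a sample mean) depending smoothly on the tuned parameter."""
    rng = random.Random(seed)
    return sum(param * rng.random() for _ in range(n)) / n

def quadratic_through(points):
    """Exact quadratic through three (x, y) grid points (Lagrange form)."""
    (x0, y0), (x1, y1), (x2, y2) = points
    return lambda x: (y0 * (x - x1) * (x - x2) / ((x0 - x1) * (x0 - x2))
                    + y1 * (x - x0) * (x - x2) / ((x1 - x0) * (x1 - x2))
                    + y2 * (x - x0) * (x - x1) / ((x2 - x0) * (x2 - x1)))

def tune(measured, grid=(0.5, 1.0, 1.5)):
    """One generator run per grid point, then fit the cheap surrogate to
    the data by a fine scan; no further generator calls are needed."""
    surrogate = quadratic_through([(p, toy_generator(p)) for p in grid])
    step = (grid[-1] - grid[0]) / 1000.0
    candidates = [grid[0] + i * step for i in range(1001)]
    return min(candidates, key=lambda p: (surrogate(p) - measured) ** 2)
```

The speed-up comes from the last step: the minimization touches only the surrogate, so the expensive generator runs a fixed, small number of times regardless of how many fit iterations are needed.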
International Nuclear Information System (INIS)
Berdnikov, Ya.A.; Berdnikov, A.Ya.; Kim, V.T.; Ivanov, A.E.; Suetin, D.P.; Tiangov, K.D.
2016-01-01
Hadron production in neutrino-nucleus interactions is implemented in the Monte Carlo event generator HARDPING (HARD Probe INteraction Generator). Effects such as formation length, energy loss and multiple rescattering of the produced hadrons and their constituents are taken into account in HARDPING. Available data from the WA/59 and SCAT collaborations on hadron production in neutrino-nucleus collisions are described by HARDPING with reasonable agreement
Sunil, C.; Tyagi, Mohit; Biju, K.; Shanbhag, A. A.; Bandyopadhyay, T.
2015-12-01
The scarcity and high cost of 3He have spurred the use of various detectors for neutron monitoring. A new lithium yttrium borate scintillator developed in BARC has been studied for its use in a neutron rem counter. The scintillator is made of natural lithium and boron, and the yield of reaction products that will generate a signal in a real-time detector has been studied with the FLUKA Monte Carlo radiation transport code. A 2 cm lead layer introduced to enhance the gamma rejection shows no appreciable change in the shape of the fluence response or in the yield of reaction products. The fluence response, when normalized at the average energy of an Am-Be neutron source, shows promise for use of the detector as a rem counter.
Energy Technology Data Exchange (ETDEWEB)
Sunil, C., E-mail: csunil11@gmail.com [Accelerator Radiation Safety Section, Health Physics Division, Bhabha Atomic Research Centre, Mumbai 400085 (India); Tyagi, Mohit [Technical Physics Division, Bhabha Atomic Research Centre, Mumbai 400085 (India); Biju, K.; Shanbhag, A.A.; Bandyopadhyay, T. [Accelerator Radiation Safety Section, Health Physics Division, Bhabha Atomic Research Centre, Mumbai 400085 (India)
2015-12-11
The scarcity and high cost of ³He have spurred the use of various detectors for neutron monitoring. A new lithium yttrium borate scintillator developed in BARC has been studied for its use in a neutron rem counter. The scintillator is made of natural lithium and boron, and the yield of reaction products that will generate a signal in a real-time detector has been studied with the FLUKA Monte Carlo radiation transport code. A 2 cm lead layer introduced to enhance the gamma rejection shows no appreciable change in the shape of the fluence response or in the yield of reaction products. The fluence response, when normalized at the average energy of an Am–Be neutron source, shows promise for use of the detector as a rem counter.
A technique for generating phase-space-based Monte Carlo beamlets in radiotherapy applications
International Nuclear Information System (INIS)
Bush, K; Popescu, I A; Zavgorodni, S
2008-01-01
As radiotherapy treatment planning moves toward Monte Carlo (MC) based dose calculation methods, the MC beamlet is becoming an increasingly common optimization entity. At present, methods used to produce MC beamlets have utilized a particle source model (PSM) approach. In this work we outline the implementation of a phase-space-based approach to MC beamlet generation that is expected to provide greater accuracy in beamlet dose distributions. In this approach a standard BEAMnrc phase space is sorted and divided into beamlets with particles labeled using the inheritable particle history variable. This is achieved with the use of an efficient sorting algorithm, capable of sorting a phase space of any size into the required number of beamlets in only two passes. Sorting a phase space of five million particles can be achieved in less than 8 s on a single-core 2.2 GHz CPU. The beamlets can then be transported separately into a patient CT dataset, producing separate dose distributions (doselets). Methods for doselet normalization and conversion of dose to absolute units of Gy for use in intensity modulated radiation therapy (IMRT) plan optimization are also described. (note)
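The two-pass sort described above is essentially a counting sort: one pass histograms particles per beamlet, a prefix sum gives each beamlet's start offset, and a second pass scatters particles into place. A minimal sketch follows (not the BEAMnrc implementation; `beamlet_of` is a hypothetical binning function supplied by the caller):

```python
def sort_into_beamlets(particles, beamlet_of):
    """Two-pass counting sort of phase-space particles into contiguous
    beamlet groups. `beamlet_of` maps a particle to its beamlet index.
    Returns the sorted list and each beamlet's start offset."""
    n_beamlets = 1 + max(beamlet_of(p) for p in particles)
    counts = [0] * n_beamlets
    for p in particles:                  # pass 1: histogram per beamlet
        counts[beamlet_of(p)] += 1
    offsets, start = [], 0
    for c in counts:                     # prefix sum -> group start offsets
        offsets.append(start)
        start += c
    out = [None] * len(particles)
    cursor = list(offsets)
    for p in particles:                  # pass 2: scatter into place
        b = beamlet_of(p)
        out[cursor[b]] = p
        cursor[b] += 1
    return out, offsets
```

Counting sort runs in linear time and reads the input exactly twice, which matches the "two passes over a phase space of any size" property claimed for the sorting step.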
Application of Macro Response Monte Carlo method for electron spectrum simulation
International Nuclear Information System (INIS)
Perles, L.A.; Almeida, A. de
2007-01-01
During the past years several variance reduction techniques for Monte Carlo electron transport have been developed in order to reduce the computation time of electron transport for absorbed dose distributions. We have implemented the Macro Response Monte Carlo (MRMC) method to evaluate the electron spectrum, which can be used as a phase space input for other simulation programs. This technique uses probability distributions for electron histories previously simulated in spheres (called kugels). These probabilities are used to sample the primary electron final state, as well as the creation of secondary electrons and photons. We have compared the MRMC electron spectra simulated in a homogeneous phantom against Geant4 spectra. The results showed an agreement better than 6% in the spectral peak energies and that the MRMC code is up to 12 times faster than Geant4 simulations
Automating methods to improve precision in Monte-Carlo event generation for particle colliders
International Nuclear Information System (INIS)
Gleisberg, Tanju
2008-01-01
The subject of this thesis was the development of tools for the automated calculation of exact matrix elements, which are a key to the systematic improvement of precision and confidence of theoretical predictions. Part I of this thesis concentrates on the calculation of cross sections at tree level. A number of extensions have been implemented in the matrix element generator AMEGIC++, namely new interaction models such as effective loop-induced couplings of the Higgs boson with massless gauge bosons, required for a number of channels for the Higgs boson search at the LHC, and anomalous gauge couplings, parameterizing a number of models beyond the SM. Furthermore, a special treatment to deal with complicated decay chains of heavy particles has been constructed. A significant effort went into the implementation of methods to push the limits on particle multiplicities. Two recursive methods have been implemented, the Cachazo-Svrcek-Witten recursion and the colour-dressed Berends-Giele recursion. For the latter the new module COMIX has been added to the SHERPA framework. The Monte-Carlo phase space integration techniques have been completely revised, which led to significantly reduced statistical error estimates when calculating cross sections and a greatly improved unweighting efficiency for event generation. Special integration methods have been developed to cope with the newly accessible final states. The event generation framework SHERPA directly benefits from these new developments, improving both precision and efficiency. Part II addressed the automation of QCD calculations at next-to-leading order. A code has been developed that, for the first time, fully automates the real-correction part of an NLO calculation. To calculate the correction for an m-parton process obeying the Catani-Seymour dipole subtraction method the following components are provided: 1. the corresponding m+1-parton tree level matrix elements, 2. a number of dipole subtraction terms to remove
Automating methods to improve precision in Monte-Carlo event generation for particle colliders
Energy Technology Data Exchange (ETDEWEB)
Gleisberg, Tanju
2008-07-01
The subject of this thesis was the development of tools for the automated calculation of exact matrix elements, which are a key to the systematic improvement of precision and confidence of theoretical predictions. Part I of this thesis concentrates on the calculation of cross sections at tree level. A number of extensions have been implemented in the matrix element generator AMEGIC++, namely new interaction models such as effective loop-induced couplings of the Higgs boson with massless gauge bosons, required for a number of channels for the Higgs boson search at the LHC, and anomalous gauge couplings, parameterizing a number of models beyond the SM. Furthermore, a special treatment to deal with complicated decay chains of heavy particles has been constructed. A significant effort went into the implementation of methods to push the limits on particle multiplicities. Two recursive methods have been implemented, the Cachazo-Svrcek-Witten recursion and the colour-dressed Berends-Giele recursion. For the latter the new module COMIX has been added to the SHERPA framework. The Monte-Carlo phase space integration techniques have been completely revised, which led to significantly reduced statistical error estimates when calculating cross sections and a greatly improved unweighting efficiency for event generation. Special integration methods have been developed to cope with the newly accessible final states. The event generation framework SHERPA directly benefits from these new developments, improving both precision and efficiency. Part II addressed the automation of QCD calculations at next-to-leading order. A code has been developed that, for the first time, fully automates the real-correction part of an NLO calculation. To calculate the correction for an m-parton process obeying the Catani-Seymour dipole subtraction method the following components are provided: 1. the corresponding m+1-parton tree level matrix elements, 2. a number of dipole subtraction terms to remove
New approach to parton shower Monte Carlo event generators for precision QCD theory: HERWIRI1.0(31)
International Nuclear Information System (INIS)
Joseph, S.; Ward, B. F. L.; Majhi, S.; Yost, S. A.
2010-01-01
By implementing the new IR-improved Dokshitzer-Gribov-Lipatov-Altarelli-Parisi-Callan-Symanzik (DGLAP-CS) kernels recently developed by one of us in the HERWIG6.5 environment we generate a new Monte Carlo (MC), HERWIRI1.0(31), for hadron-hadron scattering at high energies. We use MC data to illustrate the comparison between the parton shower generated by the standard DGLAP-CS kernels and that generated by the new IR-improved DGLAP-CS kernels. The interface to MC-NLO, MC-NLO/HERWIRI, is illustrated. Comparisons with FNAL data and some discussion of possible implications for LHC phenomenology are also presented.
Responsibility and Generativity in Online Learning Communities
Beth, Alicia D.; Jordan, Michelle E.; Schallert, Diane L.; Reed, JoyLynn H.; Kim, Minseong
2015-01-01
The purpose of this study was to investigate whether and how students enact "responsibility" and "generativity" through their comments in asynchronous online discussions. "Responsibility" referred to discourse markers indicating participants' sense that their contributions are required in order to uphold their…
Othman, M A R; Cutajar, D L; Hardcastle, N; Guatelli, S; Rosenfeld, A B
2010-09-01
Monte Carlo simulations of the energy response of a conventionally packaged single metal-oxide-semiconductor field-effect transistor (MOSFET) detector were performed with the goal of improving MOSFET energy dependence for personal accident or military dosimetry. The MOSFET detector packaging was optimised. Two different 'drop-in' design packages for a single MOSFET detector were modelled and optimised using the GEANT4 Monte Carlo toolkit. Absorbed photon dose simulations of the MOSFET dosemeter placed in free air, corresponding to the absorbed doses at depths of 0.07 mm (Dw(0.07)) and 10 mm (Dw(10)) in a water-equivalent phantom of size 30 × 30 × 30 cm³, were performed for photon energies of 0.015-2 MeV. Energy dependence was reduced to within ±60% for photon energies of 0.06-2 MeV for both Dw(0.07) and Dw(10). Variations in the response for photon energies of 15-60 keV were 200 and 330% for Dw(0.07) and Dw(10), respectively. The obtained energy dependence was reduced compared with that of conventionally packaged MOSFET detectors, which usually exhibit a 500-700% over-response when used in free-air geometry.
Glavinovíc, M I
1999-02-01
The release of vesicular glutamate, spatiotemporal changes in glutamate concentration in the synaptic cleft and the subsequent generation of fast excitatory postsynaptic currents at a hippocampal synapse were modeled using the Monte Carlo method. It is assumed that glutamate is released from a spherical vesicle through a cylindrical fusion pore into the synaptic cleft and that S-α-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid (AMPA) receptors are uniformly distributed postsynaptically. The time course of change in vesicular concentration can be described by a single exponential, but a slow tail is also observed, though only following the release of most of the glutamate. The time constant of decay increases with vesicular size and a lower diffusion constant, and is independent of the initial concentration, becoming markedly shorter for wider fusion pores. The cleft concentration at the fusion pore mouth is not negligible compared to the vesicular concentration, especially for wider fusion pores. Lateral equilibration of glutamate is rapid, and within approximately 50 μs all AMPA receptors on average see the same concentration of glutamate. Nevertheless, the single-channel current and the number of channels estimated from mean-variance plots are unreliable and differ when estimated from rise- and decay-current segments. Greater saturation of AMPA receptor channels provides higher but not more accurate estimates. Two factors contribute to the variability of postsynaptic currents and render the mean-variance nonstationary analysis unreliable, even when all receptors see on average the same glutamate concentration. First, the variability of the instantaneous cleft concentration of glutamate, unlike the mean concentration, first rapidly decreases before slowly increasing; the variability is greater for fewer molecules in the cleft and is spatially nonuniform. Second, the efficacy with which glutamate produces a response changes with time. Understanding
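The single-exponential emptying of the vesicle, with a decay constant set by the pore width, can be illustrated with a deliberately simplified Monte Carlo: a fixed per-step escape probability stands in for diffusion through the cylindrical pore, and all numbers are hypothetical rather than taken from the study:

```python
import random

def vesicle_release(n_molecules=5_000, escape_prob=0.01, n_steps=200, seed=3):
    """Toy MC of vesicular emptying: at each time step every remaining
    molecule escapes through the fusion pore with a fixed probability
    (a wider pore means a larger escape_prob, hence a shorter time
    constant). Returns the remaining-molecule count at each step."""
    rng = random.Random(seed)
    remaining = n_molecules
    series = [remaining]
    for _ in range(n_steps):
        escaped = sum(1 for _ in range(remaining) if rng.random() < escape_prob)
        remaining -= escaped
        series.append(remaining)
    return series
```

With a constant per-molecule escape probability the expected count decays as a single exponential, mirroring the dominant behaviour reported above; the slow tail and the pore-mouth concentration effects require the full spatial simulation.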
International Nuclear Information System (INIS)
Procassini, R J; Beck, B R
2004-01-01
It might be assumed that use of a "high-quality" random number generator (RNG), producing a sequence of "pseudo-random" numbers with a "long" repetition period, is crucial for producing unbiased results in Monte Carlo particle transport simulations. While several theoretical and empirical tests have been devised to check the quality (randomness and period) of an RNG, for many applications it is not clear what level of RNG quality is required to produce unbiased results. This paper explores the issue of RNG quality in the context of parallel Monte Carlo transport simulations in order to determine how "good" is "good enough". This study employs the MERCURY Monte Carlo code, which incorporates the CNPRNG library for the generation of pseudo-random numbers via linear congruential generator (LCG) algorithms. The paper outlines the usage of random numbers during parallel MERCURY simulations, and then describes the source and criticality transport simulations which comprise the empirical basis of this study. A series of calculations for each test problem, in which the quality of the RNG (period of the LCG) is varied, provides the empirical basis for determining the minimum repetition period which may be employed without producing a bias in the mean integrated results.
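The quality knob varied in the study, the repetition period of a linear congruential generator, can be illustrated with a minimal LCG; the parameters below are toy values for demonstration, not those of the CNPRNG library:

```python
def lcg(seed, a, c, m):
    """Linear congruential generator: x -> (a*x + c) mod m, yielding x/m."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x / m

def period(seed, a, c, m):
    """Repetition period of the state sequence started from `seed`
    (None if the state does not return within m steps)."""
    first = (a * seed + c) % m
    x, n = first, 1
    for _ in range(m):
        x = (a * x + c) % m
        if x == first:
            return n
        n += 1
    return None
```

Parameters satisfying the Hull-Dobell conditions give the full period m, while poor choices collapse to very short cycles, which is the kind of degradation the MERCURY calculations deliberately induce to look for bias.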
Fast GPU-based Monte Carlo code for SPECT/CT reconstructions generates improved 177Lu images.
Rydén, T; Heydorn Lagerlöf, J; Hemmingsson, J; Marin, I; Svensson, J; Båth, M; Gjertsson, P; Bernhardt, P
2018-01-04
Full Monte Carlo (MC)-based SPECT reconstructions have a strong potential for correcting for image degrading factors, but the reconstruction times are long. The objective of this study was to develop a highly parallel Monte Carlo code for fast, ordered subset expectation maximization (OSEM) reconstructions of SPECT/CT images. The MC code was written in the Compute Unified Device Architecture language for a computer with four graphics processing units (GPUs) (GeForce GTX Titan X, Nvidia, USA). This enabled simulations of parallel photon emissions from the voxel matrix (128³ or 256³). Each computed tomography (CT) number was converted to attenuation coefficients for photoabsorption, coherent scattering, and incoherent scattering. For photon scattering, the deflection angle was determined by the differential scattering cross sections. An angular response function was developed and used to model the accepted angles for photon interaction with the crystal, and a detector scattering kernel was used for modeling the photon scattering in the detector. Predefined energy and spatial resolution kernels for the crystal were used. The MC code was implemented in the OSEM reconstruction of clinical and phantom ¹⁷⁷Lu SPECT/CT images. The Jaszczak image quality phantom was used to evaluate the performance of the MC reconstruction in comparison with attenuation-corrected (AC) OSEM reconstructions and attenuation-corrected OSEM reconstructions with resolution recovery corrections (RRC). The performance of the MC code was 3200 million photons/s. The required number of photons emitted per voxel to obtain a sufficiently low noise level in the simulated image was 200 for a 128³ voxel matrix. With this number of emitted photons/voxel, the MC-based OSEM reconstruction with ten subsets was performed within 20 s/iteration. The images converged after around six iterations. Therefore, the reconstruction time was around 3 min. The activity recovery for the spheres in the Jaszczak phantom was
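The OSEM update used in such reconstructions scales each voxel's activity by the back-projected ratio of measured to forward-projected counts over each subset of projection bins. A minimal sketch follows, with a tiny list-of-lists system matrix standing in for the GPU Monte Carlo projector; all sizes and values are illustrative:

```python
def osem(system_matrix, measured, n_iter=6, n_subsets=2):
    """Ordered-subset EM reconstruction: system_matrix[i][j] is the
    probability that a decay in voxel j is detected in projection bin i
    (a toy stand-in for the Monte Carlo projector)."""
    n_bins, n_vox = len(system_matrix), len(system_matrix[0])
    subsets = [list(range(s, n_bins, n_subsets)) for s in range(n_subsets)]
    act = [1.0] * n_vox                       # uniform starting estimate
    for _ in range(n_iter):
        for sub in subsets:
            # Forward-project the current estimate over this subset's bins.
            fwd = {i: sum(system_matrix[i][k] * act[k] for k in range(n_vox))
                   for i in sub}
            # Multiplicative update: back-projected measured/expected ratio.
            new_act = []
            for j in range(n_vox):
                num = sum(system_matrix[i][j] * measured[i] / fwd[i]
                          for i in sub if fwd[i] > 0.0)
                den = sum(system_matrix[i][j] for i in sub)
                new_act.append(act[j] * num / den if den > 0.0 else act[j])
            act = new_act
    return act
```

Each pass over a subset counts as a sub-iteration, which is why OSEM with ten subsets converges in far fewer full iterations than plain MLEM; in the GPU code above-described, the expensive part is evaluating the forward projection by Monte Carlo rather than by a stored matrix.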
International Nuclear Information System (INIS)
Jia Wenbao; Chen Xiaowen; Xu Aiguo; Li Anmin
2010-01-01
Application of the Monte Carlo method to build a spectra library is useful for reducing the experimental workload in Prompt Gamma Neutron Activation Analysis (PGNAA). The new Monte Carlo code MOCA was used to simulate the response spectra of a BGO detector for gamma rays from ¹³⁷Cs and ⁶⁰Co and neutron-induced gamma rays from S and Ti. The results were compared with those of the general-purpose code MCNP and show that the agreement between simulation and experiment is better for MOCA than for MCNP. This research indicates that building a spectra library by the Monte Carlo method is feasible. (authors)
A Monte Carlo simulation of the microdosimetric response for thick gas electron multiplier
International Nuclear Information System (INIS)
Hanu, A.; Byun, S.H.; Prestwich, W.V.
2010-01-01
The neutron microdosimetric responses of the thick gas electron multiplier (THGEM) detector were simulated. The THGEM is a promising device for microdosimetry, particularly for measuring the dose spectra of intense radiation fields and for collecting two-dimensional microdosimetric distributions. To investigate the response of the prototype THGEM microdosimetric detector, a simulation was developed using the Geant4 Monte Carlo code. The simulation calculates the deposited energy in the detector sensitive volume for an incident neutron beam. Both neutron energy and angular responses were computed for various neutron beam conditions. The energy response was compared with the reported experimental microdosimetric spectra as well as the evaluated fluence-to-kerma conversion coefficients. The effects of using non-tissue equivalent materials were also investigated by comparing the THGEM detector response with the response of an ideal detector in identical neutron field conditions. The result of the angular response simulations revealed severe angular dependencies for neutron energies above 100 keV. The simulation of a modified detector design gave an angular response pattern close to the ideal case, showing a fluctuation of less than 10% over the entire angular range.
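For context, the basic microdosimetric quantities such a simulation feeds into can be sketched briefly: each scored energy deposit is converted to a lineal energy via the mean chord length of the sensitive site, and frequency- and dose-mean values are formed. The lognormal deposit distribution below is a placeholder assumption in place of Geant4-scored deposits:

```python
import random

def mean_chord_length(radius):
    """Cauchy's theorem for a convex site: l-bar = 4V/S (sphere: 4r/3)."""
    return 4.0 * radius / 3.0

def microdosimetric_means(deposits_keV, site_radius_um):
    """Frequency-mean and dose-mean lineal energy from energy deposits.

    Lineal energy y = epsilon / l-bar; the dose-mean weights each event
    by its energy, so y_D >= y_F always holds.
    """
    lbar = mean_chord_length(site_radius_um)
    ys = [e / lbar for e in deposits_keV]            # keV/um
    y_f = sum(ys) / len(ys)
    y_d = sum(v * v for v in ys) / sum(ys)
    return y_f, y_d

# Placeholder for Geant4-scored deposits: a lognormal toy distribution.
random.seed(1)
deposits = [random.lognormvariate(0.0, 0.8) for _ in range(10000)]
y_f, y_d = microdosimetric_means(deposits, site_radius_um=1.0)
```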
ISAJET: a Monte Carlo event generator for pp and anti pp interactions
International Nuclear Information System (INIS)
Paige, F.E.; Protopopescu, S.D.
1985-01-01
ISAJET is a Monte Carlo program which simulates pp and anti pp interactions at high energy. It is based on perturbative QCD plus phenomenological models for jet and beam jet fragmentation. This article describes ISAJET Version 5.00. 21 refs., 3 figs
ISAJET 5.30: A Monte Carlo event generator for pp and anti pp interactions
International Nuclear Information System (INIS)
Paige, F.E.; Protopopescu, S.D.
1986-09-01
ISAJET is a Monte Carlo program which simulates pp and anti pp interactions at high energy. It is based on perturbative QCD cross sections, leading order QCD radiative corrections for initial and final state partons, and phenomenological models for jet and beam jet fragmentation. This article describes ISAJET 5.30, which includes production of standard Higgs bosons and which will be released shortly
Generation of triangulated random surfaces by the Monte Carlo method in the grand canonical ensemble
International Nuclear Information System (INIS)
Zmushko, V.V.; Migdal, A.A.
1987-01-01
A model of triangulated random surfaces which is the discrete analog of the Polyakov string is considered. An algorithm is proposed which enables one to study the model by the Monte Carlo method in the grand canonical ensemble. Preliminary results on the determination of the critical index γ are presented
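The grand-canonical accept/reject step such an algorithm relies on can be illustrated on the simplest possible system. The surface moves themselves are model-specific, so an ideal gas stands in here: with trivial configurational weights, the stationary distribution of the particle number is Poisson with mean zV, which makes the sampler easy to check:

```python
import random

def gcmc_ideal_gas(z_times_V, n_steps, seed=2):
    """Grand-canonical Metropolis sampling of the particle number N.

    Insertion is accepted with min(1, zV/(N+1)) and deletion with
    min(1, N/zV): the same accept/reject structure used when adding or
    removing triangles, here with trivial ideal-gas weights, so the
    stationary distribution of N is Poisson with mean zV.
    """
    rng = random.Random(seed)
    N, total, count = 0, 0, 0
    for step in range(n_steps):
        if rng.random() < 0.5:                       # attempt insertion
            if rng.random() < min(1.0, z_times_V / (N + 1)):
                N += 1
        elif N > 0:                                  # attempt deletion
            if rng.random() < min(1.0, N / z_times_V):
                N -= 1
        if step >= n_steps // 10:                    # discard burn-in
            total += N
            count += 1
    return total / count

mean_N = gcmc_ideal_gas(z_times_V=5.0, n_steps=200000)
```

Detailed balance holds because the acceptance ratio of an insertion against the matching deletion equals P(N+1)/P(N) = zV/(N+1).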
Fast Monte Carlo-simulator with full collimator and detector response modelling for SPECT
International Nuclear Information System (INIS)
Sohlberg, A.O.; Kajaste, M.T.
2012-01-01
Monte Carlo (MC) simulations have proved to be a valuable tool in studying single photon emission computed tomography (SPECT) reconstruction algorithms. Despite their popularity, the use of MC simulations is still often limited by their large computational demand. This is especially true in situations where full collimator and detector modelling with septal penetration, scatter and X-ray fluorescence needs to be included. This paper presents a rapid and simple MC simulator, which can effectively reduce the computation times. The simulator is built on the convolution-based forced-detection principle, which can markedly lower the number of simulated photons. Full collimator and detector response look-up tables are pre-simulated and then used in the actual MC simulations to model the system response. The developed simulator was validated by comparing it against ¹²³I point source measurements made with a clinical gamma camera system and against ⁹⁹ᵐTc software phantom simulations made with the SIMIND MC package. The results showed good agreement between the new simulator, the measurements and the SIMIND package. The new simulator provided near noise-free projection data in approximately 1.5 min per projection with ⁹⁹ᵐTc, which was less than one-tenth of SIMIND's time. The developed MC simulator can markedly decrease the simulation time without sacrificing image quality. (author)
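The convolution step that forced detection hinges on can be sketched directly: the photon weight reaching the detector plane is spread by a pre-simulated point-response kernel. The 3x3 kernel below is an invented placeholder for such a look-up table:

```python
def convolve2d(image, kernel):
    """Direct 2-D convolution of a projection with a point-response kernel.

    In a convolution-based forced-detection simulator the kernel is a
    pre-simulated collimator/detector response look-up table; each
    photon's weight is forced toward the detector and spread by it.
    """
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = [[0.0] * (iw + kw - 1) for _ in range(ih + kh - 1)]
    for i in range(ih):
        for j in range(iw):
            w = image[i][j]
            if w == 0.0:
                continue
            for a in range(kh):
                for b in range(kw):
                    out[i + a][j + b] += w * kernel[a][b]
    return out

# Normalized 3x3 kernel standing in for the pre-simulated look-up table.
kernel = [[1, 2, 1], [2, 4, 2], [1, 2, 1]]
s = sum(map(sum, kernel))
kernel = [[v / s for v in row] for row in kernel]
proj = [[0.0] * 5 for _ in range(5)]
proj[2][2] = 100.0                      # point-source projection weight
blurred = convolve2d(proj, kernel)
```

Because the kernel is normalized, the total projected weight is conserved, which is what makes the method photon-efficient.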
Energy Technology Data Exchange (ETDEWEB)
Nordenfors, C
1999-02-01
To determine the dose rate in a gamma radiation field from measurements with a semiconductor detector, it is necessary to know how the detector affects the field. This work aims to describe this effect with Monte Carlo simulations and calculations, that is, to identify the detector response function. This is done for a germanium gamma detector, which is normally used in the in-situ measurements that are carried out regularly at the department. After the response function is determined, it is used to reconstruct a spectrum from an in-situ measurement, a so-called unfolding. This makes it possible to calculate fluence rate and dose rate directly from a measured (and unfolded) spectrum. The Monte Carlo code used in this work is EGS4, developed mainly at the Stanford Linear Accelerator Center; it is a widely used code package for simulating particle transport. The results of this work indicate that the method could be used as-is, since its accuracy is comparable to that of other methods already in use for measuring dose rate. Because this method provides nuclide-specific doses, it is useful in radiation protection, where knowing the relations between different nuclides and how they change is very important when estimating risks
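The unfolding step can be illustrated with the classic stripping scheme: because a photon can only deposit energy at or below its full energy, the response matrix is triangular and the system is solved from the highest channel downward. The 3-channel response below is an invented toy, not the EGS4-derived germanium response of this work:

```python
def strip_spectrum(measured, response):
    """Unfold a measured pulse-height spectrum by successive stripping.

    response[j][i] is the count in channel i per unit fluence in source
    channel j (nonzero only for i <= j), so the system is triangular:
    solve the top channel first, subtract its continuum, and repeat.
    """
    n = len(measured)
    fluence = [0.0] * n
    residual = list(measured)
    for j in reversed(range(n)):
        fluence[j] = residual[j] / response[j][j]    # full-energy peak
        for i in range(j + 1):                       # strip its continuum
            residual[i] -= fluence[j] * response[j][i]
    return fluence

# Toy 3-channel response: full-energy peak plus flat downscatter.
response = [
    [1.0, 0.0, 0.0],
    [0.2, 0.8, 0.0],
    [0.2, 0.2, 0.6],
]
true_fluence = [5.0, 0.0, 10.0]
measured = [sum(true_fluence[j] * response[j][i] for j in range(3))
            for i in range(3)]
unfolded = strip_spectrum(measured, response)
```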
Optimization of the energy response of radiographic films by Monte Carlo method
Energy Technology Data Exchange (ETDEWEB)
Moslehi, A. [Physics Department, Faculty of Science, Arak University, Shariati Square, Arak 38156 (Iran, Islamic Republic of); Hamidi, S., E-mail: s-hamidi@araku.ac.i [Physics Department, Faculty of Science, Arak University, Shariati Square, Arak 38156 (Iran, Islamic Republic of); Raisali, G. [Radiation Application Research School, Nuclear Science and Technology Research Institute, Atomic Energy Organization of Iran (Iran, Islamic Republic of); Gheshlaghi, F. [Film Badge Dosimetry Laboratory, National Radiation Protection Department, Iranian Nuclear Regulatory Authority, Atomic Energy Organization of Iran (Iran, Islamic Republic of)
2010-01-15
In the present work a simple model for calculating the energy response of radiographic films is introduced. According to the model, the energy response of a radiographic film is directly proportional to the optical density of the film and thus to the number of developed grains in the emulsion. The model was simulated by the Monte Carlo method using the MCNP code, and the relative energy response of Kodak type 2 film under a few filters of the A.E.R.E./R.P.S. film badge was calculated. The simulated responses were in agreement with experimental data in the region of 30 keV-1.5 MeV. In the next stage a multi-element filter was simulated to optimize the energy response over these energies. The optimized energy response varied by only 25% between 40 keV and 1.5 MeV, so the dose recorded by the film corresponds to the desired true dose and no correction factors are needed.
ISAJET 5.02: a Monte Carlo event generator for pp and anti pp interactions
International Nuclear Information System (INIS)
Paige, F.E.; Protopopescu, S.D.
1985-01-01
ISAJET is a Monte Carlo program which simulates pp and anti pp interactions at high energy. It is based on perturbative QCD cross sections, leading order QCD radiative corrections for initial and final state partons, and phenomenological models for jet and beam jet fragmentation. This article describes ISAJET 5.02, which is identical to Version 5.00 except for minor corrections. 27 refs., 7 figs
Generation of gamma-ray streaming kernels through cylindrical ducts via Monte Carlo method
International Nuclear Information System (INIS)
Kim, Dong Su
1992-02-01
Since radiation streaming through penetrations is often the critical consideration in protecting personnel in a nuclear facility against exposure, it has been of great concern in radiation shielding design and analysis. Several methods have been developed and applied to the analysis of radiation streaming in the past, such as the ray analysis method, the single scattering method, the albedo method, and the Monte Carlo method. All but the Monte Carlo method are suitable only for order-of-magnitude calculations where sufficient margin is available; the Monte Carlo method is accurate but requires a lot of computing time. This study developed a Monte Carlo method and constructed a library of solutions for radiation streaming through a straight cylindrical duct in a concrete wall due to a broad, mono-directional, monoenergetic gamma-ray beam of unit intensity. The solution, termed the plane streaming kernel, is the average dose rate at the duct outlet; it was evaluated for 20 source energies from 0 to 10 MeV, 36 source incident angles from 0 to 70 degrees, 5 duct radii from 10 to 30 cm, and 16 wall thicknesses from 0 to 100 cm. It was demonstrated that the average dose rate due to an isotropic point source at an arbitrary position can be well approximated using the plane streaming kernels with acceptable error. Thus, the library of plane streaming kernels can be used for accurate and efficient analysis of radiation streaming through a straight cylindrical duct in concrete walls due to arbitrary distributions of gamma-ray sources
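Once such a kernel library exists, applying it to an arbitrary source amounts to table interpolation over the tabulated grid. A sketch of bilinear interpolation in (source energy, incident angle); the grid dimensions follow the abstract, but the tabulated dose-rate values below are invented placeholders:

```python
import math
from bisect import bisect_right

def bilinear(table, xs, ys, x, y):
    """Bilinear interpolation of table[i][j] = kernel(xs[i], ys[j])."""
    i = min(max(bisect_right(xs, x) - 1, 0), len(xs) - 2)
    j = min(max(bisect_right(ys, y) - 1, 0), len(ys) - 2)
    tx = (x - xs[i]) / (xs[i + 1] - xs[i])
    ty = (y - ys[j]) / (ys[j + 1] - ys[j])
    return ((1 - tx) * (1 - ty) * table[i][j]
            + tx * (1 - ty) * table[i + 1][j]
            + (1 - tx) * ty * table[i][j + 1]
            + tx * ty * table[i + 1][j + 1])

# Hypothetical kernel grid matching the abstract's tabulation: 20 source
# energies x 36 incident angles (the values themselves are invented).
energies = [0.5 * (k + 1) for k in range(20)]          # MeV
angles = [2.0 * k for k in range(36)]                  # degrees
table = [[(1.0 + e) * math.cos(math.radians(a)) for a in angles]
         for e in energies]
dose = bilinear(table, energies, angles, 3.25, 33.0)
```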
Determination of the spatial response of neutron based analysers using a Monte Carlo based method
International Nuclear Information System (INIS)
Tickner, James
2000-01-01
One of the principal advantages of using thermal neutron capture (TNC, also called prompt gamma neutron activation analysis or PGNAA) or neutron inelastic scattering (NIS) techniques for measuring elemental composition is the high penetrating power of both the incident neutrons and the resultant gamma-rays, which means that large sample volumes can be interrogated. Gauges based on these techniques are widely used in the mineral industry for on-line determination of the composition of bulk samples. However, attenuation of both neutrons and gamma-rays in the sample and geometric (source/detector distance) effects typically result in certain parts of the sample contributing more to the measured composition than others. In turn, this introduces errors in the determination of the composition of inhomogeneous samples. This paper discusses a combined Monte Carlo/analytical method for estimating the spatial response of a neutron gauge. Neutron propagation is handled using a Monte Carlo technique which allows an arbitrarily complex neutron source and gauge geometry to be specified. Gamma-ray production and detection is calculated analytically which leads to a dramatic increase in the efficiency of the method. As an example, the method is used to study ways of reducing the spatial sensitivity of on-belt composition measurements of cement raw meal
Linear and Non-Linear Dielectric Response of Periodic Systems from Quantum Monte Carlo
Umari, Paolo
2006-03-01
We present a novel approach that allows one to calculate the dielectric response of periodic systems in the quantum Monte Carlo formalism. We employ a many-body generalization of the electric enthalpy functional, where the coupling with the field is expressed via the Berry-phase formulation of the macroscopic polarization. A self-consistent local Hamiltonian then determines the ground-state wavefunction, allowing for accurate diffusion quantum Monte Carlo calculations in which the fixed point of the polarization is estimated from the average over an iterative sequence. The polarization is sampled through forward-walking. The approach is validated on the polarizability of an isolated hydrogen atom and then applied to a periodic system: we calculate the linear susceptibility and second-order hyper-susceptibility of molecular-hydrogen chains with different bond-length alternations, and assess the quality of nodal surfaces derived from density-functional theory or from Hartree-Fock. The results are in excellent agreement with the best estimates obtained from the extrapolation of quantum-chemistry calculations. [P. Umari, A.J. Williamson, G. Galli, and N. Marzari, Phys. Rev. Lett. 95, 207602 (2005).]
Kieftenbeld, Vincent; Natesan, Prathiba
2012-01-01
Markov chain Monte Carlo (MCMC) methods enable a fully Bayesian approach to parameter estimation of item response models. In this simulation study, the authors compared the recovery of graded response model parameters using marginal maximum likelihood (MML) and Gibbs sampling (MCMC) under various latent trait distributions, test lengths, and…
Alves Júnior, A. A.; Sokoloff, M. D.
2017-10-01
MCBooster is a header-only, C++11-compliant library that provides routines to generate and perform calculations on large samples of phase space Monte Carlo events. To achieve superior performance, MCBooster is capable of performing most of its calculations in parallel on CUDA- and OpenMP-enabled devices. MCBooster is built on top of the Thrust library and runs on Linux systems. This contribution summarizes the main features of MCBooster. A basic description of the user interface and some examples of applications are provided, along with measurements of performance in a variety of environments.
Monte Carlo simulation of the spectral response of beta-particle emitters in LSC systems
International Nuclear Information System (INIS)
Ortiz, F.; Los Arcos, J.M.; Grau, A.; Rodriguez, L.
1992-01-01
This paper presents a new method to evaluate the counting efficiency and the effective spectra at the output of any dynodic stage, for any pure beta-particle emitter, measured in a liquid scintillation counting system with two photomultipliers working in sum-coincidence mode. The process is carried out by a Monte Carlo simulation procedure that gives the electron distribution, and consequently the counting efficiency, at any dynode, in response to the beta particles emitted, as a function of the figure of merit of the system and the dynodic gains. The spectral outputs for ³H and ¹⁴C have been computed and compared with experimental data obtained with two sets of quenched radioactive standards of these nuclides. (orig.)
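The sum-coincidence efficiency that such a simulation ultimately yields can be sketched with the usual free-parameter model; the figure-of-merit value and the allowed-shape spectrum below are illustrative assumptions, not the paper's dynode-by-dynode simulation:

```python
import math

def coincidence_efficiency(free_param_kev, q_kev, n_bins=400):
    """Sum-coincidence counting efficiency for a pure beta emitter.

    Free-parameter model (illustrative): a beta of energy E yields on
    average m = E/free_param photoelectrons shared by two
    photomultipliers, and both must fire, so the detection probability
    is (1 - exp(-m/2))**2. An allowed beta-spectrum shape is assumed
    for the energy distribution (Fermi function omitted).
    """
    me = 511.0                                    # electron rest energy, keV
    num = den = 0.0
    for k in range(1, n_bins):
        e = q_kev * k / n_bins
        p = math.sqrt(e * e + 2.0 * e * me)       # momentum (times c), keV
        shape = p * (e + me) * (q_kev - e) ** 2   # allowed spectrum shape
        m = e / free_param_kev
        num += shape * (1.0 - math.exp(-m / 2.0)) ** 2
        den += shape
    return num / den

eff_h3 = coincidence_efficiency(free_param_kev=1.0, q_kev=18.6)    # 3H
eff_c14 = coincidence_efficiency(free_param_kev=1.0, q_kev=156.0)  # 14C
```

The harder spectrum of ¹⁴C gives a higher efficiency than ³H for the same figure of merit, consistent with standard LSC practice.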
Measurement and Monte Carlo modeling of the spatial response of scintillation screens
Energy Technology Data Exchange (ETDEWEB)
Pistrui-Maximean, S.A. [CNDRI (NDT using Ionizing Radiation) Laboratory, INSA-Lyon, 69621 Villeurbanne (France)], E-mail: spistrui@gmail.com; Letang, J.M. [CNDRI (NDT using Ionizing Radiation) Laboratory, INSA-Lyon, 69621 Villeurbanne (France)], E-mail: jean-michel.letang@insa-lyon.fr; Freud, N. [CNDRI (NDT using Ionizing Radiation) Laboratory, INSA-Lyon, 69621 Villeurbanne (France); Koch, A. [Thales Electron Devices, 38430 Moirans (France); Walenta, A.H. [Detectors and Electronics Department, FB Physik, Siegen University, 57068 Siegen (Germany); Montarou, G. [Corpuscular Physics Laboratory, Blaise Pascal University, 63177 Aubiere (France); Babot, D. [CNDRI (NDT using Ionizing Radiation) Laboratory, INSA-Lyon, 69621 Villeurbanne (France)
2007-11-01
In this article, we propose a detailed protocol to carry out measurements of the spatial response of scintillation screens and to assess the agreement with simulated results. The experimental measurements have been carried out using a practical implementation of the slit method. A Monte Carlo simulation model of scintillator screens, implemented with the toolkit Geant4, has been used to study the influence of the acquisition setup parameters and to compare with the experimental results. An algorithm of global stochastic optimization based on a localized random search method has been implemented to adjust the optical parameters (optical scattering and absorption coefficients). The algorithm has been tested for different X-ray tube voltages (40, 70 and 100 kV). A satisfactory convergence between the results simulated with the optimized model and the experimental measurements is obtained.
Monte Carlo simulation of the response of a pixellated 3D photo-detector in silicon
Dubaric, E; Froejdh, C; Norlin, B
2002-01-01
The charge transport and X-ray photon absorption in three-dimensional (3D) X-ray pixel detectors have been studied using numerical simulations. The charge transport has been modelled using the drift-diffusion simulator MEDICI, while photon absorption has been studied using MCNP. The response of the entire pixel detector system in terms of charge sharing, line spread function and modulation transfer function, has been simulated using a system level Monte Carlo simulation approach. A major part of the study is devoted to the effect of charge sharing on the energy resolution in 3D-pixel detectors. The 3D configuration was found to suppress charge sharing much better than conventional planar detectors.
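The charge-sharing mechanism studied here can be sketched in one dimension: a Gaussian charge cloud is integrated over pixel boundaries, and any event near a boundary splits its charge between neighbours, degrading the per-pixel energy measurement. Pitch and cloud width below are assumed values for illustration:

```python
import math

def pixel_charge_fractions(x0_um, pitch_um, sigma_um, n_pixels=3):
    """Fraction of a Gaussian charge cloud collected by each pixel (1-D).

    The cloud is centred at x0; the fraction landing in pixel k is the
    Gaussian integral over [k*pitch, (k+1)*pitch], evaluated with erf.
    """
    root2 = math.sqrt(2.0)

    def cdf(x):
        return 0.5 * (1.0 + math.erf((x - x0_um) / (sigma_um * root2)))

    return [cdf((k + 1) * pitch_um) - cdf(k * pitch_um)
            for k in range(n_pixels)]

# Event on the border between pixels 0 and 1: the charge splits ~50/50,
# so each pixel records roughly half the deposited energy.
frac = pixel_charge_fractions(x0_um=55.0, pitch_um=55.0, sigma_um=10.0)
```

A 3D electrode geometry corresponds, in this picture, to a smaller effective sigma, which concentrates the cloud within one pixel and suppresses the sharing.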
Energy Technology Data Exchange (ETDEWEB)
Remetti, Romolo; Lepore, Luigi [Sapienza University of Rome, Dept. SBAI, Via Antonio Scarpa 14, 00161 Rome (Italy); Cherubini, Nadia [ENEA CRE Casaccia, Nuclear Material Characterization Laboratory and Nuclear Waste Management, Via Anguillarese 301, 00123 Rome (Italy)
2017-01-11
An extensive use of Monte Carlo simulations led to the development of an MCNPX input deck for the Thermo Scientific MP320 neutron generator. This input deck is currently utilized at the ENEA Casaccia Research Center for optimizing all the techniques and applications involving the device, in particular the detection of explosives and drugs by fast neutrons. The working model of the generator was obtained thanks to a detailed representation of the MP320 internal components and to the capabilities offered by the MCNPX code. The model was validated by comparing simulated results against the manufacturer's data and against experimental tests. The aim of this work is to explain all the steps that led to these results, suggesting a procedure that might be extended to different models of neutron generators.
Waller, Niels G
2016-01-01
For a fixed set of standardized regression coefficients and a fixed coefficient of determination (R-squared), an infinite number of predictor correlation matrices will satisfy the implied quadratic form. I call such matrices fungible correlation matrices. In this article, I describe an algorithm for generating positive definite (PD), positive semidefinite (PSD), or indefinite (ID) fungible correlation matrices that have a random or fixed smallest eigenvalue. The underlying equations of this algorithm are reviewed from both algebraic and geometric perspectives. Two simulation studies illustrate that fungible correlation matrices can be profitably used in Monte Carlo research. The first study uses PD fungible correlation matrices to compare penalized regression algorithms. The second study uses ID fungible correlation matrices to compare matrix-smoothing algorithms. R code for generating fungible correlation matrices is presented in the supplemental materials.
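The eigenvalue control that such generators rely on can be sketched directly. This is not Waller's fungible-matrix algorithm, only a simple illustration of the underlying device: a convex combination with the identity moves every eigenvalue while preserving the unit diagonal, so the smallest eigenvalue can be set to any target (negative targets give indefinite matrices):

```python
import math
import random

def random_correlation(n, rng):
    """Unit-diagonal PD matrix from random loadings (illustrative only;
    this is not Waller's fungible-matrix algorithm)."""
    L = [[rng.gauss(0, 1) for _ in range(n)] for _ in range(n)]
    S = [[sum(L[i][k] * L[j][k] for k in range(n)) + 0.5 * (i == j)
          for j in range(n)] for i in range(n)]
    d = [1.0 / math.sqrt(S[i][i]) for i in range(n)]
    return [[S[i][j] * d[i] * d[j] for j in range(n)] for i in range(n)]

def shift_smallest_eigenvalue(R, lam_min, lam_target):
    """a*R + (1-a)*I keeps the unit diagonal and maps each eigenvalue
    lam to a*lam + (1-a); choosing a = (1-target)/(1-min) sends the
    smallest eigenvalue to the target value."""
    n = len(R)
    a = (1.0 - lam_target) / (1.0 - lam_min)
    return [[a * R[i][j] + (1.0 - a) * (i == j) for j in range(n)]
            for i in range(n)]

# 2x2 case, where the eigenvalues are 1 +/- r and can be checked by hand.
R2 = [[1.0, 0.6], [0.6, 1.0]]                    # eigenvalues 0.4 and 1.6
R_shift = shift_smallest_eigenvalue(R2, lam_min=0.4, lam_target=0.1)
# R_shift has off-diagonal 0.9, hence eigenvalues 0.1 and 1.9.
```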
Energy Technology Data Exchange (ETDEWEB)
Türkmen, Mehmet, E-mail: tm@hacettepe.edu.tr [Nuclear Engineering Department, Hacettepe University, Beytepe Campus, Ankara (Turkey); Çolak, Üner [Energy Institute, Istanbul Technical University, Ayazağa Campus, Maslak, Istanbul (Turkey); Ergün, Şule [Nuclear Engineering Department, Hacettepe University, Beytepe Campus, Ankara (Turkey)
2015-12-15
Highlights: • Optimum core maps were generated for the ITU TRIGA Mark II Research Reactor. • Calculations were performed using a Monte Carlo based reactor physics code, MCNP. • Single-Objective and Multi-Objective Genetic Algorithms were used for the optimization. • k_eff and ppf_max were considered as the optimization objectives. • The generated core maps were compared with the fresh core map. - Abstract: The main purpose of this study is to present the results of Core Map (CM) generation calculations for the İstanbul Technical University TRIGA Mark II Research Reactor by using Genetic Algorithms (GA) coupled with a Monte Carlo (MC) based particle transport code. The optimization problems under consideration are: (i) maximization of the core excess reactivity (ρ_ex) using a Single-Objective GA when the burned fuel elements are used with no fresh fuel elements; (ii) maximization of ρ_ex and minimization of the maximum power peaking factor (ppf_max) using a Multi-Objective GA when the burned fuels are used together with fresh fuels. The results were obtained with all control rods fully withdrawn. The ρ_ex and ppf_max values of the best CMs produced are provided. The core-averaged neutron spectrum and the variation of neutron fluxes with radial distance are presented for the best CMs. The results show that it is possible to find an optimum CM with an excess reactivity of 1.17 when the burned fuels are used. In the case of a mix of burned and fresh fuels, the best pattern has an excess reactivity of 1.19 with a maximum peaking factor of 1.4843. In addition, when compared with the fresh CM, the thermal fluxes of the generated CMs decrease by about 2% while the change in the fast fluxes is about 1%. Classification: J. Core physics.
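The single-objective GA loop used in such core-map searches can be sketched with a toy fitness in place of the MCNP evaluation; the "worth" and "centrality" numbers below are invented, and the real objective would be the excess reactivity returned by the transport code:

```python
import random

def genetic_search(fitness, n_items, generations=60, pop_size=30, seed=4):
    """Single-objective GA over loading patterns (permutations).

    `fitness` stands in for the MCNP evaluation of a core map; any
    callable scoring a permutation can be plugged in. Elitist selection
    plus swap mutation; no crossover, for brevity.
    """
    rng = random.Random(seed)
    pop = [rng.sample(range(n_items), n_items) for _ in range(pop_size)]
    for _ in range(generations):
        elite = sorted(pop, key=fitness, reverse=True)[: pop_size // 2]
        children = []
        for parent in elite:
            child = parent[:]
            i, j = rng.sample(range(n_items), 2)
            child[i], child[j] = child[j], child[i]   # swap mutation
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

# Toy fitness: reward placing high-worth fuel elements near the centre.
worth = [0.1, 0.5, 0.9, 0.4, 0.8, 0.2]
centrality = [0.2, 0.6, 1.0, 0.6, 0.2, 0.1]
fit = lambda perm: sum(worth[e] * centrality[p] for p, e in enumerate(perm))
best = genetic_search(fit, n_items=6)
```

By the rearrangement inequality the optimum matches sorted worth to sorted centrality, which this toy search should approach closely.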
International Nuclear Information System (INIS)
Parent, Laure; Seco, Joao; Evans, Phil M.; Fielding, Andrew; Dance, David R.
2006-01-01
This study focused on predicting the electronic portal imaging device (EPID) image of intensity modulated radiation treatment (IMRT) fields in the absence of attenuating material in the beam with Monte Carlo methods. As IMRT treatments consist of a series of segments of various sizes that are not always delivered on the central axis, large spectral variations may be observed between the segments. The effect of these spectral variations on the EPID response was studied with fields of various sizes and off-axis positions. A detailed description of the EPID was implemented in a Monte Carlo model. The EPID model was validated by comparing the EPID output factors for field sizes between 1×1 and 26×26 cm² at the isocenter. The Monte Carlo simulations agreed with the measurements to within 1.5%. The Monte Carlo model succeeded in predicting the EPID response at the center of the fields of various sizes and offsets to within 1% of the measurements. Large variations (up to 29%) of the EPID response were observed between the various offsets. The EPID response increased with field size and with field offset for most cases. The Monte Carlo model was then used to predict the image of a simple test IMRT field delivered on the beam axis and with an offset. A variation of EPID response of up to 28% was found between the on- and off-axis deliveries. Finally, two clinical IMRT fields were simulated and compared to the measurements. For all IMRT fields, simulations and measurements agreed within 3%-0.2 cm for 98% of the pixels. The spectral variations were quantified by extracting from the spectra at the center of the fields the total photon yield (Y_total), the photon yield below 1 MeV (Y_low), and the percentage of photons below 1 MeV (P_low). For the studied cases, a correlation was shown between the EPID response variation and Y_total, Y_low, and P_low
Generating transverse response explicitly from harmonic oscillators
Yao, Yuan; Tang, Ying; Ao, Ping
2017-10-01
We obtain stochastic dynamics from a system-plus-bath mechanism as an extension of the Caldeira-Leggett (CL) model in the classical regime. An effective magnetic field and response functions with both longitudinal and transverse parts are exactly generated from the bath of harmonic oscillators. The effective magnetic field and transverse response are antisymmetric matrices: the former is explicitly time-independent corresponding to the geometric magnetism, while the latter can have memory. The present model can be reduced to previous representative examples of stochastic dynamics describing nonequilibrium processes. Our results demonstrate that a system coupled with a bath of harmonic oscillators is a general approach to studying stochastic dynamics, and provides a method to experimentally implement an effective magnetic field from coupling to the environment.
Energy Technology Data Exchange (ETDEWEB)
Carasco, C., E-mail: cedric.carasco@cea.fr [CEA, DEN, Cadarache, Nuclear Measurement Laboratory, F-13108 Saint-Paul-lez-Durance (France)
2012-07-15
In neutron time-of-flight (TOF) measurements performed with fast organic scintillation detectors, both pulse arrival time and amplitude are relevant. Monte Carlo simulation can be used to calculate the time- and energy-dependent neutron flux at the detector position. To convert the flux into a pulse height spectrum, one must calculate the detector response function for mono-energetic neutrons. MCNP can be used to design TOF systems, but standard MCNP versions cannot reliably calculate the energy deposited by fast neutrons in the detector, since multiple scattering effects must be taken into account in an analog way, the energy deposits of the individual recoil particles being summed with the appropriate scintillation efficiency. In this paper, the energy response function of 2″ × 2″ and 5″ × 5″ liquid scintillation BC-501 A (Bicron) detectors to fast neutrons ranging from 20 keV to 5.0 MeV is computed with GEANT4, to be coupled with MCNPX through the 'MCNP Output Data Analysis' software developed under ROOT. - Highlights: • GEANT4 has been used to model the response of organic scintillators to neutrons up to 5 MeV. • The response of the 2″ × 2″ and 5″ × 5″ BC-501 A detectors has been parameterized with simple functions. • The parameterization will allow the modeling of neutron time-of-flight measurements with MCNP using tools based on CERN's ROOT.
Akushevich, I.; Filoti, O. F.; Ilyichev, A.; Shumeiko, N.
2012-07-01
The structure and algorithms of the Monte Carlo generator ELRADGEN 2.0 designed to simulate radiative events in polarized ep-scattering are presented. The full set of analytical expressions for the QED radiative corrections is presented and discussed in detail. Algorithmic improvements implemented to provide faster simulation of hard real photon events are described. Numerical tests show high quality of generation of photonic variables and radiatively corrected cross section. The comparison of the elastic radiative tail simulated within the kinematical conditions of the BLAST experiment at MIT BATES shows a good agreement with experimental data. Catalogue identifier: AELO_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AELO_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC license, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 1299 No. of bytes in distributed program, including test data, etc.: 11 348 Distribution format: tar.gz Programming language: FORTRAN 77 Computer: All Operating system: Any RAM: 1 MB Classification: 11.2, 11.4 Nature of problem: Simulation of radiative events in polarized ep-scattering. Solution method: Monte Carlo simulation according to the distributions of the real photon kinematic variables that are calculated by the covariant method of QED radiative correction estimation. The approach provides rather fast and accurate generation. Running time: The simulation of 10⁸ radiative events for itest:=1 takes up to 52 seconds on a Pentium(R) Dual-Core 2.00 GHz processor.
Energy Technology Data Exchange (ETDEWEB)
Luo, Wen, E-mail: wenluo-ok@163.com [School of Nuclear Science and Technology, University of South China, Hengyang 421001 (China); Lan, Hao-yang [School of Nuclear Science and Technology, University of South China, Hengyang 421001 (China); Xu, Yi; Balabanski, Dimiter L. [Extreme Light Infrastructure-Nuclear Physics, “Horia Hulubei” National Institute for Physics and Nuclear Engineering (IFIN-HH), 30 Reactorului, 077125 Bucharest-Magurele (Romania)
2017-03-21
A data-based Monte Carlo simulation algorithm, Geant4-GENBOD, was developed by coupling the n-body Monte Carlo event generator GENBOD to the Geant4 toolkit, aiming at accurate simulations of specific photonuclear reactions for diverse photonuclear physics studies. Geant4-GENBOD calculations were compared with reported measurements of photo-neutron production cross-sections and yields, and with reported energy spectra of the ⁶Li(n,α)t reaction. Good agreement between the calculations and the experimental data was found, validating the developed program. Furthermore, simulations of the ⁹²Mo(γ,p) reaction of astrophysics relevance and of the photo-neutron production of ⁹⁹Mo/⁹⁹ᵐTc and ²²⁵Ra/²²⁵Ac radioisotopes were investigated, demonstrating the applicability of the program. We conclude that Geant4-GENBOD is a reliable tool for studying the emerging experimental programs at high-intensity γ-beam laboratories, such as the Extreme Light Infrastructure - Nuclear Physics facility and the High Intensity Gamma-Ray Source at Duke University.
Random number generators in support of Monte Carlo problems in physics
International Nuclear Information System (INIS)
Dyadkin, I.G.
1993-01-01
The ability of random number generators to meet a modern user's expectations in solving problems in physics is analyzed. The capabilities of the newest concepts and of the old pseudo-random algorithms are compared. The author favors multiplicative generators. With the 64-bit arithmetic of a modern PC, multiplicative generators have a sufficiently long period (up to 2⁶²), are quick to generate, and make it easy to govern independent sequences for parallel processing. In addition, they are able to replicate sub-sequences (without storing their seeds) for each standard trial in any code and to simulate spatial and planar directions and EXP(-x) distributions, often needed as ''bricks'' for simulating events in physics. Hundreds of multipliers for multiplicative generators have been tabulated and tested, and the required speeds have been obtained. (author)
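A multiplicative generator with the properties described (period up to 2⁶² under 64-bit arithmetic, cheap jump-ahead for independent parallel sequences, EXP(-x) sampling as a building block) can be sketched as follows; the multiplier is a widely tested 64-bit value, but the class itself is an illustrative sketch, not one of the tabulated generators of the paper:

```python
import math

class MultiplicativeGenerator:
    """Multiplicative congruential generator x <- a*x mod 2**64.

    With an odd state and a multiplier a = 5 (mod 8) the period is
    2**62, the maximum possible for a power-of-two modulus.
    """
    M = 1 << 64

    def __init__(self, seed=12345, a=6364136223846793005):
        self.a = a
        self.x = seed | 1                 # state must be odd

    def next_u(self):
        self.x = (self.a * self.x) % self.M
        return self.x / self.M            # uniform in (0, 1)

    def jump(self, n):
        """Skip ahead n draws in O(log n): x_n = a**n * x_0 mod 2**64.
        This is how disjoint sub-sequences can be handed to parallel
        processes without storing intermediate seeds."""
        self.x = (pow(self.a, n, self.M) * self.x) % self.M

    def exponential(self, mean=1.0):
        """EXP(-x) sampling, a standard 'brick' for simulating events."""
        return -mean * math.log(1.0 - self.next_u())
```

For example, worker k of a parallel job can call `jump(k * block_size)` on a shared seed and then draw `block_size` values from its own disjoint stretch of the sequence.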
Accurate simulation of ionisation chamber response with the Monte Carlo code PENELOPE
International Nuclear Information System (INIS)
Sempau, Josep; Andreo, Pedro
2011-01-01
Ionisation chambers (IC) are routinely used in hospitals for the dosimetry of the photon and electron beams used for radiotherapy treatments. The determination of absorbed dose to water from the absorbed dose to the air filling the cavity requires the introduction of stopping power ratios and perturbation factors, which account for the disturbance caused by the presence of the chamber. Although this may seem a problem readily amenable to Monte Carlo simulation, the fact is that the accurate determination of IC response has been, for several decades, one of the most important challenges of the simulation of electromagnetic showers. The main difficulty stems from the use of condensed history techniques for electron and positron transport. This approach, which involves grouping a large number of interactions into a single artificial event, is known to produce the so-called interface effects when particles travel across surfaces separating different media. These effects can be sizeable when the electron step length is not negligible compared to the size of the region being crossed, as is the case with the cavity of an IC. The artefact, which becomes apparent when the chamber response shows a marked dependence on the adopted step size, can be palliated with the use of sophisticated electron transport algorithms. These topics are discussed in the context of the transport model implemented in the PENELOPE code. The degree of violation of the Fano theorem for a simple, planar geometry, is used as a measure of the stability of the algorithm with respect to variations of the electron step length, thus assessing the 'quality' of its condensed history scheme. It is shown that, with a suitable choice of transport parameters, PENELOPE simulates IC response with an accuracy of the order of 0.1%.
Accurate simulation of ionization chamber response with the Monte Carlo code PENELOPE
International Nuclear Information System (INIS)
Sempau, Josep
2010-01-01
Ionization chambers (IC) are routinely used in hospitals for the dosimetry of the photon and electron beams used for radiotherapy treatments. The determination of absorbed dose to water from the absorbed dose to the air filling the cavity requires the introduction of stopping power ratios and perturbation factors, which account for the disturbance caused by the presence of the chamber. Although this may seem a problem readily amenable to Monte Carlo simulation, the fact is that the accurate determination of IC response has been, during the last 20 years, one of the most important challenges of the simulation of electromagnetic showers. The main difficulty stems from the use of condensed history techniques for electron and positron transport. This approach, which involves grouping a large number of interactions into a single artificial event, is known to produce the so-called interface effects when particles travel across surfaces separating different media. These effects are extremely important when the electron step length is not negligible compared to the size of the region being crossed, as is the case with the cavity of an IC. The artifact, which becomes apparent when the chamber response shows a marked dependence on the adopted step size, can be palliated with the use of sophisticated electron transport algorithms. These topics will be discussed in the context of the transport model implemented in the Penelope code. The degree of violation of the Fano theorem for a simple, planar geometry, will be used as a measure of the stability of the algorithm with respect to variations of the electron step length, thus assessing the 'quality' of its condensed history scheme. It will be shown that, with a suitable choice of transport parameters, Penelope can simulate IC response with an accuracy of the order of 0.1%. (author)
Energy Technology Data Exchange (ETDEWEB)
Garcia, Marie-Paule, E-mail: marie-paule.garcia@univ-brest.fr; Villoing, Daphnée [UMR 1037 INSERM/UPS, CRCT, 133 Route de Narbonne, 31062 Toulouse (France); McKay, Erin [St George Hospital, Gray Street, Kogarah, New South Wales 2217 (Australia); Ferrer, Ludovic [ICO René Gauducheau, Boulevard Jacques Monod, St Herblain 44805 (France); Cremonesi, Marta; Botta, Francesca; Ferrari, Mahila [European Institute of Oncology, Via Ripamonti 435, Milano 20141 (Italy); Bardiès, Manuel [UMR 1037 INSERM/UPS, CRCT, 133 Route de Narbonne, Toulouse 31062 (France)
2015-12-15
Purpose: The TestDose platform was developed to generate scintigraphic imaging protocols and associated dosimetry by Monte Carlo modeling. TestDose is part of a broader project (www.dositest.com) whose aim is to identify the biases induced by different clinical dosimetry protocols. Methods: The TestDose software handles the whole pipeline from virtual patient generation to the resulting planar and SPECT images and dosimetry calculations. The originality of the approach lies in the implementation of functional segmentation for the anthropomorphic model representing a virtual patient. Two anthropomorphic models are currently available: 4D XCAT and ICRP 110. A pharmacokinetic model describes the biodistribution of a given radiopharmaceutical in each defined compartment at various time-points. The Monte Carlo simulation toolkit GATE offers the possibility to accurately simulate scintigraphic images and absorbed doses in volumes of interest. The TestDose platform relies on GATE to reproduce precisely any imaging protocol and to provide reference dosimetry. For image generation, TestDose stores the user's imaging requirements and automatically generates the command files used as input for GATE. Each compartment is simulated only once and the resulting output is weighted using pharmacokinetic data. The resulting compartment projections are aggregated to obtain the final image. For dosimetry computation, emission data are stored in the platform database and the relevant GATE input files are generated for the virtual patient model and associated pharmacokinetics. Results: Two sample software runs are presented to demonstrate the potential of TestDose. A clinical imaging protocol for Octreoscan™ therapeutic treatment was implemented using the 4D XCAT model. Whole-body “step and shoot” acquisitions at different times post-injection and one SPECT acquisition were generated within reasonable computation times. Based on the same Octreoscan™ kinetics, a dosimetry
Khezripour, S.; Negarestani, A.; Rezaie, M. R.
2017-08-01
The Micromegas detector has recently been used for high-energy neutron (HEN) detection, but the aim of this research is to investigate its response to low-energy neutrons (LEN). For this purpose, a Micromegas detector (with air, P10, BF3, 3He and an Ar/BF3 mixture as the fill gas) was optimized for the detection of 60 keV neutrons using the MCNP (Monte Carlo N-Particle) code. The simulation results show that the optimum thickness of the cathode is 1 mm and the optimum microgrid location is 100 μm above the anode. The output current of this detector for the Ar (3%) + BF3 (97%) mixture is greater than for the other gases. This mixture is considered the appropriate gas for the Micromegas neutron detector, providing an output current for 60 keV neutrons at the level of 97.8 nA per neutron. Consequently, this detector can be introduced as a LEN detector.
HIJET: a Monte Carlo event generator for p-nucleus and nucleus-nucleus collisions
International Nuclear Information System (INIS)
Ludlam, T.; Pfoh, A.; Shor, A.
1985-01-01
Comparisons are shown between HIJET-generated data and measured data for average multiplicities, rapidity distributions, and leading-proton spectra in proton-nucleus and heavy-ion reactions. The generator's algorithm treats an incident particle striking a target of uniformly distributed nucleons. The dynamics of the interaction limit secondary interactions: only the leading baryon may re-interact within the nuclear volume. Energy and four-momentum are globally conserved in each event. 6 refs., 6 figs
International Nuclear Information System (INIS)
Lee, Jae Bong; Park, Jae Hak; Kim, Hong Deok; Chung, Han Sub; Kim, Tae Ryong
2005-01-01
The growth of AVB wear in Model F steam generator tubes is predicted using the Monte Carlo method and statistical approaches. The statistical parameters that represent the characteristics of wear growth and wear initiation are derived from In-Service Inspection (ISI) Non-Destructive Evaluation (NDE) data. Based on the statistical approaches, a wear growth model is proposed and applied to predict the wear distribution at the End Of Cycle (EOC). Probabilistic distributions of the number of wear flaws and of the maximum wear depth at EOC are obtained from the analysis. Comparing the predicted EOC wear flaw data with the known EOC data, the usefulness of the proposed method is examined and satisfactory results are obtained
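The Monte Carlo scheme described above can be sketched as follows. This is a minimal one-cycle illustration, not the paper's actual model: the flaw-initiation rate and the lognormal growth parameters below are hypothetical placeholders for the values the authors derive from ISI/NDE data.

```python
import math
import random

def poisson(rng, lam):
    """Poisson variate via Knuth's multiplication method."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while p > limit:
        k += 1
        p *= rng.random()
    return k - 1

def simulate_eoc(initial_depths, n_trials=2000, init_rate=1.5,
                 mu=-2.5, sigma=0.6, seed=7):
    """One-cycle Monte Carlo of AVB wear (depths in percent through-wall).

    Each trial: existing flaws grow by a lognormal increment, and a
    Poisson-distributed number of new flaws initiates. Returns sampled
    (flaw count, maximum depth) pairs at the End Of Cycle.
    All parameter values are hypothetical."""
    rng = random.Random(seed)
    eoc = []
    for _ in range(n_trials):
        depths = [d + 100.0 * rng.lognormvariate(mu, sigma) for d in initial_depths]
        depths += [100.0 * rng.lognormvariate(mu, sigma)
                   for _ in range(poisson(rng, init_rate))]
        eoc.append((len(depths), max(depths)))
    return eoc
```

Histogramming the returned pairs gives the probabilistic EOC distributions of flaw count and maximum wear depth that the abstract compares against inspection data.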
Energy Technology Data Exchange (ETDEWEB)
Lee, Jae Bong; Park, Jae Hak [Chungbuk National Univ., Cheongju (Korea, Republic of); Kim, Hong Deok; Chung, Han Sub; Kim, Tae Ryong [Korea Electtric Power Research Institute, Daejeon (Korea, Republic of)
2005-07-01
The growth of AVB wear in Model F steam generator tubes is predicted using the Monte Carlo method and statistical approaches. The statistical parameters that represent the characteristics of wear growth and wear initiation are derived from In-Service Inspection (ISI) Non-Destructive Evaluation (NDE) data. Based on the statistical approaches, a wear growth model is proposed and applied to predict the wear distribution at the End Of Cycle (EOC). Probabilistic distributions of the number of wear flaws and of the maximum wear depth at EOC are obtained from the analysis. Comparing the predicted EOC wear flaw data with the known EOC data, the usefulness of the proposed method is examined and satisfactory results are obtained.
Use and generation of floor response spectra
International Nuclear Information System (INIS)
Ordonez Villalobos, A.
1983-01-01
One of the main objectives of the dynamic analysis of the structures of a nuclear power plant is the determination of the dynamic input that these structures transmit to the equipment and substructures they support, usually given as Floor Response Spectra (FRS). A close collaboration and feedback between the different groups that use and develop the FRS is considered a very important factor in adapting the scope and content of the FRS to the precision required for proper analysis or testing of the equipment, not only for the action of simple events but also for multiple combined actions. These aspects should be evaluated well before the final stages of equipment qualification, since the equipment users' schedules often do not coincide with those of the analysis group that develops the FRS. Different mechanisms for the interchange of information and collaboration are suggested in order to optimize the availability, use and production of FRS. Regarding FRS generation, different procedures are reviewed, including direct procedures, not only for FRS but also for the secondary FRS needed for the evaluation of equipment supported on other equipment or subsystems. It is concluded that in many cases the direct procedures can be applied economically, with the advantage that it is easy to take into account the variability of the transfer function (including damping, stiffness and modal mass ratio). Different exceedance probability levels can be established in order to obtain a more realistic dynamic response of the equipment. These last aspects can contribute to a more flexible procedure for the availability and generation of the FRS. (orig./HP)
Studies of vector boson transverse momentum simulation in Monte Carlo event generators
The ATLAS collaboration
2011-01-01
We present studies of event generator behaviours regarding vector boson production characteristics, in particular the transverse momentum, pT, of the $Z$ boson as measured by ATLAS, for discussion at the LPCC working group meeting on precision electroweak physics at the LHC. The results discussed focus on the poor descriptions of ATLAS $W$ and $Z$ pT spectra by the ATLAS AUET2B LO** tune of PYTHIA6, and by the shower-matched NLO generator combination POWHEG+PYTHIA6. We show that both standalone PYTHIA6 and POWHEG can be made to describe the Sudakov peak of the ATLAS $Z$ pT distribution by tuning of the PYTHIA parton shower -- different approaches are required in each case. Comparisons of other NLO generators to the $Z$ pT data are also shown.
International Nuclear Information System (INIS)
Bendato, Ilaria; Cassettari, Lucia; Mosca, Marco; Mosca, Roberto
2016-01-01
Combining technological solutions with investment profitability is a critical aspect in designing both traditional and innovative renewable power plants. Often, the introduction of new advanced-design solutions, although technically interesting, does not generate adequate revenue to justify their utilization. In this study, an innovative methodology is developed that aims to satisfy both targets. On the one hand, considering all of the feasible plant configurations, it allows the analysis of the investment in a stochastic regime using the Monte Carlo method. On the other hand, the impact of every technical solution on the economic performance indicators can be measured by using regression meta-models built according to the theory of Response Surface Methodology. This approach enables the design of a plant configuration that generates the best economic return over the entire life cycle of the plant. This paper illustrates an application of the proposed methodology to the evaluation of design solutions using an innovative linear Fresnel Concentrated Solar Power system. - Highlights: • A stochastic methodology for solar plants investment evaluation. • Study of the impact of new technologies on the investment results. • Application to an innovative linear Fresnel CSP system. • A particular application of Monte Carlo simulation and response surface methodology.
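The stochastic investment step described above can be illustrated with a minimal Monte Carlo net-present-value sketch. This is not the paper's model: the annual-yield and price distributions, the discount rate and the capital cost below are hypothetical placeholders for the inputs that would come from a given plant configuration.

```python
import random
import statistics

def npv_samples(capex_m, n_years=25, rate=0.07, n_trials=10000, seed=1):
    """Monte Carlo NPV (in millions of euro) of one plant configuration.

    Annual energy yield and electricity price are drawn from hypothetical
    normal distributions; each trial's cash flow is discounted over the
    plant life cycle."""
    rng = random.Random(seed)
    annuity = sum(1.0 / (1.0 + rate) ** t for t in range(1, n_years + 1))
    out = []
    for _ in range(n_trials):
        yield_mwh = rng.gauss(20000.0, 2000.0)  # hypothetical MWh/year
        price = rng.gauss(80.0, 10.0)           # hypothetical euro/MWh
        out.append(-capex_m + yield_mwh * price * annuity / 1e6)
    return out

npvs = npv_samples(capex_m=15.0)
mean_npv, std_npv = statistics.mean(npvs), statistics.pstdev(npvs)
```

Repeating this over all feasible configurations yields the economic performance indicators that the Response Surface meta-models then regress against the design variables.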
Energy Technology Data Exchange (ETDEWEB)
Park, Ho Jin; Cho, Jin Young [KAERI, Daejeon (Korea, Republic of); Kim, Kang Seog [Oak Ridge National Laboratory, Oak Ridge (United States); Hong, Ser Gi [Kyung Hee University, Yongin (Korea, Republic of)
2016-05-15
In this study, multi-group cross section libraries for the DeCART code were generated using a new procedure. The new procedure includes generating the RI tables based on MC calculations, correcting the effective fission product yield calculations, and treating most of the fission products as resonant nuclides. KAERI (Korea Atomic Energy Research Institute) has developed the transport lattice codes KARMA (Kernel Analyzer by Ray-tracing Method for fuel Assembly) and DeCART (Deterministic Core Analysis based on Ray Tracing) for multi-group neutron transport analysis of light water reactors (LWRs). These codes adopt the method of characteristics (MOC) to solve the multi-group transport equation and resonance fixed-source problem, and the subgroup and direct iteration methods with resonance integral tables for resonance treatment. With the development of the DeCART and KARMA codes, KAERI has established its own library generation system for multi-group transport calculations. In the KAERI library generation system, the multi-group average cross sections and resonance integral (RI) tables are generated and edited using PENDF (point-wise ENDF) and GENDF (group-wise ENDF) files produced by the NJOY code. The new method needs no additional processing because the MC method can handle any geometry and material composition. In this study, the new method is applied to the dominant resonance nuclides such as U{sup 235} and U{sup 238}, and the conventional method is applied to the minor resonance nuclides. To examine the newly generated multi-group cross section libraries, various benchmark calculations such as pin-cell, FA, and core depletion problems are performed and the results are compared with reference solutions. Overall, the results of the new method agree well with the reference solutions. The new procedure based on the MC method was thus verified, and it provides a multi-group library that can be used in SMR nuclear design analysis.
Multi-criteria ranking of energy generation scenarios with Monte Carlo simulation
International Nuclear Information System (INIS)
Baležentis, Tomas; Streimikiene, Dalia
2017-01-01
Highlights: • Two advanced optimization models were applied for EU energy policy scenario development. • Several advanced MCDA methods were applied for energy policy scenario ranking: WASPAS, ARAS, TOPSIS. • A Monte Carlo simulation was applied for sensitivity analysis of the scenario ranking. • New policy insights in terms of energy scenario forecasting were provided based on the research conducted. - Abstract: Integrated Assessment Models (IAMs) are omnipresent in energy policy analysis. Even though IAMs can successfully handle uncertainty pertinent to energy planning problems, they render multiple variables as outputs of the modelling. Therefore, policy makers are faced with multiple energy development scenarios and goals. Specifically, technical, environmental, and economic aspects are represented by multiple criteria, which, in turn, are related to conflicting objectives. Preferences of decision makers need to be taken into account in order to facilitate effective energy planning. Multi-criteria decision making (MCDM) tools are relevant for aggregating diverse information and thus comparing alternative energy planning options. The paper aims at ranking European Union (EU) energy development scenarios based on several IAMs with respect to multiple criteria. By doing so, we account for uncertainty surrounding policy priorities outside the IAM. In order to follow a sustainable approach, the ranking of policy options is based on EU energy policy priorities: energy efficiency improvements, increased use of renewables, and reduced GHG emissions at low mitigation costs. The ranking of scenarios is based on the estimates rendered by two advanced IAMs relying on different approaches, namely TIAM and WITCH. The data are fed into three MCDM techniques: the method of weighted aggregated sum/product assessment (WASPAS), the Additive Ratio Assessment (ARAS) method, and the technique for order preference by similarity to ideal solution (TOPSIS). As MCDM techniques allow
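Of the three MCDM techniques named above, TOPSIS is the most compact to sketch. The implementation below follows the standard method (vector normalization, weighted ideal/anti-ideal distances); the scenario scores and weights used in the test are invented placeholders, not the TIAM/WITCH outputs of the paper.

```python
import math

def topsis(matrix, weights, benefit):
    """Rank alternatives by closeness to the ideal solution (TOPSIS).

    matrix[i][j] scores alternative i on criterion j; benefit[j] is True
    when higher is better (e.g. renewables share) and False for cost-type
    criteria (e.g. mitigation cost)."""
    m, n = len(matrix), len(matrix[0])
    # Vector-normalize each criterion column, then apply the weights.
    norm = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(m))) for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norm[j] for j in range(n)] for i in range(m)]
    # Ideal and anti-ideal points, per criterion direction.
    ideal = [max(row[j] for row in v) if benefit[j] else min(row[j] for row in v)
             for j in range(n)]
    worst = [min(row[j] for row in v) if benefit[j] else max(row[j] for row in v)
             for j in range(n)]
    scores = []
    for row in v:
        d_pos = math.dist(row, ideal)
        d_neg = math.dist(row, worst)
        scores.append(d_neg / (d_pos + d_neg))  # closeness coefficient in [0, 1]
    return scores
```

A Monte Carlo sensitivity analysis of the ranking, as in the abstract, would re-run `topsis` many times with the weight vector perturbed at random.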
International Nuclear Information System (INIS)
Zhang Feng; Hou Shuang; Jin Xiuyun
2010-01-01
The process of neutron interactions induced by a D-T pulsed neutron generator and a {sup 241}Am-Be source was simulated using the Monte Carlo method. It is concluded that the thermal neutron count decreases exponentially as the spacing increases. The smaller the porosity, the smaller the differences between the two sources. When the porosity reached 40%, the thermal neutron count generated by the D-T pulsed neutron source was much larger than that generated by the {sup 241}Am-Be neutron source, and its distribution range was wider. The near spacing selected was 20-30 cm, and the far spacing about 60-70 cm. The detection depth with the D-T pulsed neutron source was almost unchanged at the same spacing, while the sensitivity of the measurement to the formation porosity decreases. The results showed that increasing the spacing can not only guarantee the counting statistics, but also improve the detection sensitivity and depth. Therefore, the {sup 241}Am-Be neutron source can be replaced by a D-T neutron tube in a LWD tool. (authors)
Analytical, experimental, and Monte Carlo system response matrix for pinhole SPECT reconstruction
International Nuclear Information System (INIS)
Aguiar, Pablo; Pino, Francisco; Silva-Rodríguez, Jesús; Pavía, Javier; Ros, Doménec; Ruibal, Álvaro
2014-01-01
Purpose: To assess the performance of two approaches to the system response matrix (SRM) calculation in pinhole single photon emission computed tomography (SPECT) reconstruction. Methods: Evaluation was performed using experimental data from a low magnification pinhole SPECT system that consisted of a rotating flat detector with a monolithic scintillator crystal. The SRM was computed following two approaches, which were based on Monte Carlo simulations (MC-SRM) and analytical techniques in combination with an experimental characterization (AE-SRM). The spatial response of the system, obtained by using the two approaches, was compared with experimental data. The effect of the MC-SRM and AE-SRM approaches on the reconstructed image was assessed in terms of image contrast, signal-to-noise ratio, image quality, and spatial resolution. To this end, acquisitions were carried out using a hot cylinder phantom (consisting of five fillable rods with diameters of 5, 4, 3, 2, and 1 mm and a uniform cylindrical chamber) and a custom-made Derenzo phantom, with center-to-center distances between adjacent rods of 1.5, 2.0, and 3.0 mm. Results: Good agreement was found for the spatial response of the system between measured data and results derived from MC-SRM and AE-SRM. Only minor differences for point sources at distances smaller than the radius of rotation and large incidence angles were found. Assessment of the effect on the reconstructed image showed a similar contrast for both approaches, with values higher than 0.9 for rod diameters greater than 1 mm and higher than 0.8 for rod diameter of 1 mm. The comparison in terms of image quality showed that all rods in the different sections of a custom-made Derenzo phantom could be distinguished. The spatial resolution (FWHM) was 0.7 mm at iteration 100 using both approaches. The SNR was lower for reconstructed images using MC-SRM than for those reconstructed using AE-SRM, indicating that AE-SRM deals better with the
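However the system response matrix is obtained (MC-SRM or AE-SRM), it enters iterative reconstruction in the same way. The sketch below is a minimal MLEM loop in plain Python, using a tiny invented matrix for illustration; a real pinhole SPECT SRM has one row per detector bin and one column per image voxel.

```python
def mlem(A, y, n_iter=100):
    """MLEM: x <- x * A^T(y / A x) / A^T(1).

    A is the system response matrix (rows: detector bins, columns: voxels),
    y the measured projections; returns the reconstructed voxel values."""
    n_det, n_vox = len(A), len(A[0])
    x = [1.0] * n_vox
    # Sensitivity image A^T * 1: sum of each column of the SRM.
    sens = [sum(A[i][j] for i in range(n_det)) for j in range(n_vox)]
    for _ in range(n_iter):
        # Forward-project the current estimate.
        proj = [sum(A[i][j] * x[j] for j in range(n_vox)) for i in range(n_det)]
        ratio = [y[i] / proj[i] if proj[i] > 0.0 else 0.0 for i in range(n_det)]
        # Back-project the measurement/model ratio and update multiplicatively.
        back = [sum(A[i][j] * ratio[i] for i in range(n_det)) for j in range(n_vox)]
        x = [x[j] * back[j] / sens[j] if sens[j] > 0.0 else 0.0
             for j in range(n_vox)]
    return x
```

The abstract's contrast and resolution figures quoted "at iteration 100" correspond to stopping such a loop after a fixed number of updates rather than running it to convergence.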
International Nuclear Information System (INIS)
Turner, Adam C.; Zhang Di; Kim, Hyun J.; DeMarco, John J.; Cagnon, Chris H.; Angel, Erin; Cody, Dianna D.; Stevens, Donna M.; Primak, Andrew N.; McCollough, Cynthia H.; McNitt-Gray, Michael F.
2009-01-01
The purpose of this study was to present a method for generating x-ray source models for performing Monte Carlo (MC) radiation dosimetry simulations of multidetector row CT (MDCT) scanners. These so-called ''equivalent'' source models consist of an energy spectrum and filtration description that are generated wholly from measured values and can be used in place of proprietary manufacturer's data for scanner-specific MDCT MC simulations. Required measurements include the half value layers (HVL{sub 1} and HVL{sub 2}) and the bowtie profile (exposure values across the fan beam) for the MDCT scanner of interest. Using these measured values, a method was described (a) to numerically construct a spectrum with calculated HVLs approximately equal to those measured (equivalent spectrum) and then (b) to determine a filtration scheme (equivalent filter) that attenuates the equivalent spectrum in a similar fashion as the actual filtration attenuates the actual x-ray beam, as measured by the bowtie profile measurements. Using this method, two types of equivalent source models were generated: one using a spectrum based on both HVL{sub 1} and HVL{sub 2} measurements and its corresponding filtration scheme, and the second consisting of a spectrum based only on the measured HVL{sub 1} and its corresponding filtration scheme. Finally, a third type of source model was built based on the spectrum and filtration data provided by the scanner's manufacturer. MC simulations using each of these three source model types were evaluated by comparing the accuracy of multiple CT dose index (CTDI) simulations to measured CTDI values for 64-slice scanners from the four major MDCT manufacturers. Comprehensive evaluations were carried out for each scanner using each kVp and bowtie filter combination available. CTDI experiments were performed for both head (16 cm in diameter) and body (32 cm in diameter) CTDI phantoms using both central and peripheral measurement positions. Both equivalent source model types
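The relation between a spectrum and its half value layers can be made concrete with a toy calculation. The two-component "spectrum" and attenuation coefficients below are hypothetical stand-ins; the actual method iteratively adjusts a full spectrum until its computed HVLs match the measured ones. The sketch also shows why HVL{sub 2} exceeds HVL{sub 1} for a polyenergetic beam (beam hardening).

```python
import math

# Toy two-component spectrum: (relative fluence, attenuation coefficient in 1/mm).
# Both values are invented for illustration.
spectrum = [(0.6, 0.15), (0.4, 0.05)]

def transmission(t_mm):
    """Fraction of exposure transmitted through t_mm of added filtration."""
    return sum(w * math.exp(-mu * t_mm) for w, mu in spectrum)

def thickness_for(fraction, lo=0.0, hi=300.0):
    """Bisect for the filter thickness at which transmission equals `fraction`
    (transmission is strictly decreasing in thickness)."""
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if transmission(mid) > fraction:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

hvl1 = thickness_for(0.5)           # first half value layer
hvl2 = thickness_for(0.25) - hvl1   # second HVL: hardened beam attenuates more slowly
```

An equivalent-spectrum search inverts this: it varies the spectral weights until the computed `hvl1` and `hvl2` reproduce the measured pair.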
The ATLAS collaboration
2014-01-01
Modeling of the fragmentation and decay of heavy-flavor hadrons is compared for four Monte Carlo generators: Pythia8, Pythia6, Herwig++ and Herwig. Heavy-flavor hadron production fractions and fragmentation functions are studied using top-quark pair and high transverse momentum jet samples generated for pp collisions at $\sqrt{s} = 8$ TeV. The performance of the generators for heavy-flavor fragmentation is also validated using e+e− annihilation events generated at $\sqrt{s} = 91.2$ GeV (for $b$-quarks) and $\sqrt{s} = 10.53$ GeV (for $c$-quarks). In addition, bottom and charm hadron decays for the four generators are compared both to results with the EvtGen Monte Carlo model and to experimental measurements.
Les Houches guidebook to Monte Carlo generators for hadron collider physics
International Nuclear Information System (INIS)
Dobbs, Matt A.; Frixione, Stefano; Laenen, Eric; Tollefson, Kirsten
2004-01-01
Recently the collider physics community has seen significant advances in the formalisms and implementations of event generators. This review is a primer of the methods commonly used for the simulation of high energy physics events at particle colliders. We provide brief descriptions, references, and links to the specific computer codes which implement the methods. The aim is to provide an overview of the available tools, allowing the reader to ascertain which tool is best for a particular application, but also making clear the limitations of each tool
Les Houches guidebook to Monte Carlo generators for hadron collider physics
Dobbs, M.A.; Laenen, Eric; Tollefson, K.; Baer, H.; Boos, E.; Cox, B.; Engel, R.; Giele, W.; Huston, J.; Ilyin, S.; Kersevan, B.; Krauss, F.; Kurihara, Y.; Lonnblad, L.; Maltoni, F.; Mangano, M.; Odaka, S.; Richardson, P.; Ryd, A.; Sjostrand, T.; Skands, Peter Z.; Was, Z.; Webber, B.R.; Zeppenfeld, D.
2005-01-01
Recently the collider physics community has seen significant advances in the formalisms and implementations of event generators. This review is a primer of the methods commonly used for the simulation of high energy physics events at particle colliders. We provide brief descriptions, references, and links to the specific computer codes which implement the methods. The aim is to provide an overview of the available tools, allowing the reader to ascertain which tool is best for a particular application, but also making clear the limitations of each tool.
M073: Monte Carlo generated spectra for QA/QC of automated NAA routine
International Nuclear Information System (INIS)
Jackman, K.R.; Biegalski, S.R.
2004-01-01
A quality check for an automated system of analyzing large sets of neutron activated samples has been developed. Activated samples are counted with an HPGe detector, in conjunction with an automated sample changer and spectral analysis tools, controlled by the Canberra GENIE 2K and REXX software. After each sample is acquired and analyzed, a Microsoft Visual Basic program imports the results into a template Microsoft Excel file where the final concentrations, uncertainties, and detection limits are determined. Standard reference materials are included in each set of 40 samples as a standard quality assurance/quality control (QA/QC) test. A select group of sample spectra are also visually reviewed to check the peak fitting routines. A reference spectrum was generated in MCNP 4c2 using an F8 (pulse-height) tally with a detector model of the actual detector used in counting. The detector model matches the detector resolution, energy calibration, and counting geometry. The generated spectrum also contained a radioisotope matrix that was similar to what was expected in the samples. This spectrum can then be put through the automated system and analyzed along with the other samples. The automated results are then compared to expected results for QA/QC assurance.
Monte Carlo generated spectra for QA/QC of automated NAA routine
International Nuclear Information System (INIS)
Jackman, K.R.; Biegalski, S.R.
2007-01-01
A quality check for an automated system of analyzing large sets of neutron activated samples has been developed. Activated samples are counted with an HPGe detector, in conjunction with an automated sample changer and spectral analysis tools, controlled by the Canberra GENIE 2K and REXX software. After each sample is acquired and analyzed, a Microsoft Visual Basic program imports the results into a template Microsoft Excel file where the final concentrations, uncertainties, and detection limits are determined. Standard reference materials are included in each set of 40 samples as a standard quality assurance/quality control (QA/QC) test. A select group of sample spectra are also visually reviewed to check the peak fitting routines. A reference spectrum was generated in MCNP 4c2 using an F8 (pulse-height) tally with a detector model of the actual detector used in counting. The detector model matches the detector resolution, energy calibration, and counting geometry. The generated spectrum also contained a radioisotope matrix that was similar to what was expected in the samples. This spectrum can then be put through the automated system and analyzed along with the other samples. The automated results are then compared to expected results for QA/QC assurance. (author)
Multi-Group Covariance Data Generation from Continuous-Energy Monte Carlo Transport Calculations
International Nuclear Information System (INIS)
Lee, Dong Hyuk; Shim, Hyung Jin
2015-01-01
The sensitivity and uncertainty (S/U) methodology in deterministic tools has been utilized for quantifying the uncertainties of nuclear design parameters induced by those of nuclear data. S/U analyses based on multi-group cross sections can be conducted with a simple error propagation formula, using the sensitivities of the nuclear design parameters to the multi-group cross sections and the covariances of the multi-group cross sections. The multi-group covariance data required for S/U analysis have been produced by nuclear data processing codes such as ERRORJ or PUFF from the covariance data in evaluated nuclear data files. However, in the existing nuclear data processing codes, an asymptotic neutron flux energy spectrum, not the exact one, has been applied to the multi-group covariance generation, since the flux spectrum is unknown before the neutron transport calculation. This can cause an inconsistency between the sensitivity profiles and the multi-group covariance data, especially in the resolved resonance energy region, because the sensitivities usually used are resonance self-shielded while the multi-group cross sections produced from an asymptotic flux spectrum are infinitely diluted. In order to estimate the multi-group covariance during the ongoing MC simulation, mathematical derivations for converting the double integration equation into a single one by a sampling method are introduced, along with the procedure of the multi-group covariance tally
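The error propagation formula mentioned above is the standard "sandwich rule", var(R) = SᵀCS. The sketch below evaluates it for an invented two-group example; the sensitivity vector and covariance matrix are hypothetical numbers, not output of any evaluated nuclear data file.

```python
import math

def design_parameter_variance(S, C):
    """Sandwich rule var(R) = S^T C S for a design parameter R.

    S: relative sensitivities of R to the multi-group cross sections.
    C: relative covariance matrix of those cross sections."""
    n = len(S)
    return sum(S[i] * C[i][j] * S[j] for i in range(n) for j in range(n))

# Hypothetical two-group example: 2% and 1% relative standard deviations,
# no cross-group correlation.
S = [1.0, 0.5]
C = [[0.0004, 0.0],
     [0.0, 0.0001]]
rel_uncertainty = math.sqrt(design_parameter_variance(S, C))
```

The inconsistency the abstract describes enters through C: if C is built with an asymptotic (infinitely dilute) flux while S is resonance self-shielded, the two factors of the sandwich no longer refer to the same multi-group cross sections.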
Directory of Open Access Journals (Sweden)
TEMITOPE RAPHAEL AYODELE
2016-04-01
Monte Carlo simulation using the Simple Random Sampling (SRS) technique is popularly known for its ability to handle complex uncertainty problems. However, to produce a reasonable result it requires a huge sample size, which makes it computationally expensive, time consuming and unfit for online power system applications. In this article, the performance of the Latin Hypercube Sampling (LHS) technique is explored and compared with SRS in terms of accuracy, robustness and speed for small-signal stability application in a wind generator-connected power system. The analysis is performed using probabilistic techniques via eigenvalue analysis on two standard networks (Single Machine Infinite Bus and the IEEE 16-machine 68-bus test system). The accuracy of the two sampling techniques is determined by comparing their different sample sizes with the IDEAL (conventional) result. The robustness is determined based on a significant variance reduction when the experiment is repeated 100 times with different sample sizes using the two sampling techniques in turn. Some of the results show that sample sizes generated from LHS for small-signal stability application produce the same result as the IDEAL values starting from a sample size of 100. This shows that about 100 samples of a random variable generated using the LHS method are good enough to produce reasonable results for practical purposes in small-signal stability application. It is also revealed that LHS has the least variance when the experiment is repeated 100 times compared to the SRS technique, which signifies the robustness of LHS over SRS. A sample size of 100 with LHS produces the same result as the conventional method with a sample size of 50000. The reduced sample size required by LHS gives it a computational speed advantage (about six times) over the conventional method.
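The LHS-versus-SRS comparison above can be reproduced in miniature for a single uniform input. The sketch stratifies the unit interval into n equal-probability bins with exactly one draw per bin, then repeats the experiment 100 times (as in the article) to show the variance reduction of the sample mean; the toy estimand is the mean itself rather than a power-system eigenvalue.

```python
import random
import statistics

def lhs_uniform(n, rng):
    """Latin Hypercube Sample of one U(0,1) input: one draw from each of
    n equal-probability strata, returned in shuffled order."""
    strata = list(range(n))
    rng.shuffle(strata)
    return [(s + rng.random()) / n for s in strata]

def srs_uniform(n, rng):
    """Simple Random Sample: n independent U(0,1) draws."""
    return [rng.random() for _ in range(n)]

# Repeat the experiment 100 times with n = 100 and compare the spread of
# the estimated means: LHS stratification collapses it relative to SRS.
rng = random.Random(0)
lhs_means = [statistics.mean(lhs_uniform(100, rng)) for _ in range(100)]
srs_means = [statistics.mean(srs_uniform(100, rng)) for _ in range(100)]
```

For a uniform input the variance of the LHS sample mean scales as 1/(12n³) against 1/(12n) for SRS, which is the mechanism behind the article's finding that 100 LHS samples match a 50000-sample conventional run.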
Analytical, experimental, and Monte Carlo system response matrix for pinhole SPECT reconstruction
Energy Technology Data Exchange (ETDEWEB)
Aguiar, Pablo, E-mail: pablo.aguiar.fernandez@sergas.es [Fundación Ramón Domínguez, Medicina Nuclear, CHUS, Spain and Grupo de Imaxe Molecular, IDIS, Santiago de Compostela 15706 (Spain); Pino, Francisco [Unitat de Biofísica, Facultat de Medicina, Universitat de Barcelona, Spain and Servei de Física Médica i Protecció Radiológica, Institut Catalá d' Oncologia, Barcelona 08036 (Spain); Silva-Rodríguez, Jesús [Fundación Ramón Domínguez, Medicina Nuclear, CHUS, Santiago de Compostela 15706 (Spain); Pavía, Javier [Servei de Medicina Nuclear, Hospital Clínic, Barcelona (Spain); Institut d' Investigacions Biomèdiques August Pí i Sunyer (IDIBAPS) (Spain); CIBER en Bioingeniería, Biomateriales y Nanomedicina (CIBER-BBN), Barcelona 08036 (Spain); Ros, Doménec [Unitat de Biofísica, Facultat de Medicina, Casanova 143 (Spain); Institut d' Investigacions Biomèdiques August Pí i Sunyer (IDIBAPS) (Spain); CIBER en Bioingeniería, Biomateriales y Nanomedicina (CIBER-BBN), Barcelona 08036 (Spain); Ruibal, Álvaro [Servicio Medicina Nuclear, CHUS (Spain); Grupo de Imaxe Molecular, Facultade de Medicina (USC), IDIS, Santiago de Compostela 15706 (Spain); Fundación Tejerina, Madrid (Spain); and others
2014-03-15
Purpose: To assess the performance of two approaches to the system response matrix (SRM) calculation in pinhole single photon emission computed tomography (SPECT) reconstruction. Methods: Evaluation was performed using experimental data from a low magnification pinhole SPECT system that consisted of a rotating flat detector with a monolithic scintillator crystal. The SRM was computed following two approaches, which were based on Monte Carlo simulations (MC-SRM) and analytical techniques in combination with an experimental characterization (AE-SRM). The spatial response of the system, obtained by using the two approaches, was compared with experimental data. The effect of the MC-SRM and AE-SRM approaches on the reconstructed image was assessed in terms of image contrast, signal-to-noise ratio, image quality, and spatial resolution. To this end, acquisitions were carried out using a hot cylinder phantom (consisting of five fillable rods with diameters of 5, 4, 3, 2, and 1 mm and a uniform cylindrical chamber) and a custom-made Derenzo phantom, with center-to-center distances between adjacent rods of 1.5, 2.0, and 3.0 mm. Results: Good agreement was found for the spatial response of the system between measured data and results derived from MC-SRM and AE-SRM. Only minor differences for point sources at distances smaller than the radius of rotation and large incidence angles were found. Assessment of the effect on the reconstructed image showed a similar contrast for both approaches, with values higher than 0.9 for rod diameters greater than 1 mm and higher than 0.8 for rod diameter of 1 mm. The comparison in terms of image quality showed that all rods in the different sections of a custom-made Derenzo phantom could be distinguished. The spatial resolution (FWHM) was 0.7 mm at iteration 100 using both approaches. The SNR was lower for reconstructed images using MC-SRM than for those reconstructed using AE-SRM, indicating that AE-SRM deals better with the
De Beer, R.; Van Ormondt, D.
2014-01-01
Work in the context of the European Union TRANSACT project. We have developed a Java/JNI/C/Fortran based software application, called MonteCarlo, with which users can carry out Monte Carlo studies in the field of in vivo MRS. The application is supposed to be used as a tool for supporting the
Monte Carlo event generators in atomic collisions: A new tool to tackle the few-body dynamics
Ciappina, M. F.; Kirchner, T.; Schulz, M.
2010-04-01
We present a set of routines to produce theoretical event files, for both single and double ionization of atoms by ion impact, based on a Monte Carlo event generator (MCEG) scheme. Such event files are the theoretical counterpart of the data obtained from a kinematically complete experiment; i.e. they contain the momentum components of all collision fragments for a large number of ionization events. Among the advantages of working with theoretical event files is the possibility to incorporate the conditions present in a real experiment, such as the uncertainties in the measured quantities. Additionally, by manipulating them it is possible to generate any type of cross section, especially those that are usually too complicated to compute with conventional methods due to a lack of symmetry. Consequently, the numerical effort of such calculations is dramatically reduced. We show examples for both single and double ionization, with special emphasis on a recently developed data analysis tool called four-body Dalitz plots.
Program summary: Program title: MCEG. Catalogue identifier: AEFV_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEFV_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 2695. No. of bytes in distributed program, including test data, etc.: 18 501. Distribution format: tar.gz. Programming language: FORTRAN 77 with parallelization directives using scripting. Computer: single machines using Linux and Linux servers/clusters (with cores of any clock speed, cache memory and word size). Operating system: Linux (any version and flavor) with FORTRAN 77 compilers. Has the code been vectorised or parallelized?: yes. RAM: 64-128 kBytes (the codes are very CPU intensive). Classification: 2.6. Nature of problem: The code deals with single and double
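Independently of the published FORTRAN 77 package, the core idea of a Monte Carlo event generator can be sketched as rejection sampling from a theoretical differential cross section, writing one fragment's momentum components per event line. The cross-section shape below is illustrative, not a real ionization cross section:

```python
import math, random, io

random.seed(1)

# Toy differential cross section in the emission angle theta
# (illustrative shape only, not a real ionization cross section).
def dsigma(theta):
    return math.sin(theta) * (1.0 + math.cos(theta) ** 2)

def generate_events(n, fmax=2.0):
    """Rejection-sample n values of theta from dsigma and build momentum triples."""
    events = []
    while len(events) < n:
        theta = random.uniform(0.0, math.pi)
        if random.uniform(0.0, fmax) < dsigma(theta):   # accept/reject step
            phi = random.uniform(0.0, 2.0 * math.pi)
            p = 1.0  # fixed momentum magnitude for the sketch
            events.append((p * math.sin(theta) * math.cos(phi),
                           p * math.sin(theta) * math.sin(phi),
                           p * math.cos(theta)))
    return events

# Write an 'event file': one fragment's momentum components per line.
buf = io.StringIO()
for px, py, pz in generate_events(1000):
    buf.write(f"{px:+.6f} {py:+.6f} {pz:+.6f}\n")
print(buf.getvalue().splitlines()[0])
```

Any cross section can then be histogrammed from the event list by cutting and binning, which is the manipulation-of-event-files advantage the abstract describes.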
Development of an integrated response generator for Si/CdTe semiconductor Compton cameras
International Nuclear Information System (INIS)
Odaka, Hirokazu; Sugimoto, Soichiro; Ishikawa, Shin-nosuke; Katsuta, Junichiro; Koseki, Yuu; Fukuyama, Taro; Saito, Shinya; Sato, Rie; Sato, Goro; Watanabe, Shin
2010-01-01
We have developed an integrated response generator based on Monte Carlo simulation for Compton cameras composed of silicon (Si) and cadmium telluride (CdTe) semiconductor detectors. In order to construct an accurate detector response function, the simulation is required to include a comprehensive treatment of the semiconductor detector devices and the data processing system in addition to simulating particle tracking. Although CdTe is an excellent semiconductor material for detection of soft gamma rays, its ineffective charge transport property distorts its spectral response. We investigated the response of CdTe pad detectors in the simulation and present our initial results here. We also performed the full simulation of prototypes of Si/CdTe semiconductor Compton cameras and report on the reproducibility of detection efficiencies and angular resolutions of the cameras, both of which are essential performance parameters of astrophysical instruments.
International Nuclear Information System (INIS)
Yang, Ying-Hsien; Lin, Sue-Jane; Lewis, Charles
2009-01-01
Life Cycle Assessment (LCA) is a rather common tool for reducing environmental impacts while striving for cleaner processes. This method yields reliable information when the input data are sufficient; in uncertain systems, however, Monte Carlo (MC) simulation is used as a means to compensate for insufficient data. The MC optimization model was constructed from environmental emissions, process parameters and operation constraints. The results of the MC optimization allow for the prediction of environmental performance and the opportunity for environmental improvement. The case study presented here focuses on acidification improvement with respect to uncertain emissions and the available operation of Taiwan's power plants. The boundary definitions of the LCA were established for generation, fuel refining and mining. The model was constructed with an objective function minimizing acidification potential under base-loading, fuel-cost and generation-mix constraints. Scenario simulations were performed for different variations of the fuel cost ratio for Taiwan. The simulation results indicate that fuel cost was the most important parameter influencing the acidification potential for the seven types of fired power plants. Owing to its low operational loading, coal-fired power is the best alternative for improving acidification. The optimal scenario for acidification improvement occurred at 15% of the fuel cost. The impact decreased from 1.39 to 1.24 kg SO2-eq./MWh, a reduction of about 10.5% relative to the reference year. Regarding eco-efficiency, at the optimum scenario level of 5% the eco-efficiency value was -12.4 US$/kg SO2-eq. Considering the environmental and economic impacts, the results indicate that the share of coal-fired steam turbines should be reduced. (author)
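The MC propagation step of such an analysis can be sketched as follows; the emission factors, uncertainty level and generation mix are illustrative placeholders, not the study's Taiwanese data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative SO2-eq. emission factors (kg/MWh) with lognormal uncertainty,
# and an assumed generation mix; none of these are the paper's values.
factors = {"coal": 1.8, "oil": 1.1, "gas": 0.3}
mix     = {"coal": 0.5, "oil": 0.2, "gas": 0.3}   # shares sum to 1

def sample_acidification(n=10_000, gsd=1.2):
    """MC distribution of acidification potential for the whole mix."""
    total = np.zeros(n)
    for fuel, mean in factors.items():
        # lognormal with the given geometric standard deviation
        draws = rng.lognormal(np.log(mean), np.log(gsd), n)
        total += mix[fuel] * draws
    return total

a = sample_acidification()
print(f"acidification: {a.mean():.2f} +- {a.std():.2f} kg SO2-eq./MWh")
```

Wrapping such a sampler in an optimizer over the mix shares, subject to loading and cost constraints, gives the MC optimization structure the abstract outlines.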
Monte Carlo simulation of moderator and reflector in coal analyzer based on a D-T neutron generator.
Shan, Qing; Chu, Shengnan; Jia, Wenbao
2015-11-01
Coal is one of the most popular fuels in the world. The use of coal not only produces carbon dioxide, but also contributes to environmental pollution by heavy metals. In a prompt gamma-ray neutron activation analysis (PGNAA)-based coal analyzer, the characteristic gamma rays of C and O are mainly induced by fast neutrons, whereas thermal neutrons can be used to induce the characteristic gamma rays of H, Si, and heavy metals. Therefore, appropriate thermal and fast neutron fluxes are beneficial in improving the measurement accuracy of heavy metals while ensuring that the measurement accuracy of the main elements meets the requirements of the industry. Once the required yield of the deuterium-tritium (d-T) neutron generator is determined, appropriate thermal and fast neutron fluxes can be obtained by optimizing the neutron source term. In this article, the Monte Carlo N-Particle (MCNP) transport code and the Evaluated Nuclear Data File (ENDF) database are used to optimize the neutron source term in the PGNAA-based coal analyzer, including the material and shape of the moderator and neutron reflector. The optimization has two targets: (1) the ratio of thermal to fast neutrons is 1:1 and (2) the total neutron flux from the optimized neutron source in the sample increases by at least 100% compared with the initial one. The simulation results show that the total neutron flux in the sample increases by 102%, 102%, 85%, 72%, and 62% with Pb, Bi, Nb, W, and Be reflectors, respectively. Maximum optimization of the targets is achieved when the moderator is a 3-cm-thick lead layer coupled with a 3-cm-thick high-density polyethylene (HDPE) layer, and the neutron reflector is a 27-cm-thick hemispherical lead layer.
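The moderation physics being optimized here can be illustrated with a toy random walk: for elastic scattering on hydrogen the outgoing energy is uniform on (0, E], so the mean number of collisions needed to thermalize a 14 MeV d-T neutron is about ln(E0/Eth) + 1. A sketch with no geometry and no absorption, unlike the MCNP model:

```python
import math, random

random.seed(7)

# Elastic scattering on hydrogen: outgoing energy is uniform on (0, E],
# i.e. each collision multiplies E by U ~ Uniform(0, 1). Count collisions
# needed to moderate a 14 MeV d-T neutron below thermal energy (0.025 eV).
E0, E_TH = 14.0e6, 0.025   # eV

def collisions_to_thermal():
    e, n = E0, 0
    while e > E_TH:
        e *= random.random()
        n += 1
    return n

trials = 20_000
mean_n = sum(collisions_to_thermal() for _ in range(trials)) / trials
# Theory for hydrogen: mean number of collisions ~ ln(E0/E_TH) + 1.
print(f"mean collisions: {mean_n:.1f} (ln(E0/Eth) = {math.log(E0 / E_TH):.1f})")
```

The full MCNP optimization adds what this sketch omits: cross sections from ENDF, absorption, leakage, and the actual moderator/reflector geometry.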
Murthy, K. P. N.
2001-01-01
An introduction to the basics of Monte Carlo is given. The topics covered include sample space, events, probabilities, random variables, mean, variance, covariance, characteristic function, Chebyshev inequality, law of large numbers, central limit theorem (stable distributions, Levy distribution), random numbers (generation and testing), random sampling techniques (inversion, rejection, sampling from a Gaussian, Metropolis sampling), analogue Monte Carlo and importance sampling (exponential b...
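Two of the random sampling techniques listed, inversion and rejection, can be shown in a few lines; the half-Gaussian target and exponential envelope are standard textbook choices, not taken from these lecture notes:

```python
import math, random

random.seed(0)

# Inversion: sample from f(x) = lam * exp(-lam x) via x = -ln(1 - U) / lam.
def sample_exponential(lam):
    return -math.log(1.0 - random.random()) / lam

# Rejection: sample a half-Gaussian using the Exp(1) density as envelope.
def sample_half_gaussian():
    while True:
        x = sample_exponential(1.0)
        # accept with probability f(x)/(M g(x)), where M = sqrt(2e/pi)
        if random.random() <= math.exp(-0.5 * (x - 1.0) ** 2):
            return x

n = 100_000
exp_mean = sum(sample_exponential(2.0) for _ in range(n)) / n
hg_mean = sum(sample_half_gaussian() for _ in range(n)) / n
print(f"exponential(2) mean ~ {exp_mean:.3f} (exact 0.5)")
print(f"half-gaussian mean  ~ {hg_mean:.3f} (exact {math.sqrt(2 / math.pi):.3f})")
```

The acceptance test works because f(x)/(M g(x)) = exp(-(x-1)^2/2) for this target/envelope pair, with acceptance rate 1/M ~ 0.76.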
The ATLAS collaboration
2016-01-01
This note documents the Monte Carlo generators used by the ATLAS collaboration at the start of Run 2 for processes where a $W$ or $Z/\\gamma^*$ boson is produced in association with jets. The available event generators are briefly described and comparisons are made with ATLAS measurements of $W$ or $Z/\\gamma^*$+jets performed with Run 1 data, collected at the centre-of-mass energy of 7 TeV. The model predictions are then compared at the Run 2 centre-of-mass energy of 13~TeV. A comparison is also made with an early Run 2 ATLAS $Z/\\gamma^*$+jets data measurement. Investigations into tuning the parameters of the models and evaluating systematic uncertainties on the Monte Carlo predictions are also presented.
Energy Technology Data Exchange (ETDEWEB)
Tomal, A. [Universidade Federale de Goias, Instituto de Fisica, Campus Samambaia, 74001-970, Goiania, (Brazil); Lopez G, A. H.; Santos, J. C.; Costa, P. R., E-mail: alessandra_tomal@yahoo.com.br [Universidade de Sao Paulo, Instituto de Fisica, Rua du Matao Travessa R. 187, Cidade Universitaria, 05508-090 Sao Paulo (Brazil)
2014-08-15
In this work, the energy response functions of a CdTe detector were obtained by Monte Carlo simulation in the energy range from 5 to 150 keV, using the PENELOPE code. The simulated response functions included the finite detector resolution and the carrier transport. The simulated energy response matrix was validated through comparison with experimental results obtained for radioactive sources. In order to investigate the influence of the correction by the detector response in the diagnostic energy range, x-ray spectra were measured using a CdTe detector (model XR-100-T, Amptek) and then corrected by the energy response of the detector using the stripping procedure. Results showed that the CdTe detector exhibits a good energy response at low energies (below 40 keV), showing only small distortions in the measured spectra. For energies below about 70 keV, the contribution of the escape of Cd- and Te-K x rays produces significant distortions in the measured x-ray spectra. At higher energies, the most important corrections are for the detector efficiency and the carrier trapping effects. The results showed that, after correction by the energy response, the measured spectra are in good agreement with those provided by different models from the literature. Finally, our results showed that detailed knowledge of the response function and a proper correction procedure are fundamental for obtaining accurate spectra from which several quality parameters (i.e. half-value layer, effective energy and mean energy) can be determined. (Author)
Chofor, Ndimofor; Harder, Dietrich; Willborn, Kay; Rühmann, Antje; Poppe, Björn
2011-09-01
The varying low-energy contribution to the photon spectra at points within and around radiotherapy photon fields is associated with variations in the responses of non-water equivalent dosimeters and in the water-to-material dose conversion factors for tissues such as the red bone marrow. In addition, the presence of low-energy photons in the photon spectrum enhances the RBE in general and in particular for the induction of second malignancies. The present study discusses the general rules valid for the low-energy spectral component of radiotherapeutic photon beams at points within and in the periphery of the treatment field, taking as an example the Siemens Primus linear accelerator at 6 MV and 15 MV. The photon spectra at these points and their typical variations due to the target system, attenuation, single and multiple Compton scattering, are described by the Monte Carlo method, using the code BEAMnrc/EGSnrc. A survey of the role of low energy photons in the spectra within and around radiotherapy fields is presented. In addition to the spectra, some data compression has proven useful to support the overview of the behaviour of the low-energy component. A characteristic indicator of the presence of low-energy photons is the dose fraction attributable to photons with energies not exceeding 200 keV, termed P(D)(200 keV). Its values are calculated for different depths and lateral positions within a water phantom. For a pencil beam of 6 or 15 MV primary photons in water, the radial distribution of P(D)(200 keV) is bellshaped, with a wide-ranging exponential tail of half value 6 to 7 cm. The P(D)(200 keV) value obtained on the central axis of a photon field shows an approximately proportional increase with field size. Out-of-field P(D)(200 keV) values are up to an order of magnitude higher than on the central axis for the same irradiation depth. The 2D pattern of P(D)(200 keV) for a radiotherapy field visualizes the regions, e.g. at the field margin, where changes of
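The dose-fraction indicator is simply a dose-weighted spectral fraction. A crude sketch, with an invented spectrum shape and a flat mass energy-absorption coefficient (both strong simplifications relative to the BEAMnrc/EGSnrc spectra):

```python
import numpy as np

rng = np.random.default_rng(3)

E_MAX = 6.0  # maximum photon energy for a 6 MV beam, MeV

def sample_spectrum(n):
    # invented thin-target-like shape ~ (E_max - E)/E, truncated at 50 keV;
    # energies drawn log-uniform, with the shape carried as a fluence weight
    e = np.exp(rng.uniform(np.log(0.05), np.log(E_MAX), n))
    w = (E_MAX - e) / e
    return e, w

def p_d(threshold=0.2, n=200_000):
    """Dose fraction carried by photons at or below `threshold` MeV."""
    e, w = sample_spectrum(n)
    dose = w * e     # kerma ~ fluence * energy, flat mu_en/rho (simplification)
    return dose[e <= threshold].sum() / dose.sum()

pd200 = p_d()
print(f"P_D(200 keV) for the toy spectrum ~ {pd200:.3f}")
```

With realistic spectra and energy-dependent absorption coefficients the same tally gives the depth- and position-dependent values the study maps.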
Monte Carlo and Quasi-Monte Carlo Sampling
Lemieux, Christiane
2009-01-01
Presents essential tools for using quasi-Monte Carlo sampling in practice. This book focuses on issues related to Monte Carlo methods - uniform and non-uniform random number generation, variance reduction techniques. It covers several aspects of quasi-Monte Carlo methods.
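The MC-versus-QMC contrast the book develops can be demonstrated with the van der Corput sequence, the simplest low-discrepancy construction:

```python
import numpy as np

def van_der_corput(n, base=2):
    """First n points of the base-b van der Corput low-discrepancy sequence."""
    seq = np.zeros(n)
    for i in range(n):
        f, k, x = 1.0, i + 1, 0.0
        while k > 0:           # radical-inverse digit expansion
            f /= base
            k, r = divmod(k, base)
            x += r * f
        seq[i] = x
    return seq

f = lambda x: np.exp(x)        # integrand with exact integral e - 1
exact = np.e - 1.0

n = 4096
rng = np.random.default_rng(5)
mc_err = abs(f(rng.random(n)).mean() - exact)       # error ~ n^(-1/2)
qmc_err = abs(f(van_der_corput(n)).mean() - exact)  # ~ n^(-1), up to log factors
print(f"MC error:  {mc_err:.2e}")
print(f"QMC error: {qmc_err:.2e}")
```

The deterministic points fill the interval far more evenly than random ones, which is why the quasi-Monte Carlo error decays almost an order faster in n.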
Energy Technology Data Exchange (ETDEWEB)
Potter, Jr., B.G.; Tikare, V.; Tuttle, B.A.
1999-06-30
A 2-D lattice Monte Carlo approach was developed to simulate ferroelectric domain structure. The model currently uses a Hamiltonian for the total energy based only upon electrostatic terms involving dipole-dipole interactions, local polarization gradients and the influence of applied electric fields. The impact of boundary conditions on the domain configurations obtained was also examined. In general, the model exhibits domain structure characteristics consistent with those observed in a tetragonally distorted ferroelectric. The model was also extended to enable the simulation of ferroelectric hysteresis behavior. Simulated hysteresis loops were found to be very similar in appearance to those observed experimentally in real materials. This qualitative agreement between the simulated hysteresis loop characteristics and real ferroelectric behavior was also confirmed in simulations run over a range of simulation temperatures and applied field frequencies.
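A drastically simplified version of such a lattice Monte Carlo model, with an Ising-like nearest-neighbour term standing in for the full dipole-dipole Hamiltonian, already reproduces hysteresis under a cycled field:

```python
import numpy as np

rng = np.random.default_rng(11)

# Drastically simplified 2-D lattice model: polarization +-1 per site, a
# nearest-neighbour coupling J standing in for the dipole-dipole and gradient
# terms, plus an applied field E. Metropolis dynamics while E is cycled yields
# an Ising-like hysteresis loop (illustrative, not the paper's Hamiltonian).
L, J, T = 16, 1.0, 1.5

def sweep(s, E):
    """One Metropolis sweep (L*L single-site flip attempts) at field E."""
    for _ in range(L * L):
        i, j = rng.integers(L, size=2)
        nb = s[(i+1) % L, j] + s[(i-1) % L, j] + s[i, (j+1) % L] + s[i, (j-1) % L]
        dE = 2 * s[i, j] * (J * nb + E)
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            s[i, j] *= -1

s = np.ones((L, L))
loop = []
fields = np.concatenate([np.linspace(3, -3, 25), np.linspace(-3, 3, 25)])
for E in fields:
    for _ in range(20):
        sweep(s, E)
    loop.append((E, s.mean()))
# The mean polarization at E ~ 0 differs on the two branches: remanence.
```

Plotting the (E, mean polarization) pairs traces the familiar loop shape; at E = 0 the descending branch remains polarized up and the ascending branch down.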
Monte Carlo Study on Gas Pressure Response of He-3 Tube in Neutron Porosity Logging
Directory of Open Access Journals (Sweden)
TIAN Li-li;ZHANG Feng;WANG Xin-guang;LIU Jun-tao
2016-10-01
Thermal neutrons are detected via the (n,p) reaction in a helium-3 tube in compensated neutron logging. The helium gas pressure in the counting region greatly influences the neutron detection efficiency and is therefore an important parameter for neutron porosity measurement accuracy. The variation of the counting rates of a near and a far detector with helium gas pressure under different formation conditions was simulated by the Monte Carlo method. The results showed that with increasing helium pressure the counting rates of these detectors first increased and then leveled off. In addition, the counting rate ratio and the porosity sensitivity increased slightly, and the porosity measurement error decreased exponentially, which improved the measurement accuracy. These results can provide technical support for selecting the type of helium-3 detector in the development of neutron porosity logging tools.
Energy Technology Data Exchange (ETDEWEB)
Baltazar R, A.; Vega C, H. R.; Ortiz R, J. M.; Solis S, L. O.; Castaneda M, R. [Universidad Autonoma de Zacatecas, Unidad Academica de Ingenieria Electrica, Programa de Doctorado en Ingenieria y Tecnologia Aplicada, Av. Lopez Velarde s/n, 98000 Zacatecas, Zac. (Mexico); Soto B, T. G.; Medina C, D., E-mail: raigosa.antonio@hotmail.com [Universidad Autonoma de Zacatecas, Unidad Academica de Estudios Nucleares, Programa de Doctorado en Ciencias Basicas (Ciencias Nucleares), Cipres No. 10, Fracc. La Penuela, 98060 Zacatecas, Zac. (Mexico)
2017-10-15
In the last three decades the use of Monte Carlo methods for the estimation of physical phenomena associated with the interaction of radiation with matter has increased considerably. The reason is the increase in computing capability and the reduction of computer prices. Monte Carlo methods allow modeling and simulating real systems before their construction, saving time and costs. The interaction mechanisms between neutrons and matter are diverse and range from elastic scattering to nuclear fission; to facilitate neutron detection, it is necessary to moderate the neutrons until they reach thermal equilibrium with the medium at standard conditions of pressure and temperature, a state in which the total cross section of ³He is large. The objective of the present work was to estimate the response matrix of a ³He proportional detector with regular volumes of moderator through Monte Carlo methods. Monoenergetic neutron sources with energies from 10⁻⁹ to 20 MeV and polyethylene moderators of different sizes were used. The calculations were made with the MCNP5 code; the number of histories for each detector-moderator combination was large enough to obtain errors of less than 1.5%. We found that for small moderators the highest response is obtained for lower-energy neutrons; as the moderator size increases, the response decreases for lower-energy neutrons and increases for higher-energy neutrons. The total sum of the responses over the moderators yields a response close to a constant function. (Author)
Embedded generation for industrial demand response in renewable energy markets
International Nuclear Information System (INIS)
Leanez, Frank J.; Drayton, Glenn
2010-01-01
Uncertainty in the electrical energy market is expected to increase with the growth of the share of generation from renewable resources. Demand response can play a key role in stabilizing system operation. This paper discusses embedded generation for industrial demand response in renewable energy markets. The methodology of the demand response is explained; it consists of long-term optimization and stochastic optimization. Wind energy, among all renewable resources, is becoming increasingly popular. Volatility in the wind energy sector is high, and this is illustrated with examples. Uncertainty in the wind market is modelled using stochastic optimization. Alternative techniques for the generation of wind energy were seen to be needed. Embedded generation techniques include co-generation (CHP) and pumped storage, among others. These techniques are analyzed and the results are presented. The results show that investment in renewables is immediately required and that innovative generation technologies are also needed over the long term.
International Nuclear Information System (INIS)
Zmushko, V.V.; Migdal, A.A.
1987-01-01
A model of triangulated random surfaces, the discrete analogue of the Polyakov string, is considered in this work. An algorithm is proposed which enables one to study the model by means of the Monte Carlo method in the grand canonical ensemble. Preliminary results are presented on the evaluation of the critical index γ.
International Nuclear Information System (INIS)
Turner, J.E.; Modolo, J.T.; Sordi, G.M.A.A.; Hamm, R.N.; Wright, H.A.
1979-01-01
PHOEL provides a source term for a Monte Carlo code which calculates the electron transport and energy degradation in liquid water. This code is used to study the relative biological effectiveness (RBE) of low-LET radiation at low doses. The basic numerical data used and their mathematical treatment are described as well as the operation of the code [pt
International Nuclear Information System (INIS)
Kodeli, I.; Tanner, R.
2005-01-01
In the scope of QUADOS, a Concerted Action of the European Commission, eight calculational problems were prepared in order to evaluate the use of computational codes for dosimetry in radiation protection and medical physics, and to disseminate 'good practice' throughout the radiation dosimetry community. This paper focuses on the analysis of the P4 problem on the 'TLD-albedo dosemeter: neutron and/or photon response of a four-element TL-dosemeter mounted on a standard ISO slab phantom'. Altogether 17 solutions were received from the participants, 14 of those transported neutrons and 15 photons. Most participants (16 out of 17) used Monte Carlo methods. These calculations are time-consuming, requiring several days of CPU time to perform the whole set of calculations and achieve good statistical precision. The possibility of using deterministic discrete ordinates codes as an alternative to Monte Carlo was therefore investigated and is presented here. In particular the capacity of the adjoint mode calculations is shown. (authors)
Garibotti, Gilda; Zacharías, Daniela; Flores, Verónica; Catriman, Sebastián; Falconaro, Antonella; Kabaradjian, Surpik; Luque, María L; Macedo, Beatriz; Molina, Juliana; Rauque, Carlos; Soto, Matías; Vázquez, Gabriela; Vega, Rocío; Viozzi, Gustavo
2017-01-01
Human relationship with dogs associates with numerous and varied benefits on human health; however, it also presents significant risks. The goal of this study was to describe demographic parameters and characteristics of dog ownership with possible implications on human health and to evaluate the prevalence of dog bites and traffic accidents due to dogs. Interviews were conducted in the neighborhoods of Nuestras Malvinas and Nahuel Hue in San Carlos de Bariloche. The percentage of homes with at least one dog, the average number of dogs per home, the prevalence of dog bites and traffic accidents due to dogs and the general awareness of the population on dog transmitted zoonoses were estimated. Regarding ownership characteristics, the degree of sterilization, vaccination and parasite control and the percentage of dogs allowed to roam freely in public places were evaluated. A total of 141 interviews were conducted; 87% of the households had at least one dog, with an average of 2.2 dogs. In 26% of the households someone had suffered a traffic accident caused by dogs and in 41% someone had been bitten. Antiparasite treatment was administered to 83% of the dogs in the last 12 months, on average 1.4 times (recommended 6 times), 51% were sterilized, 55% were allowed to roam freely. This study shows a disturbing situation regarding the canine population of the evaluated neighborhoods. The number of dogs allowed to roam freely and the low level of parasite control and sterilization provide suitable conditions for the spread of zoonoses.
Calculating CR-39 Response to Radon in Water Using Monte Carlo Simulation
International Nuclear Information System (INIS)
Razaie Rayeni Nejad, M. R.
2012-01-01
CR-39 detectors are widely used for measuring radon and its progeny in air. In this paper, using Monte Carlo simulation, the possibility of using CR-39 for direct measurement of radon and progeny in water is investigated. Assuming random positions and angles for the alpha particles emitted by radon and its progeny, the alpha energy and angular spectra arriving at the CR-39, the calibration factor, and the suitable depth of chemical etching of CR-39 in air and water were calculated. In this simulation, a range of data was obtained from the SRIM2008 software. The calibration factor of CR-39 in water was calculated as 6.6 (kBq.d/m³)/(track/cm²), which corresponds to the EPA standard level of radon concentration in water (10-11 kBq/m³). Replacing the CR-39 with skin, the volume affected by radon and progeny was determined to be 2.51 mm³ per m² of skin area. The annual dose conversion factor for radon and progeny was calculated to be between 8.8-58.8 nSv/(Bq.h/m³). Using CR-39 for radon measurement in water can be beneficial.
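The geometric part of such a calibration, i.e. the effective water volume whose alphas can register in the detector, can be sketched with a simple MC over emission depths and angles; the alpha range and critical angle below are assumed round numbers, not SRIM2008 output, and edge effects are ignored:

```python
import math, random

random.seed(2)

# Illustrative estimate of the water volume 'seen' by a CR-39 chip: alphas
# emitted isotropically from random depths above the detector register only
# if they reach it within their range and strike steeper than a critical
# angle measured from the surface (both values assumed, not from SRIM2008).
R = 40e-4              # alpha range in water, cm (assumed ~40 um)
THETA_C = math.radians(30)   # assumed critical angle from the surface
AREA = 1.0             # detector area, cm^2

def hit_fraction(n=200_000):
    count = 0
    for _ in range(n):
        z = random.uniform(0.0, R)       # emission depth above the detector
        mu = random.uniform(-1.0, 1.0)   # cos(polar angle), isotropic emission
        # downward-going, within range along the slant path, steep enough
        if mu < 0 and z <= R * (-mu) and (-mu) >= math.sin(THETA_C):
            count += 1
    return count / n

frac = hit_fraction()
eff_volume = frac * AREA * R             # cm^3 of water contributing tracks
print(f"effective volume ~ {eff_volume * 1e3:.2f} mm^3 per cm^2 of detector")
```

Dividing a measured track density by the activity concentration times this effective volume (and the exposure time) is the structure behind the calibration factor quoted in the abstract.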
International Nuclear Information System (INIS)
Khuat, Quang Huy; Kim, Song Hyun; Kim, Do Hyun; Shin, Chang Ho
2015-01-01
This technique is known as the Consistent Adjoint Driven Importance Sampling (CADIS) method and is implemented in the SCALE code system. In the CADIS method, an adjoint transport equation has to be solved to determine deterministic importance functions. With the CADIS method, it has been noted that a biased adjoint flux estimated by deterministic methods can affect the calculation efficiency and error. The biases of the adjoint function are caused by the methodology, the calculation strategy, the tolerance of the deterministic calculation, and inaccurate multi-group cross-section libraries. In this paper, a study analyzing the influence of biased adjoint functions on Monte Carlo computational efficiency is pursued. A method to estimate the calculation efficiency was proposed for applying the biased adjoint fluxes in the CADIS approach. For a benchmark problem, the responses and figures of merit (FOMs) using the SCALE code system were evaluated when applying the adjoint fluxes. The results show that biased adjoint fluxes significantly affect the calculation efficiency.
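The sensitivity of the calculation efficiency to a biased importance function can be illustrated on a one-dimensional deep-penetration toy problem; exponential biasing stands in here for the CADIS adjoint-based source and transport biasing:

```python
import numpy as np

rng = np.random.default_rng(8)

# Deep-penetration toy: probability that an Exp(1) free path exceeds d = 15
# (analytic answer e^-15). Importance sampling draws from Exp(lam) instead
# and weights by the likelihood ratio; a poorly chosen ('biased') importance
# function inflates the variance, mimicking the effect of a biased adjoint.
d = 15.0
exact = np.exp(-d)

def is_estimate(lam, n=100_000):
    """Importance-sampled estimate of P(X > d), X ~ Exp(1), drawn from Exp(lam)."""
    x = rng.exponential(1.0 / lam, n)
    w = np.exp(-x) / (lam * np.exp(-lam * x))   # likelihood ratio f/g
    vals = w * (x > d)
    return vals.mean(), vals.std(ddof=1) / np.sqrt(n)

est_good, err_good = is_estimate(1.0 / d)  # near-optimal importance function
est_bad, err_bad = is_estimate(0.5)        # 'biased' importance function
print(f"good: {est_good:.3e} +- {err_good:.1e}")
print(f"bad : {est_bad:.3e} +- {err_bad:.1e}  (exact {exact:.3e})")
```

Both estimators are unbiased, but the mis-tuned importance function costs roughly an order of magnitude in standard error, i.e. about two orders of magnitude in FOM at fixed CPU time.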
International Nuclear Information System (INIS)
Kim, Do Hyun; Shin, Chang Ho; Kim, Song Hyun
2014-01-01
It uses a deterministic method to calculate the adjoint fluxes used to decide the parameters of the variance reduction; this is called the hybrid Monte Carlo method. The CADIS method, however, has a limited ability to reduce the stochastic errors of all responses. Forward-Weighted CADIS (FW-CADIS) was introduced to solve this problem: to reduce the overall stochastic errors of the responses, the forward flux is used. In a previous study, the Multi-Response CADIS (MR-CADIS) method was derived to minimize the sum of the squared relative errors. In this study, the characteristics of the MR-CADIS method were evaluated and compared with the FW-CADIS method, analyzing how the CADIS, FW-CADIS, and MR-CADIS methods are applied to optimize and decide the parameters used in the variance reduction techniques. The MR-CADIS method minimizes the sum of the squared relative errors over the tally regions to achieve uniform uncertainty. To compare the simulation efficiency of the methods, a simple shielding problem was evaluated. Using the FW-CADIS method, the average of the relative errors was minimized; however, the MR-CADIS method gives the lowest variance of the relative errors. The analysis shows that the MR-CADIS method can reduce the relative errors of a multi-response problem more efficiently and uniformly than the FW-CADIS method.
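The difference between the two objectives can be made concrete with a toy example (the numbers are invented, not from the paper): FW-CADIS targets a low average relative error, while MR-CADIS targets a low sum of squared relative errors, which favors uniformity across tallies.

```python
# Toy comparison of the two criteria discussed above; errors are made up.

def mean_rel_err(errs):
    """FW-CADIS-style figure: average relative error over the tallies."""
    return sum(errs) / len(errs)

def sum_sq_rel_err(errs):
    """MR-CADIS-style figure: sum of squared relative errors."""
    return sum(e * e for e in errs)

run_a = [0.01, 0.01, 0.09]   # lower average, one poorly converged tally
run_b = [0.04, 0.04, 0.04]   # uniform relative errors

# run_a wins on the mean criterion, run_b on the sum-of-squares
# criterion, i.e. on uniformity of the errors.
```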
Systematic evaluations of probabilistic floor response spectrum generation
International Nuclear Information System (INIS)
Lilhanand, K.; Wing, D.W.; Tseng, W.S.
1985-01-01
The relative merits of the current methods for direct generation of probabilistic floor response spectra (FRS) from the prescribed design response spectra (DRS) are evaluated. The explicit probabilistic methods, which explicitly use the relationship between the power spectral density function (PSDF) and response spectra (RS), i.e., the PSDF-RS relationship, are found to have advantages for practical applications over the implicit methods. To evaluate the accuracy of the explicit methods, the root-mean-square (rms) response and the peak factor contained in the PSDF-RS relationship are systematically evaluated, especially for the narrow-band floor spectral response, by comparing the analytical results with simulation results. Based on the evaluation results, a method is recommended for practical use for the direct generation of probabilistic FRS. (orig.)
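For context, the explicit PSDF-RS relationship referred to above can be written in standard notation (this is textbook background, not copied from the abstract): the rms oscillator response follows from the input PSDF, and the spectral ordinate is the rms response scaled by a peak factor, e.g. of the Davenport form.

```latex
% rms response of an SDOF oscillator (frequency \omega_0, damping \zeta)
% driven by a process with one-sided PSDF G(\omega):
\sigma^2(\omega_0,\zeta)
  = \int_0^\infty \lvert H(\omega;\omega_0,\zeta)\rvert^2 \, G(\omega)\, d\omega,
\qquad
RS(\omega_0,\zeta) \approx \eta_p \, \sigma(\omega_0,\zeta),
% with a peak factor such as the Davenport approximation
\eta_p = \sqrt{2\ln(\nu T)} + \frac{0.5772}{\sqrt{2\ln(\nu T)}} .
```

Here ν is the mean zero-crossing rate and T the duration; the paper's systematic evaluation concerns precisely the accuracy of σ and η_p for narrow-band floor response.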
Monte Carlo principles and applications
Energy Technology Data Exchange (ETDEWEB)
Raeside, D E [Oklahoma Univ., Oklahoma City (USA). Health Sciences Center
1976-03-01
The principles underlying the use of Monte Carlo methods are explained, for readers who may not be familiar with the approach. The generation of random numbers is discussed, and the connection between Monte Carlo methods and random numbers is indicated. Outlines of two well established Monte Carlo sampling techniques are given, together with examples illustrating their use. The general techniques for improving the efficiency of Monte Carlo calculations are considered. The literature relevant to the applications of Monte Carlo calculations in medical physics is reviewed.
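Two of the standard sampling techniques alluded to can be sketched in a few lines (the target densities here are arbitrary examples, not taken from the review):

```python
import math
import random

def sample_exponential(lam):
    """Inverse-transform sampling of p(x) = lam * exp(-lam * x)."""
    u = random.random()
    return -math.log(1.0 - u) / lam

def sample_rejection(pdf, pdf_max, lo, hi):
    """Rejection sampling from a bounded pdf on [lo, hi]."""
    while True:
        x = random.uniform(lo, hi)
        if random.random() * pdf_max <= pdf(x):
            return x

random.seed(0)
xs = [sample_exponential(2.0) for _ in range(10000)]
mean_exp = sum(xs) / len(xs)   # should approach 1/lam = 0.5
```

Importance sampling and stratification, the usual efficiency-improving techniques, amount to biasing such elementary samplers and correcting with statistical weights.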
International Nuclear Information System (INIS)
Garshasbi, Samira; Kurnitski, Jarek; Mohammadi, Yousef
2016-01-01
Graphical abstract: The energy consumption and renewable generation in a cluster of NZEBs are modeled by a novel hybrid Genetic Algorithm and Monte Carlo simulation approach and used for the prediction of instantaneous and cumulative net energy balances and the hourly amount of energy taken from and supplied to the central energy grid. - Highlights: • Hourly energy consumption and generation by a cluster of NZEBs was simulated. • A hybrid Genetic Algorithm and Monte Carlo simulation approach was employed. • The dampening effect of energy used by a cluster of buildings was demonstrated. • The hourly amount of energy taken from and supplied to the grid was simulated. • Results showed that the NZEB cluster was 63.5% grid dependent on an annual basis. - Abstract: Employing a hybrid Genetic Algorithm (GA) and Monte Carlo (MC) simulation approach, energy consumption and renewable energy generation in a cluster of Net Zero Energy Buildings (NZEBs) were thoroughly investigated with hourly simulation. Moreover, the cumulative energy consumption and generation of the whole cluster and of each individual building within the simulation space were accurately monitored and reported. The results indicate that the developed simulation algorithm is able to predict the total instantaneous and cumulative amounts of energy taken from and supplied to the central energy grid over any time period. During the course of the simulation, about 60-100% of the total daily generated renewable energy was consumed by the NZEBs and up to 40% of it was fed back into the central energy grid as surplus energy. The minimum grid dependency of the cluster was observed in June and July, where 11.2% and 9.9% of the required electricity was supplied from the central energy grid, respectively. On the other hand, the NZEB cluster was strongly grid dependent in January and December, importing 70.7% and 76.1% of its required energy demand via the central energy grid, in the order given. Simulation results revealed that the cluster was 63.5% grid dependent on an annual basis.
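This is not the paper's GA+MC code, but the hourly bookkeeping it reports can be sketched as follows (all numbers invented):

```python
# Toy hourly net-balance accounting for a building cluster.

def grid_exchange(consumption, generation):
    """Per-hour energy imported from and exported to the grid (kWh)."""
    imported, exported = [], []
    for c, g in zip(consumption, generation):
        imported.append(max(c - g, 0.0))   # shortfall covered by the grid
        exported.append(max(g - c, 0.0))   # surplus fed back to the grid
    return imported, exported

cons = [3.0, 2.0, 4.0, 5.0]   # invented hourly cluster consumption
gen = [1.0, 3.0, 4.0, 2.0]    # invented hourly renewable generation
grid_in, grid_out = grid_exchange(cons, gen)
dependency = sum(grid_in) / sum(cons)   # fraction of demand met by the grid
```

Summing `grid_in` over a year against annual consumption yields the kind of annual grid-dependency figure the abstract reports.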
Khajepour, Abolhasan; Rahmani, Faezeh
2017-01-01
In this study, a 90 Sr radioisotope thermoelectric generator (RTG) with milliwatt-level power was designed to operate in a specified temperature range (300-312 K). For this purpose, a combination of analytical and Monte Carlo methods, using the ANSYS and COMSOL software as well as the MCNP code, was employed. The designed RTG contains 90 Sr as a radioisotope heat source (RHS) and 127 coupled thermoelectric modules (TEMs) based on bismuth telluride. Kapton (2.45 mm in thickness) and Cryotherm sheets (0.78 mm in thickness) were selected as the thermal insulators of the RHS, and a stainless steel container was used as the generator chamber. The initial design of the RHS geometry was performed according to the amount of radioactive material (strontium titanate) as well as heat transfer calculations and mechanical strength considerations. According to the Monte Carlo simulation performed with the MCNP code, approximately 0.35 kCi of 90 Sr is sufficient to generate the required heat power in the RHS. To determine the optimal design of the RTG, the temperature distribution as well as the dissipated heat and the input power to the module were calculated in different parts of the generator using the ANSYS software. The output voltage corresponding to the temperature distribution on the TEM was calculated using COMSOL. The dimensions of the RHS and heat insulator were optimized to match the average temperature of the hot plate of the TEM to the specified hot-side temperature. The designed RTG generates 8 mW of power with an efficiency of 1%. This combined approach can be used for the precise design of various types of RTGs. Copyright © 2016 Elsevier Ltd. All rights reserved.
Wind Generation Participation in Power System Frequency Response: Preprint
Energy Technology Data Exchange (ETDEWEB)
Gevorgian, Vahan; Zhang, Yingchen
2017-01-01
The electrical frequency of an interconnected power system must be maintained close to its nominal level at all times. Excessive under- and overfrequency excursions can lead to load shedding, instability, machine damage, and even blackouts. There has been rising concern in the electric power industry in recent years about the declining amount of inertia and primary frequency response (PFR) in many interconnections. This decline may continue due to increasing penetrations of inverter-coupled generation and the planned retirements of conventional thermal plants. Inverter-coupled variable wind generation is capable of contributing to PFR and inertia with a response that is different from that of conventional generation. It is not yet entirely understood how such a response will affect the system at different wind power penetration levels. The modeling work presented in this paper evaluates the impact of wind generation's provision of these active power control strategies on a large, synchronous interconnection. All simulations were conducted on the U.S. Western Interconnection with different levels of instantaneous wind power penetration (up to 80%). The ability of wind power plants to provide PFR - and a combination of synthetic inertial response and PFR - significantly improved the frequency response performance of the system.
A NRESPG Monte Carlo code for the calculation of neutron response functions for gas counters
Energy Technology Data Exchange (ETDEWEB)
Kudo, K; Takeda, N; Fukuda, A [Electrotechnical Lab., Tsukuba, Ibaraki (Japan); Torii, T; Hashimoto, M; Sugita, T; Yang, X; Dietze, G
1996-07-01
In this paper, we show the outline of NRESPG and some typical results for the response functions and efficiencies of several kinds of gas counters. The cross-section data for the various filling gases and for the wall material of stainless steel or aluminum are taken mainly from ENDF/B-IV. ENDF/B-V data for stainless steel are also used to investigate the influence of differences between nuclear data files on the pulse height spectra of gas counters. (J.P.N.)
Monte carlo calculation of energy-dependent response of high-sensitive neutron monitor, HISENS
International Nuclear Information System (INIS)
Imanaka, Tetsuji; Ebisawa, Tohru; Kobayashi, Keiji; Koide, Hiroaki; Seo, Takeshi; Kawano, Shinji
1988-01-01
A highly sensitive neutron monitor system, HISENS, has been developed to measure leakage neutrons from nuclear facilities. The counter system of HISENS contains a detector bank which consists of ten cylindrical proportional counters filled with 10 atm 3 He gas and a paraffin moderator mounted in an aluminum case. The size of the detector bank is 56 cm high, 66 cm wide and 10 cm thick. It is revealed by a calibration experiment using an 241 Am-Be neutron source that the sensitivity of HISENS is about 2000 times as large as that of a typical commercial rem-counter. Since HISENS is designed to have a high sensitivity over a wide range of neutron energy, the shape of its energy-dependent response curve cannot be matched to that of the dose equivalent conversion factor. To estimate dose equivalent values from neutron counts by HISENS, it is necessary to know the energy and angular characteristics of both HISENS and the neutron field. The area of one side of the detector bank is 3700 cm 2 and the detection efficiency in the constant region of the response curve is about 30%. Thus, the sensitivity of HISENS for this energy range is 740 cps/(n/cm 2 /sec). This value indicates the extremely high sensitivity of HISENS as compared with existing highly sensitive neutron monitors. (Nogami, K.)
International Nuclear Information System (INIS)
Candelore, N.R.; Kerrick, W.E.; Johnson, E.G.; Gast, R.C.; Dei, D.E.; Fields, D.L.
1982-09-01
The PACER Monte Carlo program for the CDC-7600 performs fixed source or eigenvalue calculations of spatially dependent neutron spectra in rod-lattice geometries. The neutron flux solution is used to produce few group, flux-weighted cross sections spatially averaged over edit regions. In general, PACER provides environmentally dependent flux-weighted few group microscopic cross sections which can be made time (depletion) dependent. These cross sections can be written in a standard POX output file format. To minimize computer storage requirements, PACER allows separate spectrum and edit options. PACER also calculates an explicit (n, 2n) cross section. The PACER geometry allows multiple rod arrays with axial detail. This report provides details of the neutron kinematics and the required input.
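The flux-weighted collapse that PACER performs over edit regions follows the usual prescription σ_G = Σ_g σ_g φ_g / Σ_g φ_g over the fine groups g in broad group G; a minimal sketch with invented numbers:

```python
# Schematic flux-weighted group collapse (values are made up).

def collapse(sigma, phi, groups):
    """Collapse fine-group cross sections with flux weights.

    groups: list of (start, stop) fine-group index ranges per broad group.
    """
    out = []
    for a, b in groups:
        num = sum(s * f for s, f in zip(sigma[a:b], phi[a:b]))
        out.append(num / sum(phi[a:b]))
    return out

sigma = [10.0, 8.0, 2.0, 1.0]   # fine-group cross sections (invented)
phi = [1.0, 3.0, 4.0, 2.0]      # fine-group flux weights (invented)
broad = collapse(sigma, phi, [(0, 2), (2, 4)])
```

In PACER the weights come from the Monte Carlo flux solution per edit region, which is what makes the collapsed constants environmentally dependent.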
Directory of Open Access Journals (Sweden)
Anna Russo
Full Text Available Short peptides can be designed in silico and synthesized through automated techniques, making them advantageous and versatile protein binders. A number of docking-based algorithms allow for a computational screening of peptides as binders. Here we developed ex-novo peptides targeting the maltose site of the Maltose Binding Protein, the prototypical system for the study of protein-ligand recognition. We used a Monte Carlo based protocol to computationally evolve a set of octapeptides starting from a polyalanine sequence. We screened the candidate peptides in silico and characterized their binding abilities by surface plasmon resonance, fluorescence and electrospray ionization mass spectrometry assays. These experiments showed the designed binders to recognize their target with micromolar affinity. We finally discuss the obtained results in light of further improvements in the ex-novo optimization of peptide-based binders.
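The Monte Carlo evolution from a polyalanine start can be sketched with a Metropolis loop. The scoring function below is a deliberately trivial stand-in (the real protocol scored docked poses against the maltose site), and every sequence here is hypothetical.

```python
import math
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
TARGET = "WKDNFQRS"  # pretend "ideal" binder, used only to define a toy score

def score(seq):
    """Toy binding energy: mismatches against the pretend ideal binder."""
    return sum(a != b for a, b in zip(seq, TARGET))

def evolve(seq, steps=4000, beta=6.0, seed=1):
    """Metropolis evolution: mutate one residue, accept downhill moves
    always and uphill moves with probability exp(-beta * dE)."""
    rng = random.Random(seed)
    e = score(seq)
    best_seq, best_e = seq, e
    for _ in range(steps):
        pos = rng.randrange(len(seq))
        cand = seq[:pos] + rng.choice(AMINO_ACIDS) + seq[pos + 1:]
        de = score(cand) - e
        if de <= 0 or rng.random() < math.exp(-beta * de):
            seq, e = cand, e + de
            if e < best_e:
                best_seq, best_e = seq, e
    return best_seq, best_e

best, energy = evolve("AAAAAAAA")   # start from polyalanine
```

With a docking score in place of `score`, the same accept/reject loop performs the kind of in-silico evolution the abstract describes.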
Monte Carlo Study of the abBA Experiment: Detector Response and Physics Analysis.
Frlež, E
2005-01-01
The abBA collaboration proposes to conduct a comprehensive program of precise measurements of neutron β-decay coefficients a (the correlation between the neutrino momentum and the decay electron momentum), b (the electron energy spectral distortion term), A (the correlation between the neutron spin and the decay electron momentum), and B (the correlation between the neutron spin and the decay neutrino momentum) at a cold neutron beam facility. We have used a GEANT4-based code to simulate the propagation of decay electrons and protons in the electromagnetic spectrometer and study the energy and timing response of a pair of Silicon detectors. We used these results to examine systematic effects and find the uncertainties with which the physics parameters a, b, A, and B can be extracted from an over-determined experimental data set.
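For reference, the four coefficients enter the neutron β-decay rate through the standard angular-correlation form (Jackson-Treiman-Wild; this is textbook background, not taken from the abstract):

```latex
dW \propto F(E_e)\,
\Big[\, 1
 + a\,\frac{\vec p_e \cdot \vec p_\nu}{E_e E_\nu}
 + b\,\frac{m_e}{E_e}
 + \langle \vec\sigma_n \rangle \cdot
   \Big( A\,\frac{\vec p_e}{E_e} + B\,\frac{\vec p_\nu}{E_\nu} \Big)
\Big]\, dE_e\, d\Omega_e\, d\Omega_\nu
```

Thus a and b can be extracted from spin-averaged spectra, while A and B require polarized neutrons, which is why the spectrometer must resolve both the electron and the proton (neutrino-reconstructing) observables.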
Whitehead, James Joshua
The analysis documented herein provides an integrated approach for the conduct of optimization under uncertainty (OUU) using Monte Carlo Simulation (MCS) techniques coupled with response surface-based methods for characterization of mixture-dependent variables. This novel methodology provides an innovative means of conducting optimization studies under uncertainty in propulsion system design. Analytic inputs are based upon empirical regression rate information obtained from design of experiments (DOE) mixture studies utilizing a mixed oxidizer hybrid rocket concept. Hybrid fuel regression rate was selected as the target response variable for optimization under uncertainty, with maximization of regression rate chosen as the driving objective. Characteristic operational conditions and propellant mixture compositions from experimental efforts conducted during previous foundational work were combined with elemental uncertainty estimates as input variables. Response surfaces for mixture-dependent variables and their associated uncertainty levels were developed using quadratic response equations incorporating single and two-factor interactions. These analysis inputs, response surface equations and associated uncertainty contributions were applied to a probabilistic MCS to develop dispersed regression rates as a function of operational and mixture input conditions within design space. Illustrative case scenarios were developed and assessed using this analytic approach including fully and partially constrained operational condition sets over all of design mixture space. In addition, optimization sets were performed across an operationally representative region in operational space and across all investigated mixture combinations. These scenarios were selected as representative examples relevant to propulsion system optimization, particularly for hybrid and solid rocket platforms. Ternary diagrams, including contour and surface plots, were developed and utilized to aid in
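The core loop of such an OUU study can be sketched generically: a quadratic response surface with a two-factor interaction stands in for the regression-rate model, and Monte Carlo draws on the uncertain inputs yield the dispersed response population (all coefficients and uncertainty levels below are invented, not the paper's DOE results):

```python
import random

def regression_rate(x1, x2):
    """Hypothetical quadratic response surface with one interaction term."""
    return 1.0 + 0.8 * x1 + 0.5 * x2 - 0.2 * x1 * x1 + 0.3 * x1 * x2

rng = random.Random(42)
samples = []
for _ in range(20000):
    x1 = rng.gauss(0.5, 0.05)   # operational input with assumed uncertainty
    x2 = rng.gauss(0.3, 0.05)   # mixture input with assumed uncertainty
    samples.append(regression_rate(x1, x2))

mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / (len(samples) - 1)
```

Maximizing a statistic of this dispersed population (e.g. its mean, or a lower percentile) over the constrained design space is what turns the propagation step into optimization under uncertainty.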
Brusch, Michael; Baier, Daniel
The use and estimation of price response functions are very important for strategic marketing decisions. Typically, price response functions with an empirical basis are used. However, such price response functions are subject to many disturbing influence factors, e.g., the assumed profit-maximum price and the assumed corresponding sales quantity. In such cases, the question of how stable the estimated price response function is has not yet been answered sufficiently. In this paper, the question of how much (and what kind of) market research error is pardonable for a stable price response function will be pursued. For the comparisons, a factorial design with synthetically generated and disturbed data is used.
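A toy linear price response function (not from the paper) makes the stability question concrete: for q(p) = a - b·p with unit cost c, the profit-maximum price is p* = (a/b + c)/2, which shifts when the estimated slope b is disturbed.

```python
# Toy sensitivity of the profit-maximum price to estimation error.

def optimal_price(a, b, c):
    """Profit-maximising price for demand q(p) = a - b*p with unit cost c."""
    return (a / b + c) / 2.0

p_true = optimal_price(100.0, 2.0, 10.0)   # nominal parameters (invented)
p_est = optimal_price(100.0, 2.2, 10.0)    # slope disturbed by 10%
shift = abs(p_est - p_true)                # resulting shift in p*
```

Synthetically disturbing the data and re-estimating, as the factorial design in the paper does, measures how large such shifts become under realistic market-research errors.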
Rossi, G; Fajardo, P; Morse, J
1999-01-01
We present Monte Carlo computer simulations of the X-ray response of a micro-strip germanium detector over the energy range 30-100 keV. The detector consists of a linear array of lithographically defined 150 μm wide strips on a high purity monolithic germanium crystal of 6 mm thickness. The simulation code is divided into two parts. We first consider a 10 μm wide X-ray beam striking the detector surface at normal incidence and compute the interaction processes possible for each photon. Photon scattering and absorption inside the detector crystal are simulated using the EGS4 code with the LSCAT extension for low energies. A history of events is created of the deposited energies, which is read by the second part of the code, which computes the energy histogram for each detector strip. Appropriate algorithms are introduced to account for lateral charge spreading occurring during charge carrier drift to the detector surface, and for Fano and preamplifier electronic noise contributions. Computed spectra for differen...
Generation of floor response spectra for PFBR RCB
International Nuclear Information System (INIS)
Sajish, S.D.; Ramakrishna, V.; Chellapandi, P.; Chetal, S.C.
2003-01-01
This paper describes the generation of floor time histories and the corresponding floor response spectra at various locations in the reactor containment building (RCB) of the 500 MWe Prototype Fast Breeder Reactor (PFBR). The RCB and its internal structures are modeled with equivalent 3D beam elements (stick model), which have the essential global stiffness and inertial properties of the corresponding building. The main aspect in the simulation of the beam model is the derivation of equivalent cross-sectional properties such as bending, torsional and shear rigidities, including shear centers. These properties have been obtained through 3D plate/shell element models with appropriate kinematic constraints, for the zones between floors of the corresponding buildings. The stick model includes a set of springs and dampers to simulate soil effects, on which the base raft and the various sticks are mounted. The soil stiffness and damping values are derived based on equations given in ASCE-98. Time history analysis has been done using three uncorrelated time histories, which are derived from the site-dependent design response spectra. Floor time histories (FTH) are extracted at important locations, from which the corresponding floor response spectra (FRS) have been generated for various damping values. Peak broadening of the response spectra has been done according to ASCE criteria. The floor response spectrum corresponding to the reactor assembly support shows an amplification of 2.5 for SSE and 3 for OBE. CASTEM 3M is used for the seismic analysis and the generation of the FRS. (author)
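This is not the CASTEM 3M procedure, but the FRS step itself, the peak SDOF response versus oscillator frequency computed from a floor acceleration history, can be sketched with a simple explicit integrator (signal and parameters invented):

```python
import math

def spectral_accel(acc, dt, freq, zeta=0.05):
    """Peak pseudo-acceleration of a damped SDOF under base acceleration."""
    w = 2.0 * math.pi * freq
    u_prev, u = 0.0, 0.0
    peak = 0.0
    for ag in acc:
        v = (u - u_prev) / dt                      # backward-difference velocity
        a = -ag - 2.0 * zeta * w * v - w * w * u   # u'' from the SDOF equation
        u_next = 2.0 * u - u_prev + a * dt * dt    # central-difference update
        u_prev, u = u, u_next
        peak = max(peak, abs(u))
    return w * w * peak                            # Sa = w^2 * Sd

# Made-up floor time history: a 4-second sine burst at 5 Hz.
dt = 0.002
acc = [math.sin(2.0 * math.pi * 5.0 * i * dt) for i in range(2000)]
sa_resonant = spectral_accel(acc, dt, 5.0)   # oscillator tuned to the input
sa_offres = spectral_accel(acc, dt, 20.0)    # detuned oscillator
```

Evaluating this over a grid of oscillator frequencies and damping values, then broadening the peaks, yields the FRS curves described above; the resonant ordinate far exceeds the off-resonant one, which is the amplification the spectrum captures.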
Monte Carlo Methods in Physics
International Nuclear Information System (INIS)
Santoso, B.
1997-01-01
The method of Monte Carlo integration is reviewed briefly and some of its applications in physics are explained. A numerical experiment on the random number generators used in Monte Carlo techniques is carried out to show the randomness behavior of the various generation methods. To account for the weight function involved in the Monte Carlo integration, the Metropolis method is used. The results of the experiment show no regular patterns in the numbers generated, indicating that the program generators are reasonably good, while the experimental results follow the expected statistical distribution law. Some applications of the Monte Carlo methods in physics are then given. The physical problems are chosen such that the models have available solutions, either exact or approximate, with which the Monte Carlo calculations can be compared. The comparisons show that good agreement has been obtained for the models considered.
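The Metropolis weighting mentioned above can be shown in a few lines: a random walk accepted with probability min(1, w(x')/w(x)) samples the weight function, here a standard Gaussian, so the estimated ⟨x²⟩ should approach the exact value 1 (the weight and observable are illustrative choices).

```python
import math
import random

def metropolis(w, x0, steps, step_size, seed=7):
    """Random-walk Metropolis sampling of the weight function w(x)."""
    rng = random.Random(seed)
    x, out = x0, []
    for _ in range(steps):
        cand = x + rng.uniform(-step_size, step_size)
        if rng.random() < w(cand) / w(x):   # accept with min(1, ratio)
            x = cand
        out.append(x)
    return out

w = lambda x: math.exp(-0.5 * x * x)          # unnormalised Gaussian weight
chain = metropolis(w, 0.0, 50000, 1.0)
x2 = sum(x * x for x in chain) / len(chain)   # estimate of <x^2>, exactly 1
```

Note that no normalisation constant of w is ever needed, which is the practical advantage of the Metropolis method for weighted Monte Carlo integration.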
Monte-Carlo simulation of primary electrons in the matter for the generation of x-rays
International Nuclear Information System (INIS)
Bendjama, H.; Laib, Y.; Allag, A.; Drai, R.
2006-01-01
The components of an X-ray imaging chain, from the source to the detector, rest on a first simulation stage: the production of the X-ray emission at the source. This requires identifying the energy losses resulting from interactions between fast electrons and the atoms of the metal target: the 'collisional' losses (ionization, excitation) and the radiative losses. For the media and primary electron energies of interest here, electron slowing down in matter results primarily from inelastic collisions, and it is this process that must be simulated to obtain the characteristic X-ray spectrum. We used a Monte Carlo method to simulate the energy loss and transport of primary electrons. This type of method requires only the knowledge of the cross sections attached to the description of all the elementary events. In this work, we adopted the Mott differential cross section and the total inner-shell ionization cross section in Gryzinski's formulation to simulate the transport and energy loss of primary electrons. The simulation follows the electrons until their energy reaches the atomic ionization potential of the irradiated matter. The Mott differential cross section gives a very good representation of the shape of the energy-loss distribution, and the transport of primary electrons is approximately reproduced.
International Nuclear Information System (INIS)
Deepa, A.K.; Jakhete, A.P.; Mehta, D.; Kaushik, C.P.
2011-01-01
High Level Liquid Waste (HLW) generated during reprocessing of spent fuel contains most of the radioactivity present in the spent fuel, resulting in the need for isolation and surveillance for an extended period of time. The major components of HLW are corrosion products, fission products such as 137 Cs, 90 Sr, 106 Ru, 144 Ce, 125 Sb etc., actinides and various chemicals used during reprocessing of the spent fuel. Fresh HLW, having an activity concentration of around 100 Ci/l, is to be vitrified into borosilicate glass and packed in canisters which are placed in stainless steel overpacks for better confinement. These overpacks contain around 0.7 million curies of activity. Characterisation of the activity in HLW and the activity profile of the radionuclides for various cooling periods set the base for the study. For transporting the vitrified waste product (VWP), the two most important parameters are the shield thickness of the transportation cask and the heat generation in the waste product. This paper describes the methodology used in the estimation of the lead thickness of the transportation cask using the Monte Carlo technique. Heat generation due to the decay of fission products results in an increase in the temperature of the vitrified waste product during interim storage and disposal. Since glass does not have a very high thermal conductivity, the temperature difference between the canister and its surroundings bears significance in view of the possibility of temperature-driven devitrification of the VWP. The heat generation in the canister and the overpack containing vitrified glass is also estimated using MCNP. (author)
Development of a simple detector response function generation program: The CEARDRFs code
Energy Technology Data Exchange (ETDEWEB)
Wang Jiaxin, E-mail: jwang3@ncsu.edu [Center for Engineering Applications of Radioisotopes (CEAR), Department of Nuclear Engineering, North Carolina State University, Raleigh, NC 27695 (United States); Wang Zhijian; Peeples, Johanna [Center for Engineering Applications of Radioisotopes (CEAR), Department of Nuclear Engineering, North Carolina State University, Raleigh, NC 27695 (United States); Yu Huawei [Center for Engineering Applications of Radioisotopes (CEAR), Department of Nuclear Engineering, North Carolina State University, Raleigh, NC 27695 (United States); College of Geo-Resources and Information, China University of Petroleum, Qingdao, Shandong 266555 (China); Gardner, Robin P. [Center for Engineering Applications of Radioisotopes (CEAR), Department of Nuclear Engineering, North Carolina State University, Raleigh, NC 27695 (United States)
2012-07-15
A simple Monte Carlo program named CEARDRFs has been developed to generate very accurate detector response functions (DRFs) for scintillation detectors. It utilizes relatively rigorous gamma-ray transport with simple electron transport, and accounts for two phenomena that have rarely been treated: scintillator non-linearity and the variable flat continuum part of the DRF. It has been proven that these physics and treatments work well for 3×3″ and 6×6″ cylindrical NaI detectors in CEAR's previous work. Now this approach has been expanded to cover more scintillation detectors with various common shapes and sizes. Benchmark experiments of a 2×2″ cylindrical BGO detector and a 2×4×16″ rectangular NaI detector have been carried out at CEAR with various radioactive sources. The simulation results of CEARDRFs have also been compared with MCNP5 calculations. The benchmark and comparison show that CEARDRFs can generate very accurate DRFs (more accurate than MCNP5) at a very fast speed (hundreds of times faster than MCNP5). The use of this program can significantly increase the accuracy of applications relying on detector spectroscopy like prompt gamma-ray neutron activation analysis, X-ray fluorescence analysis, oil well logging and homeland security. - Highlights: ► CEARDRFs has been developed to generate detector response functions (DRFs) for scintillation detectors. ► Generated DRFs are very accurate. ► Simulation speed is hundreds of times faster than MCNP5. ► It utilizes rigorous gamma-ray transport with simple electron transport. ► It also accounts for scintillator non-linearity and the variable flat continuum part.
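To give a toy flavour of what a DRF generator produces (this is not the CEARDRFs physics), the sketch below combines the two ingredients the abstract highlights: a Gaussian full-energy peak whose width grows with energy, and a flat continuum extending up to the incident energy. All parameters are invented.

```python
import math

def drf(channels, e0, peak_frac=0.6, a=1.0, b=0.02):
    """Toy DRF: Gaussian photopeak plus flat continuum, normalised to 1."""
    sigma = math.sqrt(a + b * e0)   # assumed energy-dependent width model
    resp = []
    for e in channels:
        gauss = (math.exp(-0.5 * ((e - e0) / sigma) ** 2)
                 / (sigma * math.sqrt(2.0 * math.pi)))
        flat = (1.0 / e0) if e <= e0 else 0.0   # continuum below the peak
        resp.append(peak_frac * gauss + (1.0 - peak_frac) * flat)
    total = sum(resp)
    return [r / total for r in resp]

channels = [i * 2.0 for i in range(400)]   # 0-798 keV in 2 keV bins
r = drf(channels, 662.0)                   # response to a 662 keV gamma ray
```

A real DRF code replaces both ingredients with transported physics (Compton continuum, escape peaks, non-linear light yield), but the output has the same form: a normalised pulse-height distribution per incident energy.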
Energy Technology Data Exchange (ETDEWEB)
Cevallos R, L. E.; Guzman G, K. A.; Gallego, E.; Garcia F, G. [Universidad Politecnica de Madrid, Escuela Tecnica Superior de Ingenieros Industriales, Departamento de Ingenieria Energetica, C. Jose Gutierrez Abascal 2, 28006 Madrid (Spain); Vega C, H. R., E-mail: lenin_cevallos@hotmail.com [Universidad Autonoma de Zacatecas, Unidad Academica de Estudios Nucleares, Cipres No. 10, Fracc. La Penuela, 98068 Zacatecas, Zac. (Mexico)
2017-10-15
The detection of hidden explosive material is very important for national security. Using Monte Carlo methods, with the code MCNP6, several proposed configurations of a detection system based on a Deuterium-Deuterium (D-D) generator, in conjunction with NaI (Tl) scintillation detectors, have been evaluated to intercept hidden explosives. The response of the system to various explosive samples such as Rdx and ammonium nitrate is analyzed, as these are the main components of home-made and military explosives. The D-D generator produces fast neutrons of 2.5 MeV with a maximum yield of 10¹⁰ n/s (Dd-110) and is surrounded with high density polyethylene in order to thermalize the fast neutrons so that they interact with the inspected sample, giving rise to the emission of gamma rays that form a spectrum characteristic of its constituent elements; in this way the chemical composition can be determined and the type of substance identified. The necessary shielding, with thicknesses of lead and borated polyethylene, is evaluated to estimate the admissible operating dose, in order to place the system at a point of the Neutron Measurements Laboratory of the Polytechnic University of Madrid where the shielding is optimal. The results show that its functionality is promising in the field of national security for explosives inspection. (Author)
An electricity generation planning model incorporating demand response
International Nuclear Information System (INIS)
Choi, Dong Gu; Thomas, Valerie M.
2012-01-01
Energy policies that aim to reduce carbon emissions and change the mix of electricity generation sources, such as carbon cap-and-trade systems and renewable electricity standards, can affect not only the source of electricity generation, but also the price of electricity and, consequently, demand. We develop an optimization model to determine the lowest cost investment and operation plan for the generating capacity of an electric power system. The model incorporates demand response to price change. In a case study for a U.S. state, we show the price, demand, and generation mix implications of a renewable electricity standard, and of a carbon cap-and-trade policy with and without initial free allocation of carbon allowances. This study shows that both the demand moderating effects and the generation mix changing effects of the policies can be the sources of carbon emissions reductions, and also shows that the share of the sources could differ with different policy designs. The case study provides different results when demand elasticity is excluded, underscoring the importance of incorporating demand response in the evaluation of electricity generation policies. - Highlights: ► We develop an electric power system optimization model including demand elasticity. ► Both renewable electricity and carbon cap-and-trade policies can moderate demand. ► Both policies affect the generation mix, price, and demand for electricity. ► Moderated demand can be a significant source of carbon emission reduction. ► For cap-and-trade policies, initial free allowances change outcomes significantly.
TIMOC-ESP, Time-Dependent Response Function by Monte-Carlo with Interface to Program TIMOC-72
International Nuclear Information System (INIS)
Jaarsma, R.; Perlando, J.M.; Rief, H.
1981-01-01
1 - Description of problem or function: TIMOC-ESP is an 'Event Scanning Program' to analyse the events (collision or boundary-crossing parameters) of Monte Carlo particle transport problems. It is a modular program and belongs to the TIMOC code system. Whilst TIMOC-72 deals with stationary problems, time dependence is dealt with in ESP. TIMOC-ESP is primarily designed to calculate time-dependent response functions such as energy-dependent fluxes and currents at interfaces. 2 - Method of solution: The output of TIMOC-72 is transferred to TIMOC-ESP using a data set which acts as an interface between the two programs. Time-dependent transport events are sampled at each crossing of any specified boundary in TIMOC. TIMOC-72 provides the parameters for ESP, which are: - time of the event; - neutron weight; - cosine of the angle between the flight direction and the normal to the surface; - the indices of both regions; - the history number. Fundamentally, three time options are permitted by ESP, which give the current, the angular flux and the time-integrated flux functions between two specified regions. An eventual extension to other quantities is simple and straightforward: ESP will accept input data for other options such as the calculation of the point flux, the collision density and the flux derived from this estimator, but the coding required for these calculations had yet to be implemented (1977). 3 - Restrictions on the complexity of the problem: The number of parameters must be between 5 and 50. The number of time intervals is at most 50.
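The event-scanning pass can be sketched as follows. The event tuple mirrors the parameter list above (time, weight, crossing cosine), but the binning logic and the angular-flux estimator w/|μ| shown here are a generic illustration of boundary-crossing tallies, not TIMOC-ESP's actual interface.

```python
import bisect

# Generic "event scanning" over Monte Carlo boundary-crossing events:
# current J = sum of weights; angular-flux estimator phi = sum of w/|mu|,
# both accumulated into user-specified time bins.

def scan_events(events, t_edges):
    n = len(t_edges) - 1
    current = [0.0] * n
    flux = [0.0] * n
    for t, w, mu in events:
        i = bisect.bisect_right(t_edges, t) - 1   # locate the time bin
        if 0 <= i < n and abs(mu) > 1e-6:         # guard near-grazing crossings
            current[i] += w
            flux[i] += w / abs(mu)
    return current, flux

# Three hypothetical crossing events (time, weight, cosine) in two time bins:
events = [(0.5, 1.0, 0.8), (1.5, 0.5, 0.5), (1.7, 0.25, -1.0)]
current, flux = scan_events(events, [0.0, 1.0, 2.0])
```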
Robust network topologies for generating switch-like cellular responses.
Directory of Open Access Journals (Sweden)
Najaf A Shah
2011-06-01
Full Text Available Signaling networks that convert graded stimuli into binary, all-or-none cellular responses are critical in processes ranging from cell-cycle control to lineage commitment. To exhaustively enumerate topologies that exhibit this switch-like behavior, we simulated all possible two- and three-component networks on random parameter sets, and assessed the resulting response profiles for both steepness (ultrasensitivity) and extent of memory (bistability). Simulations were used to study purely enzymatic networks, purely transcriptional networks, and hybrid enzymatic/transcriptional networks, and the topologies in each class were rank-ordered by parametric robustness (i.e., the percentage of applied parameter sets exhibiting ultrasensitivity or bistability). Results reveal that the distribution of network robustness is highly skewed, with the most robust topologies clustering into a small number of motifs. Hybrid networks are the most robust in generating ultrasensitivity (up to 28%) and bistability (up to 18%); strikingly, a purely transcriptional framework is the most fragile in generating either ultrasensitive (up to 3%) or bistable (up to 1%) responses. The disparity in robustness among the network classes is due in part to zero-order ultrasensitivity, an enzyme-specific phenomenon, which repeatedly emerges as a particularly robust mechanism for generating nonlinearity and can act as a building block for switch-like responses. We also highlight experimentally studied examples of topologies enabling switching behavior, in both native and synthetic systems, that rank highly in our simulations. This unbiased approach for identifying topologies capable of a given response may be useful in discovering new natural motifs and in designing robust synthetic gene networks.
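The zero-order ultrasensitivity mechanism singled out above can be illustrated with the classical Goldbeter-Koshland push-pull cycle. The steepness metric is the apparent Hill coefficient nH = ln 81 / ln(EC90/EC10), which equals 1 for a graded Michaelis-Menten response and grows large when both converting enzymes are saturated. The parameter values below (J = K = 0.01) are illustrative, not taken from the study.

```python
import math

def goldbeter_koshland(u, J=0.01, K=0.01):
    """Fraction of substrate modified vs. kinase/phosphatase activity ratio u
    (zero-order regime when the saturation constants J, K << 1)."""
    B = 1.0 - u + J + K * u
    return 2.0 * u * K / (B + math.sqrt(B * B - 4.0 * (1.0 - u) * u * K))

def ec(f, level, lo=0.0, hi=10.0):
    """Bisection for the stimulus giving fractional response `level` (f increasing)."""
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if f(mid) < level:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def hill_coefficient(f):
    return math.log(81.0) / math.log(ec(f, 0.9) / ec(f, 0.1))

n_mm = hill_coefficient(lambda u: u / (1.0 + u))   # graded Michaelis-Menten response
n_gk = hill_coefficient(goldbeter_koshland)        # zero-order ultrasensitive response
```

The enzymatic cycle acts as the "building block for switch-like responses" the abstract describes: its apparent Hill coefficient far exceeds the graded response's value of 1.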
Generation of equipment response spectrum considering equipment-structure interaction
International Nuclear Information System (INIS)
Lee, Sang Hoon; Yoo, Kwang Hoon
2005-01-01
Floor response spectra for the dynamic response of subsystems such as equipment or piping in nuclear power plants are usually generated without considering the dynamic interaction between the main structure and the subsystem. Since the dynamic structural response generally has a narrow-banded shape, the resulting floor response spectra developed for various locations in the structure usually have high spectral peak amplitudes in the narrow frequency bands corresponding to the natural frequencies of the structural system. The application of such spectra to the design of subsystems often leads to excessive design conservatism, especially when the equipment and structure are at a resonance condition. Thus, in order to provide a rational and realistic design input for the dynamic analysis and design of equipment, dynamic equipment-structure interaction (ESI) should be considered in developing the equipment response spectrum; this is particularly important for equipment at the resonance condition. Many analytical methods have been proposed in the past for developing equipment response spectra considering ESI. However, most of these methods have not been adopted in practical applications because of either their complexity or their lack of rigor. In some of these methods, the mass ratio between the equipment and the structure was used as an important parameter for obtaining equipment response spectra. Similarly, Tseng has proposed an analytical method for developing equipment response spectra using the mass ratio in the frequency domain. This method is analytically rigorous and can be easily validated. It is based on the dynamic substructuring method as applied to dynamic soil-structure interaction (SSI) analysis, and can relatively easily be implemented for practical applications without changing the current dynamic analysis and design practice for subsystems. The equipment response spectra derived in this study are also based on Tseng's proposed method
International Nuclear Information System (INIS)
Parsons, David; Robar, James L.; Sawkey, Daren
2014-01-01
Purpose: The focus of this work was the demonstration and validation of VirtuaLinac with clinical photon beams and the investigation of the implementation of low-Z targets in a TrueBeam linear accelerator (Linac) using Monte Carlo modeling. Methods: VirtuaLinac, a cloud-based web application utilizing the Geant4 Monte Carlo code, was used to model the Linac treatment head components. Particles were propagated through the lower portion of the treatment head using BEAMnrc. Dose distributions and spectral distributions were calculated using DOSXYZnrc and BEAMdp, respectively. For validation, 6 MV flattened and flattening filter free (FFF) photon beams were generated and compared to measurement for square fields, 10 and 40 cm wide, and at d_max for diagonal profiles. Two low-Z targets were investigated: a 2.35 MeV carbon target and the proposed 2.50 MeV commercial imaging target for the TrueBeam platform. A 2.35 MeV carbon target was also simulated in a 2100EX Clinac using BEAMnrc. Contrast simulations were made by scoring the dose in the phosphor layer of an IDU20 aSi detector after propagation through a 4 or 20 cm thick phantom composed of water and ICRP bone. Results: Measured and modeled depth dose curves for 6 MV flattened and FFF beams agree within 1% for 98.3% of points at depths greater than 0.85 cm. Ninety-three percent or more of the points analyzed for the diagonal profiles had a gamma value less than one for criteria of 1.5 mm and 1.5%. The two low-Z target photon spectra produced in TrueBeam are harder than that from the carbon target in the Clinac. The percent dose at 10 cm depth is greater by 3.6% and 8.9%; the fraction of photons in the diagnostic energy range (25–150 keV) is lower by 10% and 28%; and contrasts are lower by factors of 1.1 and 1.4 (4 cm thick phantom) and 1.03 and 1.4 (20 cm thick phantom), for the TrueBeam 2.35 MV/carbon and commercial imaging beams, respectively. Conclusions: VirtuaLinac is a promising new tool for Monte Carlo modeling of novel
Energy Technology Data Exchange (ETDEWEB)
Velo, A.F.; Alvarez, A.G.; Carvalho, D.V.S.; Fernandez, V.; Somessari, S.; Sprenger, F.F.; Hamada, M.M.; Mesquita, C.H., E-mail: chmesqui@usp.br [Instituto de Pesquisas Energéticas e Nucleares (IPEN/CNEN-SP), São Paulo, SP (Brazil)
2017-07-01
This paper describes the Monte Carlo simulation, using MCNP4C, of a multichannel third-generation tomography system containing two radioactive sources, {sup 192}Ir (316.5 - 468 keV) and {sup 137}Cs (662 keV), and a set of fifteen NaI(Tl) detectors, each 1 inch in diameter and 2 inches thick, in fan-beam geometry, positioned diametrically opposite the sources. Each detector moves in 10 steps of 0.24 deg, totaling 150 virtual detectors per projection, and the system then rotates 2 degrees. The Monte Carlo simulation was performed to evaluate the viability of this configuration. For this, a multiphase phantom containing polymethyl methacrylate (PMMA, ρ ≅ 1.19 g/cm{sup 3}), iron (ρ ≅ 7.874 g/cm{sup 3}), aluminum (ρ ≅ 2.6989 g/cm{sup 3}) and air (ρ ≅ 1.20479E-03 g/cm{sup 3}) was simulated. The number of histories simulated was 1.1E+09 per projection, and the tally used was F8, which gives the pulse height in each detector. The data obtained from the simulation were used to reconstruct the simulated phantom using the iterative statistical Maximum Likelihood Expectation Maximization (ML-EM) algorithm. Each detector provides a gamma spectrum of the sources, and a pulse-height analyzer (PHA) window of 10% around the 316.5 keV and 662 keV photopeaks was applied. This technique provides two reconstructed images of the simulated phantom. The reconstructed images provided high spatial resolution, and the temporal resolution (the time for one complete revolution) is estimated to be about 2.5 hours. (author)
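The ML-EM reconstruction step named in the abstract can be written down compactly. The sketch below runs one multiplicative EM update loop on a toy 2-pixel, 3-ray system (A[i][j] is the weight of ray i in pixel j); the geometry is made up for illustration, and a real fan-beam system matrix would be far larger.

```python
# Minimal ML-EM (maximum likelihood expectation maximization) reconstruction.

def mlem(A, counts, n_iter=200):
    n_pix = len(A[0])
    x = [1.0] * n_pix                                      # flat initial image
    sens = [sum(row[j] for row in A) for j in range(n_pix)]  # sensitivity image
    for _ in range(n_iter):
        proj = [sum(a * xj for a, xj in zip(row, x)) for row in A]  # forward project
        ratio = [c / max(p, 1e-12) for c, p in zip(counts, proj)]   # measured/expected
        x = [x[j] * sum(A[i][j] * ratio[i] for i in range(len(A))) / sens[j]
             for j in range(n_pix)]                        # multiplicative EM update
    return x

A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]                   # toy 3-ray system matrix
true_image = [2.0, 5.0]
counts = [sum(a * t for a, t in zip(row, true_image)) for row in A]  # noise-free data
x_hat = mlem(A, counts)
```

With noise-free, consistent data the iteration converges to the true image; with Poisson-noisy counts it converges to the maximum-likelihood image instead.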
International Nuclear Information System (INIS)
Bhati, S.; Sharma, R.C.; Somasundaram, S.
1979-01-01
A computer program has been developed to calculate the response of a 20 cm dia phoswich (3 mm thick NaI(Tl) primary detector) to a source of low-energy photons distributed in the lungs of a heterogeneous (MIRD) phantom approximating ICRP Reference Man. Monte Carlo techniques are employed to generate photons and trace their fates in the thorax of the MIRD phantom. The acceptable points of photon interaction in skeletal, lung and ordinary tissue are determined by the Coleman technique. The photon interactions considered are photoelectric and Compton. The calculations yield the exit photon energy spectrum, which is smeared with an experimentally determined Gaussian resolution function to convert it into the pulse-height spectrum observable with the detector. The computer program has provisions for incorporating the effects of iodine K x-ray escape as well as the variable intrinsic efficiency of the detector. Computed calibration factors (cpm/μCi, integrated over the full spectrum) are given for the phoswich located centrally over and in contact with the chest, for several low-energy photon sources distributed uniformly or as points in the lungs of the phantom. The radionuclides considered are 238 Pu, 239 Pu, 241 Am, 244 Cm, 246 Cm, 250 Cf and 103 Pd. Examples of generated exit photon spectra and the corresponding pulse-height spectra are included. The spectral changes observed in these generated spectra, which are also discerned in experimental pulse-height spectra, are discussed in detail. Thus, photopeak energies of 18.4 and 55.5 keV have been observed for U L x-rays and 241 Am gamma-rays, respectively. It is shown that considering the total flux of escaping photons (i.e. both uncollided photons and those escaping after collision, instead of the uncollided alone) improves the calibration factors by about 50% for 239 Pu, 70% for 103 Pd and as much as 340% for 241 Am gamma-rays. In addition, calibration factors are calculated for point 239 Pu sources located at different sites in the phantom lungs
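The smearing step described above, convolving a discrete exit-photon spectrum with a Gaussian resolution function to obtain a pulse-height spectrum, can be sketched directly. The resolution model here (FWHM scaling as the square root of energy, 20% FWHM at 100 keV) is an assumption for illustration, not the paper's measured function.

```python
import math

def smear(lines, fwhm_at_100kev=20.0, bin_kev=1.0, n_bins=200):
    """lines: list of (energy_keV, counts) photopeaks; returns a binned
    pulse-height spectrum after Gaussian resolution broadening."""
    pulse = [0.0] * n_bins
    for e, c in lines:
        fwhm = fwhm_at_100kev * math.sqrt(e / 100.0)   # assumed sqrt(E) scaling
        sigma = fwhm / 2.3548                          # FWHM -> standard deviation
        norm = 1.0 / (sigma * math.sqrt(2.0 * math.pi))
        for i in range(n_bins):
            eb = (i + 0.5) * bin_kev                   # bin-centre energy
            pulse[i] += c * norm * math.exp(-0.5 * ((eb - e) / sigma) ** 2) * bin_kev
    return pulse

# A single 60 keV line (an Am-241-like photon) becomes a broadened photopeak:
pulse = smear([(60.0, 1000.0)])
peak_bin = max(range(len(pulse)), key=lambda i: pulse[i])
```

The broadening conserves the line's total counts while spreading them over neighbouring pulse-height bins, which is what turns overlapping low-energy lines into the merged photopeaks discussed in the abstract.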
International Nuclear Information System (INIS)
Tashima, Hideaki; Yamaya, Taiga; Hirano, Yoshiyuki; Yoshida, Eiji; Kinouch, Shoko; Watanabe, Mitsuo; Tanaka, Eiichi
2013-01-01
At the National Institute of Radiological Sciences, we are developing OpenPET, an open-type positron emission tomography (PET) geometry with a physically open space, which allows easy access to the patient during PET studies. Our first-generation OpenPET system, dual-ring OpenPET, which consisted of two detector rings, could provide an extended axial field of view (FOV) including the open space. However, for applications such as in-beam PET to monitor the dose distribution in situ during particle therapy, higher sensitivity concentrated on the irradiation field is required rather than a wide FOV. In this report, we propose a second-generation OpenPET geometry, single-ring OpenPET, which can efficiently improve sensitivity while providing the required open space. When the proposed geometry was realized with block detectors, position-dependent degradation of the spatial resolution was expected because it was necessary to arrange the detector blocks in ellipsoidal rings stacked and shifted relative to one another. However, we found by Monte Carlo simulation that the use of depth-of-interaction (DOI) detectors made it feasible to achieve uniform spatial resolution in the FOV. (author)
Public response to the Diablo Canyon Nuclear Generating Station
International Nuclear Information System (INIS)
Pijawka, K.D.
1982-01-01
The authors examine the nature of the public response to the Diablo Canyon Nuclear Generating Station located in San Luis Obispo, California, from the early 1960s to the present. Four distinct phases of public intervention were discerned, based on change in both plant-related issues and in the nature of the antinuclear constituencies in the region. The level of public concern varied both geographically and temporally and is related to the area's social structure, environmental predispositions, and distribution of plant-related economic benefits. External events, such as the prolonged debate over the risk assessment of the seismic hazard and the Three Mile Island accident were found to be important factors in explaining variation in public concern and political response
Amir, Sahar Z.
2013-05-01
We introduce an efficient thermodynamically consistent technique to extrapolate and interpolate normalized Canonical NVT ensemble averages such as pressure and energy for Lennard-Jones (L-J) fluids. Preliminary results show promising applicability in oil and gas modeling, where accurate determination of thermodynamic properties in reservoirs is challenging. The thermodynamic interpolation and thermodynamic extrapolation schemes predict ensemble averages at different thermodynamic conditions from expensively simulated data points. The methods reweight and reconstruct previously generated database values of Markov chains at neighboring temperature and density conditions. To investigate the efficiency of these methods, two databases corresponding to different combinations of normalized density and temperature are generated. One contains 175 Markov chains with 10,000,000 MC cycles each and the other contains 3000 Markov chains with 61,000,000 MC cycles each. For such massive database creation, two algorithms to parallelize the computations have been investigated. The accuracy of the thermodynamic extrapolation scheme is investigated with respect to classical interpolation and extrapolation. Finally, thermodynamic interpolation benefiting from four neighboring Markov chains is implemented and compared with the previous schemes. The thermodynamic interpolation scheme using knowledge from the four neighboring points proves to be more accurate than thermodynamic extrapolation from the closest point only, while both thermodynamic extrapolation and thermodynamic interpolation are more accurate than classical interpolation and extrapolation. The investigated extrapolation scheme has great potential in oil and gas reservoir modeling. That is, such a scheme has the potential to speed up the MCMC thermodynamic computation to be comparable with conventional Equation of State approaches in efficiency. In particular, this makes it applicable to large-scale optimization of L
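The reweighting idea behind such schemes can be sketched in a few lines: canonical samples of energy E drawn at inverse temperature beta0 are reweighted to a neighbouring beta via w_i = exp(-(beta - beta0) E_i), so that an average at beta is estimated as sum(A_i w_i)/sum(w_i). This is generic single-histogram reweighting, not the paper's specific scheme, and the two-level toy "fluid" below stands in for a real L-J database so the estimate can be checked against an exact value.

```python
import math
import random

def reweight(energies, values, beta0, beta):
    """Estimate <A> at inverse temperature beta from samples drawn at beta0."""
    wsum = vsum = 0.0
    for e, a in zip(energies, values):
        w = math.exp(-(beta - beta0) * e)   # Boltzmann reweighting factor
        wsum += w
        vsum += a * w
    return vsum / wsum

rng = random.Random(42)
beta0, beta = 1.0, 1.2
# Toy system: two levels with E = 0 or 1, sampled from the Boltzmann law at beta0.
p_excited = math.exp(-beta0) / (1.0 + math.exp(-beta0))
energies = [1.0 if rng.random() < p_excited else 0.0 for _ in range(200000)]

mean_e = reweight(energies, energies, beta0, beta)   # <E> extrapolated to beta
exact = math.exp(-beta) / (1.0 + math.exp(-beta))    # exact two-level answer
```

The quality of the extrapolation degrades as |beta - beta0| grows, which is why the paper's interpolation from several neighbouring chains outperforms extrapolation from the closest point alone.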
On the use of stochastic approximation Monte Carlo for Monte Carlo integration
Liang, Faming
2009-01-01
The stochastic approximation Monte Carlo (SAMC) algorithm has recently been proposed as a dynamic optimization algorithm in the literature. In this paper, we show in theory that the samples generated by SAMC can be used for Monte Carlo integration
IRIS Responsiveness to Generation IV Road-map Goals
International Nuclear Information System (INIS)
Carelli, M.D.; Paramonov, D.V.; Petrovic, B.
2002-01-01
workers' exposure. IRIS indeed has superb safety characteristics, which make it an excellent candidate to fulfill the Generation IV goal of no off-site emergency response. Finally, the economic goal includes various factors contributing to the cost of electricity and the capital at risk. Significant uncertainties exist in the capital costs of all Generation IV concepts, and no consensus has been reached on how well the advantages of modularity, simplicity and standardized, multiple-unit fabrication compare with economies of scale. Still, IRIS is expected to have attractive economics because of its modular, simplified design, which can be constructed in a three-year period, and its small-to-medium size, which significantly reduces the financial risk. In summary, IRIS responds positively to all Generation IV goals and is an excellent candidate for further development by its large international consortium. (authors)
International Nuclear Information System (INIS)
Brown, F.B.
1981-01-01
Examination of the global algorithms and local kernels of conventional general-purpose Monte Carlo codes shows that multigroup Monte Carlo methods have sufficient structure to permit efficient vectorization. A structured multigroup Monte Carlo algorithm for vector computers is developed in which many particle events are treated at once on a cell-by-cell basis. Vectorization of kernels for tracking and variance reduction is described, and a new method for discrete sampling is developed to facilitate the vectorization of collision analysis. To demonstrate the potential of the new method, a vectorized Monte Carlo code for multigroup radiation transport analysis was developed. This code incorporates many features of conventional general-purpose production codes, including general geometry, splitting and Russian roulette, survival biasing, variance estimation via batching, a number of cutoffs, and generalized tallies of collision, tracklength, and surface-crossing estimators with response functions. Predictions of vectorized performance characteristics for the CYBER-205 were made using emulated coding and a dynamic model of vector instruction timing. Computation rates were examined for a variety of test problems to determine sensitivities to batch size and vector lengths. Significant speedups are predicted for even a few hundred particles per batch, and asymptotic speedups of about 40 over equivalent Amdahl 470V/8 scalar codes are predicted for a few thousand particles per batch. The principal conclusion is that vectorization of a general-purpose multigroup Monte Carlo code is well worth the significant effort required for stylized coding and major algorithmic changes
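The event-based organisation that makes such a code vectorizable can be sketched in scalar form: instead of following one history to completion, each event type (flight-distance sampling, boundary test, collision weighting) is applied to the whole surviving batch in turn, so each inner loop maps onto a vector operation. The model below is a straight-ahead (no angular scattering) monoenergetic toy with made-up cross section, slab width, and survival-biasing parameters, not the paper's multigroup code.

```python
import math
import random

def transport_batch(n_particles, sigma_t=1.0, absorb_frac=0.3, slab=5.0, seed=1):
    rng = random.Random(seed)
    x = [0.0] * n_particles          # positions along the slab axis
    w = [1.0] * n_particles          # statistical weights
    alive = list(range(n_particles))
    leakage = 0.0
    while alive:
        for i in alive:              # event 1 (batch-wide): sample flight distance
            x[i] += -math.log(rng.random()) / sigma_t
        survivors = []
        for i in alive:              # event 2 (batch-wide): leakage test + biasing
            if x[i] >= slab:
                leakage += w[i]      # tally leaked weight
            else:
                w[i] *= 1.0 - absorb_frac   # survival biasing: absorb weight, keep particle
                if w[i] > 0.05:             # weight cutoff ends the history
                    survivors.append(i)
        alive = survivors
    return leakage / n_particles

leak_fraction = transport_batch(2000)
```

Because every pass touches only one event type, the per-event loops can be replaced by array operations of length `len(alive)`, which is exactly the batch-size dependence of the speedups reported above.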
International Nuclear Information System (INIS)
Dupre, Corinne.
1982-10-01
The Monte Carlo method was applied to simulate the transport of a photon beam in an organic liquid scintillation detector. The interactions of secondary gamma rays and electrons with the detector and its peripheral components, such as the Pyrex glass container, are included. The pulse-height spectra and the detector efficiency are compared with calculated and measured results. The calculation and programming methods are presented, together with results for cobalt and cesium sources [fr
International Nuclear Information System (INIS)
Miller, David W.
2014-01-01
Japan's Fukushima Daiichi severe nuclear accident in March 2011 has resulted in a reassessment of nuclear emergency response and preparedness in Canada. On May 26, 27 and 28, 2014, Ontario Power Generation (OPG) conducted the first North American full-scale nuclear emergency response exercise designed to include regional, provincial and federal bodies as well as the utility. This paper describes the radiological aspects of the OPG Exercise Unified Response (ExUR), with emphasis on the deployment of new Fukushima equipment on the Darlington site, the management of emergency workers deployed in the vicinity of Darlington to collect environmental samples and radiation measurements, the performance of dose calculations, the communication of dose projections and protective actions to local, provincial and federal agencies, and the operation of vehicle, truck and personnel monitoring and decontamination facilities. The ExUR involved more than 1000 personnel from local, provincial and federal bodies. In addition, 200 OPG employees participated in the off-site emergency response duties. The objective of the ExUR was to test and enhance the preparedness of the utility (OPG), government and non-government agencies, and communities to respond to a nuclear emergency. The types of radiological instrumentation and mobile facilities employed are highlighted in the presentation. The establishment of temporary emergency rooms with 8 beds and treatment facilities to manage potentially contaminated injuries from the nuclear emergency is also described. (author)
Transfer of safety responsibilities to future generations: regulatory tools
International Nuclear Information System (INIS)
Kotra, Janet P.
2008-01-01
In a forward-looking local development plan, Nye County defends a series of principles such as safety, equity, and societal acceptability of responsibility (safety being foremost). The Nye County community clearly advocates permanent oversight of the facilities. To respond to community requirements, regulators can establish requirements and guidance to ensure that the safety obligations that can reasonably be discharged are in fact carried out, and that the remaining obligations are transferred as responsibly as possible, so that subsequent generations have the maximum flexibility to discharge their responsibility. There are transferred burdens of cost, risk and effort, and these need to be at least partially compensated for by ensuring a subsequent transfer of information and resources and continuity of education, skills and research. The US regulatory requirements for disposal in a geological repository set out obligations in terms of land ownership and control, records maintenance, performance confirmation, post-closure monitoring, monuments and markers, archives and records preservation, and post-closure oversight. For the future, Nye County proposes a coordinated involvement of the county in the planning, development, operation and long-term monitoring of the repository. The county wants to encourage the development of a live-work community for repository workers, so that they will be engaged in the local community as well as working at the facility
Energy Technology Data Exchange (ETDEWEB)
Cho, Sung Koo; Choi, Sang Hyoun; Kim, Chan Hyeong [Hanyang Univ., Seoul (Korea, Republic of)
2006-12-15
In Korea, a real-time effective dose measurement system is under development. The system uses 32 high-sensitivity MOSFET dosimeters to measure radiation doses at various organ locations in an anthropomorphic physical phantom. The MOSFET dosimeters are, however, mainly made of silicon and show some degree of energy and angular dependence, especially for low-energy photons. This study determines the correction factors needed to correct for these dependences of the MOSFET dosimeters, for accurate measurement of radiation doses at organ locations in the phantom. First, the dose correction factors of the MOSFET dosimeters were determined by Monte Carlo simulation for the energy spectrum in the steam generator channel of the Kori Nuclear Power Plant Unit no. 1. The results were then compared with the dose correction factors for 0.662 MeV and 1.25 MeV mono-energetic photons. The differences in the dose correction factors were found to be negligible ({<=}1.5%), which shows that the dose correction factors determined from 0.662 MeV and 1.25 MeV photons can be used in a steam generator channel head of a nuclear power plant. The measured effective dose was generally found to decrease by {approx}7% when the dose correction factors were applied.
Xu, Z.; Mace, G. G.; Posselt, D. J.
2017-12-01
As we begin to contemplate the next generation of atmospheric observing systems, it will be critically important that we are able to make informed decisions regarding the trade space between scientific capability and the need to keep complexity and cost within definable limits. To explore this trade space as it pertains to understanding key cloud and precipitation processes, we are developing a Markov Chain Monte Carlo (MCMC) algorithm suite that allows us to arbitrarily define the specifications of candidate observing systems and then explore how the uncertainties in key retrieved geophysical parameters respond to that observing system. MCMC algorithms produce a more complete posterior solution space and allow for an objective examination of the information contained in measurements. In our initial implementation, MCMC experiments are performed to retrieve vertical profiles of cloud and precipitation properties from a spectrum of active and passive measurements collected by aircraft during the ACE Radiation Definition Experiments (RADEX). Focusing on shallow cumulus clouds observed during the Integrated Precipitation and Hydrology EXperiment (IPHEX), the observing systems considered in this study include W- and Ka-band radar reflectivity, path-integrated attenuation at those frequencies, 31 and 94 GHz brightness temperatures, and visible and near-infrared reflectance. By varying the sensitivity and uncertainty of these measurements, we quantify the capacity of various combinations of observations to characterize the physical properties of clouds and precipitation.
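The core of such an MCMC retrieval is a Metropolis sampler over the state given noisy measurements. In the minimal sketch below, a toy linear forward model F(x) = 2x stands in for the radar/radiometer forward models; the forward model, the flat prior, and the noise level are all assumptions, and the point is that the posterior spread directly quantifies how well the "measurement" constrains the state.

```python
import math
import random

def log_post(state, y, noise_sd):
    """Log-posterior for one noisy measurement y of the toy forward model F(x) = 2x,
    with a flat prior."""
    r = (y - 2.0 * state) / noise_sd
    return -0.5 * r * r

def metropolis(y, noise_sd, n=20000, step=0.5, seed=7):
    rng = random.Random(seed)
    state = 1.0
    lp = log_post(state, y, noise_sd)
    samples = []
    for _ in range(n):
        prop = state + rng.gauss(0.0, step)            # random-walk proposal
        lp_prop = log_post(prop, y, noise_sd)
        if rng.random() < math.exp(min(0.0, lp_prop - lp)):   # Metropolis accept
            state, lp = prop, lp_prop
        samples.append(state)
    return samples[n // 2:]                            # discard first half as burn-in

samples = metropolis(y=6.0, noise_sd=0.4)              # posterior centres on x = 3
post_mean = sum(samples) / len(samples)
post_sd = (sum((s - post_mean) ** 2 for s in samples) / len(samples)) ** 0.5
```

Repeating the experiment with a larger `noise_sd` (a less sensitive instrument) widens `post_sd`, which is exactly the observing-system trade the abstract describes exploring.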
Optimal Demand Response of Smart Home with PV Generators
Directory of Open Access Journals (Sweden)
Chao-Rong Chen
2014-01-01
Full Text Available Demand response (DR) is used mainly to help schedule a customer's power utilization based on the electricity price announced by the power distribution company, so that both demand and supply can benefit optimally. This work proposes a users' load model and applies the interior point method to the optimal scheduling of elastic power utilization so as to minimize the cost of power. The interior point method has the advantages of rapid convergence and robustness. Customers can not only use PV generators and battery sets as backup power sources, but also benefit from green energy. As the results reveal, the use of elastic power utilization time intervals enables customers to pay less for power.
Directory of Open Access Journals (Sweden)
2008-05-01
Full Text Available Interview (in Spanish). Presentation: Carlos Romero, a political scientist, is a professor and researcher at the Instituto de Estudios Políticos of the Facultad de Ciencias Jurídicas y Políticas of the Universidad Central de Venezuela, where he has served as doctoral program coordinator, deputy director, and director of the Centro de Estudios de Postgrado. He has published eight books on political analysis and international relations, one of the most recent being Jugando con el globo. La política exter...
Lectures on Monte Carlo methods
Madras, Neal
2001-01-01
Monte Carlo methods form an experimental branch of mathematics that employs simulations driven by random number generators. These methods are often used when others fail, since they are much less sensitive to the "curse of dimensionality", which plagues deterministic methods in problems with a large number of variables. Monte Carlo methods are used in many fields: mathematics, statistics, physics, chemistry, finance, computer science, and biology, for instance. This book is an introduction to Monte Carlo methods for anyone who would like to use these methods to study various kinds of mathemati
International Nuclear Information System (INIS)
Dubi, A.; Gerstl, S.A.W.
1979-05-01
The contributon Monte Carlo method is based on a new recipe for calculating target responses by means of a volume integral of the contributon current in a region between the source and the detector. A comprehensive description of the method, its implementation in the general-purpose MCNP code, and results of the method for realistic nonhomogeneous, energy-dependent problems are presented. 23 figures, 10 tables
Uusijärvi, Helena; Chouin, Nicolas; Bernhardt, Peter; Ferrer, Ludovic; Bardiès, Manuel; Forssell-Aronsson, Eva
2009-08-01
Point kernels describe the energy deposited at a certain distance from an isotropic point source and are useful for nuclear medicine dosimetry. They can be used for absorbed-dose calculations for sources of various shapes and are also a useful tool when comparing different Monte Carlo (MC) codes. The aim of this study was to compare point kernels calculated with the mixed MC code PENELOPE (v. 2006) against point kernels calculated with the condensed-history MC codes ETRAN, GEANT4 (v. 8.2), and MCNPX (v. 2.5.0). Point kernels for electrons with initial energies of 10, 100, and 500 keV and 1 MeV were simulated with PENELOPE. Spherical shells were placed around an isotropic point source at distances from 0 to 1.2 times the continuous-slowing-down-approximation range (R(CSDA)). Detailed (event-by-event) simulations were performed for electrons with initial energies below 1 MeV. For 1-MeV electrons, multiple scattering was included for energy losses below 10 keV, while energy losses above 10 keV were simulated in a detailed way. The point kernels generated were used to calculate cellular S-values for monoenergetic electron sources. The point kernels obtained with PENELOPE and ETRAN were also used to calculate cellular S-values for the high-energy beta-emitter 90Y, the medium-energy beta-emitter 177Lu, and the low-energy electron emitter 103mRh. These S-values were also compared with the Medical Internal Radiation Dose (MIRD) cellular S-values. The greatest differences between the point kernels (mean difference calculated over distances) for monoenergetic electrons were 1.4%, 2.5%, and 6.9% for ETRAN, GEANT4, and MCNPX, respectively, compared with PENELOPE, omitting the S-values for activity distributed on the cell surface for 10-keV electrons. The largest difference between the cellular S-values for the radionuclides, between PENELOPE and ETRAN, was seen for 177Lu (1.2%). There were large differences between the MIRD cellular S-values and those obtained from
Generate floor response spectra, Part 2: Response spectra for equipment-structure resonance
International Nuclear Information System (INIS)
Li, Bo; Jiang, Wei; Xie, Wei-Chau; Pandey, Mahesh D.
2015-01-01
Highlights: • The concept of tRS is proposed to deal with tuning of equipment and structures. • Established statistical approaches for estimating tRS corresponding to given GRS. • Derived a new modal combination rule from the theory of random vibration. • Developed efficient and accurate direct method for generating floor response spectra. - Abstract: When generating floor response spectra (FRS) using the direct spectra-to-spectra method developed in the companion paper, probability distribution of t-response spectrum (tRS), which deals with equipment-structure resonance or tuning, corresponding to a specified ground response spectrum (GRS) is required. In this paper, simulation results using a large number of horizontal and vertical ground motions are employed to establish statistical relationships between tRS and GRS. It is observed that the influence of site conditions on horizontal statistical relationships is negligible, whereas the effect of site conditions on vertical statistical relationships cannot be ignored. Considering the influence of site conditions, horizontal statistical relationship suitable for all site conditions and vertical statistical relationships suitable for hard sites and soft sites, respectively, are established. The horizontal and vertical statistical relationships are suitable to estimate tRS for design spectra in USNRC R.G. 1.60 and NUREG/CR-0098, Uniform Hazard Spectra (UHS) in Western North America (WNA), or any GRS falling inside the valid coverage of the statistical relationship. For UHS with significant high frequency spectral accelerations, such as UHS in Central and Eastern North America (CENA), an amplification ratio method is proposed to estimate tRS. Numerical examples demonstrate that the statistical relationships and the amplification ratio method are acceptable to estimate tRS for given GRS and to generate FRS using the direct method in different practical situations.
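The basic ingredient behind any response-spectrum calculation can be sketched as follows (an illustrative toy, not the paper's direct spectra-to-spectra method): for each oscillator frequency, integrate a damped single-degree-of-freedom equation driven by the ground acceleration and record the peak pseudo-acceleration.

```python
import math

def response_spectrum(accel, dt, freqs, zeta=0.05):
    """Pseudo-acceleration response spectrum of a ground-acceleration record:
    for each oscillator frequency f, integrate x'' + 2*zeta*w*x' + w^2*x = -a_g(t)
    with semi-implicit Euler and record the peak of w^2 * |x|."""
    spectrum = []
    for f in freqs:
        w = 2.0 * math.pi * f
        x = v = peak = 0.0
        for ag in accel:
            a = -ag - 2.0 * zeta * w * v - w * w * x  # relative acceleration
            v += a * dt
            x += v * dt
            peak = max(peak, w * w * abs(x))
        spectrum.append(peak)
    return spectrum

# A 2 Hz sinusoidal "ground motion" excites a 2 Hz oscillator (tuning, i.e.
# equipment-structure resonance) far more than oscillators off resonance.
dt = 0.005
accel = [math.sin(2.0 * math.pi * 2.0 * i * dt) for i in range(4000)]
sa = response_spectrum(accel, dt, [0.5, 2.0, 8.0])
```

The resonance peak in `sa` is the situation the tRS concept addresses: the oscillator tuned to the dominant input frequency dominates the spectrum.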
Santarelli, R; Maurizi, M; Conti, G; Ottaviani, F; Paludetti, G; Pettorossi, V E
1995-03-01
In order to investigate the generation of the 40 Hz steady-state response (SSR), auditory potentials evoked by clicks were recorded in 16 healthy subjects under two stimulating conditions. First, repetition rates of 7.9 and 40 Hz were used to obtain individual middle latency responses (MLRs) and 40 Hz-SSRs, respectively. In the second condition, eight click trains were presented at a 40 Hz repetition rate and an inter-train interval of 126 ms. We extracted from the whole train response: (1) the response segment taking place after the last click of the train (last click response, LCR), and (2) a modified LCR (mLCR) obtained by correcting the LCR for the amplitude enhancement due to the overlapping of the responses to the clicks preceding the last within the stimulus train. In comparison to MLRs, the most relevant feature of the evoked activity following the last click of the train (LCRs, mLCRs) was the appearance, in the 50-110 ms latency range, of one (in 11 subjects) or two (in 2 subjects) additional positive-negative deflections having the same periodicity as the MLR waves. The grand average (GA) of the 40 Hz-SSRs was compared with three predictions synthesized by superimposing: (1) the GA of MLRs, (2) the GA of LCRs, and (3) the GA of mLCRs. Both the MLR and mLCR predictions reproduced the amplitude of the recorded signal, while the amplitude of the LCR prediction was almost twice that of the 40 Hz-SSR. With regard to phase, the MLR, LCR and mLCR closely predicted the recorded signal. Our findings confirm the effectiveness of the linear addition mechanism in the generation of the 40 Hz-SSR. However, the responses to individual stimuli within the 40 Hz-SSR differ from MLRs because of additional periodic activity. These results suggest that phenomena related to the resonant frequency of the activated system may play a role in the mechanisms which interact to generate the 40 Hz-SSR.
Monte Carlo simulation of semiconductor detector response to 222Rn and 220Rn environments
International Nuclear Information System (INIS)
Irlinger, J.; Trinkl, S.; Wielunksi, M.; Tschiersch, J.; Rühm, W.
2016-01-01
A new electronic radon/thoron monitor employing semiconductor detectors, based on a passive diffusion chamber design, has recently been developed at the Helmholtz Zentrum München (HMGU). This device allows acquisition of alpha particle energy spectra, in order to distinguish alpha particles originating from radon and radon progeny decays from those originating from thoron and its progeny decays. A Monte Carlo application is described which uses the Geant4 toolkit to simulate these alpha particle spectra. Reasonable agreement between measured and simulated spectra was obtained for both 220Rn and 222Rn in the energy range between 1 and 10 MeV. Measured calibration factors could be reproduced by the simulation, given the uncertainties involved in the measurement and simulation. The simulated alpha particle spectra can now be used to interpret spectra measured in mixed radon/thoron atmospheres. The results agreed well with measurements performed in both radon and thoron gas environments. It is concluded that the developed simulation allows for an accurate prediction of calibration factors and alpha particle energy spectra. - Highlights: • A method was developed to simulate alpha particle spectra from radon/thoron decay. • New monitor features alpha-particle spectroscopy based on silicon detectors. • A method is presented to quantify radon/thoron concentrations in mixed atmospheres. • The calibration factor can be simulated for various environmental parameters.
International Nuclear Information System (INIS)
Mickael, M.; Verghese, K.; Gardner, R.P.
1989-01-01
The specific-purpose neutron lifetime oil well logging simulation code, McPNL, has been rewritten for greater user-friendliness and faster execution. Correlated sampling has been added to the code to enable studies of relative changes in the tool response caused by environmental changes. The absolute responses calculated by the code have been benchmarked against laboratory test pit data. The relative responses from correlated sampling are not directly benchmarked, but they are validated using experimental and theoretical results.
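The correlated-sampling idea mentioned above can be sketched in a few lines (a toy slab-transmission problem of my own construction, not the McPNL tool model): the reference and the perturbed system reuse the same random numbers, so most of the statistical noise cancels in the estimated difference.

```python
import math
import random

def correlated_difference(mu_ref, mu_pert, n, seed=0):
    """Change in transmission through a unit slab (absorption only) when the
    attenuation coefficient changes from mu_ref to mu_pert, estimated with
    correlated sampling: both systems share the same random number per
    history, so the per-history difference is usually exactly zero."""
    rng = random.Random(seed)
    diff = 0.0
    for _ in range(n):
        u = rng.random()
        # distance to first collision is exponential: d = -ln(u) / mu
        t_ref = 1.0 if -math.log(u) / mu_ref > 1.0 else 0.0
        t_pert = 1.0 if -math.log(u) / mu_pert > 1.0 else 0.0
        diff += t_pert - t_ref
    return diff / n

# Exact answer for this toy problem: exp(-1.1) - exp(-1.0), about -0.035.
print(round(correlated_difference(1.0, 1.1, 100000), 3))
```

Two independent runs of the same size would leave a statistical error comparable to the difference itself; the correlated estimator resolves it cleanly because only the few histories falling between the two collision boundaries contribute.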
Directory of Open Access Journals (Sweden)
Charlie Samuya Veric
2001-12-01
Full Text Available The importance of Carlos Bulosan in Filipino and Filipino-American radical history and literature is indisputable. His eminence spans the Pacific, and he is known, diversely, as a radical poet, fictionist, novelist, and labor organizer. Author of the canonical America Is in the Heart, Bulosan is celebrated for chronicling the conditions in America in his time, such as racism and unemployment. In the history of criticism on Bulosan's life and work, however, there is an undeclared general consensus that views Bulosan and his work as coherent, permanent texts of radicalism and anti-imperialism. Central to the existence of such a tradition of critical reception are the generations of critics who, in more ways than one, control the discourse on and of Carlos Bulosan. This essay inquires into the sphere of the critical reception that orders, for our time and for the time ahead, the reading and interpretation of Bulosan. What eye and seeing, the essay asks, determine the perception of Bulosan as the angel of radicalism? What is obscured in constructing Bulosan as an immutable figure of the political? What light does the reader conceive when the personal is brought into the open and situated against the political? The essay explores the answers to these questions in Bulosan's loving letters to various friends, strangers, and white American women. The presence of these interrogations, the essay believes, will ultimately secure the continuing importance of Carlos Bulosan to radical literature and history.
Energy Technology Data Exchange (ETDEWEB)
Hashimoto, M.; Saito, K.; Ando, H. [Power Reactor and Nuclear Fuel Development Corp., Oarai, Ibaraki (Japan). Oarai Engineering Center
1998-05-01
A method to calculate the response function of a spherical BF{sub 3} proportional counter, which is commonly used as a neutron dose-rate meter and, with a multi-moderator system, as a neutron spectrometer, is developed. As the calculation code for evaluating the response function, the existing NRESP code series, Monte Carlo codes for calculating the response functions of neutron detectors, is selected. However, since the application scope of the existing NRESP is restricted, NRESP98 is tuned as a generally applicable code, with expanded geometrical conditions, applicable elements, etc. NRESP98 is tested against the response function of the spherical BF{sub 3} proportional counter. Including the effect of the distribution of the amplification factor, a detailed evaluation of charged-particle transport, and the effect of the statistical distribution, the NRESP98 results fit the experimental data within {+-}10%. (author)
Schmidt, Philip J; Pintar, Katarina D M; Fazil, Aamir M; Topp, Edward
2013-09-01
Dose-response models are the essential link between exposure assessment and computed risk values in quantitative microbial risk assessment, yet the uncertainty that is inherent to computed risks because the dose-response model parameters are estimated using limited epidemiological data is rarely quantified. Second-order risk characterization approaches incorporating uncertainty in dose-response model parameters can provide more complete information to decisionmakers by separating variability and uncertainty to quantify the uncertainty in computed risks. Therefore, the objective of this work is to develop procedures to sample from posterior distributions describing uncertainty in the parameters of exponential and beta-Poisson dose-response models using Bayes's theorem and Markov Chain Monte Carlo (in OpenBUGS). The theoretical origins of the beta-Poisson dose-response model are used to identify a decomposed version of the model that enables Bayesian analysis without the need to evaluate Kummer confluent hypergeometric functions. Herein, it is also established that the beta distribution in the beta-Poisson dose-response model cannot address variation among individual pathogens, criteria to validate use of the conventional approximation to the beta-Poisson model are proposed, and simple algorithms to evaluate actual beta-Poisson probabilities of infection are investigated. The developed MCMC procedures are applied to analysis of a case study data set, and it is demonstrated that an important region of the posterior distribution of the beta-Poisson dose-response model parameters is attributable to the absence of low-dose data. This region includes beta-Poisson models for which the conventional approximation is especially invalid and in which many beta distributions have an extreme shape with questionable plausibility. © Her Majesty the Queen in Right of Canada 2013. Reproduced with the permission of the Minister of the Public Health Agency of Canada.
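The two dose-response forms discussed above are simple to state. The following sketch shows both (parameter values are arbitrary placeholders for illustration, not fitted values from the study):

```python
import math

def p_exponential(dose, r):
    """Exponential dose-response model: each ingested organism independently
    causes infection with probability r, so P = 1 - exp(-r * dose)."""
    return 1.0 - math.exp(-r * dose)

def p_beta_poisson_approx(dose, alpha, beta):
    """Conventional approximation to the beta-Poisson dose-response model:
    P = 1 - (1 + dose / beta) ** (-alpha). As the abstract notes, this
    approximation is not valid for every (alpha, beta) pair."""
    return 1.0 - (1.0 + dose / beta) ** (-alpha)

# Illustrative parameters only: both curves start at 0, rise monotonically,
# and saturate at 1 for large doses.
for d in (0.0, 1.0, 10.0, 100.0):
    print(d, round(p_exponential(d, 0.05), 3),
          round(p_beta_poisson_approx(d, 0.3, 5.0), 3))
```

In a second-order risk characterization, uncertainty would enter by drawing (`r`) or (`alpha`, `beta`) from the MCMC posterior rather than fixing them as above.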
International Nuclear Information System (INIS)
Satoh, Daiki; Sato, Tatsuhiko; Shigyo, Nobuhiro; Ishibashi, Kenji
2006-11-01
The Monte Carlo based computer code SCINFUL-QMD has been developed to evaluate the response function and detection efficiency of a liquid organic scintillator for neutrons from 0.1 MeV to 3 GeV. This code is a modified version of SCINFUL, developed at Oak Ridge National Laboratory in 1988, which provides a calculated full response anticipated for neutron interactions in a scintillator. The upper limit of the applicable energy was extended from 80 MeV to 3 GeV by introducing quantum molecular dynamics incorporated with the statistical decay model (QMD+SDM) in the high-energy nuclear reaction part. The particles generated in QMD+SDM are neutron, proton, deuteron, triton, 3He nucleus, alpha particle, and charged pion. Secondary reactions by neutron, proton, and pion inside the scintillator are also taken into account. With the extension of the applicable energy, the database of total cross sections for hydrogen and carbon nuclei was upgraded. This report describes the physical model, the computational flow, and how to use the code. (author)
Heavy components coupling effect on building response spectra generation
International Nuclear Information System (INIS)
Liu, T.H.; Johnson, E.R.
1985-01-01
This study investigates the dynamic coupling effect on the floor response spectra between heavy components and the Reactor Interior (R/I) building in a PWR. Two cases were studied: (I) simplified one- and two-lumped-mass models representing the building and heavy components, and (II) actual plant building and heavy component models. Response spectra were developed at building nodes for all models using time-history analysis methods, and spectra from the various models were compared to observe the coupling effects. In some cases, this study found that coupling would reduce the response spectra values in certain frequency regions even when coupling is not required according to the applicable criteria. (orig./HP)
Applications of Monte Carlo simulations of gamma-ray spectra
International Nuclear Information System (INIS)
Clark, D.D.
1995-01-01
A short, convenient computer program based on the Monte Carlo method that was developed to generate simulated gamma-ray spectra has been found to have useful applications in research and teaching. In research, we use it to predict spectra in neutron activation analysis (NAA), particularly in prompt gamma-ray NAA (PGNAA). In teaching, it is used to illustrate the dependence of detector response functions on the nature of gamma-ray interactions, the incident gamma-ray energy, and detector geometry
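A toy version of such a spectrum simulator (illustrative only; the program described above is not reproduced here) might sample, per detected photon, either a full-energy deposition or a Compton-continuum deposition, then apply a Gaussian smear for finite detector resolution:

```python
import random

def simulate_spectrum(e_gamma_kev, n, peak_fraction=0.4, fwhm_kev=2.0, seed=0):
    """Toy Monte Carlo detector response: each history deposits either the
    full photon energy (photopeak, probability peak_fraction) or a uniform
    energy below the Compton edge; a Gaussian smear models finite energy
    resolution. Returns the list of deposited energies in keV."""
    rng = random.Random(seed)
    # Compton edge: E_edge = 2 E^2 / (m_e c^2 + 2 E), with m_e c^2 = 511 keV
    edge = 2.0 * e_gamma_kev ** 2 / (511.0 + 2.0 * e_gamma_kev)
    sigma = fwhm_kev / 2.355  # FWHM -> standard deviation
    deposits = []
    for _ in range(n):
        if rng.random() < peak_fraction:
            e = e_gamma_kev
        else:
            e = rng.random() * edge
        deposits.append(rng.gauss(e, sigma))
    return deposits

# For a 662 keV line, roughly peak_fraction of counts land in the photopeak
# and the rest form a continuum below the ~478 keV Compton edge.
spec = simulate_spectrum(662.0, 20000, seed=1)
```

A histogram of `spec` reproduces the qualitative photopeak-plus-continuum shape that the teaching use of such a program illustrates; real detector response also includes escape peaks, backscatter, and geometry effects omitted here.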
Cevallos Robalino, Lenin E; García Fernández, Gonzalo Felipe; Gallego, Eduardo; Guzmán-García, Karen A; Vega-Carrillo, Hector Rene
2018-02-17
Detection of hidden explosives is of utmost importance for homeland security. Several configurations of an Explosives Detection System (EDS) to intercept hidden threats, made up of a deuterium-deuterium (D-D) compact neutron generator and NaI(Tl) scintillation detectors, have been evaluated using the MCNP6 code. The system's response to various samples of explosives, such as RDX and ammonium nitrate, is analysed. The D-D generator is able to produce fast neutrons of 2.5 MeV energy at a maximum yield of 10^10 n/s. It is surrounded by high-density polyethylene to thermalize the fast neutrons and to optimize interactions with the inspected sample, whose emission of gamma rays gives a spectrum characteristic of its constituent elements. This procedure makes it possible to determine the sample's chemical composition and identify the type of substance. The necessary shielding of lead and polyethylene is evaluated to estimate the thicknesses required for the admissible operating dose. The results show that the system is promising for explosives inspection in the field of national security. Copyright © 2018 Elsevier Ltd. All rights reserved.
Stress Generation and Adolescent Depression: Contribution of Interpersonal Stress Responses
Flynn, Megan; Rudolph, Karen D.
2011-01-01
This research examined the proposal that ineffective responses to common interpersonal problems disrupt youths' relationships, which, in turn, contributes to depression during adolescence. Youth (86 girls, 81 boys; M age = 12.41, SD = 1.19) and their primary female caregivers participated in a three-wave longitudinal study. Youth completed a…
Beyond Clickers, Next Generation Classroom Response Systems for Organic Chemistry
Shea, Kevin M.
2016-01-01
Web-based classroom response systems offer a variety of benefits versus traditional clicker technology. They are simple to use for students and faculty and offer various question types suitable for a broad spectrum of chemistry classes. They facilitate active learning pedagogies like peer instruction and successfully engage students in the…
International Nuclear Information System (INIS)
Bhati, Sharda
2009-01-01
To simulate photon transport in the thorax region of the MIRD phantom for a given uniform source distribution of 241Am in the lungs of the phantom, and to compute the pulse-height response of a 20-cm-diameter phoswich detector located directly above the lungs on the thorax surface. The total peak counts in the simulated pulse-height spectrum of 241Am can be used to compute the calibration factors of the phoswich for estimating lung burdens of 241Am.
Fitness on facebook: advertisements generated in response to profile content.
Villiard, Hope; Moreno, Megan A
2012-10-01
Obesity is a challenging problem affecting almost half of college students. To solve this complex health problem, innovative approaches must be utilized. Over 94 percent of college students maintain a Facebook profile, giving them a venue to publicly disclose current fitness behaviors. Advertisements displayed on Facebook are tailored to profile content and may influence college students' fitness efforts. Facebook may thus be an innovative venue for improving college students' fitness behaviors. The purpose of this project was to determine (a) how and to what extent college students discuss fitness on Facebook, and (b) how user-generated fitness information is linked to advertisements for fitness products and advice. First, public Facebook profiles of individual college students were evaluated for displayed fitness references based on 10 fitness behavior categories. Inter-rater reliability between two coders was 91.18 percent. Second, 10 fitness status updates were generated and posted by a researcher on a Facebook profile; the first 40 advertisements linked to these statements were examined. Advertisements were categorized and then examined for relevance to the college population. A total of 57 individual profiles were examined; owners had an average age of 18.3 years (SD=0.51), and 36.8 percent were women. About 71.9 percent of profiles referenced one or more fitness behaviors; 97.6 percent referenced exercise, 4.9 percent dieting, and 4.9 percent unhealthy eating. Among the first 40 ads linked to the generated status updates, 40.3 percent were fitness related. Most advertisements were for charity runs (30.4 percent), fitness apparel (24.2 percent), or fad diets (9.9 percent). Students referenced both healthy and unhealthy fitness behaviors on their Facebook profiles, and these references trigger the display of fitness-related advertisements, of which few appear applicable. A community- or university-based intervention could be designed and implemented to provide relevant and
Aggregated Demand Modelling Including Distributed Generation, Storage and Demand Response
Marzooghi, Hesamoddin; Hill, David J.; Verbic, Gregor
2014-01-01
It is anticipated that penetration of renewable energy sources (RESs) in power systems will increase further in the next decades mainly due to environmental issues. In the long term of several decades, which we refer to in terms of the future grid (FG), balancing between supply and demand will become dependent on demand actions including demand response (DR) and energy storage. So far, FG feasibility studies have not considered these new demand-side developments for modelling future demand. I...
International Nuclear Information System (INIS)
Duwel, D; Lamba, M; Elson, H; Kumar, N
2015-01-01
Purpose: Various cancers of the eye are successfully treated with radiotherapy utilizing one anterior-posterior (A/P) beam that encompasses the entire content of the orbit. In such cases, a hanging lens shield can be used to spare dose to the radiosensitive lens of the eye to prevent cataracts. Methods: This research focused on Monte Carlo characterization of dose distributions resulting from a single A-P field to the orbit with a hanging shield in place. Monte Carlo codes were developed which calculated dose distributions for various electron radiation energies, hanging lens shield radii, shield heights above the eye, and beam spoiler configurations. Film dosimetry was used to benchmark the coding to ensure it was calculating relative dose accurately. Results: The Monte Carlo dose calculations indicated that lateral and depth dose profiles are insensitive to changes in shield height and electron beam energy. Dose deposition was sensitive to shield radius and beam spoiler composition and height above the eye. Conclusion: The use of a single A/P electron beam to treat cancers of the eye while maintaining adequate lens sparing is feasible. Shield radius should be customized to have the same radius as the patient’s lens. A beam spoiler should be used if it is desired to substantially dose the eye tissues lying posterior to the lens in the shadow of the lens shield. The compromise between lens sparing and dose to diseased tissues surrounding the lens can be modulated by varying the beam spoiler thickness, spoiler material composition, and spoiler height above the eye. The sparing ratio is a metric that can be used to evaluate the compromise between lens sparing and dose to surrounding tissues. The higher the ratio, the more dose received by the tissues immediately posterior to the lens relative to the dose received by the lens
Generating color terrain images in an emergency response system
International Nuclear Information System (INIS)
Belles, R.D.
1985-08-01
The Atmospheric Release Advisory Capability (ARAC) provides real-time assessments of the consequences resulting from an atmospheric release of radioactive material. In support of this operation, a system has been created which integrates numerical models, data acquisition systems, data analysis techniques, and professional staff. Of particular importance is the rapid generation of graphical images of the terrain surface in the vicinity of the accident site. A terrain data base and an associated acquisition system have been developed that provide the required terrain data. This data is then used as input to a collection of graphics programs which create and display realistic color images of the terrain. The graphics system currently has the capability of generating color shaded relief images from both overhead and perspective viewpoints within minutes. These images serve to quickly familiarize ARAC assessors with the terrain near the release location, and thus permit them to make better informed decisions in modeling the behavior of the released material. 7 refs., 8 figs
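A minimal sketch of a shaded-relief computation of the kind described above (an assumed textbook approach; the ARAC system's actual implementation is not documented here) takes the dot product of each cell's surface normal with a light direction:

```python
import math

def hillshade(elev, cellsize, azimuth_deg=315.0, altitude_deg=45.0):
    """Shaded-relief value in [0, 1] for each interior cell of an elevation
    grid: brightness is the dot product of the local surface normal with a
    unit vector pointing toward the light source (border cells stay 0)."""
    az = math.radians(azimuth_deg)
    alt = math.radians(altitude_deg)
    lx = math.cos(alt) * math.sin(az)   # light direction components
    ly = math.cos(alt) * math.cos(az)
    lz = math.sin(alt)
    rows, cols = len(elev), len(elev[0])
    shade = [[0.0] * cols for _ in range(rows)]
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            # central-difference slopes; unnormalized normal is (-dzdx, -dzdy, 1)
            dzdx = (elev[i][j + 1] - elev[i][j - 1]) / (2.0 * cellsize)
            dzdy = (elev[i + 1][j] - elev[i - 1][j]) / (2.0 * cellsize)
            norm = math.sqrt(dzdx * dzdx + dzdy * dzdy + 1.0)
            shade[i][j] = max(0.0, (-dzdx * lx - dzdy * ly + lz) / norm)
    return shade
```

Mapping each shade value to a gray or colored pixel yields the overhead shaded-relief image; a perspective view additionally projects the grid through a camera transform.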
International Nuclear Information System (INIS)
Kodeli, I.; Aldama, D. L.; De Leege, P. F. A.; Legrady, D.; Hoogenboom, J. E.; Cowan, P.
2004-01-01
As part of the IRTMBA (Improved Radiation Transport Modelling for Borehole Applications) project of the EU Community's 5th Framework Programme, a special-purpose multigroup cross-section library was prepared for use in deterministic and Monte Carlo oil well logging particle transport calculations. This library is expected to improve the prediction of the neutron and gamma spectra at the detector positions of the logging tool, and its use for the interpretation of neutron logging measurements was studied. Preparation and testing of this library are described. (authors)
Improving nuclear generating station response for electrical grid islanding
International Nuclear Information System (INIS)
Chou, Q.B.; Kundur, P.; Acchione, P.N.; Lautsch, B.
1989-01-01
This paper describes problems associated with the performance characteristics of nuclear generating stations which do not have their overall plant control design functions co-ordinated with the other grid controls. The paper presents some design changes to typical nuclear plant controls which result in a significant improvement in both the performance of the grid island and the chances of the nuclear units staying on-line following the disturbance. This paper focuses on four areas of the overall unit controls and turbine governor controls which could be modified to better co-ordinate the control functions of the nuclear units with the electrical grid. Some simulation results are presented to show the performance of a typical electrical grid island containing a nuclear unit with and without the changes
International Nuclear Information System (INIS)
Pal, Rupali; Sapra, B.K.; Bakshi, A.K.; Datta, D.; Biju, K.; Suryanarayana, S.V.; Nayak, B.K.
2016-01-01
Neutron dosimetry in ion accelerators is a challenging field, as the neutron spectrum varies from thermal to fast and high-energy neutrons, usually extending beyond 20 MeV. Solid-state nuclear track detectors (SSNTDs) have been increasingly used in numerous fields related to nuclear physics, and extensive work has been carried out on the response characteristics of such detectors as nuclear spectrometers. In nuclear reaction studies, identification of reaction products according to their type and energy is frequently required, and for normally incident particles, energy-dispersive track-diameter methods using CR-39 SSNTDs have become useful scientific tools. CR-39 with a 1 mm polyethylene convertor can cover a neutron energy range from 100 keV to 10 MeV: the neutrons interact with the hydrogen, producing recoil protons through elastic collisions. This detectable neutron energy range can be extended by modifying the radiator/convertor used with the CR-39. CR-39 detectors placed in conjunction with judiciously chosen thicknesses of a polyethylene radiator and a lead absorber (or degrader) extend the energy range up to 19 MeV. A portable neutron counter has been proposed for high-energy neutron measurement, with 1-cm-thick zirconium (Zr) as the converter outside a spherical HDPE shell of 7 inch diameter. Zr metal has been found to show an (n,2n) cross section starting from 0.01 barn at 8 MeV and rising to 1 barn at 22 MeV; above these energies the experimental data are scarce. In this paper, Zr used in conjunction with CR-39 showed an enhancement of the track density on the CR-39. This paper demonstrates the enhancement of the neutron response using Zr on CR-39 with both theoretical and experimental studies.
A Declaration of the Responsibilities of Present Generations toward Past Generations
De Baets, A.H.M.
2004-01-01
Historians study the living and the dead. If we can identify the rights of the living and their responsibilities to the dead, we may be able to formulate a solid ethical infrastructure for historians. A short and generally accepted answer to the question of what the rights of the living are can be
Energy Technology Data Exchange (ETDEWEB)
Hoogenboom, J.E. [Delft University of Technology, Interfaculty Reactor Institute, Delft (Netherlands)
2000-07-01
The Monte Carlo method is a statistical method to solve mathematical and physical problems using random numbers. The principle of the method will be demonstrated for a simple mathematical problem and for neutron transport. Various types of estimators will be discussed, as well as generally applied variance reduction methods like splitting, Russian roulette and importance biasing. The theoretical formulation for solving eigenvalue problems for multiplying systems will be shown. Some reflections will be given about the applicability of the Monte Carlo method, its limitations and its future prospects for reactor physics calculations. Adjoint Monte Carlo is a Monte Carlo game to solve the adjoint neutron (or photon) transport equation. The adjoint transport equation can be interpreted in terms of simulating histories of artificial particles, which show properties of neutrons that move backwards in history. These particles start their history at the detector from which the response must be estimated and give a contribution to the estimated quantity when they hit or pass through the neutron source. Application to the multigroup transport formulation will be demonstrated. Possible implementation for the continuous-energy case will be outlined. The inherent advantages and disadvantages of the method will be discussed. The Midway Monte Carlo method will be presented for calculating a detector response due to a (neutron or photon) source. A derivation will be given of the basic formula for the Midway Monte Carlo method. The black absorber technique, allowing for a cutoff of particle histories when reaching the midway surface in one of the calculations, will be derived. An extension of the theory to coupled neutron-photon problems is given. The method will be demonstrated for an oil well logging problem, comprising a neutron source in a borehole and photon detectors to register the photons generated by inelastic neutron scattering. (author)
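The unbiasedness of the Russian roulette and splitting games mentioned above can be sketched directly (a generic illustration, not code from the lecture): each game adjusts particle weights so that the expected total weight is conserved.

```python
import random

def russian_roulette(weight, survival_prob, rng):
    """Terminate a low-weight particle with probability 1 - survival_prob;
    a survivor's weight is divided by survival_prob, so the expected weight
    after the game equals the weight before it."""
    if rng.random() < survival_prob:
        return weight / survival_prob
    return None  # particle terminated

def split(weight, n_copies):
    """Split an important particle into n_copies, each carrying an equal
    share of the weight; the total weight is unchanged."""
    return [weight / n_copies] * n_copies

# Unbiasedness check: averaged over many games, the surviving weight per
# particle equals the original weight (here 0.1).
rng = random.Random(0)
n = 100000
total = 0.0
for _ in range(n):
    w = russian_roulette(0.1, 0.2, rng)
    if w is not None:
        total += w
print(round(total / n, 3))  # expected value 0.1
```

Roulette saves time by killing unimportant histories; splitting reduces variance by multiplying important ones; neither biases the estimated response.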
International Nuclear Information System (INIS)
Hoogenboom, J.E.
2000-01-01
The Monte Carlo method is a statistical method for solving mathematical and physical problems using random numbers. The principle of the method will be demonstrated for a simple mathematical problem and for neutron transport. Various types of estimators will be discussed, as well as generally applied variance reduction methods like splitting, Russian roulette and importance biasing. The theoretical formulation for solving eigenvalue problems for multiplying systems will be shown. Some reflections will be given on the applicability of the Monte Carlo method, its limitations and its future prospects for reactor physics calculations. Adjoint Monte Carlo is a Monte Carlo game to solve the adjoint neutron (or photon) transport equation. The adjoint transport equation can be interpreted in terms of simulating histories of artificial particles, which show properties of neutrons that move backwards in history. These particles will start their history at the detector from which the response must be estimated and give a contribution to the estimated quantity when they hit or pass through the neutron source. Application to the multigroup transport formulation will be demonstrated. A possible implementation for the continuous energy case will be outlined. The inherent advantages and disadvantages of the method will be discussed. The Midway Monte Carlo method will be presented for calculating a detector response due to a (neutron or photon) source. A derivation will be given of the basic formula for the Midway Monte Carlo method. The black absorber technique, which allows particle histories to be cut off when they reach the midway surface in one of the calculations, will be derived. An extension of the theory to coupled neutron-photon problems is given. The method will be demonstrated for an oil well logging problem, comprising a neutron source in a borehole and photon detectors to register the photons generated by inelastic neutron scattering. (author)
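The splitting and Russian roulette games named in the abstract can be sketched in a few lines of Python. This is a generic toy illustration, not code from the paper; the weight thresholds are arbitrary illustrative values. Both operations manipulate statistical weights so that the expected total weight is conserved, which is what keeps the game unbiased:

```python
import random

def russian_roulette(weight, threshold=0.1, survival_weight=0.5, rng=random):
    """Probabilistically kill low-weight particles; expected weight is conserved."""
    if weight >= threshold:
        return weight                      # heavy enough: leave untouched
    if rng.random() < weight / survival_weight:
        return survival_weight             # survivor carries the boosted weight
    return 0.0                             # killed: the history terminates

def split(weight, target_weight=1.0):
    """Split a heavy particle into n copies whose weights sum to the original."""
    n = max(1, round(weight / target_weight))
    return [weight / n] * n
```

Averaged over many histories, roulette returns the input weight, while splitting simply spreads a large weight over several independent histories so that no single particle dominates the variance.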
Directory of Open Access Journals (Sweden)
J. G. Tundisi
Full Text Available This paper describes and discusses the impacts of the passage of cold fronts on the vertical structure of the Carlos Botelho (Lobo-Broa) Reservoir as demonstrated by changes in physical, chemical, and biological variables. The data were obtained with a continuous system measuring 9 variables in vertical profiles at the deepest point of the reservoir (12 m), coupled with climatological information and satellite images, during a 32-day period in July and August, 2003. During periods of incidence of cold fronts the reservoir presented vertical mixing. After the dissipation of the cold fronts a period of stability followed, with thermal, chemical, and biological (chlorophyll-a) stratification. Climatological data obtained during the cold front passage showed lower air temperature, higher wind speed and lower solar radiation. The response of this reservoir can exemplify a generalized process in all shallow reservoirs in Southeast Brazil and could have several implications for management, particularly in relation to phytoplankton population dynamics and the development of cyanobacterial blooms. Using this as a basis, a predictive model will be developed with the aim of advancing management strategies, especially for the drinking water reservoirs of the Metropolitan Region of São Paulo.
International Nuclear Information System (INIS)
Parent, L; Fielding, A L; Dance, D R; Seco, J; Evans, P M
2007-01-01
For EPID dosimetry, the calibration should ensure that all pixels have a similar response to a given irradiation. A calibration method (MC), using an analytical fit of a Monte Carlo simulated flood field EPID image to correct for the flood field image pixel intensity shape, was proposed. It was compared with the standard flood field calibration (FF), with the use of a water slab placed in the beam to flatten the flood field (WS) and with a multiple field calibration where the EPID was irradiated with a fixed 10 x 10 field for 16 different positions (MF). The EPID was used in its normal configuration (clinical setup) and with an additional 3 mm copper slab (modified setup). Beam asymmetry measured with a diode array was taken into account in MC and WS methods. For both setups, the MC method provided pixel sensitivity values within 3% of those obtained with the MF and WS methods (mean difference <1%, standard deviation <2%). The difference of pixel sensitivity between MC and FF methods was up to 12.2% (clinical setup) and 11.8% (modified setup). MC calibration provided images of open fields (5 x 5 to 20 x 20 cm²) and IMRT fields to within 3% of that obtained with WS and MF calibrations, while differences with images calibrated with the FF method for fields larger than 10 x 10 cm² were up to 8%. MC, WS and MF methods all provided a major improvement on the FF method. Advantages and drawbacks of each method were reviewed.
Auditory Brainstem Responses and EMFs Generated by Mobile Phones.
Khullar, Shilpa; Sood, Archana; Sood, Sanjay
2013-12-01
There has been a manifold increase in the number of mobile phone users throughout the world, with the current number of users exceeding 2 billion. However, this advancement in technology, like many others, is accompanied by a progressive increase in the frequency and intensity of electromagnetic waves without consideration of the health consequences. The aim of our study was to advance our understanding of the potential adverse effects of GSM mobile phones on auditory brainstem responses (ABRs). Sixty subjects were selected for the study and divided into three groups of 20 each based on their usage of mobile phones. Their ABRs were recorded and analysed for latency of waves I-V as well as interpeak latencies I-III, I-V and III-V (in ms). Results revealed no significant difference in the ABR parameters between group A (control group) and group B (subjects using mobile phones for a maximum of 30 min/day for 5 years). However, the latency of waves was significantly prolonged in group C (subjects using mobile phones for 10 years for a maximum of 30 min/day) as compared to the control group. Based on our findings we concluded that long-term exposure to mobile phones may affect conduction in the peripheral portion of the auditory pathway. However, more research needs to be done to study the long-term effects of mobile phones, particularly of newer technologies like smart phones and 3G.
Response of borehole extensometers to explosively generated dynamic loads
International Nuclear Information System (INIS)
Patrick, W.C.; Brough, W.G.
1980-01-01
Commercially available, hydraulically anchored, multiple-point borehole extensometers (MPBX) were evaluated with respect to their response to dynamic loads produced by explosions. This study is part of the DOE-funded Spent Fuel Test-Climax (SFT-C), currently being conducted in the Climax granitic stock at the Nevada Test Site. The SFT-C is an investigation of the feasibility of short-term storage and retrieval of spent nuclear reactor fuel assemblies at a plausible repository depth in granitic rock. Eleven spent fuel assemblies are stored at a depth of 420 m for three to five years, and will then be retrieved. MPBX units are used in the SFT-C to measure both excavation-induced and thermally induced rock displacements. Long-term reliability of extensometers in this hostile environment is essential in order to obtain valid data during the course of this test. Research to date shows conclusively that extensometers of this type continue to function reliably even when subjected to accelerations of 1.8 g; research also implies that they function well when subjected to accelerations in excess of 100 g. MPBX survivability during the first four months of testing at ambient temperatures was about 90 percent.
Next-Generation Metrics: Responsible Metrics & Evaluation for Open Science
Energy Technology Data Exchange (ETDEWEB)
Wilsdon, J.; Bar-Ilan, J.; Peters, I.; Wouters, P.
2016-07-01
Metrics evoke a mixed reaction from the research community. A commitment to using data to inform decisions makes some enthusiastic about the prospect of granular, real-time analysis of research and its wider impacts. Yet we only have to look at the blunt use of metrics such as journal impact factors, h-indices and grant income targets to be reminded of the pitfalls. Some of the most precious qualities of academic culture resist simple quantification, and individual indicators often struggle to do justice to the richness and plurality of research. Too often, poorly designed evaluation criteria are “dominating minds, distorting behaviour and determining careers” (Lawrence, 2007). Metrics hold real power: they are constitutive of values, identities and livelihoods. How to exercise that power to more positive ends has been the focus of several recent and complementary initiatives, including the San Francisco Declaration on Research Assessment (DORA), the Leiden Manifesto and The Metric Tide (a UK government review of the role of metrics in research management and assessment). Building on these initiatives, the European Commission, under its new Open Science Policy Platform, is now looking to develop a framework for responsible metrics for research management and evaluation, which can be incorporated into the successor framework to Horizon 2020. (Author)
Bartalini, P.; Kryukov, A.; Selyuzhenkov, Ilya V.; Sherstnev, A.; Vologdin, A.
2004-01-01
We present the Monte-Carlo events Data Base (MCDB) project and its development plans. MCDB facilitates communication between authors of Monte-Carlo generators and experimental users. It also provides convenient book-keeping and easy access to generator-level samples. The first release of MCDB is now operational for the CMS collaboration. In this paper we review the main ideas behind MCDB and discuss future plans to develop this Data Base further within the CERN LCG framework.
Applications of Monte Carlo method in Medical Physics
International Nuclear Information System (INIS)
Diez Rios, A.; Labajos, M.
1989-01-01
The basic ideas of Monte Carlo techniques are presented. Random numbers and their generation by congruential methods, which underlie Monte Carlo calculations, are shown. Monte Carlo techniques to solve integrals are discussed. The evaluation of a simple one-dimensional integral with a known answer, by means of two different Monte Carlo approaches, is discussed. The basic principles of simulating photon histories on a computer and of reducing variance are presented, and current applications in Medical Physics are commented on. (Author)
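The one-dimensional integral with a known answer can be evaluated with the two classic Monte Carlo approaches the abstract alludes to. This is a generic sketch with an arbitrary test integrand (x² on [0, 1], exact value 1/3), not necessarily the authors' example:

```python
import random

def mc_mean_value(f, a, b, n, rng):
    """Crude (mean-value) Monte Carlo: (b-a) times the average of f at uniform points."""
    total = sum(f(a + (b - a) * rng.random()) for _ in range(n))
    return (b - a) * total / n

def mc_hit_or_miss(f, a, b, fmax, n, rng):
    """Hit-or-miss Monte Carlo: fraction of random points under the curve, times box area."""
    hits = sum(1 for _ in range(n)
               if rng.random() * fmax <= f(a + (b - a) * rng.random()))
    return (b - a) * fmax * hits / n

# Integral of x^2 over [0, 1]; the known answer is 1/3
rng = random.Random(42)
est1 = mc_mean_value(lambda x: x * x, 0.0, 1.0, 100_000, rng)
est2 = mc_hit_or_miss(lambda x: x * x, 0.0, 1.0, 1.0, 100_000, rng)
```

Both estimators converge to 1/3 at the usual 1/√n rate; the mean-value form has the smaller variance here, which is the typical finding such comparisons are meant to illustrate.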
MCNP-REN a Monte Carlo tool for neutron detector design
Abhold, M E
2002-01-01
The development of neutron detectors makes extensive use of the predictions of detector response through the use of Monte Carlo techniques in conjunction with the point reactor model. Unfortunately, the point reactor model fails to accurately predict detector response in common applications. For this reason, the general Monte Carlo code developed at Los Alamos National Laboratory, Monte Carlo N-Particle (MCNP), was modified to simulate the pulse streams that would be generated by a neutron detector and normally analyzed by a shift register. This modified code, MCNP-Random Exponentially Distributed Neutron Source (MCNP-REN), along with the Time Analysis Program, predicts neutron detector response without using the point reactor model, making it unnecessary for the user to decide whether or not the assumptions of the point model are met for their application. MCNP-REN is capable of simulating standard neutron coincidence counting as well as neutron multiplicity counting. Measurements of mixed oxide fresh fuel w...
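The notion of simulating a detector pulse stream and analyzing it with a coincidence gate can be sketched very schematically. This toy reduces the physics to a Poisson pulse train with exponentially distributed inter-arrival times and a naive gate count per trigger; real shift-register electronics (and MCNP-REN's transport) are far more involved, and the rate and gate width below are arbitrary:

```python
import random

def generate_pulse_train(rate, duration, rng):
    """Poisson pulse stream: exponentially distributed inter-arrival times."""
    t, times = 0.0, []
    while True:
        t += rng.expovariate(rate)     # mean spacing = 1/rate
        if t >= duration:
            return times
        times.append(t)

def shift_register_counts(times, gate):
    """For each trigger pulse, count how many later pulses fall inside the gate."""
    counts = []
    for i, t0 in enumerate(times):
        n = 0
        for t in times[i + 1:]:
            if t - t0 > gate:
                break
            n += 1
        counts.append(n)
    return counts
```

For a pure Poisson source the mean gate count is simply rate × gate; correlated fission chains from a multiplying sample would shift these count distributions, which is what multiplicity analysis exploits.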
The Performance of a Second Generation Service Discovery Protocol In Response to Message Loss
Sundramoorthy, V.; van de Glind, G.J.; Hartel, Pieter H.; Scholten, Johan
We analyze the behavior of FRODO, a second generation service discovery protocol, in response to message loss in the network. First generation protocols, like UPnP and Jini rely on underlying network layers to enhance their failure recovery. A comparison with UPnP and Jini shows that FRODO performs
Reflections on Generativity and Flourishing: A Response to Snow's Kohlberg Memorial Lecture
Snarey, John
2015-01-01
In his response to Nancy Snow's "Generativity and Flourishing" (EJ1077701), John Snarey proposes that during the first seasons of one's life one is nurtured by one's parents, but during the latter seasons of life, one is nurtured by one's children. Generative parents interact with their offspring in ways that offer valuable support for…
A time-domain method to generate artificial time history from a given reference response spectrum
Energy Technology Data Exchange (ETDEWEB)
Shin, Gang Sik [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of); Song, Oh Seop [Dept. of Mechanical Engineering, Chungnam National University, Daejeon (Korea, Republic of)
2016-06-15
Seismic qualification by test is widely used as a way to show the integrity and functionality of equipment that is related to the overall safety of nuclear power plants. Another means of seismic qualification is by direct integration analysis. Both approaches require a series of time histories as an input. However, in most cases, the possibility of using real earthquake data is limited. Thus, artificial time histories are widely used instead. In many cases, however, response spectra are given. Thus, most of the artificial time histories are generated from the given response spectra. Obtaining the response spectrum from a given time history is straightforward. However, the procedure for generating artificial time histories from a given response spectrum is difficult and complex to understand. Thus, this paper presents a simple time-domain method for generating a time history from a given response spectrum; the method was shown to satisfy conditions derived from nuclear regulatory guidance.
A time-domain method to generate artificial time history from a given reference response spectrum
International Nuclear Information System (INIS)
Shin, Gang Sik; Song, Oh Seop
2016-01-01
Seismic qualification by test is widely used as a way to show the integrity and functionality of equipment that is related to the overall safety of nuclear power plants. Another means of seismic qualification is by direct integration analysis. Both approaches require a series of time histories as an input. However, in most cases, the possibility of using real earthquake data is limited. Thus, artificial time histories are widely used instead. In many cases, however, response spectra are given. Thus, most of the artificial time histories are generated from the given response spectra. Obtaining the response spectrum from a given time history is straightforward. However, the procedure for generating artificial time histories from a given response spectrum is difficult and complex to understand. Thus, this paper presents a simple time-domain method for generating a time history from a given response spectrum; the method was shown to satisfy conditions derived from nuclear regulatory guidance
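The "straightforward" direction the abstract mentions, computing a response spectrum from a given time history, amounts to time-stepping a damped single-degree-of-freedom oscillator at each frequency and recording its peak response. The sketch below uses a simple central-difference scheme and illustrative parameters; production codes typically use more accurate integrators (e.g. Newmark-beta):

```python
import math

def sdof_peak_response(accel, dt, freq_hz, damping=0.05):
    """Peak relative displacement of a damped SDOF oscillator driven by
    ground acceleration `accel`, via explicit central-difference stepping."""
    wn = 2 * math.pi * freq_hz
    u_prev = u = 0.0
    peak = 0.0
    for ag in accel:
        v = (u - u_prev) / dt                       # backward-difference velocity
        a = -ag - 2 * damping * wn * v - wn * wn * u  # u'' + 2*z*wn*u' + wn^2*u = -ag
        u_prev, u = u, 2 * u - u_prev + a * dt * dt
        peak = max(peak, abs(u))
    return peak

def response_spectrum(accel, dt, freqs, damping=0.05):
    return [sdof_peak_response(accel, dt, f, damping) for f in freqs]

# Example: a 2 Hz sine ground motion; the spectrum should peak near 2 Hz
dt = 0.001
accel = [math.sin(2 * math.pi * 2.0 * i * dt) for i in range(10_000)]
spec = response_spectrum(accel, dt, [0.5, 2.0, 8.0])
```

A time-history generator then wraps this forward computation in a loop: a trial history (typically random-phase sinusoids) is repeatedly adjusted until its computed spectrum envelopes the target spectrum, which is the harder inverse problem the paper addresses.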
Energy Technology Data Exchange (ETDEWEB)
Parent, L [Joint Department of Physics, Institute of Cancer Research and Royal Marsden NHS Foundation Trust, Sutton (United Kingdom); Fielding, A L [School of Physical and Chemical Sciences, Queensland University of Technology, Brisbane (Australia); Dance, D R [Joint Department of Physics, Institute of Cancer Research and Royal Marsden NHS Foundation Trust, London (United Kingdom); Seco, J [Department of Radiation Oncology, Francis Burr Proton Therapy Center, Massachusetts General Hospital, Harvard Medical School, Boston (United States); Evans, P M [Joint Department of Physics, Institute of Cancer Research and Royal Marsden NHS Foundation Trust, Sutton (United Kingdom)
2007-07-21
For EPID dosimetry, the calibration should ensure that all pixels have a similar response to a given irradiation. A calibration method (MC), using an analytical fit of a Monte Carlo simulated flood field EPID image to correct for the flood field image pixel intensity shape, was proposed. It was compared with the standard flood field calibration (FF), with the use of a water slab placed in the beam to flatten the flood field (WS) and with a multiple field calibration where the EPID was irradiated with a fixed 10 x 10 field for 16 different positions (MF). The EPID was used in its normal configuration (clinical setup) and with an additional 3 mm copper slab (modified setup). Beam asymmetry measured with a diode array was taken into account in MC and WS methods. For both setups, the MC method provided pixel sensitivity values within 3% of those obtained with the MF and WS methods (mean difference <1%, standard deviation <2%). The difference of pixel sensitivity between MC and FF methods was up to 12.2% (clinical setup) and 11.8% (modified setup). MC calibration provided images of open fields (5 x 5 to 20 x 20 cm{sup 2}) and IMRT fields to within 3% of that obtained with WS and MF calibrations while differences with images calibrated with the FF method for fields larger than 10 x 10 cm{sup 2} were up to 8%. MC, WS and MF methods all provided a major improvement on the FF method. Advantages and drawbacks of each method were reviewed.
Teng, F.
2018-01-01
In this dissertation I have explored the moral justification for intergenerational responsibilities in the context of climate change. It looks for reasons, from different moral traditions, that may explain why we should accept the idea of moral responsibilities to future generations and what we
Elements of Monte Carlo techniques
International Nuclear Information System (INIS)
Nagarajan, P.S.
2000-01-01
The Monte Carlo method essentially mimics real-world physical processes at the microscopic level. With the incredible increase in computing speeds and ever-decreasing computing costs, the method is in widespread use for practical problems. Topics covered include algorithm-generated sequences known as pseudo-random sequences (prs), probability density functions (pdf), tests for randomness, extension to multidimensional integration, etc.
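A pseudo-random sequence of the kind described can be produced by a linear congruential generator, followed by a crude frequency (uniformity) test. The constants are the well-known Numerical Recipes values; this is an illustration of the idea, not a recommendation for production use:

```python
def lcg(seed, a=1664525, c=1013904223, m=2**32):
    """Linear congruential generator yielding floats in [0, 1)."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x / m

# Frequency test: 10,000 draws binned into 10 equal bins should be near-uniform
gen = lcg(12345)
bins = [0] * 10
for _ in range(10_000):
    bins[int(10 * next(gen))] += 1
```

Each bin should hold close to 1000 counts (standard deviation about 30 for a fair generator); gross departures would signal a failed randomness test.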
Energy Technology Data Exchange (ETDEWEB)
Oliveira, F.G.; Andrade, A.F.G. de; Vieira, J.W., E-mail: baby.oliveira@hotmail.com.br, E-mail: arthurfelandrade@gmail.com, E-mail: jose.wilson59@uol.com.br [Instituto Federal de Pernambuco (IFPE), Recife, PE (Brazil); Oliveira, A.C.H. de, E-mail: oliveira_ach@yahoo.com [Universidade Federal Rural de Pernambuco (UFRPE), Recife, PE (Brazil); Lima, F.R.A., E-mail: falima@cnen.gov.br [Centro Regional de Ciências Nucleares do Nordeste (CRCN-NE/CNEN-PE), Recife-PE (Brazil)
2017-07-01
One of the greatest challenges of numerical dosimetry is to estimate the dose of ionizing radiation absorbed by the soft tissues located within trabecular bone. Due to the difficulty of obtaining micro-CT images of real bone samples (OR), the need arose to generate synthetic bone trabeculae. In this work, virtual synthetic trabecular samples (BU), generated by Monte Carlo methods parameterized by the Burr XII probability density function (PDF), and their OR equivalents were submitted to dosimetric evaluations in the adult male Computational Exposure Model (MCE) in orthostatic position (MSTA), coupled to the EGSnrc software, with idealized photon-emitting sources targeting the two most radiosensitive bone tissues, red bone marrow and the bone surface of trabecular bones, in the sternum, spine, femur, pelvis and skull regions. When the dosimetric results of the two sample sets were compared, the overall relative error was 4.34%. It is concluded that synthetic trabeculae generated by PDFs with the same characteristics as the Burr XII PDF can successfully replace the OR bones in similar bone dosimetry tests.
Is Monte Carlo embarrassingly parallel?
Energy Technology Data Exchange (ETDEWEB)
Hoogenboom, J. E. [Delft Univ. of Technology, Mekelweg 15, 2629 JB Delft (Netherlands); Delft Nuclear Consultancy, IJsselzoom 2, 2902 LB Capelle aan den IJssel (Netherlands)
2012-07-01
Monte Carlo is often stated to be embarrassingly parallel. However, running a Monte Carlo calculation, especially a reactor criticality calculation, in parallel using tens of processors shows a serious limitation in speedup, and the execution time may even increase beyond a certain number of processors. In this paper the main causes of the loss of efficiency when using many processors are analyzed using a simple Monte Carlo program for criticality. The basic mechanism for parallel execution is MPI. One of the bottlenecks turns out to be the rendezvous points in the parallel calculation used for synchronization and exchange of data between processors. This happens at least at the end of each cycle of fission source generation, in order to collect the full fission source distribution for the next cycle and to estimate the effective multiplication factor, which is not only part of the requested results but also input to the next cycle for population control. Basic improvements to overcome this limitation are suggested and tested. Other time losses in the parallel calculation are also identified. Moreover, the threading mechanism, which allows the parallel execution of tasks based on shared memory using OpenMP, is analyzed in detail. Recommendations are given to get the maximum efficiency out of a parallel Monte Carlo calculation. (authors)
Is Monte Carlo embarrassingly parallel?
International Nuclear Information System (INIS)
Hoogenboom, J. E.
2012-01-01
Monte Carlo is often stated to be embarrassingly parallel. However, running a Monte Carlo calculation, especially a reactor criticality calculation, in parallel using tens of processors shows a serious limitation in speedup, and the execution time may even increase beyond a certain number of processors. In this paper the main causes of the loss of efficiency when using many processors are analyzed using a simple Monte Carlo program for criticality. The basic mechanism for parallel execution is MPI. One of the bottlenecks turns out to be the rendezvous points in the parallel calculation used for synchronization and exchange of data between processors. This happens at least at the end of each cycle of fission source generation, in order to collect the full fission source distribution for the next cycle and to estimate the effective multiplication factor, which is not only part of the requested results but also input to the next cycle for population control. Basic improvements to overcome this limitation are suggested and tested. Other time losses in the parallel calculation are also identified. Moreover, the threading mechanism, which allows the parallel execution of tasks based on shared memory using OpenMP, is analyzed in detail. Recommendations are given to get the maximum efficiency out of a parallel Monte Carlo calculation. (authors)
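The cycle structure at issue, batches of histories separated by a rendezvous where the fission bank is collected, k-eff is estimated and the source is renormalized, can be sketched serially with a toy multiplication model. A Bernoulli fission yield stands in for transport physics and every parameter below is illustrative, but the loop shows exactly where a parallel code must synchronize:

```python
import random

def toy_keff(k_true, n_per_cycle, inactive, active, seed=1):
    """Generation-based k-eff estimate with per-cycle population control.
    Each history yields 2 fission neutrons with probability k_true/2,
    so the expected yield per history is k_true."""
    rng = random.Random(seed)
    estimates = []
    for cycle in range(inactive + active):
        # --- a parallel code splits this history loop across processors ---
        births = sum(2 for _ in range(n_per_cycle) if rng.random() < k_true / 2)
        # --- rendezvous: collect fission bank, estimate k, renormalize source ---
        k_cycle = births / n_per_cycle
        if cycle >= inactive:          # discard inactive (source-convergence) cycles
            estimates.append(k_cycle)
        # population control: next cycle restarts with n_per_cycle histories
    return sum(estimates) / len(estimates)

k_est = toy_keff(1.02, 10_000, inactive=10, active=50)
```

Because the rendezvous happens every cycle, its cost is paid cycles × processors times, which is the scaling term behind the slowdown the paper analyzes.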
Serôdio, João; Ezequiel, João; Frommlet, Jörg; Laviale, Martin; Lavaud, Johann
2013-11-01
Light-response curves (LCs) of chlorophyll fluorescence are widely used in plant physiology. Most commonly, LCs are generated sequentially, exposing the same sample to a sequence of distinct actinic light intensities. These measurements are not independent, as the response to each new light level is affected by the light exposure history experienced during previous steps of the LC, an issue particularly relevant in the case of the popular rapid light curves. In this work, we demonstrate the proof of concept of a new method for the rapid generation of LCs from nonsequential, temporally independent fluorescence measurements. The method is based on the combined use of sample illumination with digitally controlled, spatially separated beams of actinic light and a fluorescence imaging system. It allows the generation of a whole LC, including a large number of actinic light steps and adequate replication, within the time required for a single measurement (and is therefore named "single-pulse light curve"). This method is illustrated for the generation of LCs of photosystem II quantum yield, relative electron transport rate, and nonphotochemical quenching on intact plant leaves exhibiting distinct light responses. This approach also makes it possible to easily characterize the integrated dynamic light response of a sample by combining the measurement of LCs (actinic light intensity is varied while measuring time is fixed) with induction/relaxation kinetics (actinic light intensity is fixed and the response is followed over time), describing both how the response to light varies with time and how the response kinetics varies with light intensity.
Dunn, William L
2012-01-01
Exploring Monte Carlo Methods is a basic text that describes the numerical methods that have come to be known as "Monte Carlo." The book treats the subject generically through the first eight chapters and, thus, should be of use to anyone who wants to learn to use Monte Carlo. The next two chapters focus on applications in nuclear engineering, which are illustrative of uses in other fields. Five appendices are included, which provide useful information on probability distributions, general-purpose Monte Carlo codes for radiation transport, and other matters. The famous "Buffon's needle proble
Directory of Open Access Journals (Sweden)
Bardenet Rémi
2013-07-01
Full Text Available Bayesian inference often requires integrating some function with respect to a posterior distribution. Monte Carlo methods are sampling algorithms that make it possible to compute these integrals numerically when they are not analytically tractable. We review here the basic principles and the most common Monte Carlo algorithms, among them rejection sampling, importance sampling and Markov chain Monte Carlo (MCMC) methods. We give intuition on the theoretical justification of the algorithms as well as practical advice, trying to relate the two. We discuss the application of Monte Carlo in experimental physics, and point to landmarks in the literature for the curious reader.
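Of the algorithms listed, rejection sampling is the simplest to show concretely. The sketch below is generic; the target density p(x) = 2x on [0, 1] and the uniform proposal with envelope constant M = 2 are arbitrary examples chosen so that M·q(x) ≥ p(x) everywhere:

```python
import random

def rejection_sample(target_pdf, proposal_sample, proposal_pdf, M, n, rng):
    """Draw n samples from target_pdf using proposals under the envelope M*proposal_pdf."""
    out = []
    while len(out) < n:
        x = proposal_sample(rng)
        # accept x with probability target_pdf(x) / (M * proposal_pdf(x))
        if rng.random() * M * proposal_pdf(x) <= target_pdf(x):
            out.append(x)
    return out

# Example: p(x) = 2x on [0, 1], uniform proposal q(x) = 1, envelope M = 2
rng = random.Random(7)
samples = rejection_sample(lambda x: 2 * x, lambda r: r.random(),
                           lambda x: 1.0, 2.0, 20_000, rng)
```

The acceptance rate is 1/M, so the method degrades quickly when the envelope is loose, which is one motivation for the importance sampling and MCMC alternatives the review covers.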
Energy Technology Data Exchange (ETDEWEB)
Barrera, C A; Moran, M J
2007-08-21
The Neutron Imaging System (NIS) is one of seven ignition target diagnostics under development for the National Ignition Facility. The NIS is required to record hot-spot (13-15 MeV) and downscattered (6-10 MeV) images with a resolution of 10 microns and a signal-to-noise ratio (SNR) of 10 at the 20% contour. The NIS is a valuable diagnostic since the downscattered neutrons reveal the spatial distribution of the cold fuel during an ignition attempt, providing important information in the case of a failed implosion. The present study explores the parameter space of several line-of-sight (LOS) configurations that could serve as the basis for the final design. Six commercially available organic scintillators were experimentally characterized for their light emission decay profile and neutron sensitivity. The samples showed a long-lived decay component that makes direct recording of a downscattered image impossible. The two best candidates for the NIS detector material are EJ232 (BC422) plastic fibers or capillaries filled with EJ399B. A Monte Carlo-based end-to-end model of the NIS was developed to study the imaging capabilities of several LOS configurations and verify that the recovered sources meet the design requirements. The model includes accurate neutron source distributions, aperture geometries (square pinhole, triangular wedge, mini-penumbral, annular and penumbral), their point spread functions, and a pixelated scintillator detector. The modeling results show that a useful downscattered image can be obtained by recording the primary peak and the downscattered images, and then subtracting a decayed version of the former from the latter. The difference images need to be deconvolved in order to obtain accurate source distributions. The images are processed using a frequency-space modified-regularization algorithm and low-pass filtering. The resolution and SNR of these sources are quantified by using two surrogate sources. The simulations show that all LOS
Generation of response functions of a NaI detector by using an interpolation technique
International Nuclear Information System (INIS)
Tominaga, Shoji
1983-01-01
A computer method is developed for generating response functions of a NaI detector to monoenergetic γ-rays. The method is based on an interpolation between response curves measured with a detector. The computer programs are constructed for Heath's response spectral library. The principle of the basic mathematics used for interpolation, reported previously by the author et al., is that response curves can be decomposed into a linear combination of intrinsic-component patterns, whereby the interpolation of curves is reduced to a simple interpolation of the weighting coefficients needed to combine the component patterns. This technique has the advantages of data compression, reduced computation time, and stability of the solution, in comparison with the usual functional fitting method. A processing method based on segmenting the spectrum is devised to generate useful and precise response curves. A spectral curve, obtained for each γ-ray source, is divided into regions defined by the physical processes, such as the photopeak area, the Compton continuum area, the backscatter peak area, and so on. Each segment curve is then processed separately for interpolation. Lastly, the curves estimated for the respective areas are joined on a single channel scale. The generation programs are explained briefly. It is shown that the generated curve represents the overall shape of a response spectrum, including not only the photopeak but also the corresponding Compton area, with sufficient accuracy. (author)
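The key idea, interpolating the weighting coefficients of fixed component patterns rather than the curves themselves, can be illustrated schematically. The patterns and weights below are toy stand-ins (a "photopeak" bump and a flat "Compton continuum"), not Heath's actual library components:

```python
def combine(weights, patterns):
    """Build a response curve as a linear combination of component patterns."""
    return [sum(w * p[i] for w, p in zip(weights, patterns))
            for i in range(len(patterns[0]))]

def interpolate_weights(e, e1, w1, e2, w2):
    """Linearly interpolate the weighting coefficients between two energies."""
    t = (e - e1) / (e2 - e1)
    return [(1 - t) * a + t * b for a, b in zip(w1, w2)]

# Toy components: a "photopeak" bump and a flat "Compton continuum"
patterns = [[0.0, 0.2, 1.0, 0.2], [1.0, 1.0, 1.0, 1.0]]
w_lo = [1.0, 0.5]   # weights fitted at 0.5 MeV (hypothetical values)
w_hi = [3.0, 1.5]   # weights fitted at 1.0 MeV (hypothetical values)
curve = combine(interpolate_weights(0.75, 0.5, w_lo, 1.0, w_hi), patterns)
```

Only a handful of coefficients per curve need to be stored and interpolated, which is the data-compression and stability advantage over fitting a full functional form to every spectrum.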
Continuous energy adjoint Monte Carlo for coupled neutron-photon transport
Energy Technology Data Exchange (ETDEWEB)
Hoogenboom, J.E. [Delft Univ. of Technology (Netherlands). Interfaculty Reactor Inst.
2001-07-01
Although the theory for adjoint Monte Carlo calculations with continuous energy treatment for neutrons as well as for photons is known, coupled neutron-photon transport problems present fundamental difficulties because of the discrete energies of the photons produced by neutron reactions. This problem was solved by forcing the energy of the adjoint photon to the required discrete value by an adjoint Compton scattering reaction or an adjoint pair production reaction. A mathematical derivation shows the exact procedures to follow for the generation of an adjoint neutron and its statistical weight. A numerical example demonstrates that correct detector responses are obtained compared to a standard forward Monte Carlo calculation. (orig.)
Importance estimation in Monte Carlo modelling of neutron and photon transport
International Nuclear Information System (INIS)
Mickael, M.W.
1992-01-01
The estimation of neutron and photon importance in a three-dimensional geometry is achieved using a coupled Monte Carlo and diffusion theory calculation. The parameters required for the solution of the multigroup adjoint diffusion equation are estimated from an analog Monte Carlo simulation of the system under investigation. The solution of the adjoint diffusion equation is then used as an estimate of the particle importance in the actual simulation. This approach provides an automated and efficient variance reduction method for Monte Carlo simulations. The technique has been successfully applied to Monte Carlo simulation of neutron and coupled neutron-photon transport in the nuclear well-logging field. The results show that the importance maps obtained in a few minutes of computer time using this technique are in good agreement with Monte Carlo generated importance maps that require prohibitive computing times. The application of this method to Monte Carlo modelling of the response of neutron porosity and pulsed neutron instruments has resulted in major reductions in computation time. (Author)
Monte Carlo - Advances and Challenges
International Nuclear Information System (INIS)
Brown, Forrest B.; Mosteller, Russell D.; Martin, William R.
2008-01-01
Abstract only, full text follows: With ever-faster computers and mature Monte Carlo production codes, there has been tremendous growth in the application of Monte Carlo methods to the analysis of reactor physics and reactor systems. In the past, Monte Carlo methods were used primarily for calculating the k_eff of a critical system. More recently, Monte Carlo methods have been increasingly used for determining reactor power distributions and many design parameters, such as β_eff, l_eff, τ, reactivity coefficients, Doppler defect, dominance ratio, etc. These advanced applications of Monte Carlo methods are now not merely feasible but common, and they bring new challenges to both developers and users: convergence of 3D power distributions must be assured; confidence interval bias must be eliminated; iterated fission probabilities are required, rather than single-generation probabilities; temperature effects including Doppler and feedback must be represented; isotopic depletion and fission product buildup must be modeled. This workshop focuses on recent advances in Monte Carlo methods and their application to reactor physics problems, and on the resulting challenges faced by code developers and users. The workshop is partly tutorial, partly a review of the current state of the art, and partly a discussion of future work that is needed. It should benefit both novice and expert Monte Carlo developers and users. In each of the topic areas, we provide an overview of needs, perspective on past and current methods, a review of recent work, and discussion of further research and capabilities that are required. Electronic copies of all workshop presentations and material will be available. The workshop is structured as 2 morning and 2 afternoon segments: - Criticality Calculations I - convergence diagnostics, acceleration methods, confidence intervals, and the iterated fission probability, - Criticality Calculations II - reactor kinetics parameters, dominance ratio, temperature
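The generation-by-generation structure behind such k_eff calculations can be illustrated with a fission-matrix power iteration. The matrix below is invented for illustration; in a production Monte Carlo code its elements would be tallied during the random walk and the source would be a bank of fission sites:

```python
import numpy as np

# Invented 3-region fission matrix: F[i, j] is the expected number of
# next-generation fission neutrons born in region i per fission neutron
# born in region j.
F = np.array([
    [0.9, 0.3, 0.0],
    [0.3, 0.8, 0.3],
    [0.0, 0.3, 0.9],
])

def power_iteration(F, generations=200):
    """Generation-wise source iteration; k_eff is the neutron gain per generation."""
    s = np.ones(F.shape[0]) / F.shape[0]     # flat initial source guess
    k = 0.0
    for _ in range(generations):
        s_next = F @ s
        k = s_next.sum() / s.sum()           # single-generation k estimate
        s = s_next / s_next.sum()            # renormalise the source shape
    return k, s

k_eff, source = power_iteration(F)
```

The converged k_eff is the dominant eigenvalue of the fission matrix, and the speed of source convergence is governed by the dominance ratio (second to first eigenvalue), which is why the diagnostics listed above matter.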
Importance iteration in MORSE Monte Carlo calculations
International Nuclear Information System (INIS)
Kloosterman, J.L.; Hoogenboom, J.E.
1994-01-01
An expression to calculate point values (the expected detector response of a particle emerging from a collision or the source) is derived and implemented in the MORSE-SGC/S Monte Carlo code. It is outlined how these point values can be smoothed as a function of energy and as a function of the optical thickness between the detector and the source. The smoothed point values are subsequently used to calculate the biasing parameters of the Monte Carlo runs to follow. The method is illustrated by an example that shows that the obtained biasing parameters lead to a more efficient Monte Carlo calculation.
Evaluation of methods used for the direct generation of response spectra
International Nuclear Information System (INIS)
Mayers, R.L.; Muraki, T.; Jones, L.R.; Donikian, R.
1983-01-01
The paper presents an alternate methodology by which seismic in-structure response spectra may be generated directly from either ground or floor excitation spectra. The method is based upon stochastic concepts and utilizes the modal superposition solution. The philosophy of the method is based upon the notion that the evaluation of 'peak' response in uncertain excitation environments is only meaningful in a probabilistic sense. This interpretation of response spectra facilitates the generation of in-structure spectra for any non-exceedance probability (NEP). The method is validated by comparisons with a set of deterministic time-history analyses of three example models: an eleven-story building model, a containment structure stick model, and a floor-mounted control panel, subjected to ten input spectrum-compatible acceleration time-histories. A significant finding resulting from these examples is that the time-history method showed substantial variation in the resulting in-structure spectra and is therefore unreliable for the generation of spectra. It is shown that the average of the time-history generated spectra can be estimated by the direct generation procedure, and reliable spectra may be generated for 85% NEP levels. The methodology presented herein is shown to be valid for both primary and secondary systems. Also included in the paper is a review of the stochastic methods proposed by Singh, Der Kiureghian et al., and the Fourier transform method proposed by Scanlan et al. (orig./HP)
Model-generated air quality statistics for application in vegetation response models in Alberta
International Nuclear Information System (INIS)
McVehil, G.E.; Nosal, M.
1990-01-01
To test and apply vegetation response models in Alberta, air pollution statistics representative of various parts of the Province are required. At this time, air quality monitoring data of the requisite accuracy and time resolution are not available for most parts of Alberta. Therefore, there exists a need to develop appropriate air quality statistics. The objectives of the work reported here were to determine the applicability of model-generated air quality statistics and to develop, by modelling, realistic and representative time series of hourly SO2 concentrations that could be used to generate the statistics demanded by vegetation response models.
DEFF Research Database (Denmark)
Faria, Pedro; Soares, Tiago; Vale, Zita
2014-01-01
Recent changes in the operation and planning of power systems have been motivated by the introduction of Distributed Generation (DG) and Demand Response (DR) in the competitive electricity markets’ environment, with deep concerns at the efficiency level. In this context, grid operators, market...... proposes a methodology which considers the joint dispatch of demand response and distributed generation in the context of a distribution network operated by a virtual power player. The resources’ participation can be performed in both energy and reserve contexts. This methodology contemplates...
Development of 70 MW class superconducting generator with quick-response excitation
Miyaike, Kiyoshi; Kitajima, Toshio; Ito, Tetsuo
2002-03-01
The development of a superconducting generator was carried out for 12 years under the first stage of the Super GM project. The 70 MW class model machine with quick-response excitation was manufactured and evaluated in the project. This type of superconducting generator improves power system stability against rapid load fluctuations during power system faults. The model machine achieved all development targets, including high stability during rapid excitation control. It was also connected to the actual 77 kV electrical power grid as a synchronous condenser and demonstrated the advantages and high operational reliability of the superconducting generator.
Global Monte Carlo Simulation with High Order Polynomial Expansions
International Nuclear Information System (INIS)
William R. Martin; James Paul Holloway; Kaushik Banerjee; Jesse Cheatham; Jeremy Conlin
2007-01-01
The functional expansion technique (FET) was recently developed for Monte Carlo simulation. The basic idea of the FET is to expand a Monte Carlo tally in terms of a high order expansion, the coefficients of which can be estimated via the usual random walk process in a conventional Monte Carlo code. If the expansion basis is chosen carefully, the lowest order coefficient is simply the conventional histogram tally, corresponding to a flat mode. This research project studied the applicability of using the FET to estimate the fission source, from which fission sites can be sampled for the next generation. The idea is that individual fission sites contribute to expansion modes that may span the geometry being considered, possibly increasing the communication across a loosely coupled system and thereby improving convergence over the conventional fission bank approach used in most production Monte Carlo codes. The project examined a number of basis functions, including global Legendre polynomials as well as 'local' piecewise polynomials such as finite element hat functions and higher order versions. The global FET showed an improvement in convergence over the conventional fission bank approach. The local FET methods showed some advantages versus global polynomials in handling geometries with discontinuous material properties. The conventional finite element hat functions had the disadvantage that the expansion coefficients could not be estimated directly but had to be obtained by solving a linear system whose matrix elements were estimated. An alternative fission matrix-based response matrix algorithm was formulated. Studies were made of two alternative applications of the FET, one based on the kernel density estimator and one based on Arnoldi's method of minimized iterations. Preliminary results for both methods indicate improvements in fission source convergence. These developments indicate that the FET has promise for speeding up Monte Carlo fission source convergence
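A minimal sketch of the FET idea described above (not the project's actual implementation): for an orthogonal Legendre basis on [-1, 1], each expansion coefficient of a sampled source density is a simple tally average, so it can be accumulated during the random walk just like a conventional tally. The sample distribution below is invented for illustration:

```python
import numpy as np
from numpy.polynomial.legendre import legval

rng = np.random.default_rng(0)

# Illustrative stand-in for fission sites scored during the random walk:
# a truncated normal on [-1, 1].
x = rng.normal(0.0, 0.4, 200_000)
x = x[np.abs(x) < 1.0]

# For a density f on [-1, 1], f(x) = sum_n a_n P_n(x) with
# a_n = (2n + 1) / 2 * E[P_n(X)], so each a_n is a per-sample average.
order = 6
coeffs = np.zeros(order + 1)
for n in range(order + 1):
    e = np.zeros(n + 1)
    e[n] = 1.0                      # coefficient vector selecting P_n
    coeffs[n] = (2 * n + 1) / 2 * legval(x, e).mean()

# Reconstruct a smooth source estimate on a grid; the n = 0 term alone
# corresponds to the flat, histogram-like mode.
grid = np.linspace(-1.0, 1.0, 201)
f_hat = legval(grid, coeffs)
```

Sampling next-generation fission sites from such a smooth expansion, rather than from a discrete site bank, is what may improve communication across loosely coupled regions.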
Adaptive Multilevel Monte Carlo Simulation
Hoel, H
2011-08-23
This work generalizes a multilevel forward Euler Monte Carlo method introduced by Michael B. Giles (Michael Giles. Oper. Res. 56(3):607–617, 2008.) for the approximation of expected values depending on the solution to an Itô stochastic differential equation. That work proposed and analyzed a forward Euler multilevel Monte Carlo method based on a hierarchy of uniform time discretizations and control variates to reduce the computational effort required by a standard, single-level, forward Euler Monte Carlo method. This work introduces an adaptive hierarchy of non-uniform time discretizations, generated by an adaptive algorithm introduced in (Anna Dzougoutov et al. Adaptive Monte Carlo algorithms for stopped diffusion. In Multiscale methods in science and engineering, volume 44 of Lect. Notes Comput. Sci. Eng., pages 59–88. Springer, Berlin, 2005; Kyoung-Sook Moon et al. Stoch. Anal. Appl. 23(3):511–558, 2005; Kyoung-Sook Moon et al. An adaptive algorithm for ordinary, stochastic and partial differential equations. In Recent advances in adaptive computation, volume 383 of Contemp. Math., pages 325–343. Amer. Math. Soc., Providence, RI, 2005.). This form of the adaptive algorithm generates stochastic, path-dependent time steps and is based on a posteriori error expansions first developed in (Anders Szepessy et al. Comm. Pure Appl. Math. 54(10):1169–1214, 2001). Our numerical results for a stopped diffusion problem exhibit savings in the computational cost to achieve an accuracy of O(TOL): from O(TOL^(-3)) using a single-level version of the adaptive algorithm to O((TOL^(-1) log(TOL))^2).
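The uniform-hierarchy multilevel estimator that this work generalizes can be sketched for a geometric Brownian motion (parameters invented; the adaptive, path-dependent time stepping that is the paper's actual contribution is omitted):

```python
import numpy as np

rng = np.random.default_rng(1)

# E[X_T] for the GBM dX = mu*X dt + sigma*X dW, X_0 = x0, via multilevel
# forward Euler: level ell uses 2**(ell+1) steps, and each correction term
# couples a fine path and a coarse path driven by the same increments.
mu, sigma, T, x0 = 0.05, 0.2, 1.0, 1.0

def euler_pair(n_paths, n_steps):
    """Fine path (n_steps) and coarse path (n_steps // 2), same noise."""
    dt = T / n_steps
    dW = rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps))
    xf = np.full(n_paths, x0)
    xc = np.full(n_paths, x0)
    for k in range(n_steps):
        xf = xf + mu * xf * dt + sigma * xf * dW[:, k]
        if k % 2 == 1:               # coarse step consumes two fine increments
            dWc = dW[:, k - 1] + dW[:, k]
            xc = xc + mu * xc * (2 * dt) + sigma * xc * dWc
    return xf, xc

levels, n_paths = 5, 40_000
estimate = 0.0
for ell in range(levels + 1):
    xf, xc = euler_pair(n_paths, 2 ** (ell + 1))
    # Telescoping sum: E[P_L] = E[P_0] + sum_ell E[P_ell - P_(ell-1)]
    estimate += xf.mean() if ell == 0 else (xf - xc).mean()
```

The control-variate effect is that Var(P_ell − P_(ell−1)) shrinks with the step size, so the deep, expensive levels need only a few samples; the adaptive version replaces this uniform hierarchy with stochastic, path-dependent steps.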
Demand response impacts on off-grid hybrid photovoltaic-diesel generator microgrids
Directory of Open Access Journals (Sweden)
Aaron St. Leger
2015-08-01
Full Text Available Hybrid microgrids consisting of diesel generator set(s) and converter-based power sources, such as solar photovoltaic or wind sources, offer an alternative to generator-based off-grid power systems. The hybrid approach has been shown to be economical in many off-grid applications and can result in reduced generator operation, fuel requirements, and maintenance. However, the intermittent nature of demand and renewable energy sources typically requires energy storage, such as batteries, to properly operate the hybrid microgrid. These batteries increase the system cost, require proper operation and maintenance, and have been shown to be unreliable in case studies on hybrid microgrids. This work examines the impacts of leveraging demand response in a hybrid microgrid in lieu of energy storage. The study is performed by simulating two different hybrid diesel generator-PV microgrid topologies, one with a single diesel generator and one with multiple paralleled diesel generators, for a small residential neighborhood with varying levels of demand response. Various system designs are considered and the optimal design, based on cost of energy, is presented for each level of demand response. The solar resources, performance of the solar PV source, performance of the diesel generators, costs of system components, maintenance, and operation are modeled and simulated at a time interval of ten minutes over a twenty-five year period for both microgrid topologies. Results are quantified through cost of energy, diesel fuel requirements, and utilization of the energy sources under varying levels of demand response. The results indicate that a moderate level of demand response can have significant positive impacts on the operation of hybrid microgrids through reduced energy cost and fuel consumption and increased utilization of PV sources.
Co-Planning of Demand Response and Distributed Generators in an Active Distribution Network
Directory of Open Access Journals (Sweden)
Yi Yu
2018-02-01
Full Text Available The integration of renewables is fast-growing, in light of smart grid technology development. As a result, the uncertain nature of renewables and load demand poses significant technical challenges to distribution network (DN) daily operation. To alleviate such issues, price-sensitive demand response and distributed generators can be coordinated to accommodate the renewable energy. However, the investment cost for demand response facilities, i.e., load control switches and advanced metering infrastructure, cannot be ignored, especially when the responsive demand is large. In this paper, an optimal coordinated investment for distributed generators and demand response facilities is proposed, based on a linearized, price-elastic demand response model. To hedge against the uncertainties of renewables and load demand, a two-stage robust investment scheme is proposed, where the investment decisions are optimized in the first stage, and the demand response participation with the coordination of distributed generators is adjusted in the second stage. Simulations on the modified IEEE 33-node and 123-node DNs demonstrate the effectiveness of the proposed model.
Mechanisms Underlying the Immune Response Generated by an Oral Vibrio cholerae Vaccine
Directory of Open Access Journals (Sweden)
Danylo Sirskyj
2016-07-01
Full Text Available Mechanistic details underlying the resulting protective immune response generated by mucosal vaccines remain largely unknown. We investigated the involvement of Toll-like receptor signaling in the induction of humoral immune responses following oral immunization with Dukoral, comparing wild type mice with TLR-2-, TLR-4-, MyD88- and Trif-deficient mice. Although all groups generated similar levels of IgG antibodies, the proliferation of CD4+ T-cells in response to V. cholerae was shown to be mediated via MyD88/TLR signaling, and independently of Trif signaling. The results demonstrate differential requirements for generation of immune responses. These results also suggest that TLR pathways may be modulators of the quality of immune response elicited by the Dukoral vaccine. Determining the critical signaling pathways involved in the induction of immune response to this vaccine would be beneficial, and could contribute to more precisely-designed versions of other oral vaccines in the future.
Generation of artificial time-histories, rich in all frequencies, from given response spectra
International Nuclear Information System (INIS)
Levy, S.; Wilkinson, J.P.D.
1976-01-01
In the design of nuclear power plants, it has been found desirable in certain instances to use the time-history method of dynamic analysis to determine the plant response to seismic input. In the implementation of this method, it is necessary to determine an adequate representation of the excitation as a function of time. Because many design criteria are specified in terms of design response spectra, one is faced with the problem of generating a time-history whose own response spectrum approximates as closely as possible the originally specified design response spectrum. One objective of this paper is to present a method of synthesizing such time-histories from a given design response spectrum. The design response spectra may be descriptive of floor responses at a particular location in a plant, or they may be descriptive of seismic ground motions at a plant site. The method described in this paper allows the generation of time-histories that are rich in all frequencies in the spectrum. This richness is achieved by choosing a large number of closely-spaced frequency points such that the half-power points of adjacent frequencies overlap. Examples are given concerning seismic design response spectra, and a number of points are discussed concerning the effect of frequency spacing on convergence. (Auth.)
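The basic operation any such synthesis has to repeat, evaluating the response spectrum of a candidate time history, can be sketched with a Newmark average-acceleration integration of one damped single-degree-of-freedom oscillator per frequency. The input record below is a synthetic illustrative signal, not a real seismic motion:

```python
import numpy as np

def response_spectrum(ag, dt, freqs, damping=0.05):
    """Pseudo-acceleration spectrum via incremental Newmark (average acceleration)."""
    Sa = np.zeros(len(freqs))
    for i, f in enumerate(freqs):
        wn = 2.0 * np.pi * f
        c, k = 2.0 * damping * wn, wn * wn      # unit oscillator mass
        u = v = 0.0
        a = -ag[0]                              # initial acceleration
        k_hat = k + 2.0 * c / dt + 4.0 / dt**2
        u_max = 0.0
        for n in range(len(ag) - 1):
            dp = -(ag[n + 1] - ag[n])           # increment of -m * ag
            dp_hat = dp + (4.0 / dt + 2.0 * c) * v + 2.0 * a
            du = dp_hat / k_hat
            dv = 2.0 * du / dt - 2.0 * v
            da = 4.0 * du / dt**2 - 4.0 * v / dt - 2.0 * a
            u, v, a = u + du, v + dv, a + da
            u_max = max(u_max, abs(u))
        Sa[i] = wn * wn * u_max                 # pseudo-acceleration
    return Sa

dt = 0.005
t = np.arange(0.0, 10.0, dt)
ag = 0.1 * np.sin(2.0 * np.pi * 2.0 * t)        # 2 Hz harmonic record
Sa = response_spectrum(ag, dt, np.array([1.0, 2.0, 4.0]))
```

For this record the spectrum peaks at the 2 Hz ordinate, with roughly the 1/(2 * damping) resonant amplification expected of a 5%-damped oscillator.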
Monte Carlo shielding analyses using an automated biasing procedure
International Nuclear Information System (INIS)
Tang, J.S.; Hoffman, T.J.
1988-01-01
A systematic and automated approach for biasing Monte Carlo shielding calculations is described. In particular, adjoint fluxes from a one-dimensional discrete ordinates calculation are used to generate biasing parameters for a Monte Carlo calculation. The entire procedure of adjoint calculation, biasing parameters generation, and Monte Carlo calculation has been automated. The automated biasing procedure has been applied to several realistic deep-penetration shipping cask problems. The results obtained for neutron and gamma-ray transport indicate that with the automated biasing procedure Monte Carlo shielding calculations of spent-fuel casks can be easily performed with minimum effort and that accurate results can be obtained at reasonable computing cost
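The kind of game such adjoint-derived biasing parameters drive can be illustrated with geometry splitting and Russian roulette at cell boundaries. The importance values below are invented, standing in for the adjoint fluxes of a discrete ordinates calculation:

```python
import random

# Invented per-cell importances (stand-ins for adjoint fluxes).
IMPORTANCE = [1.0, 4.0, 16.0]

def cross_boundary(weight, cell_from, cell_to, rng):
    """Split or roulette so that weight * importance stays balanced.

    Returns the statistical weights of the surviving copies. The expected
    total outgoing weight equals the incoming weight, so the game is unbiased.
    """
    r = IMPORTANCE[cell_to] / IMPORTANCE[cell_from]
    if r >= 1.0:                        # toward the detector: split
        n = int(r)
        if rng.random() < r - n:        # probabilistic rounding of r
            n += 1
        return [weight / r] * n
    if rng.random() < r:                # away from the detector: roulette
        return [weight / r]
    return []

rng = random.Random(42)
# Check unbiasedness of the roulette branch by averaging many games.
trials = 200_000
mean_w = sum(sum(cross_boundary(1.0, 2, 0, rng)) for _ in range(trials)) / trials
```

Deep-penetration efficiency comes from populating the important regions with many low-weight particles while cheaply killing the rest, which is exactly what the adjoint flux quantifies.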
International Nuclear Information System (INIS)
Katrasnik, Tomaz; Medica, Vladimir; Trenc, Ferdinand
2005-01-01
Reliability of electric supply systems is among the most required necessities of modern society. Turbocharged diesel engine driven alternating current generating sets are often used to prevent electric blackouts and/or as prime electric energy suppliers. It is well known that turbocharged diesel engines suffer from an inadequate response to a sudden load increase, this being a consequence of the nature of the energy exchange between the engine and the turbocharger. The dynamic response of turbocharged diesel engines can be improved by electric assisting systems, either by direct energy supply with an integrated starter-generator-booster (ISG) mounted on the engine flywheel, or by indirect energy supply with an electrically assisted turbocharger. An experimentally verified zero-dimensional computer simulation method was used for the analysis of both types of electrical assistance. The paper offers an analysis of the interaction between a turbocharged diesel engine and different electric assisting systems, as well as the requirements for the supporting electric motors that could improve the dynamic response of a diesel engine while driving an AC generating set. When performance class compliance is a concern, it is evident that an integrated starter-generator-booster outperforms an electrically assisted turbocharger for the investigated generating set. However, the electric energy consumption and frequency recovery times are smaller when an electrically assisted turbocharger is applied.
Monte Carlo simulations for plasma physics
International Nuclear Information System (INIS)
Okamoto, M.; Murakami, S.; Nakajima, N.; Wang, W.X.
2000-07-01
Plasma behaviours are very complicated and their analysis is generally difficult. However, when collisional processes play an important role in the plasma behaviour, the Monte Carlo method is often employed as a useful tool. For example, in neutral beam injection heating (NBI heating), electron or ion cyclotron heating, and alpha heating, Coulomb collisions slow down highly energetic particles and pitch-angle scatter them. These processes are often studied by the Monte Carlo technique, and good agreement can be obtained with experimental results. Recently, the Monte Carlo method has been developed to study fast particle transport associated with heating and the generation of the radial electric field. It has further been applied to investigating neoclassical transport in plasmas with steep density and temperature gradients, which is beyond the scope of conventional neoclassical theory. In this report, we briefly summarize the research done by the present authors utilizing the Monte Carlo method. (author)
Matzat, U.; Snijders, C.C.P.
2012-01-01
How do online shops re-build trust on consumer-generated review sites after customers accuse them of misbehaving? Theories suggest that the effectiveness of responses depends on the type of accusation, yet online research indicates that apologies are superior to denials regardless of the type of
International Nuclear Information System (INIS)
Vial, E.
2004-01-01
Recognition of the concept of responsibility to future generations seems to imply the need to assume responsibility today for the radioactive waste legacy of the past as well as for the waste that is currently being generated. However, this view of things, or more precisely this interpretation, is clouded by the lack of a clear definition of the concept of responsibility towards future generations. The concept has been used mainly in connection with long-lived radioactive wastes, which pose the greatest management problem because their lifetime so far exceeds any human scale of reference. Consideration for future generations has to be a factor in the management of all types of radioactive waste, be it short-, medium- or long-lived waste, or very low, low, intermediate or highly radioactive waste. As a general rule, the concept of responsibility has focused on long-lived waste, whatever its level of radioactivity. The current alternatives for the management of radioactive waste are interim storage, final disposal, and incineration or transmutation to lower the radioactivity of the waste. These different alternatives are discussed because not all of them are genuine solutions, and they need to be studied further. (N.C.)
The Performance of a Second Generation Service Discovery Protocol In Response to Message Loss
Sundramoorthy, V.; van de Glind, G.J.; Hartel, Pieter H.; Scholten, Johan
We analyze the behavior of FRODO, a second generation service discovery protocol, in response to message loss in the network. Earlier protocols, like UPnP and Jini rely on underlying network layers to enhance their failure recovery. A comparison with UPnP and Jini shows that FRODO performs more
International Nuclear Information System (INIS)
Handrlica, Jakub; Novotna, Marianna
2014-01-01
The issue of accession of the Eastern European Member States to the 1997 Protocol is discussed with focus on the European Union's authority and enforcement powers. Following up the article published in the preceding issue of this journal, the present contribution analyses the relations of the '2nd generation' responsibility conventions to the law of the European Union. (orig.)
Energy Technology Data Exchange (ETDEWEB)
Oliveira, Sebastiao E.M. de; Padua Guarini, Antonio de [Centro de Pesquisas de Energia Eletrica (CEPEL), Rio de Janeiro, RJ (Brazil); Souza, Joao A. de; Valgas, Helio M; Pinto, Roberto del Giudice R. [Companhia Energetica de Minas Gerais (CEMIG), Belo Horizonte, MG (Brazil)
1994-12-31
This work describes the results of the set of frequency response tests performed on generator number 2 (6.9 kV, 25 MVA) of the Camargos hydroelectric power plant, CEMIG, and the parameters relative to the identified model structures. Tests of this kind have not previously been published in Brazil. (author) 7 refs., 16 figs., 7 tabs.
Herwig: The Evolution of a Monte Carlo Simulation
CERN. Geneva
2015-01-01
Monte Carlo event generation has seen significant developments in the last 10 years starting with preparation for the LHC and then during the first LHC run. I will discuss the basic ideas behind Monte Carlo event generators and then go on to discuss these developments, focussing on the developments in Herwig(++) event generator. I will conclude by presenting the current status of event generation together with some results of the forthcoming new version of Herwig, Herwig 7.
A midway forward-adjoint coupling method for neutron and photon Monte Carlo transport
International Nuclear Information System (INIS)
Serov, I.V.; John, T.M.; Hoogenboom, J.E.
1999-01-01
The midway Monte Carlo method for calculating detector responses combines a forward and an adjoint Monte Carlo calculation. In both calculations, particle scores are registered at a surface to be chosen by the user somewhere between the source and detector domains. The theory of the midway response determination is developed within the framework of transport theory for external sources and for criticality theory. The theory is also developed for photons, which are generated at inelastic scattering or capture of neutrons. In either the forward or the adjoint calculation a so-called black absorber technique can be applied; i.e., particles need not be followed after passing the midway surface. The midway Monte Carlo method is implemented in the general-purpose MCNP Monte Carlo code. The midway Monte Carlo method is demonstrated to be very efficient in problems with deep penetration, small source and detector domains, and complicated streaming paths. All the problems considered pose difficult variance reduction challenges. Calculations were performed using existing variance reduction methods of normal MCNP runs and using the midway method. The performed comparative analyses show that the midway method appears to be much more efficient than the standard techniques in an overwhelming majority of cases and can be recommended for use in many difficult variance reduction problems of neutral particle transport
Generation of floor response spectra for a model structure of nuclear power plant
International Nuclear Information System (INIS)
Vaidyanathan, C.V.; Kamatchi, P.; Ravichandran, R.; Lakshmanan, N.
2003-01-01
The importance of nuclear power plants and the consequences of a nuclear accident require that nuclear structures be designed for the most severe environmental conditions. Earthquakes constitute a major design consideration for the systems, structures and equipment of a nuclear power plant. The design of structures on the ground is based on the ground response spectra. Many important parts of a nuclear power plant facility are attached to the principal parts of the structure and respond in a manner determined by the structural response rather than by the general ground motion to which the structure is subjected. Hence the seismic response of equipment is generally based on the response spectrum of the floor on which it is mounted. In this paper, such floor response spectra have been generated at different nodes of a chosen model structure of a nuclear power plant. In the present study, a detailed nonlinear time history analysis has been carried out on the mathematical model of the chosen nuclear power plant model structure with a spectrum-compatible time history. The acceleration response results of the time history analysis have been used in the spectral analysis, and the response spectra are generated. Further, peak broadening has been done to account for uncertainties in the material properties and soil characteristics. (author)
Generation of synthetic time histories compatible with multiple-damping design response spectra
International Nuclear Information System (INIS)
Lilhanand, K.; Tseng, W.S.
1987-01-01
Seismic design of nuclear power plants as currently practiced requires that time history analyses be performed to generate floor response spectra for seismic qualification of piping, equipment, and components. Since design response spectra are normally prescribed in the form of smooth spectra, the generation of synthetic time histories whose response spectra closely match the 'target' design spectra of multiple damping values is often required for seismic time history analysis. Various methods for generating synthetic time histories compatible with target response spectra have been proposed in the literature. Since the mathematical problem of determining a time history from a given set of response spectral values is not unique, an exact solution is not possible, and all the proposed methods resort to some form of approximate solution. In this paper, a new iteration scheme is described which effectively removes the difficulties encountered by the existing methods. This new iteration scheme can not only improve the accuracy of spectrum matching for a single-damping target spectrum, but also automate the spectrum matching for multiple-damping target spectra. The applicability and limitations, as well as the method adopted to improve the numerical stability of this new iteration scheme, are presented. The effectiveness of this new iteration scheme is illustrated by two example applications.
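A heavily simplified, single-damping version of such ratio-based spectrum matching (not this paper's scheme) can be sketched as follows: synthesize a sum of sinusoids and repeatedly rescale each amplitude by the ratio of the target spectral ordinate to the computed one. All frequencies, target ordinates and phases below are invented:

```python
import numpy as np

def sdof_peak(ag, dt, f, damping=0.05):
    """Peak pseudo-acceleration of one SDOF oscillator (Newmark average accel.)."""
    wn = 2.0 * np.pi * f
    c, k = 2.0 * damping * wn, wn * wn          # unit oscillator mass
    u = v = 0.0
    a = -ag[0]
    k_hat = k + 2.0 * c / dt + 4.0 / dt**2
    u_max = 0.0
    for n in range(len(ag) - 1):
        dp = -(ag[n + 1] - ag[n])
        dp_hat = dp + (4.0 / dt + 2.0 * c) * v + 2.0 * a
        du = dp_hat / k_hat
        dv = 2.0 * du / dt - 2.0 * v
        da = 4.0 * du / dt**2 - 4.0 * v / dt - 2.0 * a
        u, v, a = u + du, v + dv, a + da
        u_max = max(u_max, abs(u))
    return wn * wn * u_max

dt = 0.005
t = np.arange(0.0, 10.0, dt)
freqs = np.array([1.0, 2.0, 4.0])
target = np.array([1.0, 1.2, 0.8])       # target Sa ordinates (invented)
phases = np.array([0.0, 1.0, 2.0])
amps = np.full(len(freqs), 0.05)         # initial sinusoid amplitudes

def synthesize(amps):
    return sum(A * np.sin(2.0 * np.pi * f * t + p)
               for A, f, p in zip(amps, freqs, phases))

for _ in range(8):                       # fixed-point iteration on amplitudes
    ag = synthesize(amps)
    computed = np.array([sdof_peak(ag, dt, f) for f in freqs])
    amps *= target / computed            # rescale toward the target spectrum

matched = np.array([sdof_peak(synthesize(amps), dt, f) for f in freqs])
```

Because the map from time history to spectrum is nonlinear and non-unique, such ratio iterations only approach the target; the paper's contribution is a scheme that also keeps multiple damping values consistent and stabilizes the iteration.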
Demand Response Programs Design and Use Considering Intensive Penetration of Distributed Generation
Directory of Open Access Journals (Sweden)
Pedro Faria
2015-06-01
Full Text Available Further improvements in demand response program implementation are needed in order to take full advantage of this resource, namely for participation in energy and reserve market products, requiring adequate aggregation and remuneration of small-size resources. The present paper focuses on SPIDER, a demand response simulator that has been improved in order to simulate demand response together with realistic power system simulation. To illustrate the simulator's capabilities, the present paper proposes a methodology focusing on the aggregation of consumers and generators, providing adequate tools for the adoption of demand response programs by the involved players. The methodology proposed in the present paper focuses on a Virtual Power Player that manages and aggregates the available demand response and distributed generation resources in order to satisfy the required electrical energy demand and reserve. The aggregation of resources is addressed by the use of clustering algorithms, and operation costs for the VPP are minimized. The presented case study is based on a set of 32 consumers and 66 distributed generation units, running on 180 distinct operation scenarios.
Ethics Beyond Finitude: Responsibility towards Future Generations and Nuclear Waste Management
International Nuclear Information System (INIS)
Loefquist, Lars
2008-01-01
This dissertation has three aims: 1. To evaluate several ethical theories about responsibility towards future generations. 2. To construct a theory about responsibility towards future generations. 3. To carry out an ethical evaluation of different nuclear waste management methods. Five theories are evaluated with the help of evaluative criteria, primarily: A theory must provide future generations with some independent moral status. A theory should acknowledge moral pluralism. A theory should provide some normative claims about real-world problems. Derek Parfit's theory provides future generations with full moral status. But it is incompatible with moral pluralism, and does not provide reasonable normative claims about real-world problems. Brian Barry's theory provides such claims and a useful idea about risk management, but it does not provide an argument why future generations ought to exist. Avner de-Shalit's theory explains why they ought to exist; however, his theory can not easily explain why we ought to care for other people than those in our own community. Emmanuel Agius' theory gives an ontological explanation for mankind's unity, but reduces conflicts of interests to a common good. Finally, Hans Jonas' theory shifts the focus from the situation of future generations to the preconditions of human life generally. However, his theory presupposes a specific ontology, which might be unable to motivate people to act. The concluding chapters describe a narrative theory of responsibility. It claims that we should comprehend ourselves as parts of the common story of mankind and that we ought to provide future generations with equal opportunities. This implies that we should avoid transferring risks and focus on reducing the long-term risks associated with the nuclear waste
Ethics Beyond Finitude: Responsibility towards Future Generations and Nuclear Waste Management
Energy Technology Data Exchange (ETDEWEB)
Loefquist, Lars
2008-05-15
This dissertation has three aims: 1. To evaluate several ethical theories about responsibility towards future generations. 2. To construct a theory about responsibility towards future generations. 3. To carry out an ethical evaluation of different nuclear waste management methods. Five theories are evaluated with the help of evaluative criteria, primarily: a theory must provide future generations with some independent moral status; a theory should acknowledge moral pluralism; a theory should provide some normative claims about real-world problems. Derek Parfit's theory provides future generations with full moral status, but it is incompatible with moral pluralism and does not provide reasonable normative claims about real-world problems. Brian Barry's theory provides such claims and a useful idea about risk management, but it does not provide an argument why future generations ought to exist. Avner de-Shalit's theory explains why they ought to exist; however, his theory cannot easily explain why we ought to care for people other than those in our own community. Emmanuel Agius' theory gives an ontological explanation for mankind's unity, but reduces conflicts of interest to a common good. Finally, Hans Jonas' theory shifts the focus from the situation of future generations to the preconditions of human life generally. However, his theory presupposes a specific ontology, which might be unable to motivate people to act. The concluding chapters describe a narrative theory of responsibility. It claims that we should comprehend ourselves as parts of the common story of mankind and that we ought to provide future generations with equal opportunities. This implies that we should avoid transferring risks and focus on reducing the long-term risks associated with the nuclear waste.
Directory of Open Access Journals (Sweden)
K. V. Dobrego
2017-01-01
Full Text Available Nowadays we observe rather rapid growth of the energy-accumulator market, and there are prerequisites for their extensive application in Belarus. In spite of this, technology development problems pertaining to the optimization of electric power accumulators and their operation within specific “generator-accumulator-consumer” (GAC) systems have not received proper consideration. At the same time, tuning and optimization of the GAC system may provide competitive advantages to various accumulating systems, because the application of accumulator batteries in non-optimal charge-discharge conditions reduces their operating resource. Optimization of the GAC system may include the utilization of hybrid accumulator systems combining heterogeneous chemical and mechanical accumulators, tuning of system controller parameters, etc. Research papers present a great number of empirical and analytical methods for the calculation of electric loads. These methods use as initial data time-averaged values of actual electric power consumption, averaged apartment electric loads, empirical and statistical form coefficients, and coefficients of maximum electric load for a group of uniform consumers. However, such models do not meet the requirements of detailed simulation of the operation of relatively small systems, where the simulation must reproduce the non-stationary, non-averaged, stochastic nature of the load. The paper provides a simple approach to the detailed simulation of electric loads for small projects such as a multi-unit apartment building or a small agricultural farm. The model is formulated in both physical and algorithmic terms, which makes it easy to realize in any programming environment. The paper demonstrates convergence of the integral electric power consumption given by the model to statistically averaged parameters. An autocorrelation function is also calculated, showing two time scales in the autocorrelation of the simulated load diagrams.
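The kind of non-averaged, stochastic load model the abstract describes can be sketched in a few lines. This is a minimal illustration, not the paper's actual model: each appliance is treated as a two-state (on/off) Markov chain, and the total load is the sum of the powers of the units currently on. All powers and switching probabilities below are invented for illustration.

```python
import random

def simulate_load(n_steps, appliances, seed=0):
    """Non-averaged stochastic load profile: each appliance is a two-state
    (on/off) Markov chain; the total load is the sum of active powers."""
    rng = random.Random(seed)
    state = [False] * len(appliances)
    profile = []
    for _ in range(n_steps):
        total = 0.0
        for i, (power_w, p_on, p_off) in enumerate(appliances):
            if state[i] and rng.random() < p_off:
                state[i] = False
            elif not state[i] and rng.random() < p_on:
                state[i] = True
            if state[i]:
                total += power_w
        profile.append(total)
    return profile

# (power in W, switch-on prob, switch-off prob) per time step, all invented
appliances = [(2000.0, 0.05, 0.30),   # kettle-like: rare, short bursts
              (150.0, 0.20, 0.10),    # fridge-like: frequent cycling
              (700.0, 0.02, 0.50)]    # microwave-like
profile = simulate_load(24 * 60, appliances)   # one day at 1-minute resolution
mean_load = sum(profile) / len(profile)
```

Averaging many such profiles recovers the smooth, statistically averaged diagrams the classical methods start from, which is the convergence property the paper checks.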
Directory of Open Access Journals (Sweden)
Mubbashir Ali
2018-05-01
Full Text Available From an environmental perspective, the increased penetration of wind and solar generation in power systems is remarkable. However, as the intermittent renewable generation briskly grows, electrical grids are experiencing significant discrepancies between supply and demand as a result of limited system flexibility. This paper investigates the optimal sizing and control of the hydrogen energy storage system for increased utilization of renewable generation. Using a Finnish case study, a mathematical model is presented to investigate the optimal storage capacity in a renewable power system. In addition, the impact of demand response for domestic storage space heating in terms of the optimal sizing of energy storage is discussed. Finally, sensitivity analyses are conducted to observe the impact of a small share of controllable baseload production as well as the oversizing of renewable generation in terms of required hydrogen storage size.
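In its simplest form, the storage-sizing question above reduces to tracking the cumulative energy balance between renewable supply and demand. A hedged sketch of that reduction (a greedy running-sum bound, not the paper's Finnish optimization model; the conversion efficiencies and profiles are illustrative):

```python
def required_storage_capacity(renewable, demand, eta_in=0.7, eta_out=0.5):
    """Greedy running-balance bound on the hydrogen store's energy capacity:
    surpluses are banked through the electrolyzer (eta_in), deficits are
    served through the fuel cell (eta_out); the needed capacity is the
    spread between the highest and lowest storage levels reached."""
    level = lo = hi = 0.0
    for g, d in zip(renewable, demand):
        surplus = g - d
        if surplus > 0:
            level += surplus * eta_in      # losses on the way into the store
        else:
            level += surplus / eta_out     # more is drawn than is delivered
        lo = min(lo, level)
        hi = max(hi, level)
    return hi - lo

renewable = [5.0, 8.0, 2.0, 0.0, 0.0, 6.0]   # illustrative hourly wind output, MWh
demand = [3.0] * 6
capacity = required_storage_capacity(renewable, demand)
```

A full sizing model adds investment costs, storage losses, and the demand response of space heating, but the running balance above is the core quantity such models constrain.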
The generation of shared cryptographic keys through channel impulse response estimation at 60 GHz.
Energy Technology Data Exchange (ETDEWEB)
Young, Derek P.; Forman, Michael A.; Dowdle, Donald Ryan
2010-09-01
Methods to generate private keys based on wireless channel characteristics have been proposed as an alternative to standard key-management schemes. In this work, we discuss past work in the field and offer a generalized scheme for the generation of private keys using uncorrelated channels in multiple domains. Proposed cognitive enhancements measure channel characteristics to dynamically change transmission and reception parameters, as well as to estimate private key randomness and expiration times. Finally, results are presented on the implementation of a system for the generation of private keys for cryptographic communications using channel impulse-response estimation at 60 GHz. The testbed is composed of commercial millimeter-wave VubIQ transceivers, laboratory equipment, and software implemented in MATLAB. Novel cognitive enhancements are demonstrated, using channel estimation to dynamically change system parameters and estimate cryptographic key strength. We show for a complex channel that secret key generation can be accomplished on the order of 100 kb/s.
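The core idea of channel-reciprocity key generation can be illustrated with a toy quantizer: both parties observe nearly the same channel gains (reciprocity) and map them to bits against a median threshold. This sketch is illustrative only; it uses a Gaussian stand-in for real impulse-response estimates and omits the information reconciliation and privacy amplification a real system needs.

```python
import random

def quantize_bits(samples):
    """Map channel-gain samples to bits against the sample median:
    1 if above the median, 0 otherwise."""
    srt = sorted(samples)
    threshold = srt[len(srt) // 2]
    return [1 if s > threshold else 0 for s in samples]

rng = random.Random(42)
# Reciprocal channel: both ends see the same gains plus small independent noise
true_channel = [rng.gauss(0.0, 1.0) for _ in range(128)]
alice = [h + rng.gauss(0.0, 0.05) for h in true_channel]
bob = [h + rng.gauss(0.0, 0.05) for h in true_channel]
key_a = quantize_bits(alice)
key_b = quantize_bits(bob)
# Mismatches occur only for gains near the threshold; reconciliation
# would remove them, and privacy amplification would compress the key.
disagreement = sum(a != b for a, b in zip(key_a, key_b)) / len(key_a)
```

An eavesdropper at a different location sees an uncorrelated channel, which is what makes the shared randomness private.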
International Nuclear Information System (INIS)
Cramer, S.N.
1984-01-01
The MORSE code is a large general-use multigroup Monte Carlo code system. Although no claims can be made regarding its superiority in either theoretical details or Monte Carlo techniques, MORSE has been, since its inception at ORNL in the late 1960s, the most widely used Monte Carlo radiation transport code. The principal reason for this popularity is that MORSE is relatively easy to use, independent of any installation or distribution center, and it can be easily customized to fit almost any specific need. Features of the MORSE code are described.
Abat, E; Addy, T N; Adragna, P; Aharrouche, M; Ahmad, A; Akesson, T P A; Aleksa, M; Alexa, C; Anderson, K; Andreazza, A; Anghinolfi, F; Antonaki, A; Arabidze, G; Arik, E; Atkinson, T; Baines, J; Baker, O K; Banfi, D; Baron, S; Barr, A J; Beccherle, R; Beck, H P; Belhorma, B; Bell, P J; Benchekroun, D; Benjamin, D P; Benslama, K; Bergeaas Kuutmann, E; Bernabeu, J; Bertelsen, H; Binet, S; Biscarat, C; Boldea, V; Bondarenko, V G; Boonekamp, M; Bosman, M; Bourdarios, C; Broklova, Z; Burckhart Chromek, D; Bychkov, V; Callahan, J; Calvet, D; Canneri, M; Capeans Garrido, M; Caprini, M; Cardiel Sas, L; Carli, T; Carminati, L; Carvalho, J; Cascella, M; Castillo, M V; Catinaccio, A; Cauz, D; Cavalli, D; Cavalli Sforza, M; Cavasinni, V; Cetin, S A; Chen, H; Cherkaoui, R; Chevalier, L; Chevallier, F; Chouridou, S; Ciobotaru, M; Citterio, M; Clark, A; Cleland, B; Cobal, M; Cogneras, E; Conde Muino, P; Consonni, M; Constantinescu, S; Cornelissen, T; Correard, S; Corso Radu, A; Costa, G; Costa, M J; Costanzo, D; Cuneo, S; Cwetanski, P; Da Silva, D; Dam, M; Dameri, M; Danielsson, H O; Dannheim, D; Darbo, G; Davidek, T; De, K; Defay, P O; Dekhissi, B; Del Peso, J; Del Prete, T; Delmastro, M; Derue, F; Di Ciaccio, L; Dita, S; Dittus, F; Djama, F; Djobava, T; Dobos, D; Dobson, M; Dolgoshein, B A; Dotti, A; Drake, G; Drasal, Z; Dressnandt, N; Driouchi, G; Drohan, J; Ebenstein, W L; Eerola, P; Eerola, P; Efthymiopoulos, I; Egorov, K; Eifert, T F; Einsweiler, K; El Kacimi, M; Elsing, M; Emelyanov, D; Escobar, C; Etienvre, A I; Fabich, A; Facius, K; Fakhr-Edine, A I; Fanti, M; Farbin, A; Farthouat, P; Fassouliotis, D; Fayard, L; Febbraro, R; Fedin, O L; Fenyuk, A; Fergusson, D; Ferrari, P; Ferrari, R; Ferreira, B C; Ferrer, A; Ferrere, D; Filippini, G; Flick, T; Fournier, D; Francavilla, P; Francis, D; Froeschl, R; Froidevaux, D; Fullana, E; Gadomski, S; Gagliardi, G; Gagnon, P; Gallas, M; Gallop, B J; Gameiro, S; Gan, K K; Garcia, R; Garcia, C; Gavrilenko, I L; Gemme, C; Gerlach, P; 
Ghodbane, N; Giakoumopoulou, V; Giangiobbe, V; Giokaris, N; Di Girolamo, B; Glonti, G; Goettfert, T; Golling, T; Gollub, N; Gomes, A; Gomez, M D; Gonzalez-Sevilla, S; Goodrick, M J; Gorfine, G; Gorini, B; Goujdami, D; Grahn, K J; Grenier, P; Grigalashvili, N; Grishkevich, Y; Grosse-Knetter, J; Gruwe, M; Guicheney, C; Gupta, A; Haeberli, C; Haertel, R; Hajduk, Z; Hakobyan, H; Hance, M; Hansen, D J; Hansen, P H; Hara, K; Harvey Jr, A; Hawkings, R J; Heinemann, F E W; Henriques Correia, A; Henss, T; Hervas, L; Higon, E; Hill, J C; Hoffman, J; Hostachy, J Y; Hruska, I; Hubaut, F; Huegging, F; Hulsbergen, W; Hurwitz, M; Iconomidou-Fayard, L; Jansen, E; Jen-La Plante, I; Johansson, P D C; Jon-And, K; Joos, M; Jorgensen, S; Joseph, J; Kaczmarska, A; Kado, M; Karyukhin, A; Kataoka, M; Kayumov, F; Kazarov, A; Keener, P T; Kekelidze, G D; Kerschen, N; Kersten, S; Khomich, A; Khoriauli, G; Khramov, E; Khristachev, A; Khubua, J; Kittelmann, T H; Klingenberg, R; Klinkby, E B; Kodys, P; Koffas, T; Kolos, S; Konovalov, S P; Konstantinidis, N; Kopikov, S; Korolkov, I; Kostyukhin, V; Kovalenko, S; Kowalski, T Z; Kruger, K; Kramarenko, V; Kudin, L G; Kulchitsky, Y; Le Bihan, A C; Lacasta, C; Lafaye, R; Laforge, B; Lampl, W; Lanni, F; Laplace, S; Lari, T; Latorre, S; Le Bihan, A C; Lechowski, M; Ledroit-Guillon, F; Lehmann, G; Leitner, R; Lelas, D; Lester, C G; Liang, Z; Lichard, P; Liebig, W; Lipniacka, A; Lokajicek, M; Louchard, L; Lourerio, K F; Lucotte, A; Luehring, F; Lund-Jensen, B; Lundberg, B; Ma, H; Mackeprang, R; Maio, A; Maleev, V P; Malek, F; Mandelli, L; Maneira, J; Mangin-Brinet, M; Manousakis, A; Mapelli, L; Marques, C; Marti i García, S; Martin, F; Mathes, M; Mazzanti, M; McFarlane, K W; McPherson, R; Mchedlidze, G; Mehlhase, S; Meirosu, C; Meng, Z; Meroni, C; Miagkov, A; Mialkovski, V; Mikulec, B; Milstead, D; Minashvili, I; Mindur, B; Mitsou, V A; Moed, S; Monnier, E; Moorhead, G; Morettini, P; Morozov, S V; Mosidze, M; Mouraviev, S V; Moyse, E W J; Munar, A; 
Nadtochi, A V; Nakamura, K; Nechaeva, P; Negri, A; Nemecek, S; Nessi, M; Nesterov, S Y; Newcomer, F M; Nikitine, I; Nikolaev, K; Nikolic-Audit, I; Ogren, H; Oh, S H; Oleshko, S B; Olszowska, J; Onofre, A; Padilla Aranda, C; Paganis, S; Pallin, D; Pantea, D; Paolone, V; Parodi, F; Parsons, J; Parzhitskiy, S; Pasqualucci, E; Passmore, M S; Pater, J; Patrichev, S; Peez, M; Perez Reale, V; Perini, L; Peshekhonov, V D; Petersen, J; Petersen, T C; Petti, R; Phillips, P W; Pilcher, J; Pina, J; Pinto, B; Podlyski, F; Poggioli, L; Poppleton, A; Poveda, J; Pralavorio, P; Pribyl, L; Price, M J; Prieur, D; Puigdengoles, C; Puzo, P; Rohne, O; Ragusa, F; Rajagopalan, S; Reeves, K; Reisinger, I; Rembser, C; Bruckman de Renstrom, P; Reznicek, P; Ridel, M; Risso, P; Riu, I; Robinson, D; Roda, C; Roe, S; Romaniouk, A; Rousseau, D; Rozanov, A; Ruiz, A; Rusakovich, N; Rust, D; Ryabov, Y F; Ryjov, V; Salto, O; Salvachua, B; Salzburger, A; Sandaker, H; Santamarina Rios, C; Santi, L; Santoni, C; Saraiva, J G; Sarri, F; Sauvage, G; Says, L P; Schaefer, M; Schegelsky, V A; Schiavi, C; Schieck, J; Schlager, G; Schlereth, J; Schmitt, C; Schultes, J; Schwemling, P; Schwindling, J; Seixas, J M; Seliverstov, D M; Serin, L; Sfyrla, A; Shalanda, N; Shaw, C; Shin, T; Shmeleva, A; Silva, J; Simion, S; Simonyan, M; Sloper, J E; Smirnov, S Yu; Smirnova, L; Solans, C; Solodkov, A; Solovianov, O; Soloviev, I; Sosnovtsev, V V; Spano, F; Speckmayer, P; Stancu, S; Stanek, R; Starchenko, E; Straessner, A; Suchkov, S I; Suk, M; Szczygiel, R; Tarrade, F; Tartarelli, F; Tas, P; Tayalati, Y; Tegenfeldt, F; Teuscher, R; Thioye, M; Tikhomirov, V O; Timmermans, C; Tisserant, S; Toczek, B; Tremblet, L; Troncon, C; Tsiareshka, P; Tyndel, M; Karagoez Unel, M; Unal, G; Unel, G; Usai, G; Van Berg, R; Valero, A; Valkar, S; Valls, J A; Vandelli, W; Vannucci, F; Vartapetian, A; Vassilakopoulos, V I; Vasilyeva, L; Vazeille, F; Vernocchi, F; Vetter-Cole, Y; Vichou, I; Vinogradov, V; Virzi, J; Vivarelli, I; De Vivie, J B; 
Volpi, M; Vu Anh, T; Wang, C; Warren, M; Weber, J; Weber, M; Weidberg, A R; Weingarten, J; Wells, P S; Werner, P; Wheeler, S; Wiessmann, M; Wilkens, H; Williams, H H; Wingerter-Seez, I; Yasu, Y; Zaitsev, A; Zenin, A; Zenis, T; Zenonos, Z; Zhang, H; Zhelezko, A; Zhou, N
2010-01-01
The response of the ATLAS barrel calorimeter to pions with momenta from 2 to 180 GeV is studied in a test beam at the CERN H8 beam line. The mean energy, the energy resolution, the longitudinal and radial shower profiles, and various observables characterising the shower topology in the calorimeter are measured. The data are compared to Monte Carlo simulations based on a detailed description of the experimental set-up and on various Geant4 models describing the interaction of particles with matter.
A keff calculation method by Monte Carlo
International Nuclear Information System (INIS)
Shen, H; Wang, K.
2008-01-01
The effective multiplication factor (k_eff) is defined as the ratio between the numbers of neutrons in successive generations, which is the definition adopted by most Monte Carlo codes (e.g. MCNP). It can also be thought of as the ratio of the neutron generation rate to the sum of the leakage rate and the absorption rate, where the effect of multiplying reactions such as (n, 2n) and (n, 3n) should be excluded. This article discusses a Monte Carlo method for the k_eff calculation based on the second definition. A new code has been developed and the results are presented. (author)
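The first definition, k_eff as the ratio of successive generation sizes, can be demonstrated with a toy branching process. This is an illustration of the generation-ratio estimator, not the authors' code; the fission probability and multiplicity are invented so that the analytic answer is known.

```python
import random

def keff_generations(n_start, n_gen, p_fission, nu, seed=1):
    """Generation-based k_eff estimate for a toy branching process: each
    neutron induces fission with probability p_fission, producing nu new
    neutrons; otherwise it is lost to capture or leakage.  k_eff is the
    mean ratio of successive generation sizes (analytically p_fission * nu)."""
    rng = random.Random(seed)
    ratios = []
    n = n_start
    for _ in range(n_gen):
        sons = sum(nu for _ in range(n) if rng.random() < p_fission)
        if sons == 0:
            break
        ratios.append(sons / n)
        n = min(sons, n_start)     # renormalize the population each cycle
    return sum(ratios) / len(ratios)

k = keff_generations(n_start=20_000, n_gen=30, p_fission=0.5, nu=2)
# analytic value: 0.5 * 2 = 1.0
```

Production codes add the spatial fission-source iteration and discard early cycles, but the cycle-ratio bookkeeping is the same.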
Experience with the Monte Carlo Method
Energy Technology Data Exchange (ETDEWEB)
Hussein, E M.A. [Department of Mechanical Engineering University of New Brunswick, Fredericton, N.B., (Canada)
2007-06-15
Monte Carlo simulation of radiation transport provides a powerful research and design tool that resembles in many aspects laboratory experiments. Moreover, Monte Carlo simulations can provide an insight not attainable in the laboratory. However, the Monte Carlo method has its limitations, which if not taken into account can result in misleading conclusions. This paper will present the experience of this author, over almost three decades, in the use of the Monte Carlo method for a variety of applications. Examples will be shown on how the method was used to explore new ideas, as a parametric study and design optimization tool, and to analyze experimental data. The consequences of not accounting in detail for detector response and the scattering of radiation by surrounding structures are two of the examples that will be presented to demonstrate the pitfall of condensed.
Experience with the Monte Carlo Method
International Nuclear Information System (INIS)
Hussein, E.M.A.
2007-01-01
Monte Carlo simulation of radiation transport provides a powerful research and design tool that resembles in many aspects laboratory experiments. Moreover, Monte Carlo simulations can provide an insight not attainable in the laboratory. However, the Monte Carlo method has its limitations, which if not taken into account can result in misleading conclusions. This paper will present the experience of this author, over almost three decades, in the use of the Monte Carlo method for a variety of applications. Examples will be shown on how the method was used to explore new ideas, as a parametric study and design optimization tool, and to analyze experimental data. The consequences of not accounting in detail for detector response and the scattering of radiation by surrounding structures are two of the examples that will be presented to demonstrate the pitfall of condensed.
On the use of stochastic approximation Monte Carlo for Monte Carlo integration
Liang, Faming
2009-03-01
The stochastic approximation Monte Carlo (SAMC) algorithm has recently been proposed as a dynamic optimization algorithm in the literature. In this paper, we show in theory that the samples generated by SAMC can be used for Monte Carlo integration via a dynamically weighted estimator by calling some results from the literature of nonhomogeneous Markov chains. Our numerical results indicate that SAMC can yield significant savings over conventional Monte Carlo algorithms, such as the Metropolis-Hastings algorithm, for the problems for which the energy landscape is rugged. © 2008 Elsevier B.V. All rights reserved.
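The dynamically weighted estimator the abstract refers to has the familiar self-normalized form sum(w_i f(x_i)) / sum(w_i). A sketch of that estimator form using plain importance weights in place of SAMC's dynamic weights (an assumed stand-in, not the paper's algorithm):

```python
import math
import random

def weighted_mc_integral(f, sampler, weight, n, seed=0):
    """Self-normalized weighted Monte Carlo estimate of E_pi[f]:
    sum(w_i * f(x_i)) / sum(w_i), the same estimator form applied to
    dynamically weighted samples."""
    rng = random.Random(seed)
    num = den = 0.0
    for _ in range(n):
        x = sampler(rng)
        w = weight(x)
        num += w * f(x)
        den += w
    return num / den

# Example: E[x^2] under N(0, 1), sampling from a wider N(0, 2) proposal.
target = lambda x: math.exp(-x * x / 2.0)          # N(0,1) density, unnormalized
proposal = lambda x: math.exp(-x * x / 8.0) / 2.0  # N(0,2) density, unnormalized
est = weighted_mc_integral(
    f=lambda x: x * x,
    sampler=lambda rng: rng.gauss(0.0, 2.0),
    weight=lambda x: target(x) / proposal(x),
    n=200_000,
)
```

Because the estimator is self-normalized, the weights need only be known up to a constant, which is also what makes dynamically weighted SAMC samples usable for integration.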
International Nuclear Information System (INIS)
Wals, A.F.; Hobbs, B.F.; Rijkers, F.A.M.
2004-05-01
The conjectured transmission price response model presented in the first of this two-paper series considers the expectations of oligopolistic generators regarding how demands for transmission services affect the prices of those services. Here, the model is applied to northwest Europe, simulating a mixed transmission pricing system including export fees, a path-based auction system for between-country interfaces, and implicit congestion-based pricing of internal country constraints. The path-based system does not give credit for counterflows when calculating export capability. The application shows that this no-netting policy can exacerbate the economic inefficiencies caused by oligopolistic pricing by generators. The application also illustrates the effects of different generator conjectures regarding rival supply responses and transmission prices. If generators anticipate that their increased demand for transmission services will increase transmission prices, then competitive intensity diminishes and energy prices rise. In the example here, the effect of this anticipation is to double the price increase that results from oligopolistic (Cournot) competition among generators.
Variational Monte Carlo Technique
Indian Academy of Sciences (India)
Variational Monte Carlo Technique: Ground State Energies of Quantum Mechanical Systems. Sukanta Deb. General Article, Resonance – Journal of Science Education, Volume 19, Issue 8, August 2014, pp. 713-739.
Energy Technology Data Exchange (ETDEWEB)
Dickens, J.K.
1988-04-01
This document provides a discussion of the development of the FORTRAN Monte Carlo program SCINFUL (for scintillator full response), a program designed to provide a calculated full response anticipated for either an NE-213 (liquid) scintillator or an NE-110 (solid) scintillator. The program may also be used to compute angle-integrated spectra of charged particles (p, d, t, 3He, and alpha) following neutron interactions with 12C. Extensive comparisons with a variety of experimental data are given. There is generally good overall agreement (<10% differences) of results from SCINFUL calculations with measured detector responses, i.e., N(E_r) vs. E_r where E_r is the response pulse height; the calculations reproduce measured detector responses with an accuracy which, at least partly, depends upon how well the experimental configuration is known. For E_n < 16 MeV and for E_r > 15% of the maximum pulse-height response, calculated spectra are within ±5% of experiment on the average. For E_n up to 50 MeV similarly good agreement is obtained with experiment for E_r > 30% of maximum response. For E_n up to 75 MeV the calculated shape of the response agrees with measurements, but the calculations underpredict the measured response by up to 30%. 65 refs., 64 figs., 3 tabs.
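The physics behind such a response function can be caricatured in a few lines: n-p elastic scattering yields recoil protons distributed uniformly in [0, E_n], and a nonlinear light-output curve maps proton energy to pulse height. The light-output coefficients below follow a commonly published Cecil-style empirical form but are illustrative only, not SCINFUL's internal tables, and all carbon channels are omitted.

```python
import math
import random

def light_output(ep):
    """Cecil-style empirical proton light-output curve for an NE-213-type
    scintillator (coefficients illustrative, not SCINFUL's tables)."""
    return 0.83 * ep - 2.82 * (1.0 - math.exp(-0.25 * ep ** 0.93))

def response_histogram(e_n, n_events, n_bins=50, seed=5):
    """Toy pulse-height response for monoenergetic neutrons on hydrogen:
    n-p elastic scattering gives recoil protons uniform in [0, E_n]."""
    rng = random.Random(seed)
    l_max = light_output(e_n)
    counts = [0] * n_bins
    for _ in range(n_events):
        ep = rng.uniform(0.0, e_n)                     # recoil proton energy, MeV
        b = int(n_bins * light_output(ep) / l_max)
        counts[max(0, min(b, n_bins - 1))] += 1        # clamp edge events
    return counts

hist = response_histogram(e_n=5.0, n_events=100_000)
```

The nonlinearity is what turns the flat recoil-energy distribution into the sloped pulse-height spectra that codes like SCINFUL reproduce in detail.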
A Monte Carlo Green's function method for three-dimensional neutron transport
International Nuclear Information System (INIS)
Gamino, R.G.; Brown, F.B.; Mendelson, M.R.
1992-01-01
This paper describes a Monte Carlo transport kernel capability which has recently been incorporated into the RACER continuous-energy Monte Carlo code. The kernels represent a Green's function method for neutron transport from a fixed-source volume out to a particular volume of interest. This is a very powerful transport technique; moreover, since the kernels are evaluated numerically by Monte Carlo, the problem geometry can be arbitrarily complex, yet exact. The method is intended for problems where an ex-core neutron response must be determined for a variety of reactor conditions. Two examples are ex-core neutron detector response and vessel critical weld fast flux. The response is expressed in terms of neutron transport kernels weighted by a core fission source distribution. In these types of calculations, the response must be computed for hundreds of source distributions, but the kernels only need to be calculated once. The advance described in this paper is that the kernels are generated with a highly accurate three-dimensional Monte Carlo transport calculation instead of an approximate method such as line-of-sight attenuation theory or a synthesized three-dimensional discrete ordinates solution.
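Once the kernels are tabulated, evaluating a response for a new fission-source distribution reduces to a weighted sum, which is why hundreds of source distributions cost almost nothing after the one expensive Monte Carlo kernel calculation. A sketch with invented numbers (the kernel values and the region/detector counts are illustrative):

```python
def detector_responses(kernels, source):
    """Fold one fission-source distribution with precomputed Monte Carlo
    transport kernels: kernels[i][j] is the response in detector j per
    source neutron born in core region i."""
    n_det = len(kernels[0])
    return [sum(s_i * row[j] for s_i, row in zip(source, kernels))
            for j in range(n_det)]

# Illustrative numbers: 3 core regions, 2 ex-core detectors.
kernels = [[1.0e-6, 2.0e-7],
           [5.0e-7, 8.0e-7],
           [1.0e-7, 3.0e-6]]
source = [0.5, 0.3, 0.2]     # one of many candidate fission-source shapes
resp = detector_responses(kernels, source)
```

Re-evaluating `detector_responses` for each new `source` reuses the same kernel table, mirroring the compute-once, fold-many workflow the paper describes.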
Khrutchinsky, Arkady; Drozdovitch, Vladimir; Kutsen, Semion; Minenko, Victor; Khrouch, Valeri; Luckyanov, Nickolas; Voillequé, Paul; Bouville, André
2012-01-01
This paper presents results of Monte Carlo modeling of the SRP-68-01 survey meter used to measure exposure rates near the thyroid glands of persons exposed to radioactivity following the Chernobyl accident. This device was not designed to measure radioactivity in humans. To estimate the uncertainty associated with the measurement results, a mathematical model of the SRP-68-01 survey meter was developed and verified. A Monte Carlo method of numerical simulation of radiation transport was used to calculate the calibration factor for the device and evaluate its uncertainty. The SRP-68-01 survey meter scale coefficient, an important characteristic of the device, was also estimated in this study. The calibration factors of the survey meter were calculated for 131I, 132I, 133I, and 135I content in the thyroid gland for six age groups of the population: newborns; children aged 1 yr, 5 yr, 10 yr, and 15 yr; and adults. A realistic scenario of direct thyroid measurements with an “extended” neck was used to calculate the calibration factors for newborns and one-year-olds. Uncertainties in the device calibration factors due to variability of the device scale coefficient, variability in thyroid mass, and the statistical uncertainty of the Monte Carlo method were evaluated. Relative uncertainties in the calibration factor estimates were found to range from 0.06 for children aged 1 yr to 0.1 for 10-yr and 15-yr children. Positioning errors of the detector during measurements bias the estimated calibration factors mainly in one direction: deviations of the device position from the proper measurement geometry were found to lead to overestimation of the calibration factor by up to 24 percent for adults and up to 60 percent for 1-yr children. The results of this study improve the estimates of 131I thyroid content and, consequently, the thyroid dose estimates derived from direct thyroid measurements performed in Belarus shortly after the Chernobyl accident. PMID:22245289
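The uncertainty budget described above amounts to propagating independent relative errors through the calibration factor. A toy Monte Carlo propagation of two such sources (the coefficients of variation below are illustrative, not the study's values):

```python
import math
import random

def calibration_uncertainty(nominal_cf, scale_cv, mass_cv, n=50_000, seed=3):
    """Propagate two independent Gaussian relative errors (device scale
    coefficient, thyroid mass) through the calibration factor by simple
    Monte Carlo sampling; returns the mean and relative standard deviation."""
    rng = random.Random(seed)
    samples = [nominal_cf * (1.0 + rng.gauss(0.0, scale_cv))
                          * (1.0 + rng.gauss(0.0, mass_cv))
               for _ in range(n)]
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / (n - 1)
    return mean, math.sqrt(var) / mean

mean_cf, rel_sd = calibration_uncertainty(1.0, scale_cv=0.04, mass_cv=0.05)
# combined relative uncertainty is close to sqrt(0.04**2 + 0.05**2) ~ 0.064
```

Systematic effects like the one-sided positioning error are biases rather than random errors, so they shift `mean_cf` instead of widening the spread and must be treated separately, as the study does.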
Floor response spectra of WWER-1000, NPP Kozloduy generated from local seismic excitation
International Nuclear Information System (INIS)
Bojadziev, Z.; Kostov, M.
1996-01-01
The seismic review level characteristics for the Kozloduy NPP site were set to 0.2 g, and the respective free-field acceleration response spectra were derived after a profound site confirmation project. Accordingly, a separate investigation is recommended for local seismic excitation. The goals of the analyses are: to define the seismic motion characteristics from local seismic sources; to perform structural analyses and in-structure spectra generation for local seismic excitation; and to compare the forces (spectra) from local events with those generated from the seismic design review basis.
Next-Generation Library Catalogs and the Problem of Slow Response Time
Directory of Open Access Journals (Sweden)
Margaret Brown-Sica
2010-12-01
Full Text Available Response time as defined for this study is the time that it takes for all files that constitute a single webpage to travel across the Internet from a Web server to the end user’s browser. In this study, the authors tested response times on queries for identical items in five different library catalogs, one of them a next-generation (NextGen catalog. The authors also discuss acceptable response time and how it may affect the discovery process. They suggest that librarians and vendors should develop standards for acceptable response time and use it in the product selection and development processes.
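Response time as defined here is dominated by per-file round trips plus transfer time, which is why pages with many small files can feel slow even on fast links. A toy decomposition of that effect (the wave-of-connections abstraction and all numbers are assumptions for illustration, not the study's measurement method):

```python
import math

def page_response_time(file_sizes_kb, rtt_ms, bandwidth_kb_per_s, parallel=6):
    """Rough model: files are fetched in waves over `parallel` persistent
    connections; each wave costs one round trip, and all bytes share the
    link bandwidth."""
    waves = math.ceil(len(file_sizes_kb) / parallel)
    transfer_s = sum(file_sizes_kb) / bandwidth_kb_per_s
    return waves * (rtt_ms / 1000.0) + transfer_s

# 40 files totalling 900 KB, 60 ms round trips, a 2 MB/s link
t = page_response_time([22.5] * 40, rtt_ms=60.0, bandwidth_kb_per_s=2048.0)
```

Under this model, halving the file count helps more than halving total bytes when round-trip latency dominates, one reason catalog vendors bundle scripts and stylesheets.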
A method to generate generic floor response spectra for operating nuclear power plants
International Nuclear Information System (INIS)
Curreri, J.; Costantino, C.; Subudhi, M.; Reich, M.
1985-01-01
A free-field earthquake response spectrum was used to generate horizontal earthquake time histories. The excitation was applied through the soil and into the various structures to produce responses in equipment. An entire range of soil conditions was used with each structure, from soft soil to solid rock. Actual PWR and BWR Mark I structural models were used as representative of a class of structures. For each model, the stiffness properties were varied, with the same mass, so as to extend the fundamental base structure natural frequency from 2 cps to 36 cps. This resulted in fundamental mode coupled natural frequencies as low as 0.86 cps and as high as 30 cps. From all of these models of soils and structures, floor response spectra were generated at each floor level. The natural frequencies of the structures were varied to obtain maximum response conditions. The actual properties were first used to locate the natural frequencies. The stiffness properties were then varied, with the same mass, to extend the range of the fundamental base structure natural frequency. The intention was to have the coupled structural natural frequencies in the vicinity of the peak amplitude frequency content of the excitation spectrum. Particular attention was therefore given to the frequency band between 2 Hz and 4 Hz. A horizontal generic floor response spectrum is proposed for the top level of a generic structure. Reduction factors are applied to the peak acceleration for equipment at lower levels. (orig./HP)
Definition of Distribution Network Tariffs Considering Distribution Generation and Demand Response
DEFF Research Database (Denmark)
Soares, Tiago; Faria, Pedro; Vale, Zita
2014-01-01
The use of distribution networks in the current scenario of high penetration of Distributed Generation (DG) is a problem of great importance. In the competitive environment of electricity markets and smart grids, Demand Response (DR) is also gaining notable impact, with several benefits for the whole system. [...] the determination of topological distribution factors, and the consequent application of the MW-mile method. The application of the proposed tariff definition methodology is illustrated in a distribution network with 33 buses, 66 DG units, and 32 consumers with DR capacity.
Human response to individually controlled micro environment generated with localized chilled beam
DEFF Research Database (Denmark)
Uth, Simon C.; Nygaard, Linette; Bolashikov, Zhecho Dimitrov
2014-01-01
Indoor environment in a single-office room created by a localised chilled beam with individual control of the primary air flow was studied. The response of 24 human subjects when exposed to the environment generated by the chilled beam was collected via questionnaires during a 2-hour exposure including different work tasks at three locations in the room. The response of the subjects to the environment generated with a chilled ceiling combined with mixing air distribution was used for comparison. The air temperature in the room was kept at 26 or 28 °C. Results show no significant difference in the overall and local thermal sensation reported by the subjects with the two systems. Both systems were equally acceptable. At 26 °C the individual control of the localised chilled beam led to higher acceptability of the work environment. At 28 °C the acceptability decreased with the two systems. It was not acceptable [...]
The application of weight windows to 'Global' Monte Carlo problems
International Nuclear Information System (INIS)
Becker, T. L.; Larsen, E. W.
2009-01-01
This paper describes two basic types of global deep-penetration (shielding) problems-the global flux problem and the global response problem. For each of these, two methods for generating weight windows are presented. The first approach, developed by the authors of this paper and referred to generally as the Global Weight Window, constructs a weight window that distributes Monte Carlo particles according to a user-specified distribution. The second approach, developed at Oak Ridge National Laboratory and referred to as FW-CADIS, constructs a weight window based on intuitively extending the concept of the source-detector problem to global problems. The numerical results confirm that the theory used to describe the Monte Carlo particle distribution for a given weight window is valid and that the figure of merit is strongly correlated to the Monte Carlo particle distribution. Furthermore, they illustrate that, while both methods are capable of obtaining the correct solution, the Global Weight Window distributes particles much more uniformly than FW-CADIS. As a result, the figure of merit is higher for the Global Weight Window. (authors)
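The Global Weight Window idea rests on a simple relation: Monte Carlo particle density scales roughly as flux divided by particle weight, so choosing window centers proportional to flux over the desired particle density flattens the population toward the user-specified distribution. A minimal sketch with illustrative numbers (not the paper's implementation):

```python
def global_weight_window(flux, target_density):
    """Weight-window centers that steer the Monte Carlo particle density
    toward a user-specified distribution: since particle density scales
    roughly as flux / weight, choose w_i ~ flux_i / target_density_i."""
    raw = [f / d for f, d in zip(flux, target_density)]
    return [r / raw[0] for r in raw]   # normalize to unit weight at the source

# Deep-penetration example: flux drops by decades with depth; a uniform
# target density makes windows shrink with depth, splitting particles there.
flux = [1.0, 1.0e-2, 1.0e-4, 1.0e-6]
w = global_weight_window(flux, [0.25] * 4)
```

With a uniform target density the window simply tracks the flux attenuation, which is why particles get split aggressively in the deep, low-flux cells.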
Hydrodynamic response of fuel rod with longitudinal fins to upstream generated vortices
International Nuclear Information System (INIS)
Naot, D.; Oron, A.; Technion-Israel Inst. of Tech., Haifa. Dept. of Mechanical Engineering)
1984-01-01
The hydrodynamic response of turbulent channel flow to upstream generated vortices was numerically simulated for a fuel element with longitudinal cooling fins. Turbulence is modelled by an algebraic stress model and an energy-dissipation model. The developing flow is solved using a parabolic pressure-correction algorithm. The decay of the initial vortices in a non-circular sub-channel in the presence of geometry-driven secondary currents is described, and the uncertainty in the local turbulent shear stresses is discussed. (orig.)
Agosta, John Mark
2013-01-01
This paper works through the optimization of a real world planning problem, with a combination of a generative planning tool and an influence diagram solver. The problem is taken from an existing application in the domain of oil spill emergency response. The planning agent manages constraints that order sets of feasible equipment employment actions. This is mapped at an intermediate level of abstraction onto an influence diagram. In addition, the planner can apply a surveillance operator that...
Fernández, David Lorente
2015-01-01
This chapter uses a comparative approach to examine the maintenance of Indigenous practices related with Learning by Observing and Pitching In in two generations--parent generation and current child generation--in a Central Mexican Nahua community. In spite of cultural changes and the increase of Western schooling experience, these practices persist, to different degrees, as a Nahua cultural heritage with close historical relations to the key value of cuidado (stewardship). The chapter explores how children learn the value of cuidado in a variety of everyday activities, which include assuming responsibility in many social situations, primarily in cultivating corn, raising and protecting domestic animals, health practices, and participating in family ceremonial life. The chapter focuses on three main points: (1) Cuidado (assuming responsibility for), in the Nahua socio-cultural context, refers to the concepts of protection and "raising" as well as fostering other beings, whether humans, plants, or animals, to reach their potential and fulfill their development. (2) Children learn cuidado by contributing to family endeavors: They develop attention and self-motivation; they are capable of responsible actions; and they are able to transform participation to achieve the status of a competent member of local society. (3) This collaborative participation allows children to continue the cultural tradition and to preserve a Nahua heritage at a deeper level in a community in which Nahuatl language and dress have disappeared, and people do not identify themselves as Indigenous. © 2015 Elsevier Inc. All rights reserved.
Mass Market Demand Response and Variable Generation Integration Issues: A Scoping Study
Energy Technology Data Exchange (ETDEWEB)
Cappers, Peter; Mills, Andrew; Goldman, Charles; Wiser, Ryan; Eto, Joseph H.
2011-09-10
This scoping study focuses on the policy issues inherent in the claims made by some Smart Grid proponents that the demand response potential of mass market customers, enabled by widespread implementation of Advanced Metering Infrastructure (AMI) through the Smart Grid, could be the “silver bullet” for mitigating variable generation integration issues. In terms of approach, we will: identify key issues associated with integrating large amounts of variable generation into the bulk power system; identify demand response opportunities made more readily available to mass market customers through widespread deployment of AMI systems and how they can affect the bulk power system; assess the extent to which these mass market Demand Response (DR) opportunities can mitigate Variable Generation (VG) integration issues in the near term, and what electricity market structures and regulatory practices could be changed to further expand the ability of DR to mitigate VG integration issues over the long term; and provide a qualitative comparison of DR and other approaches to mitigating VG integration issues.
Method to generate generic floor response spectra for operating nuclear power plant
International Nuclear Information System (INIS)
Curreri, J.; Costantino, C.; Subudhi, M.; Reich, M.
1985-01-01
The general approach in the development of the response spectra was to study the effects on the dynamic characteristics of each of the elements in the chain of events between the loads and the responses. This includes the loads, the soils and the structures. A free-field earthquake response spectrum was used to generate horizontal earthquake time histories. The excitation was applied through the soil and into the various structures to produce responses in equipment. An entire range of soil conditions was used with each structure, from soft soil to solid rock. Actual PWR and BWR Mark I structural models were used as representative of a class of structures. For each model, the stiffness properties were varied, with the same mass, so as to extend the fundamental base structure natural frequency from 2 cps to 36 cps. This resulted in fundamental-mode coupled natural frequencies as low as 0.86 cps and as high as 30 cps. From all of these models of soils and structures, floor response spectra were generated at each floor level. The natural frequencies of the structures were varied to obtain maximum response conditions: the actual properties were first used to locate the natural frequencies, and the stiffness properties were then varied, with the same mass, so that the coupled structural natural frequencies fell in the vicinity of the peak-amplitude frequency content of the excitation spectrum. Particular attention was therefore given to the frequency band between 2 Hz and 4 Hz. A horizontal generic floor response spectrum is proposed for the top level of a generic structure. Reduction factors are applied to the peak acceleration for equipment at lower levels.
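The last step of the chain described above — turning a floor acceleration time history into a floor response spectrum — amounts to sweeping damped single-degree-of-freedom oscillators across frequency and recording each one's peak absolute acceleration. A minimal sketch of that step (the function name, the semi-implicit Euler integrator, and the 5% damping default are my own illustrative choices, not the report's method):

```python
import numpy as np

def floor_response_spectrum(accel, dt, freqs, zeta=0.05):
    """Peak absolute acceleration of damped SDOF oscillators whose base
    follows the floor acceleration time history `accel` (sampled at `dt`)."""
    peaks = []
    for f in freqs:
        w = 2.0 * np.pi * f
        x = v = 0.0                      # relative displacement / velocity
        peak = 0.0
        for ag in accel:
            # semi-implicit Euler on  x'' + 2*zeta*w*x' + w^2*x = -ag
            a_rel = -ag - 2.0 * zeta * w * v - w * w * x
            v += a_rel * dt
            x += v * dt
            # absolute acceleration = x'' + ag = -(2*zeta*w*v + w^2*x)
            peak = max(peak, abs(2.0 * zeta * w * v + w * w * x))
        peaks.append(peak)
    return np.array(peaks)
```

An oscillator tuned near a dominant excitation frequency shows resonant amplification (roughly 1/(2ζ) for harmonic input), while a very stiff oscillator simply tracks the peak floor acceleration — which is why the study concentrates on the 2-4 Hz band where the excitation content peaks.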
Kersevan, Borut Paul; Richter-Waş, Elzbieta
2013-03-01
The AcerMC Monte Carlo generator is dedicated to the generation of Standard Model background processes that were recognised as critical for searches at the LHC and whose generation was previously either unavailable or not straightforward. The program itself provides a library of the massive matrix elements (coded by MADGRAPH) and native phase space modules for generation of a set of selected processes. The hard process event can be completed by initial- and final-state radiation, hadronisation and decays through the existing interfaces with the PYTHIA, HERWIG or ARIADNE event generators and (optionally) TAUOLA and PHOTOS. Interfaces to all these packages are provided in the distribution version. The phase-space generation is based on the multi-channel self-optimising approach using the modified Kajantie-Byckling formalism for phase-space construction; further smoothing of the phase space is obtained using a modified ac-VEGAS algorithm. An additional improvement in the recent versions is the inclusion of a consistent prescription for matching the matrix element calculations with parton showering for a select list of processes. Catalogue identifier: ADQQ_v2_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADQQ_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 3853309 No. of bytes in distributed program, including test data, etc.: 68045728 Distribution format: tar.gz Programming language: FORTRAN 77 with popular extensions (g77, gfortran). Computer: All running Linux. Operating system: Linux. Classification: 11.2, 11.6. External routines: CERNLIB (http://cernlib.web.cern.ch/cernlib/), LHAPDF (http://lhapdf.hepforge.org/) Catalogue identifier of previous version: ADQQ_v1_0 Journal reference of previous version: Comput. Phys. Comm. 149(2003)142 Does
Monte Carlo techniques for analyzing deep-penetration problems
International Nuclear Information System (INIS)
Cramer, S.N.; Gonnord, J.; Hendricks, J.S.
1986-01-01
Current methods and difficulties in Monte Carlo deep-penetration calculations are reviewed, including statistical uncertainty and recent adjoint optimization of splitting, Russian roulette, and exponential transformation biasing. Other aspects of the random walk and estimation processes are covered, including the relatively new DXANG angular biasing technique. Specific items summarized are albedo scattering, Monte Carlo coupling techniques with discrete ordinates and other methods, adjoint solutions, and multigroup Monte Carlo. The topic of code-generated biasing parameters is presented, including the creation of adjoint importance functions from forward calculations. Finally, current and future work in the area of computer learning and artificial intelligence is discussed in connection with Monte Carlo applications.
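Two of the variance-reduction devices named above, splitting and Russian roulette, each reduce to a few lines of bookkeeping on particle weights. A sketch of the standard weight-conserving form (function names and the threshold values are illustrative assumptions, not taken from the review):

```python
import random

def russian_roulette(weight, threshold=0.1, survival=0.5):
    """Kill low-weight particles without biasing the mean: a particle
    below `threshold` survives with probability `survival`, and a
    survivor's weight is divided by `survival` to compensate."""
    if weight >= threshold:
        return weight                    # heavy enough: no game played
    if random.random() < survival:
        return weight / survival         # survivor carries boosted weight
    return 0.0                           # particle terminated

def split(weight, n):
    """Split an important particle into n copies of weight/n each,
    leaving the total expected weight unchanged."""
    return [weight / n] * n
```

Both operations leave the expected total weight unchanged, which is what makes them "fair games": roulette trims unimportant histories in shallow regions, splitting multiplies important ones deep in the shield.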
Monte Carlo codes and Monte Carlo simulator program
International Nuclear Information System (INIS)
Higuchi, Kenji; Asai, Kiyoshi; Suganuma, Masayuki.
1990-03-01
Four typical Monte Carlo codes, KENO-IV, MORSE, MCNP and VIM, have been vectorized on the VP-100 at the Computing Center, JAERI. The problems in vector processing of Monte Carlo codes on vector processors have become clear through this work. As a result, it was recognized that there are inherent difficulties in obtaining good performance from vector processing of Monte Carlo codes. A Monte Carlo computing machine, which processes Monte Carlo codes with high performance, has been under development at our Computing Center since 1987. The concept of the Monte Carlo computing machine and its performance have been investigated and estimated by using a software simulator. In this report, the problems in vectorization of Monte Carlo codes, the Monte Carlo pipelines proposed to mitigate these difficulties, and the results of the performance estimation of the Monte Carlo computing machine by the simulator are described. (author)
A microculture method for the generation of primary immune responses in vitro.
Pike, B L
1975-11-01
A microculture method for the generation and study of the primary immune response of murine spleen cells to defined antigens in vitro is described. Many of the variable parameters which occur in culture systems have been studied in an attempt to define the optimal culture conditions for this system. Cultures of 10(6) CBA spleen cells consistently produced an immune response of 300-600 hapten-specific plaque-forming cells after 3 days of incubation with the T cell-independent antigens DNP-POL and NIP-POL. Cultures were set up in Microtest II tissue culture plates in a volume of 0.2 ml of medium containing 10(-4) M 2-mercaptoethanol. The system described has the advantages of being highly efficient and reproducible and utilises small amounts of cells, medium and antigen. It provides a simple, economic and reliable approach for the systematic study of the immune response in vitro.
Li, Yi-Chen; Zhang, Yu Shrike; Akpek, Ali; Shin, Su Ryon; Khademhosseini, Ali
2016-12-02
Four-dimensional (4D) bioprinting, encompassing a wide range of disciplines including bioengineering, materials science, chemistry, and computer sciences, is emerging as the next-generation biofabrication technology. By utilizing stimuli-responsive materials and advanced three-dimensional (3D) bioprinting strategies, 4D bioprinting aims to create dynamic 3D patterned biological structures that can transform their shapes or behavior under various stimuli. In this review, we highlight the potential use of various stimuli-responsive materials for 4D printing and their extension into biofabrication. We first discuss the state of the art and limitations associated with current 3D printing modalities and their transition into the inclusion of the additional time dimension. We then suggest the potential use of different stimuli-responsive biomaterials as the bioink that may achieve 4D bioprinting where transformation of fabricated biological constructs can be realized. We finally conclude with future perspectives.
2009-01-01
Carlo Rubbia turned 75 on March 31, and CERN held a symposium to mark his birthday and pay tribute to his impressive contribution to both CERN and science. Carlo Rubbia, 4th from right, together with the speakers at the symposium.On 7 April CERN hosted a celebration marking Carlo Rubbia’s 75th birthday and 25 years since he was awarded the Nobel Prize for Physics. "Today we will celebrate 100 years of Carlo Rubbia" joked CERN’s Director-General, Rolf Heuer in his opening speech, "75 years of his age and 25 years of the Nobel Prize." Rubbia received the Nobel Prize along with Simon van der Meer for contributions to the discovery of the W and Z bosons, carriers of the weak interaction. During the symposium, which was held in the Main Auditorium, several eminent speakers gave lectures on areas of science to which Carlo Rubbia made decisive contributions. Among those who spoke were Michel Spiro, Director of the French National Insti...
Monte Carlo Numerical Models for Nuclear Logging Applications
Directory of Open Access Journals (Sweden)
Fusheng Li
2012-06-01
Nuclear logging is one of the most important logging services provided by many oil service companies. The main parameters of interest are formation porosity, bulk density, and natural radiation. Other services are also provided using complex nuclear logging tools, such as formation lithology/mineralogy, etc. Some parameters can be measured using neutron logging tools and some can only be measured using a gamma ray tool. To understand the response of nuclear logging tools, neutron transport/diffusion theory and photon diffusion theory are needed. Unfortunately, for most cases there are no analytical answers if complex tool geometry is involved. For many years, Monte Carlo numerical models have been used by nuclear scientists in the well logging industry to address these challenges. The models have been widely employed in the optimization of nuclear logging tool design and the development of interpretation methods for nuclear logs. They have also been used to predict the response of nuclear logging systems for forward simulation problems. In this case, the system parameters including geometry, materials and nuclear sources, etc., are pre-defined, and the transport and interactions of nuclear particles (such as neutrons, photons and/or electrons) in the regions of interest are simulated according to detailed nuclear physics theory and their nuclear cross-section data (probability of interacting). Then the deposited energies of particles entering the detectors are recorded and tallied, and the tool responses to such a scenario are generated. A general-purpose code named Monte Carlo N-Particle (MCNP) has been the industry standard for some time. In this paper, we briefly introduce the fundamental principles of Monte Carlo numerical modeling and review the physics of MCNP. Some of the latest developments of Monte Carlo models are also reviewed. A variety of examples are presented to illustrate the uses of Monte Carlo numerical models.
Research on the response of various persons to information about nuclear power generation
International Nuclear Information System (INIS)
Maruta, Katsuhiko
2014-01-01
The author surveyed blogs readily available on the Internet for three purposes: (1) to grasp the public response to nuclear problems after the accident at the Fukushima Daiichi Nuclear Power Station, (2) to determine changes in the number of blogs based on an article search, and (3) to identify the stance of bloggers on the necessity of nuclear power generation based on reading contribution contents. Furthermore, the author conducted a questionnaire survey of public response in reference to the results of the blog survey. From the blog survey, it was found that immediately after the accident, the number of blogs that were negative toward nuclear power generation drastically increased, but as time has passed, blogs that are positive are increasing somewhat in number, in expectation of stabilized economic and living conditions. The main results of the questionnaire survey are as follows. (1) Many persons want power generation that is non-nuclear; this is because they have good expectations for renewable energy sources or new thermal power generation as alternative energy, and they feel strongly anxious about the issue of disposal of spent nuclear fuel. (2) Because of the risk of negative impacts that electricity shortages would bring to the economy and lifestyles, some persons do not want immediate decommissioning of nuclear power reactors but favor a phase-out of nuclear power generation. Though public opinion about nuclear problems includes the expectation that a single alternative energy source can be selected, there is a possibility that this opinion will shift toward finding an optimum mix of plural energy sources. (author)
Kasesaz, Y; Khalafi, H; Rahmani, F
2013-12-01
Optimization of the Beam Shaping Assembly (BSA) has been performed using the MCNP4C Monte Carlo code to shape the 2.45 MeV neutrons produced in the D-D neutron generator. The optimal design of the BSA has been chosen by considering in-air figures of merit (FOM); it consists of 70 cm Fluental as a moderator, 30 cm Pb as a reflector, 2 mm (6)Li as a thermal neutron filter and 2 mm Pb as a gamma filter. The neutron beam can be evaluated by in-phantom parameters, from which the therapeutic gain can be derived. Direct evaluation of both sets of FOMs (in-air and in-phantom) is very time consuming. In this paper a Response Matrix (RM) method is suggested to reduce the computing time. This method is based on considering the neutron spectrum at the beam exit and calculating the contribution of the various dose components in the phantom to build the Response Matrix. Results show good agreement between direct calculation and the RM method. Copyright © 2013 Elsevier Ltd. All rights reserved.
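Once the response matrix has been precomputed, the in-phantom evaluation described above collapses to a single matrix-vector fold: each dose component is the beam-exit spectrum weighted by that component's per-group response. A toy sketch with invented numbers (the 3-group, 2-component matrix below is purely illustrative; real entries would come from one-time per-group transport runs):

```python
import numpy as np

# Hypothetical response matrix: rows = beam-exit energy groups,
# columns = in-phantom dose components (e.g. thermal-neutron dose, gamma dose).
# Each entry: dose per unit fluence in that group, tabulated once by MC runs.
RM = np.array([[0.80, 0.10],
               [0.30, 0.20],
               [0.05, 0.40]])

def dose_components(spectrum, rm=RM):
    """Fold a beam-exit spectrum (fluence per group) with the response
    matrix instead of re-running the full in-phantom transport."""
    return spectrum @ rm

beam_exit = np.array([1.0, 0.5, 0.2])    # per-group fluence at the beam exit
doses = dose_components(beam_exit)
```

The time saving is exactly the point of the RM method: candidate BSA designs change only the beam-exit spectrum, so each new design costs one fold rather than a full in-phantom simulation.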
Brushless DC motor control system responsive to control signals generated by a computer or the like
Packard, Douglas T. (Inventor); Schmitt, Donald E. (Inventor)
1987-01-01
A control system for a brushless DC motor responsive to digital control signals is disclosed. The motor includes a multiphase wound stator and a permanent magnet rotor. The rotor is arranged so that each phase winding, when energized from a DC source, will drive the rotor through a predetermined angular position or step. A commutation signal generator responsive to the shaft position provides a commutation signal for each winding. A programmable control signal generator such as a computer or microprocessor produces individual digital control signals for each phase winding. The control signals and commutation signals associated with each winding are applied to an AND gate for that phase winding. Each gate controls a switch connected in series with the associated phase winding and the DC source so that each phase winding is energized only when the commutation signal and the control signal associated with that phase winding are present. The motor shaft may be advanced one step at a time to a desired position by applying a predetermined number of control signals in the proper sequence to the AND gates and the torque generated by the motor may be regulated by applying a separate control signal to each AND gate which is pulse width modulated to control the total time that each switch connects its associated winding to the DC source during each commutation period.
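The per-phase gating in the patent is a plain logical AND of two signals per winding: a winding is energized only when both its commutation signal and its computer-generated control signal are present. A minimal sketch (the three-phase snapshot values are hypothetical):

```python
def winding_enabled(commutation: bool, control: bool) -> bool:
    """Per-phase AND gate: the series switch closes only when the
    rotor-position commutation signal AND the computer's digital
    control signal for that winding are both present."""
    return commutation and control

# Hypothetical snapshot for a 3-phase machine: the rotor position
# selects phase A; the computer has enabled phases A and B.
commutation = [True, False, False]
control = [True, True, False]
energized = [winding_enabled(c, k) for c, k in zip(commutation, control)]
```

Stepping then means issuing control signals in the winding sequence, while torque regulation pulse-width modulates each control input to limit the on-time of its switch within a commutation period.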
Trans-generational responses to low pH depend on parental gender in a calcifying tubeworm
Lane, Ackley; Campanati, Camilla; Dupont, Sam; Thiyagarajan, Vengatesen
2015-01-01
The uptake of anthropogenic CO2 emissions by oceans has started decreasing pH and carbonate ion concentrations of seawater, a process called ocean acidification (OA). Occurring over centuries and many generations, evolutionary adaptation and epigenetic transfer will change species responses to OA over time. Trans-generational responses, via genetic selection or trans-generational phenotypic plasticity, differ depending on species and exposure time as well as differences between individuals su...
Monte Carlo electron/photon transport
International Nuclear Information System (INIS)
Mack, J.M.; Morel, J.E.; Hughes, H.G.
1985-01-01
A review of nonplasma coupled electron/photon transport using the Monte Carlo method is presented. Remarks are mainly restricted to linearized formalisms at electron energies from 1 keV to 1000 MeV. Applications involving pulse-height estimation, transport in external magnetic fields, and optical Cerenkov production are discussed to underscore the importance of this branch of computational physics. Advances in electron multigroup cross-section generation are reported, and their impact on future code development is assessed. Progress toward the transformation of MCNP into a generalized neutral/charged-particle Monte Carlo code is described. 48 refs
Monte Carlo applications to radiation shielding problems
International Nuclear Information System (INIS)
Subbaiah, K.V.
2009-01-01
Monte Carlo methods are a class of computational algorithms that rely on repeated random sampling of physical and mathematical systems to compute their results. The basic concepts of MC are nevertheless simple and straightforward and can be learned by using a personal computer. Uses of Monte Carlo methods require large amounts of random numbers, and it was their use that spurred the development of pseudorandom number generators, which were far quicker to use than the tables of random numbers previously used for statistical sampling. In Monte Carlo simulation of radiation transport, the history (track) of a particle is viewed as a random sequence of free flights that end with an interaction event where the particle changes its direction of movement, loses energy and, occasionally, produces secondary particles. The Monte Carlo simulation of a given experimental arrangement (e.g., an electron beam coming from an accelerator and impinging on a water phantom) consists of the numerical generation of random histories. To simulate these histories we need an interaction model, i.e., a set of differential cross sections (DCS) for the relevant interaction mechanisms. The DCSs determine the probability distribution functions (pdfs) of the random variables that characterize a track: 1) the free path between successive interaction events, 2) the type of interaction taking place, and 3) the energy loss and angular deflection in a particular event (and the initial state of emitted secondary particles, if any). Once these pdfs are known, random histories can be generated by using appropriate sampling methods. If the number of generated histories is large enough, quantitative information on the transport process may be obtained by simply averaging over the simulated histories. The Monte Carlo method yields the same information as the solution of the Boltzmann transport equation, with the same interaction model, but is easier to implement. In particular, the simulation of radiation
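The first of the track-defining pdfs, the free path, is the classic inverse-transform example: for a total macroscopic cross section Σ_t the path pdf is Σ_t·exp(-Σ_t·s), so s = -ln(1-ξ)/Σ_t for a uniform random number ξ. A minimal sketch (variable names are my own):

```python
import math
import random

def sample_free_path(sigma_t, rng=random.random):
    """Inverse-transform sample of the free-flight distance for a total
    macroscopic cross section sigma_t (per cm): pdf sigma_t*exp(-sigma_t*s)."""
    return -math.log(1.0 - rng()) / sigma_t

# Averaging over many histories: the sample mean should approach the
# mean free path 1/sigma_t, illustrating the averaging step in the text.
random.seed(1)
mean_path = sum(sample_free_path(2.0) for _ in range(200_000)) / 200_000
```

Sampling the interaction type (a discrete pdf over the competing cross sections) and the energy loss/deflection (from the DCSs) follow the same pattern with their own distributions.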
Generation of floor response spectra for mixed-oxide fuel fabrication plants
International Nuclear Information System (INIS)
Arthur, D.F.; Murray, R.C.; Tokarz, F.J.
1975-01-01
Floor or amplified response spectra are generally used as input motion for seismic analysis of critical equipment and piping in nuclear power plants and related facilities. The floor spectra are normally the result of a time-history calculation of building response to ground shaking. However, alternate approximate methods have been suggested by both Kapur and Biggs. As part of a study for the Nuclear Regulatory Commission horizontal floor response spectra were generated and compared by all three methods. The dynamic analyses were performed on a model of the Westinghouse Recycle Fuels Plant Manufacturing Building (MOFFP). Input to the time-history calculations was a synthesized accelerogram whose response spectrum is similar to that in Regulatory Guide 1.60. The response spectrum of the synthetic ground motion was used as input to the Kapur and Biggs methods. Calculations were performed for both hard (3500 fps) and soft (1500 fps) foundation soils. Results of comparison of the three methods indicate that although the approximate methods could easily be made acceptable from a safety standpoint, they would be overly conservative. The time-history method will yield floor spectra which are less uncertain and less conservative for a relatively modest additional effort. (auth)
Energy Technology Data Exchange (ETDEWEB)
Tatara, C.P.; Mulvey, M.; Newman, M.C.
1999-12-01
Genetic and demographic responses of mosquitofish were examined after multiple generations of exposure to mercury. Previous studies of acute lethal exposures of mosquitofish to either mercury or arsenic demonstrated a consistent correlation between time to death and genotype at the glucosephosphate isomerase-2 (Gpi-2) locus. A mesocosm study involving mosquitofish populations exposed to mercury for 111 d showed significant female sexual selection and fecundity selection at the Gpi-2 locus. Here the mesocosm study was extended to populations exposed to mercury for several (approx. four) generations. After 2 years, control and mercury-exposed populations met Hardy-Weinberg expectations and showed no evidence of genetic bottlenecks. The mean number of heterozygous loci did not differ significantly between the mercury-exposed and control populations. Significant differences in allele frequencies at the Gpi-2 locus were observed between the mercury-exposed and control populations. Relative to the initial and control allele frequencies, the Gpi-2(100) allele frequency was lower, the Gpi-2(66) allele frequency increased, but the Gpi-2(38) allele frequency did not change in mercury-exposed populations. No significant differences were found in standard length, weight, sex ratio, or age class ratio between the control and mercury-exposed populations. Allele frequency changes at the Gpi-2 locus suggest population-level response to chronic mercury exposure. Changes in allele frequency may be useful as indicators of population response to contaminants, provided that the population in question is well understood.
Generation of artificial time-histories, rich in all frequencies from given response spectra
International Nuclear Information System (INIS)
Levy, S.; Wilkinson, J.P.D.
1975-01-01
In order to apply the time-history method of seismic analysis, it is often desirable to generate a suitable artificial time-history from a given response spectrum. The method described in this paper allows the generation of such a time-history that is also rich in all frequencies in the spectrum. This richness is achieved by choosing a large number of closely-spaced frequency points such that the adjacent frequencies have their half-power points overlap. The adjacent frequencies satisfy the condition that the frequency interval Δf near a given frequency f is such that (Δf)/f<2c/csub(c) where c is the damping of the system and csub(c) is the critical damping. In developing an artificial time-history, it is desirable to specify the envelope and duration of the record, very often in such a manner as to reproduce the envelope property of a specific earthquake record, and such an option is available in the method described. Examples are given of the development of typical artificial time-histories from earthquake design response spectra and from floor response spectra. (Auth.)
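The construction above can be sketched directly: pick frequencies on a grid whose spacing respects Δf/f < 2c/c_c, sum sinusoids with random phases, and shape the result with a prescribed envelope. In this illustrative sketch the amplitudes are taken as given; a real implementation would iterate them until the computed response spectrum envelops the target (function names and the ramp-hold-decay envelope are my own assumptions):

```python
import numpy as np

def frequency_grid(f_lo, f_hi, zeta):
    """Frequencies spaced so adjacent half-power bands overlap:
    delta_f / f < 2*zeta  ->  geometric spacing with ratio < 1 + 2*zeta."""
    freqs, f = [], f_lo
    while f < f_hi:
        freqs.append(f)
        f *= 1.0 + 1.5 * zeta            # safely inside the 2*zeta bound
    return np.array(freqs)

def synthetic_history(t, freqs, amps, rng):
    """Sum of sinusoids with random phases, shaped by a simple
    ramp-hold-decay envelope; in practice `amps` would be iterated
    against the target response spectrum."""
    phases = rng.uniform(0.0, 2.0 * np.pi, len(freqs))
    motion = sum(a * np.sin(2.0 * np.pi * f * t + p)
                 for f, a, p in zip(freqs, amps, phases))
    env = np.minimum(1.0, np.minimum(t / 2.0, (t[-1] - t) / 4.0))
    return env * motion
```

The geometric spacing keeps Δf/f constant across the band, so every oscillator frequency in the spectrum lies within the half-power width of some component, which is what "rich in all frequencies" requires.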
Microvascular pressure responses of second-generation rats chronically exposed to 2 g centrifugation
Richardson, D. R.; Knapp, C. F.
1977-01-01
Preliminary results are presented for a study aimed at a quantitative comparison of microvascular dynamics in second-generation rats reared in a 2-g force field produced by centrifugation with similar data from animals reared in a centrifuge that produced only a 1-g force. It is shown that the pressure distribution in the mesenteric microvasculature of the second generation of rats reared in a 2-g environment, as well as the animals' blood pressure response to epinephrine, are significantly different compared to their 1-g counterparts. In particular, chronic centrifugation at both 1 g and 2 g enhances arterial blood pressure, and the 2-g force field attenuates the pressor effects of norepinephrine.
Prediction of power system frequency response after generator outages using neural nets
Energy Technology Data Exchange (ETDEWEB)
Djukanovic, M B; Popovic, D P [Electrotechnicki Inst. ' Nikola Tesla' , Belgrade (Yugoslavia); Sobajic, D J; Pao, Y -H [Case Western Reserve Univ., Cleveland, OH (United States)
1993-09-01
A new methodology is presented for estimating the frequency behaviour of power systems, needed as an indication of under-frequency load shedding in steady-state security assessment. It is well known that large structural disturbances such as generator tripping or load outages can initiate cascading outages, system separation into islands, and even complete system breakup. The approach provides a fairly accurate method of estimating the system average frequency response without making simplifications or neglecting non-linearities and small time constants in the equations of generating units, voltage regulators and turbines. The efficiency of the new procedure is demonstrated using the New England power system model for a series of characteristic perturbations. The validity of the proposed approach is verified by comparison with the simulation of short-term dynamics including effects of control and automatic devices. (author)
Neutron spectrum unfolding using genetic algorithm in a Monte Carlo simulation
Energy Technology Data Exchange (ETDEWEB)
Suman, Vitisha [Health Physics Division, Bhabha Atomic Research Centre, Mumbai 400085 (India); Sarkar, P.K., E-mail: pksarkar02@gmail.com [Manipal Centre for Natural Sciences, Manipal University, Manipal 576104 (India)
2014-02-11
A spectrum unfolding technique GAMCD (Genetic Algorithm and Monte Carlo based spectrum Deconvolution) has been developed using the genetic algorithm methodology within the framework of Monte Carlo simulations. Each Monte Carlo history starts with initial solution vectors (population) as randomly generated points in the hyper-dimensional solution space that are related to the measured data by the response matrix of the detection system. The transition of the solution points in the solution space from one generation to another is governed by the genetic algorithm methodology, using the techniques of cross-over (mating) and mutation in a probabilistic manner to add new solution points to the population. The population size is kept constant by discarding solutions having lesser fitness values (larger differences between measured and calculated results). Solutions having the highest fitness value at the end of each Monte Carlo history are averaged over all histories to obtain the final spectral solution. The present method shows promising results in neutron spectrum unfolding for both under-determined and over-determined problems with simulated test data as well as measured data when compared with some existing unfolding codes. An attractive advantage of the present method is the independence of the final spectra from the initial guess spectra.
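The loop described in the abstract (random initial population related to the measurements through the response matrix, crossover and mutation, constant population size by discarding low-fitness solutions) can be sketched in toy form. This is not the GAMCD code; the response matrix size, population size, and mutation scale below are illustrative, and the averaging over multiple Monte Carlo histories is omitted:

```python
import numpy as np

rng = np.random.default_rng(42)

def unfold(R, measured, pop_size=60, generations=300, mut=0.1):
    """Toy GA unfolding: evolve non-negative spectra phi so that
    R @ phi reproduces the measured detector readings."""
    n = R.shape[1]
    pop = rng.uniform(0.0, 2.0, (pop_size, n))        # random initial spectra

    def fitness(p):
        # higher fitness = smaller difference between measured and folded
        return -np.linalg.norm(p @ R.T - measured, axis=1)

    for _ in range(generations):
        parents = pop[rng.integers(0, pop_size, (pop_size, 2))]
        cut = rng.integers(1, n, pop_size)
        mask = np.arange(n) < cut[:, None]
        children = np.where(mask, parents[:, 0], parents[:, 1])  # one-point crossover
        children = np.clip(children + rng.normal(0.0, mut, children.shape),
                           0.0, None)                            # mutation, phi >= 0
        pop = np.vstack([pop, children])
        pop = pop[np.argsort(fitness(pop))[-pop_size:]]          # constant population size
    return pop[np.argmax(fitness(pop))]
```

Because the initial population is purely random and selection acts only through the residual against the measurements, the result does not depend on any guess spectrum, mirroring the advantage the authors highlight.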
Random Numbers and Monte Carlo Methods
Scherer, Philipp O. J.
Many-body problems often involve the calculation of integrals of very high dimension which cannot be treated by standard methods. For the calculation of thermodynamic averages, Monte Carlo methods, which sample the integration volume at randomly chosen points, are very useful. After summarizing some basic statistics, we discuss algorithms for the generation of pseudo-random numbers with a given probability distribution, which are essential for all Monte Carlo methods. We show how the efficiency of Monte Carlo integration can be improved by sampling preferentially the important configurations. Finally the famous Metropolis algorithm is applied to classical many-particle systems. Computer experiments visualize the central limit theorem and apply the Metropolis method to the traveling salesman problem.
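The Metropolis algorithm mentioned above fits in a dozen lines: propose a symmetric random step, accept it with probability min(1, p(x')/p(x)), and on rejection repeat the old state. A minimal one-dimensional sketch (names are my own, and log densities are used for numerical safety):

```python
import math
import random

def metropolis(log_p, x0, steps, step_size=1.0):
    """Minimal 1-D Metropolis sampler for an unnormalized density
    exp(log_p): symmetric uniform proposal, accept with min(1, p'/p)."""
    x, samples = x0, []
    for _ in range(steps):
        x_new = x + random.uniform(-step_size, step_size)
        # accept if log(u) < log_p(x_new) - log_p(x); rejected moves
        # repeat the current state, which keeps the chain unbiased
        if math.log(random.random() + 1e-300) < log_p(x_new) - log_p(x):
            x = x_new
        samples.append(x)
    return samples
```

Because only the ratio p(x')/p(x) appears, the normalization constant of the distribution is never needed, which is exactly what makes the method practical for thermodynamic averages over Boltzmann weights.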
Alsharif, Ala'a; Kruger, Estie; Tennant, Marc
2012-10-01
Over the past twenty-five years, there has been a substantial increase in work-based demands, thought to be due to an intensifying, competitive work environment. More recently, however, the question of work-life balance has been attracting increasing attention. The purpose of this study was to discover the attitudes of the next generation of dentists in Australia toward parenting responsibility and work-life balance. Questionnaires on work-life balance were distributed to all fourth-year students at three dental schools in Australia. A total of 137 (76 percent) surveys were completed and returned. Most respondents indicated that they would take time off to focus on childcare, and just over half thought childcare should be shared by both parents. Thirty-seven percent felt that a child would have a considerable effect on their careers. Differences were seen in responses when compared by gender. The application of sensitivity analysis to workforce calculations based around changing societal work-life expectations can have substantial effects on predicting workforce data a decade into the future. It is not just the demographic change to a more feminized workforce in Australia that can have a substantial effect, but also the change in social expectations of males with regard to parenting.
A New Generation Draws The Line: Humanitarian Intervention And The “Responsibility To Protect” Today
Directory of Open Access Journals (Sweden)
M. Rastovic
2017-01-01
Book review: Chomsky N. A New Generation Draws the Line: Humanitarian Intervention and the “Responsibility to Protect” Today. Boulder: Paradigm Publishers, 2012. 176 p. The book under review examines the controversial norm of “humanitarian intervention”. It clearly demonstrates that the norm has been used selectively and with different justifications in various situations. Noam Chomsky has managed to present a fair and balanced account of the positive and negative aspects of humanitarian interventions, as well as to provide thought-provoking policy recommendations for improving human rights protection.
DEFF Research Database (Denmark)
Wang, Qi
Distributed energy resources (DERs), such as distributed generation (DG) and demand response (DR), have been recognized worldwide as valuable resources. High integration of DG and DR in the distribution network inspires a potential deregulated environment for the distribution company (DISCO...... in the presented DL market and transact with TL real-time market. A one-leader multi-follower-type bi-level model is proposed to indicate the PDISCO's trading strategies. To participate in the TL real-time market, a methodology is presented to derive continuous bidding/offering strategies for a PDISCO. A bi...
Validated Models for Radiation Response and Signal Generation in Scintillators: Final Report
Energy Technology Data Exchange (ETDEWEB)
Kerisit, Sebastien N.; Gao, Fei; Xie, YuLong; Campbell, Luke W.; Van Ginhoven, Renee M.; Wang, Zhiguo; Prange, Micah P.; Wu, Dangxin
2014-12-01
This Final Report presents work carried out at Pacific Northwest National Laboratory (PNNL) under the project entitled “Validated Models for Radiation Response and Signal Generation in Scintillators” (Project number: PL10-Scin-theor-PD2Jf) and led by Drs. Fei Gao and Sebastien N. Kerisit. This project was divided into four tasks: 1) Electronic response functions (ab initio data model) 2) Electron-hole yield, variance, and spatial distribution 3) Ab initio calculations of information carrier properties 4) Transport of electron-hole pairs and scintillation efficiency Detailed information on the results obtained in each of the four tasks is provided in this Final Report. Furthermore, published peer-reviewed articles based on the work carried under this project are included in Appendix. This work was supported by the National Nuclear Security Administration, Office of Nuclear Nonproliferation Research and Development (DNN R&D/NA-22), of the U.S. Department of Energy (DOE).
GE781: a Monte Carlo package for fixed target experiments
Davidenko, G.; Funk, M. A.; Kim, V.; Kuropatkin, N.; Kurshetsov, V.; Molchanov, V.; Rud, S.; Stutte, L.; Verebryusov, V.; Zukanovich Funchal, R.
The Monte Carlo package for the fixed target experiment E781 at Fermilab, a third-generation charmed baryon experiment, is described. This package is based on GEANT 3.21, the ADAMO database, and DAFT input/output routines.
Monte Carlo studies of uranium calorimetry
International Nuclear Information System (INIS)
Brau, J.; Hargis, H.J.; Gabriel, T.A.; Bishop, B.L.
1985-01-01
Detailed Monte Carlo calculations of uranium calorimetry are presented which reveal a significant difference in the responses of liquid argon and plastic scintillator in uranium calorimeters. Due to saturation effects, neutrons from the uranium are found to contribute only weakly to the liquid argon signal. Electromagnetic sampling inefficiencies are significant and contribute substantially to compensation in both systems. 17 references
Leonardo Rossi
Carlo Caso (1940 - 2007) Our friend and colleague Carlo Caso passed away on July 7th, after several months of courageous fight against cancer. Carlo spent most of his scientific career at CERN, taking an active part in the experimental programme of the laboratory. His long and fruitful involvement in particle physics started in the sixties, in the Genoa group led by G. Tomasini. He then made several experiments using the CERN liquid hydrogen bubble chambers -first the 2000HBC and later BEBC- to study various facets of the production and decay of meson and baryon resonances. He later made his own group and joined the NA27 Collaboration to exploit the EHS Spectrometer with a rapid cycling bubble chamber as vertex detector. Amongst their many achievements, they were the first to measure, with excellent precision, the lifetime of the charmed D mesons. At the start of the LEP era, Carlo and his group moved to the DELPHI experiment, participating in the construction and running of the HPC electromagnetic c...
Variational Monte Carlo Technique
Indian Academy of Sciences (India)
ias
Indian Academy of Sciences (India)
Markov Chain Monte Carlo – Examples. Arnab Chakraborty. General Article, Resonance – Journal of Science Education, Volume 7, Issue 3, March 2002, pp. 25-34. Permanent link: https://www.ias.ac.in/article/fulltext/reso/007/03/0025-0034
International Nuclear Information System (INIS)
Quevedo, Ana Luiza; Nicolucci, Patricia; Borges, Leandro F.
2016-01-01
In this work, a comparison of experimental and simulated relative doses of a clinical brachytherapy source was performed. A 5 × 5 × 7 cm³ phantom with a modified MAGIC-f gel was irradiated using a clinical ¹⁹²Ir source and read using Magnetic Resonance Imaging. The Monte Carlo simulation package PENELOPE was used to simulate the dose distributions of the same radiation source. The dose distributions were obtained in two planes perpendicular to the source: one passing through the source's center and the other 0.5 cm away from the source's center. The largest differences found between the experimental and computational distributions were 12.5% at a point 0.62 cm from the source for the central plane, and 8.6% at 1.3 cm from the source for the plane 0.5 cm away from the source's center. Considering the high dose gradient of these dose distributions, the results obtained show that the modified MAGIC-f gel is promising for brachytherapy dosimetry. (author)
International Nuclear Information System (INIS)
Islamian, Jalil Pirayesh; Toossi, Mohammad Taghi Bahreyni; Momennezhad, Mahdi; Zakavi, Seyyed Rasoul; Sadeghi, Ramin; Ljungberg, Michael
2012-01-01
In single photon emission computed tomography (SPECT), the collimator is a crucial element of the imaging chain and controls the noise-resolution tradeoff of the collected data. The current study evaluates the effects of different thicknesses of a low-energy high-resolution (LEHR) collimator on tomographic spatial resolution in SPECT. In the present study, the SIMIND Monte Carlo program was used to simulate a SPECT system equipped with an LEHR collimator. A point source of ⁹⁹ᵐTc, an acrylic cylindrical Jaszczak phantom with cold spheres and rods, and a human anthropomorphic torso phantom (4D-NCAT phantom) were used. Simulated planar images and reconstructed tomographic images were evaluated both qualitatively and quantitatively. Based on the tabulated detector parameters, the contributions of Compton scattering and photoelectric reactions, and the peak-to-Compton (P/C) area in the energy spectra obtained by scanning the sources with 11 collimator thicknesses ranging from 2.400 to 2.410 cm, we concluded that 2.405 cm is the proper thickness for the LEHR parallel-hole collimator. Image quality analysis by the structural similarity index (SSIM) algorithm, and also by visual inspection, showed that images of suitable quality were obtained with a collimator thickness of 2.405 cm. The projections and reconstructed images prepared with the 2.405 cm LEHR collimator thickness also showed suitable quality and performance-parameter analysis results compared with the other collimator thicknesses
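The SSIM index used above for image-quality analysis can be sketched in a simplified single-window form (global statistics rather than the usual sliding window), with the standard constants C1 = (0.01L)² and C2 = (0.03L)²:

```python
import numpy as np

def ssim_global(a, b, data_range=1.0):
    """Simplified single-window SSIM over whole images: compares mean
    luminance, contrast (variance), and structure (covariance)."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    va, vb = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / \
           ((mu_a**2 + mu_b**2 + c1) * (va + vb + c2))

rng = np.random.default_rng(2)
img = rng.random((64, 64))
print(ssim_global(img, img))   # identical images score ~1.0
print(ssim_global(img, img + rng.normal(0, 0.2, img.shape)))  # lower score
```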
Directory of Open Access Journals (Sweden)
Atsuhito Toyomaki
Several studies of self-monitoring dysfunction in schizophrenia have focused on the sense of agency to motor action using behavioral and psychophysiological techniques. So far, no study has ever tried to investigate whether the sense of agency or causal attribution for external events produced by self-generated decision-making is abnormal in schizophrenia. The purpose of this study was to investigate neural responses to feedback information produced by self-generated or other-generated decision-making in a multiplayer gambling task using event-related potentials and electroencephalogram synchronization. We found that the late positive component and theta/alpha synchronization were increased in response to feedback information in the self-decision condition in normal controls, but that these responses were significantly decreased in patients with schizophrenia. These neural activities thus reflect the self-reference effect that affects the cognitive appraisal of external events following decision-making and their impairment in schizophrenia.
Toyomaki, Atsuhito; Hashimoto, Naoki; Kako, Yuki; Murohashi, Harumitsu; Kusumi, Ichiro
2017-01-01
Several studies of self-monitoring dysfunction in schizophrenia have focused on the sense of agency to motor action using behavioral and psychophysiological techniques. So far, no study has ever tried to investigate whether the sense of agency or causal attribution for external events produced by self-generated decision-making is abnormal in schizophrenia. The purpose of this study was to investigate neural responses to feedback information produced by self-generated or other-generated decision-making in a multiplayer gambling task using event-related potentials and electroencephalogram synchronization. We found that the late positive component and theta/alpha synchronization were increased in response to feedback information in the self-decision condition in normal controls, but that these responses were significantly decreased in patients with schizophrenia. These neural activities thus reflect the self-reference effect that affects the cognitive appraisal of external events following decision-making and their impairment in schizophrenia.
Gardner, Robin P.; Xu, Libai
2009-10-01
The Center for Engineering Applications of Radioisotopes (CEAR) has been working for over a decade on the Monte Carlo library least-squares (MCLLS) approach for treating non-linear radiation analyzer problems including: (1) prompt gamma-ray neutron activation analysis (PGNAA) for bulk analysis, (2) energy-dispersive X-ray fluorescence (EDXRF) analyzers, and (3) carbon/oxygen tool analysis in oil well logging. This approach essentially consists of using Monte Carlo simulation to generate the libraries of all the elements to be analyzed plus any other required background libraries. These libraries are then used in the linear library least-squares (LLS) approach with unknown sample spectra to analyze for all elements in the sample. Iterations of this are used until the LLS values agree with the composition used to generate the libraries. The current status of the methods (and topics) necessary to implement the MCLLS approach is reported. This includes: (1) the Monte Carlo codes such as CEARXRF, CEARCPG, and CEARCO for forward generation of the necessary elemental library spectra for the LLS calculation for X-ray fluorescence, neutron capture prompt gamma-ray analyzers, and carbon/oxygen tools; (2) the correction of spectral pulse pile-up (PPU) distortion by Monte Carlo simulation with the code CEARIPPU; (3) generation of detector response functions (DRF) for detectors with linear and non-linear responses for Monte Carlo simulation of pulse-height spectra; and (4) the use of the differential operator (DO) technique to make the necessary iterations for non-linear responses practical. In addition to commonly analyzed single spectra, coincidence spectra or even two-dimensional (2-D) coincidence spectra can also be used in the MCLLS approach and may provide more accurate results.
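The core LLS step of the MCLLS approach, fitting an unknown spectrum as a linear combination of elemental library spectra, can be sketched as follows; the library shapes and elemental amounts are invented for illustration and are not CEAR libraries:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical elemental library spectra (columns), standing in for the
# Monte Carlo-generated libraries; each is a Gaussian "characteristic line".
channels = np.arange(100)
def peak(center):
    return np.exp(-0.5 * ((channels - center) / 4.0) ** 2)

library = np.column_stack([peak(20), peak(50), peak(75)])
true_amounts = np.array([2.0, 0.5, 1.2])
# Synthesize an "unknown" sample spectrum with a little counting noise.
sample = library @ true_amounts + rng.normal(0, 0.01, channels.size)

# Linear library least-squares: solve for the elemental amounts that best
# reproduce the unknown sample spectrum.
amounts, *_ = np.linalg.lstsq(library, sample, rcond=None)
print(np.round(amounts, 2))   # close to true_amounts
```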
Simplified monte carlo simulation for Beijing spectrometer
International Nuclear Information System (INIS)
Wang Taijie; Wang Shuqin; Yan Wuguang; Huang Yinzhi; Huang Deqiang; Lang Pengfei
1986-01-01
The Monte Carlo method, based on functionalizing the performance of the detectors and transforming the values of kinematical variables into ''measured'' ones by means of smearing, has been used to write a Monte Carlo simulation program for the Beijing Spectrometer (BES) in FORTRAN, named BESMC. It can be used to investigate the multiplicity, the particle types, and the four-momentum distributions of the final states of electron-positron collisions, as well as the response of the BES to these final states. Thus, it provides a means to examine whether the overall design of the BES is reasonable and to decide the physics topics of the BES
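The smearing transformation described above, turning "true" kinematic values into "measured" ones, amounts to multiplying by a Gaussian resolution factor; the 2% resolution below is an assumed illustrative value, not a BES parameter:

```python
import numpy as np

rng = np.random.default_rng(4)

def smear(true_values, resolution):
    """Turn 'true' kinematic values into 'measured' ones by Gaussian
    smearing with the detector's relative resolution."""
    return true_values * (1.0 + rng.normal(0.0, resolution, true_values.shape))

# e.g. momenta (GeV/c) measured with an assumed 2% tracking resolution
p_true = np.full(100_000, 1.5)
p_meas = smear(p_true, 0.02)
print(p_meas.mean(), p_meas.std())   # mean near 1.5, spread near 0.03
```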
Monte Carlo verification of polymer gel dosimetry applied to radionuclide therapy: a phantom study
International Nuclear Information System (INIS)
Gear, J I; Partridge, M; Flux, G D; Charles-Edwards, E
2011-01-01
This study evaluates the dosimetric performance of the polymer gel dosimeter 'Methacrylic and Ascorbic acid in Gelatin, initiated by Copper' and its suitability for quality assurance and analysis of I-131-targeted radionuclide therapy dosimetry. Four batches of gel were manufactured in-house, and sets of calibration vials and phantoms were created containing different concentrations of I-131-doped gel. Multiple dose measurements were made up to 700 h post preparation and compared to equivalent Monte Carlo simulations. In addition to uniformly filled phantoms, the cross-dose distribution from a hot insert to a surrounding phantom was measured. In this example, comparisons were made with both Monte Carlo and a clinical scintigraphic dosimetry method. Dose-response curves generated from the calibration data followed a sigmoid function. The gels appeared to be stable over many weeks of internal irradiation, with a delay in gel response observed at 29 h post preparation. This was attributed to chemical inhibitors and slow reaction rates of long-chain radical species. For this reason, phantom measurements were only made after 190 h of irradiation. For uniformly filled phantoms of I-131, the accuracy of dose measurements agreed to within 10% when compared to Monte Carlo simulations. A radial cross-dose distribution measured using the gel dosimeter compared well to that calculated with Monte Carlo. Small inhomogeneities were observed in the dosimeter, attributed to non-uniform mixing of monomer during preparation. However, they were not detrimental to this study, where the quantitative accuracy and spatial resolution of polymer gel dosimetry were far superior to those obtained using scintigraphy. The difference between Monte Carlo and gel measurements was of the order of a few cGy, whilst with the scintigraphic method differences of up to 8 Gy were observed. A manipulation technique is also presented which allows 3D scintigraphic dosimetry measurements to be compared to polymer
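A sigmoid dose-response calibration of the kind reported above can be sketched, together with its inversion to read dose off a measured gel response; the parameter values below are purely illustrative, not the MAGIC gel calibration:

```python
import numpy as np

# Hypothetical sigmoid calibration: gel response R as a function of dose D,
# R(D) = Rmax / (1 + exp(-(D - D0)/s)). Parameters are illustrative only.
Rmax, D0, s = 1.0, 10.0, 3.0

def response(dose):
    return Rmax / (1.0 + np.exp(-(dose - D0) / s))

def dose_from_response(r):
    # Invert the calibration curve to read dose off a measured response.
    return D0 - s * np.log(Rmax / r - 1.0)

doses = np.array([4.0, 10.0, 16.0])
print(dose_from_response(response(doses)))   # round trip recovers the doses
```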
Directory of Open Access Journals (Sweden)
Renata Santos
2017-06-01
Astrocyte dysfunction and neuroinflammation are detrimental features in multiple pathologies of the CNS. Therefore, the development of methods that produce functional human astrocytes represents an advance in the study of neurological diseases. Here we report an efficient method for inflammation-responsive astrocyte generation from induced pluripotent stem cells (iPSCs) and embryonic stem cells. This protocol uses an intermediate glial progenitor stage and generates functional astrocytes that show levels of glutamate uptake and calcium activation comparable with those observed in human primary astrocytes. Stimulation of stem cell-derived astrocytes with interleukin-1β or tumor necrosis factor α elicits a strong and rapid pro-inflammatory response. RNA-sequencing transcriptome profiling confirmed that similar gene expression changes occurred in iPSC-derived and primary astrocytes upon stimulation with interleukin-1β. This protocol represents an important tool for modeling in-a-dish neurological diseases with an inflammatory component, allowing for the investigation of the role of diseased astrocytes in neuronal degeneration.
Santos, Renata; Vadodaria, Krishna C; Jaeger, Baptiste N; Mei, Arianna; Lefcochilos-Fogelquist, Sabrina; Mendes, Ana P D; Erikson, Galina; Shokhirev, Maxim; Randolph-Moore, Lynne; Fredlender, Callie; Dave, Sonia; Oefner, Ruth; Fitzpatrick, Conor; Pena, Monique; Barron, Jerika J; Ku, Manching; Denli, Ahmet M; Kerman, Bilal E; Charnay, Patrick; Kelsoe, John R; Marchetto, Maria C; Gage, Fred H
2017-06-06
Astrocyte dysfunction and neuroinflammation are detrimental features in multiple pathologies of the CNS. Therefore, the development of methods that produce functional human astrocytes represents an advance in the study of neurological diseases. Here we report an efficient method for inflammation-responsive astrocyte generation from induced pluripotent stem cells (iPSCs) and embryonic stem cells. This protocol uses an intermediate glial progenitor stage and generates functional astrocytes that show levels of glutamate uptake and calcium activation comparable with those observed in human primary astrocytes. Stimulation of stem cell-derived astrocytes with interleukin-1β or tumor necrosis factor α elicits a strong and rapid pro-inflammatory response. RNA-sequencing transcriptome profiling confirmed that similar gene expression changes occurred in iPSC-derived and primary astrocytes upon stimulation with interleukin-1β. This protocol represents an important tool for modeling in-a-dish neurological diseases with an inflammatory component, allowing for the investigation of the role of diseased astrocytes in neuronal degeneration. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
Mazurana, Dyan; Benelli, Prisca; Walker, Peter
2013-07-01
Humanitarian aid remains largely driven by anecdote rather than by evidence. The contemporary humanitarian system has significant weaknesses with regard to data collection, analysis, and action at all stages of response to crises involving armed conflict or natural disaster. This paper argues that humanitarian actors can best determine and respond to vulnerabilities and needs if they use sex- and age-disaggregated data (SADD) and gender and generational analyses to help shape their assessments of crises-affected populations. Through case studies, the paper shows how gaps in information on sex and age limit the effectiveness of humanitarian response in all phases of a crisis. The case studies serve to show how proper collection, use, and analysis of SADD enable operational agencies to deliver assistance more effectively and efficiently. The evidence suggests that the employment of SADD and gender and generational analyses assists in saving lives and livelihoods in a crisis. © 2013 The Author(s). Journal compilation © Overseas Development Institute, 2013.
International Nuclear Information System (INIS)
Poudineh, Rahmatallah; Jamasb, Tooraj
2014-01-01
The need for investment in capital-intensive electricity networks is on the rise in many countries. A major advantage of distributed resources is their potential for deferring investments in distribution network capacity. However, utilizing the full benefits of these resources requires addressing several technical, economic and regulatory challenges. A significant barrier pertains to the lack of an efficient market mechanism that enables this concept and is also consistent with the business model of distribution companies under an unbundled power sector paradigm. This paper proposes a market-oriented approach termed the “contract for deferral scheme” (CDS). The scheme outlines how an economically efficient portfolio of distributed generation, storage, demand response and energy efficiency can be integrated as network resources to reduce the need for grid capacity and defer demand-driven network investments. - Highlights: • The paper explores a practical framework for smart electricity distribution grids. • The aim is to defer large capital investments in the network by utilizing and incentivising distributed generation, demand response, energy efficiency and storage as network resources. • The paper discusses a possible new market model that enables the integration of distributed resources as an alternative to grid capacity enhancement
Impacts of demand response and renewable generation in electricity power market
Zhao, Zhechong
The objective of the research presented in this thesis is to analyze the impacts of uncertain wind power and demand response on power system operation and power market clearing. First, in order to effectively utilize available wind generation, it is usually given the highest priority by assigning zero or negative energy bidding prices when clearing the day-ahead electric power market. However, when congestion occurs, negative wind bidding prices can drive locational marginal prices (LMPs) negative in certain locations. A load-shifting model is explored to alleviate possible congestion and enhance the utilization of wind generation by shifting a proper amount of load from peak hours to off-peak hours. The problem is to determine the proper amount of load to be shifted so as to enhance the utilization of wind generation, alleviate transmission congestion, and keep LMPs non-negative. The second piece of work considers the price-based demand response (DR) program, a mechanism for electricity consumers to dynamically manage their energy consumption in response to time-varying electricity prices. It encourages consumers to reduce their energy consumption when electricity prices are high, thereby reducing the peak electricity demand and alleviating the pressure on power systems. However, it brings additional dynamics and new challenges to the real-time supply and demand balance. Specifically, price-sensitive DR load levels are constantly changing in response to dynamic real-time electricity prices, which will impact the economic dispatch (ED) schedule and in turn affect electricity market clearing prices. This thesis adopts two methods for examining the impacts of different DR price elasticity characteristics on the stability performance: a closed-loop iterative simulation method and a non-iterative method based on the contraction mapping theorem. This thesis also analyzes the financial stability of DR load consumers, by incorporating
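The load-shifting idea above, moving just enough peak-hour load into periods with spare capacity to relieve congestion, can be sketched with a toy profile; the numbers and the greedy redistribution rule are illustrative assumptions, not the thesis model:

```python
import numpy as np

# Toy load-shifting: move just enough peak-hour load to off-peak hours so
# that demand never exceeds the line limit (a stand-in for alleviating
# congestion); amounts and the limit are illustrative.
load = np.array([60, 55, 90, 95, 70, 50], dtype=float)   # MW per period
limit = 80.0

excess = np.clip(load - limit, 0, None)        # load that must be shifted
shifted = load - excess                        # peaks clipped to the limit
headroom = np.clip(limit - shifted, 0, None)   # spare capacity per period
remaining = excess.sum()
# Spread the shifted energy into the periods with the most spare capacity.
for i in np.argsort(-headroom):
    take = min(remaining, headroom[i])
    shifted[i] += take
    remaining -= take
print(shifted, remaining)   # total energy preserved, no period above limit
```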
Shell model the Monte Carlo way
International Nuclear Information System (INIS)
Ormand, W.E.
1995-01-01
The formalism for the auxiliary-field Monte Carlo approach to the nuclear shell model is presented. The method is based on a linearization of the two-body part of the Hamiltonian in an imaginary-time propagator using the Hubbard-Stratonovich transformation. The foundation of the method, as applied to the nuclear many-body problem, is discussed. Topics presented in detail include: (1) the density-density formulation of the method, (2) computation of the overlaps, (3) the sign of the Monte Carlo weight function, (4) techniques for performing Monte Carlo sampling, and (5) the reconstruction of response functions from an imaginary-time auto-correlation function using MaxEnt techniques. Results obtained using schematic interactions, which have no sign problem, are presented to demonstrate the feasibility of the method, while an extrapolation method for realistic Hamiltonians is presented. In addition, applications at finite temperature are outlined
Shell model the Monte Carlo way
Energy Technology Data Exchange (ETDEWEB)
Ormand, W.E.
1995-03-01
The formalism for the auxiliary-field Monte Carlo approach to the nuclear shell model is presented. The method is based on a linearization of the two-body part of the Hamiltonian in an imaginary-time propagator using the Hubbard-Stratonovich transformation. The foundation of the method, as applied to the nuclear many-body problem, is discussed. Topics presented in detail include: (1) the density-density formulation of the method, (2) computation of the overlaps, (3) the sign of the Monte Carlo weight function, (4) techniques for performing Monte Carlo sampling, and (5) the reconstruction of response functions from an imaginary-time auto-correlation function using MaxEnt techniques. Results obtained using schematic interactions, which have no sign problem, are presented to demonstrate the feasibility of the method, while an extrapolation method for realistic Hamiltonians is presented. In addition, applications at finite temperature are outlined.
International Nuclear Information System (INIS)
Rezaie, Mohammad Reza; Sohrabi, Mehdi; Negarestani, Ali
2013-01-01
The application of CR-39 has been extensive for measurement of radon and its progeny in the air of dwellings, but limited as regards measurement of radon in water. In this paper, a new method is introduced for efficient measurement of radon in water by registering alpha-particle tracks in a CR-39 detector placed in a non-polar medium such as cyclohexane, hexane or olive oil, each mixed with water, then separated and fixed above the water, as a two-phase medium. The method introduced here differs from the widely used liquid-liquid extraction technique by liquid scintillation spectrometry: it is a passive detection method (CR-39) in a non-polar liquid with enhanced absorption of radon in the liquid; it has the capability for long sample counting to decrease the minimum detection limit (MDL); it does not require sophisticated low-light counting systems; and it has the potential for simultaneous measurement of a large number of samples for large-scale applications. It is also low cost and readily available. A new Monte Carlo calculation of the energy-distance travelled by alphas from radon and progeny in a medium was also investigated. The sensitivity of the CR-39 detector to radon and progeny in water was determined under two conditions: in single-phase and two-phase media. In a single-phase medium, CR-39 is placed directly either in air, water, cyclohexane, hexane or olive oil. When CR-39 is placed directly in water, its sensitivity is (2.4 ± 0.1) × 10⁻⁴ (track/cm²)/(Bq·d/m³). In the two-phase media, CR-39 is placed either in cyclohexane, hexane or olive oil, each fixed above water. The sensitivities in the two-phase media are significantly enhanced, being respectively (1.98 ± 0.10) × 10⁻², (2.8 ± 0.15) × 10⁻² and (2.86 ± 0.15) × 10⁻² (track/cm²)/(Bq·d/m³). These sensitivities are about 76, 82 and 110 times higher than when CR-39 is placed directly in water. The new method is a novel alternative for radon
Tyler, Robert
2012-04-01
The tidal flow response and associated dissipative heat generated in a satellite ocean depend strongly on the ocean configuration parameters, as these parameters control the form and frequencies of the ocean's natural modes of oscillation; if there is a near match between the form and frequency of one of these natural modes and that of one of the available tidal forcing constituents, the ocean can be resonantly excited, producing strong tidal flow and appreciable dissipative heat. Of primary interest in this study are the ocean parameters that can be expected to evolve (notably, the ocean depth in an ocean attempting to freeze, and the stratification in an ocean attempting to cool), because this evolution can cause an ocean to be pushed into a resonant configuration where the increased dissipative heat of the resonant response halts further evolution and a liquid ocean can be maintained by ocean tidal heat. In this case the resonant ocean tidal response is not only allowed but may be inevitable. Previous work on this topic is extended to describe the resonant configurations in both unstratified and stratified cases for an assumed global ocean on Titan subject to both obliquity and eccentricity tidal forces. Results indicate first that the assumption of an equilibrium tidal response is not justified and the correct dynamical response must be considered. Second, the ocean tidal dissipation will be appreciable if the ocean configuration is near that producing a resonant state. The parameter values required for this resonance are provided in this study, and examples/movies of calculated ocean tidal flow are also presented.
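The resonance mechanism described above can be illustrated with the simplest possible analogue, a damped driven harmonic oscillator whose response amplitude peaks when the forcing frequency approaches a natural mode; the parameters are arbitrary:

```python
import numpy as np

# Damped, driven harmonic oscillator as a minimal stand-in for a tidally
# forced ocean mode: steady-state response amplitude vs forcing frequency.
def amplitude(omega, omega0=1.0, gamma=0.05, f0=1.0):
    return f0 / np.sqrt((omega0**2 - omega**2) ** 2 + (gamma * omega) ** 2)

omegas = np.linspace(0.5, 1.5, 1001)
amps = amplitude(omegas)
print(omegas[amps.argmax()])   # peak sits near the natural frequency omega0
```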
Utsugi, Chizuru; Miyazono, Sadaharu; Osada, Kazumi; Matsuda, Mitsuyoshi; Kashiwayanagi, Makoto
2014-12-01
A large number of neurons are generated at the subventricular zone (SVZ) even during adulthood. In a previous study, we have shown that reduced mastication impairs both neurogenesis in the SVZ and olfactory functions. Pheromonal signals, which are received by the vomeronasal organ, provide information about reproductive and social states. Vomeronasal sensory neurons project to the accessory olfactory bulb (AOB), located on the dorso-caudal surface of the main olfactory bulb. Newly generated neurons at the SVZ migrate to the AOB and differentiate into granule cells and periglomerular cells. This study aimed to explore the effects of changes in mastication on newly generated neurons and pheromonal responses. Bromodeoxyuridine-immunoreactive (BrdU-ir, a marker of DNA synthesis) and Fos-ir (a marker of neuronal activation) structures in sagittal sections of the AOB after exposure to urinary odours were compared between mice fed soft and hard diets. The density of BrdU-ir cells in the AOB of the soft-diet-fed mice after 1 month was essentially similar to that of the hard-diet-fed mice, while it was lower in the mice fed a soft diet for 3 or 6 months than in the hard-diet-fed mice. The density of Fos-ir cells in the soft-diet-fed mice after 2 months was essentially similar to that in the hard-diet-fed mice, while it was lower in the mice fed a soft diet for 4 months than in the hard-diet-fed mice. The present results suggest that impaired mastication reduces newly generated neurons at the AOB, which in turn impairs olfactory function at the AOB. Copyright © 2014 Elsevier Ltd. All rights reserved.
Energy Technology Data Exchange (ETDEWEB)
Anghel, V.N.P. [Atomic Energy of Canada Limited, Chalk River, Ontario (Canada); Comeau, D. [New Brunswick Power Nuclear, Point Lepreau, New Brunswick (Canada); McKay, J.; Sur, B. [Atomic Energy of Canada Limited, Chalk River, Ontario (Canada); Taylor, D. [New Brunswick Power Nuclear, Point Lepreau, New Brunswick (Canada)
2009-07-01
CANDU reactors are protected against reactor overpower by two independent shutdown systems: Shut Down System 1 and 2 (SDS1 and SDS2). At the Point Lepreau Generating Station (PLGS), the shutdown systems can be actuated by measurements of the neutron flux by Platinum-clad Inconel In-Core Flux Detectors (ICFDs). These detectors have a complex dynamic behaviour, characterized by 'prompt' and 'delayed' components with respect to immediate changes in the in-core neutron flux. The dynamic response components need to be determined accurately in order to evaluate the effectiveness of the detectors for actuating the shutdown systems. The amplitudes of the prompt and the delayed components of individual detectors were estimated over a period of several years by comparison of archived detector response data with the computed local neutron flux evolution for SDS1 and SDS2 reactor trips. This was achieved by custom-designed algorithms. The results of this analysis show that the dynamic response of the detectors changes with irradiation, with the SDS2 detectors having 'prompt' signal components that decreased significantly with irradiation. Some general conclusions about detector aging effects are also drawn. (author)
International Nuclear Information System (INIS)
Rajabalinejad, M.
2010-01-01
To reduce the cost of Monte Carlo (MC) simulations for time-consuming processes, Bayesian Monte Carlo (BMC) is introduced in this paper. The BMC method reduces the number of realizations in MC according to the desired accuracy level. BMC also makes it possible to incorporate additional priors; in other words, different priors can be integrated into one model by using BMC to further reduce the cost of simulations. This study suggests speeding up the simulation process by considering the logical dependence of neighboring points as prior information. This information is used in the BMC method to produce a predictive tool through the simulation process. The general methodology and algorithm of the BMC method are presented in this paper. The BMC method is applied to a simplified breakwater model as well as the finite element model of the 17th Street Canal in New Orleans, and the results are compared with the MC and Dynamic Bounds methods.
Karaliolios, P.; Slootweg, J.G.; Kling, W.L.
2010-01-01
Notwithstanding their positive environmental impact, the increasing penetration of Distributed Generation (DG) units connected to the distribution network raises new topics concerning the expected response of these units during outages. Grid disturbances, especially at the transmission level, can cause the
Directory of Open Access Journals (Sweden)
Pedro Medina Avendaño
1981-01-01
Full Text Available Carlos Vega Duarte had the simplicity of elemental and pure beings. His heart was as clean as alluvial gold. His direct, colloquial manner revealed an uncontaminated native of Santander who loved the gleam of weapons and was dazzled by the sparkle of perfect phrases.
International Nuclear Information System (INIS)
Wollaber, Allan Benton
2016-01-01
This is a powerpoint presentation which serves as lecture material for the Parallel Computing summer school. It goes over the fundamentals of the Monte Carlo calculation method. The material is presented according to the following outline: Introduction (background, a simple example: estimating π), Why does this even work? (The Law of Large Numbers, The Central Limit Theorem), How to sample (inverse transform sampling, rejection), and An example from particle transport.
International Nuclear Information System (INIS)
Creutz, M.
1986-01-01
The author discusses a recently developed algorithm for simulating statistical systems. The procedure interpolates between molecular dynamics methods and canonical Monte Carlo. The primary advantages are extremely fast simulations of discrete systems such as the Ising model, and a relative insensitivity to random number quality. A variation of the algorithm gives rise to a deterministic dynamics for Ising spins. This model may be useful for high-speed simulation of non-equilibrium phenomena.
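The algorithm described here is Creutz's microcanonical "demon" method. A minimal sketch for a one-dimensional Ising chain is given below (an illustration of the idea, not the author's code; the chain length, initial demon energy, and sweep order are arbitrary choices): a "demon" carries a non-negative energy budget, a spin flip is accepted only when the demon can pay its energy cost, and energy released by a flip is absorbed by the demon, so the combined system-plus-demon energy is exactly conserved.

```python
import random

rng = random.Random(1)

def demon_update(spins, demon_energy, coupling=1.0):
    """One microcanonical 'demon' sweep over a 1-D Ising chain with
    periodic boundaries.  A flip is accepted only if the demon can pay
    its energy cost; released energy is absorbed by the demon, so the
    total (system + demon) energy is exactly conserved."""
    n = len(spins)
    for i in range(n):
        left, right = spins[(i - 1) % n], spins[(i + 1) % n]
        # Energy change of flipping spin i in E = -J * sum_i s_i * s_{i+1}
        d_e = 2.0 * coupling * spins[i] * (left + right)
        if d_e <= demon_energy:       # demon pays (or absorbs) d_e
            spins[i] = -spins[i]
            demon_energy -= d_e
    return demon_energy

spins = [rng.choice([-1, 1]) for _ in range(200)]
demon = 4.0                            # initial demon energy budget
for _ in range(100):
    demon = demon_update(spins, demon)
print(demon >= 0.0)  # prints True: the demon's budget never goes negative
```

Note that after the random initialization the sweep itself draws no random numbers at all: acceptance depends only on the demon's budget, not on comparing exp(-ΔE/T) against a uniform variate as in canonical Metropolis. This is one source of the speed and the insensitivity to random number quality mentioned in the abstract, and the fully deterministic variant for Ising spins follows the same bookkeeping.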
Energy Technology Data Exchange (ETDEWEB)
Wollaber, Allan Benton [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-06-16
This is a powerpoint presentation which serves as lecture material for the Parallel Computing summer school. It goes over the fundamentals of the Monte Carlo calculation method. The material is presented according to the following outline: Introduction (background, a simple example: estimating π), Why does this even work? (The Law of Large Numbers, The Central Limit Theorem), How to sample (inverse transform sampling, rejection), and An example from particle transport.
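The outline's opening example (estimating π) can be sketched in a few lines. This is a generic illustration of the standard technique, not material from the slides: sample points uniformly in the unit square and count the fraction that lands inside the quarter circle.

```python
import random

def estimate_pi(n_samples, seed=0):
    """Estimate pi by sampling points uniformly in the unit square and
    counting the fraction that falls inside the quarter circle."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(n_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    # (area of quarter circle) / (area of square) = pi / 4
    return 4.0 * inside / n_samples

print(estimate_pi(100_000))  # converges toward 3.14159...
```

By the Law of Large Numbers the estimate converges to π, and by the Central Limit Theorem its statistical error shrinks like 1/sqrt(n), the two points the outline names next.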
K.M. Hettne (Kristina); J. Boorsma (Jeffrey); D.A.M. van Dartel (Dorien A M); J.J. Goeman (Jelle); E.C. de Jong (Esther); A.H. Piersma (Aldert); R.H. Stierum (Rob); J. Kleinjans (Jos); J.A. Kors (Jan)
2013-01-01
Background: Availability of chemical response-specific lists of genes (gene sets) for pharmacological and/or toxic effect prediction for compounds is limited. We hypothesize that more gene sets can be created by next-generation text mining (next-gen TM), and that these can be used with
Hettne, K.M.; Boorsma, A.; Dartel, D.A. van; Goeman, J.J.; Jong, E. de; Piersma, A.H.; Stierum, R.H.; Kleinjans, J.C.; Kors, J.A.
2013-01-01
BACKGROUND: Availability of chemical response-specific lists of genes (gene sets) for pharmacological and/or toxic effect prediction for compounds is limited. We hypothesize that more gene sets can be created by next-generation text mining (next-gen TM), and that these can be used with gene set
Hettne, K.M.; Boorsma, A.; Dartel, van D.A.M.; Goeman, J.J.; Jong, de E.; Piersma, A.H.; Stierum, R.H.; Kleinjans, J.C.; Kors, J.A.
2013-01-01
Background: Availability of chemical response-specific lists of genes (gene sets) for pharmacological and/or toxic effect prediction for compounds is limited. We hypothesize that more gene sets can be created by next-generation text mining (next-gen TM), and that these can be used with gene set
2009-01-01
On 7 April CERN will be holding a symposium to mark the 75th birthday of Carlo Rubbia, who shared the 1984 Nobel Prize for Physics with Simon van der Meer for contributions to the discovery of the W and Z bosons, carriers of the weak interaction. Following a presentation by Rolf Heuer, lectures will be given by eminent speakers on areas of science to which Carlo Rubbia has made decisive contributions. Michel Spiro, Director of the French National Institute of Nuclear and Particle Physics (IN2P3) of the CNRS, Lyn Evans, sLHC Project Leader, and Alan Astbury of the TRIUMF Laboratory will talk about the physics of the weak interaction and the discovery of the W and Z bosons. Former CERN Director-General Herwig Schopper will lecture on CERN’s accelerators from LEP to the LHC. Giovanni Bignami, former President of the Italian Space Agency and Professor at the IUSS School for Advanced Studies in Pavia will speak about his work with Carlo Rubbia. Finally, Hans Joachim Sch...
2009-01-01
On 7 April CERN will be holding a symposium to mark the 75th birthday of Carlo Rubbia, who shared the 1984 Nobel Prize for Physics with Simon van der Meer for contributions to the discovery of the W and Z bosons, carriers of the weak interaction. Following a presentation by Rolf Heuer, lectures will be given by eminent speakers on areas of science to which Carlo Rubbia has made decisive contributions. Michel Spiro, Director of the French National Institute of Nuclear and Particle Physics (IN2P3) of the CNRS, Lyn Evans, sLHC Project Leader, and Alan Astbury of the TRIUMF Laboratory will talk about the physics of the weak interaction and the discovery of the W and Z bosons. Former CERN Director-General Herwig Schopper will lecture on CERN’s accelerators from LEP to the LHC. Giovanni Bignami, former President of the Italian Space Agency, will speak about his work with Carlo Rubbia. Finally, Hans Joachim Schellnhuber of the Potsdam Institute for Climate Research and Sven Kul...
Energy Technology Data Exchange (ETDEWEB)
Brockway, D.; Soran, P.; Whalen, P.
1985-01-01
A Monte Carlo algorithm to efficiently calculate static alpha eigenvalues, N = n e^(αt), for supercritical systems has been developed and tested. A direct Monte Carlo approach to calculating a static α is to simply follow the buildup in time of neutrons in a supercritical system and evaluate the logarithmic derivative of the neutron population with respect to time. This procedure is expensive, and the solution is very noisy and almost useless for a system near critical. The modified approach is to convert the time-dependent problem to a static α-eigenvalue problem and regress α on solutions of a k-eigenvalue problem. In practice, this procedure is much more efficient than the direct calculation, and produces much more accurate results. Because the Monte Carlo codes are intrinsically three-dimensional and use elaborate continuous-energy cross sections, this technique is now used as a standard for evaluating other calculational techniques in odd geometries or with group cross sections.
International Nuclear Information System (INIS)
Liao, Y.P.; Wang, C.-C.; McBride, W.H.
2003-01-01
Full text: The human MART-1/Melan-A (MART-1) melanoma tumor antigen is known to be recognized by cytotoxic T lymphocytes (CTLs), and several groups are using this target for clinical immunotherapy. Most approaches use dendritic cells (DCs), which are potent antigen-presenting cells for initiating CTL responses. In order for CTL recognition to occur, DCs must display 9-residue antigenic peptides on MHC class I molecules. These peptides are generated by proteasome degradation and then transported through the endoplasmic reticulum to the cell surface, where they stabilize MHC class I expression. Our previous data showed that irradiation inhibits proteasome function and, therefore, we hypothesized that irradiation may inhibit antigen processing and CTL activation, as has been shown for proteasome inhibitors. To study the importance of irradiation effects on DCs, we studied the generation of MART-1-specific CTL responses. Preliminary data showed that irradiation of murine bone-marrow-derived DCs did not affect expression of MHC class I, II, CD80, or CD86, as assessed by flow cytometric analyses 24 hours after irradiation. The effect of irradiation on MART-1 antigen processing by DCs was evaluated using DCs transduced with adenovirus MART-1 (AdVMART1). C57BL/6 mice were immunized with AdVMART1-transduced DCs, with and without prior irradiation. IFN-γ production was measured by ELISPOT assays 10-14 days after immunization. Prior radiation treatment resulted in a significant decrease in MART-1-specific T cell responses. The ability of irradiated and non-irradiated AdVMART1/DC vaccines to protect mice against growth of murine B16 tumors, which endogenously express murine MART-1, was also examined. AdVMART1/DC vaccination protected C57BL/6 mice against challenge with viable B16 melanoma cells, while DCs irradiated (10 Gy) prior to AdVMART1 transduction abrogated protection. These results suggest that proteasome inhibition in DCs by irradiation may be a possible pathway in
Directory of Open Access Journals (Sweden)
W. Y. Tuet
2017-09-01
Full Text Available Cardiopulmonary health implications resulting from exposure to secondary organic aerosols (SOA), which comprise a significant fraction of ambient particulate matter (PM), have received increasing interest in recent years. In this study, alveolar macrophages were exposed to SOA generated from the photooxidation of biogenic and anthropogenic precursors (isoprene, α-pinene, β-caryophyllene, pentadecane, m-xylene, and naphthalene) under different formation conditions (RO2 + HO2 vs. RO2 + NO dominant, dry vs. humid). Various cellular responses were measured, including reactive oxygen and nitrogen species (ROS/RNS) production and secreted levels of the cytokines tumor necrosis factor-α (TNF-α) and interleukin-6 (IL-6). SOA precursor identity and formation condition affected all measured responses in a hydrocarbon-specific manner. With the exception of naphthalene SOA, cellular responses followed a trend where TNF-α levels reached a plateau with increasing IL-6 levels. ROS/RNS levels were consistent with relative levels of TNF-α and IL-6, due to their respective inflammatory and anti-inflammatory effects. Exposure to naphthalene SOA, whose aromatic-ring-containing products may trigger different cellular pathways, induced higher levels of TNF-α and ROS/RNS than suggested by the trend. Distinct cellular response patterns were identified for hydrocarbons whose photooxidation products shared similar chemical functionalities and structures, which suggests that the chemical structure (carbon chain length and functionalities) of photooxidation products may be important for determining cellular effects. A positive nonlinear correlation was also detected between ROS/RNS levels and previously measured DTT (dithiothreitol) activities for SOA samples. In the context of ambient samples collected during summer and winter in the greater Atlanta area, all laboratory-generated SOA produced similar or higher levels of ROS/RNS and DTT activities. These results
International Nuclear Information System (INIS)
Lawrence, C.
1989-01-01
Nuclear engineering and the peaceful use of nuclear energy are still a major issue in the dispute about technological progress. Two highly ambiguous concepts in the nuclear controversy illustrate the uncertainty in dealing with the 'new technologies': the 'risk to be accepted', and the 'responsibility towards future generations'. The study in hand focuses on the 'risk to be accepted', which from the constitutional point of view still lacks legitimation. The concept of 'social adequacy' used in the Kalkar judgement of the Federal Constitutional Court is based on custom and consensus and today, in view of the lack of consensus, can no longer be used to derive a constitutional legitimation. This gap is filled in this study by examining the applicability of the basic right of physical integrity (Art. 2, section 2, first sentence of the GG). In addition, it is a particular feature of the concept of 'risk to be accepted' that neither the Constitution nor the Atomic Energy Act allow direct limits to the quantitative increase of that risk to be derived from their provisions. However, it is precisely the need for legal provisions checking and controlling the risk growing with technological progress that creates the major problem in the effort to prevent a possible intrinsic dynamic development of risks. The study investigates whether such instruments are provided by the law. Another aspect, discussed in connection with the safe ultimate disposal of radioactive wastes with half-life periods of up to 24,000 years, is the responsibility we have towards future generations. The author examines whether there are constitutional rights affecting nuclear technology in relation to this topic. (orig./HSCH) [de
International Nuclear Information System (INIS)
Valmikinathan, Chandra M; Chang, Wei; Xu, Jiahua; Yu, Xiaojun
2012-01-01
One of the major challenges in the fabrication of tissue-engineered scaffolds is the ability of the scaffold to biologically mimic autograft-like tissues. One alternative approach to achieve this is the application of cell-seeded scaffolds with optimal porosity and mechanical properties. However, the current approaches for seeding cells on scaffolds are not optimal in terms of seeding efficiency, cell penetration into the scaffold and, more importantly, uniform distribution of cells on the scaffold. Recent developments in scaffold geometries to enhance surface areas, pore sizes and porosities tend to complicate the scenario further. Cell-sheet-based seeding has been demonstrated as a successful route to scaffold-free tissue engineering. However, the usual method of generating the temperature-responsive surface is quite challenging and requires carcinogenic reagents and gamma rays. Therefore, here, we have developed temperature-responsive substrates by layer-by-layer self-assembly of smart polymers. Multilayer thin films prepared from tannic acid and poly(N-isopropylacrylamide) were fabricated based on their electrostatic and hydrogen-bonding interactions. Cell attachment and proliferation studies on these thin films showed uniform cell attachment on the substrate, matching tissue culture plates. The cells could also be harvested as cell patches and sheets from the substrates by reducing the temperature for a short period of time, and seeded onto porous scaffolds for tissue engineering applications. An enhanced cell-seeding efficiency on scaffolds was observed using the cell-patch-based technique as compared to seeding cells in suspension. Owing to the pre-existing cell–cell and cell–extracellular-matrix interactions, the cell patch showed the ability to reattach rapidly onto scaffolds and an enhanced ability to proliferate and differentiate into a bone-like matrix. (paper)
Synthetic generation of influenza vaccine viruses for rapid response to pandemics.
Dormitzer, Philip R; Suphaphiphat, Pirada; Gibson, Daniel G; Wentworth, David E; Stockwell, Timothy B; Algire, Mikkel A; Alperovich, Nina; Barro, Mario; Brown, David M; Craig, Stewart; Dattilo, Brian M; Denisova, Evgeniya A; De Souza, Ivna; Eickmann, Markus; Dugan, Vivien G; Ferrari, Annette; Gomila, Raul C; Han, Liqun; Judge, Casey; Mane, Sarthak; Matrosovich, Mikhail; Merryman, Chuck; Palladino, Giuseppe; Palmer, Gene A; Spencer, Terika; Strecker, Thomas; Trusheim, Heidi; Uhlendorff, Jennifer; Wen, Yingxia; Yee, Anthony C; Zaveri, Jayshree; Zhou, Bin; Becker, Stephan; Donabedian, Armen; Mason, Peter W; Glass, John I; Rappuoli, Rino; Venter, J Craig
2013-05-15
During the 2009 H1N1 influenza pandemic, vaccines for the virus became available in large quantities only after human infections peaked. To accelerate vaccine availability for future pandemics, we developed a synthetic approach that very rapidly generated vaccine viruses from sequence data. Beginning with hemagglutinin (HA) and neuraminidase (NA) gene sequences, we combined an enzymatic, cell-free gene assembly technique with enzymatic error correction to allow rapid, accurate gene synthesis. We then used these synthetic HA and NA genes to transfect Madin-Darby canine kidney (MDCK) cells that were qualified for vaccine manufacture with viral RNA expression constructs encoding HA and NA and plasmid DNAs encoding viral backbone genes. Viruses for use in vaccines were rescued from these MDCK cells. We performed this rescue with improved vaccine virus backbones, increasing the yield of the essential vaccine antigen, HA. Generation of synthetic vaccine seeds, together with more efficient vaccine release assays, would accelerate responses to influenza pandemics through a system of instantaneous electronic data exchange followed by real-time, geographically dispersed vaccine production.
International Nuclear Information System (INIS)
Dias, V.
2010-01-01
The intensity of selection on populations caused by a polluted environment often exceeds that caused by an unpolluted environment. Therefore, microevolution can occur in response to this anthropic directional force over a short period. In this context, this thesis focused on studying phenotypic changes in Chironomus riparius populations exposed during several consecutive generations to uranium-contaminated sediments. Experiments were conducted under laboratory-controlled conditions with populations of the same origin exposed to a range of uranium concentrations inducing toxic effects. Over eight generations of exposure, measurements of life-history traits revealed microevolution in the exposed populations, including an increase in adult reproductive success. Other experiments (acute toxicity test, common garden experiment) performed in parallel made it possible to link this microevolution with the induction of tolerance, as a consequence of genetic adaptation. Nonetheless, this adaptation also induced costs in terms of fitness and genetic diversity for the pre-exposed populations. These results lead to the hypothesis of a selection by uranium that acted sequentially on the populations. They also underline the need to better understand the adaptive mechanisms in order to better assess the ecological consequences of chronic exposure of populations to a pollutant. (author)
International Nuclear Information System (INIS)
Kamphuis, I.G.; Hommelberg, M.P.F.; Warmer, C.J.; Kok, J.K.
2007-01-01
Different driving forces push electricity production towards decentralization. The projected increase of distributed power generation at the residential level, with an increasing proportion of intermittent renewable energy resources, poses problems for continuously matching the energy balance when coordination takes place centrally. On the other hand, new opportunities arise from intelligent clustering of generators and demand in so-called Virtual Power Plants. Part of the responsibility for new coordination mechanisms, then, has to be laid locally. To achieve this, the current electricity infrastructure is expected to evolve into a network of networks (including ICT (Information and Communication Technology) networks), in which all system parts communicate with one another, are aware of each other's context and may influence each other. In this paper, a multi-agent systems approach, using price-signal vectors from an electronic market, is presented as an appropriate technology for massive control and coordination tasks in these future electricity networks. The PowerMatcher, a market-based control concept for supply and demand matching (SDM) in electricity networks, is discussed. The results of a simulation study show the ability to raise the simultaneousness of electricity production and consumption within (local) control clusters with cogeneration and heat pumps by exchanging price signals and coordinating allocation using market algorithms. The control concept can, however, also be applied in other business cases, such as the reduction of imbalance cost in commercial portfolios or by virtual power plant operators utilizing distributed generators. Furthermore, a PowerMatcher-based field test configuration with 15 Stirling-engine-powered micro-CHPs is described, which is currently in operation within a field test in the Netherlands.
He, Guochun; Tsutsumi, Tomoaki; Zhao, Bin; Baston, David S; Zhao, Jing; Heath-Pagliuso, Sharon; Denison, Michael S
2011-10-01
2,3,7,8-Tetrachlorodibenzo-p-dioxin (TCDD, dioxin) and related dioxin-like chemicals are widespread and persistent environmental contaminants that produce diverse toxic and biological effects through their ability to bind to and activate the Ah receptor (AhR) and AhR-dependent gene expression. The chemically activated luciferase expression (CALUX) system is an AhR-responsive recombinant luciferase reporter gene-based cell bioassay that has been used in combination with chemical extraction and cleanup methods for the relatively rapid and inexpensive detection and relative quantitation of dioxin and dioxin-like chemicals in a wide variety of sample matrices. Although the CALUX bioassay has been validated and used extensively for screening purposes, it has some limitations when screening samples with very low levels of dioxin-like chemicals or when there is only a small amount of sample matrix for analysis. Here, we describe the development of third-generation (G3) CALUX plasmids with increased numbers of dioxin-responsive elements, and stable transfection of these new plasmids into mouse hepatoma (Hepa1c1c7) cells has produced novel amplified G3 CALUX cell bioassays that respond to TCDD with a dramatically increased magnitude of luciferase induction and significantly lower minimal detection limit than existing CALUX-type cell lines. The new G3 CALUX cell lines provide a highly responsive and sensitive bioassay system for the detection and relative quantitation of very low levels of dioxin-like chemicals in sample extracts.
International Nuclear Information System (INIS)
Subudhi, M.; Sullivan, E.J., Jr.
2002-01-01
This paper presents the results of an aging assessment of the nuclear power industry responses to NRC Generic Letter 97-06 on the degradation of steam generator internals experienced at Electricite de France (EdF) plants in France and at a United States pressurized water reactor (PWR). Westinghouse (W), Combustion Engineering (CE), and Babcock and Wilcox (BW) steam generator models, currently in service at U.S. nuclear power plants, potentially could experience degradation similar to that found at EdF plants and the U.S. plant. The steam generators in many of the U.S. PWRs have been replaced with steam generators with improved designs and materials. These replacement steam generators have been manufactured in the U.S. and abroad. During this assessment, each of the three owners groups (W, CE, and BW) identified for its steam generator models all the potential internal components that are vulnerable to degradation while in service. Each owners group developed inspection and monitoring guidance and recommendations for its particular steam generator models. The Nuclear Energy Institute incorporated in NEI 97-06, Steam Generator Program Guidelines, a requirement to monitor secondary-side steam generator components if their failure could prevent the steam generator from fulfilling its intended safety-related function. Licensees indicated that they implemented, or planned to implement, as appropriate for their steam generators, their owners group recommendations to address the long-term effects of the potential degradation mechanisms associated with the steam generator internals.
Directory of Open Access Journals (Sweden)
José Martínez Villavicencio
2015-11-01
Full Text Available In recent years, small, medium and large companies in Costa Rica have intensified their interest in Corporate Social Responsibility (CSR), given the growing urgency for the sustainability and survival of society and the need for commercial activity to act responsibly and integrally in the development of the country. This study aims to identify the driving factors of CSR (specifically, 'consumers', 'suppliers', 'community', 'environment', 'competitiveness' and 'financing') in order to formulate management models and measurement indicators that can exert influence and make it easier for a company to be responsible. To this end, a case study was carried out with sixteen hotel companies in La Fortuna de San Carlos, applying a qualitative methodology based on semi-structured interviews. Among the most important results, 'consumers' were found to be the main driver of CSR, followed by the 'environment', whereas 'suppliers' and 'community' were found to have little effect as CSR drivers; this information supports the creation of social policies and sustainable tourism at the regional level. Abstract: In recent years, small, medium and big companies from Costa Rica have intensified their interest in Corporate Social Responsibility (CSR). Given the growing urgency of sustainability and survival in modern society, commercial activity must assume responsible and integral actions in the country's development. The aim of this article is to identify the driving factors of CSR, which are, specifically, consumers, suppliers, community, environment, competitiveness and financing, in order to formulate management models and measurement indicators that could be used to influence and ease the responsible task of a
Alam, M Samiul; Costales, Matthew G; Cavanaugh, Christopher; Williams, Kristina
2015-05-05
Adenosine, an immunomodulatory biomolecule, is produced by the ecto-enzymes CD39 (nucleoside triphosphate dephosphorylase) and CD73 (ecto-5'-nucleotidase) by dephosphorylation of extracellular ATP. CD73 is expressed by many cell types during injury, infection and during steady-state conditions. Besides host cells, many bacteria also have CD39-CD73-like machinery, which helps the pathogen subvert the host inflammatory response. The major function for adenosine is anti-inflammatory, and most recent research has focused on adenosine's control of inflammatory mechanisms underlying various autoimmune diseases (e.g., colitis, arthritis). Although adenosine generated through CD73 provides a feedback to control tissue damage mediated by a host immune response, it can also contribute to immunosuppression. Thus, inflammation can be a double-edged sword: it may harm the host but eventually helps by killing the invading pathogen. The role of adenosine in dampening inflammation has been an area of active research, but the relevance of the CD39/CD73-axis and adenosine receptor signaling in host defense against infection has received less attention. Here, we review our recent knowledge regarding CD73 expression during murine Salmonellosis and Helicobacter-induced gastric infection and its role in disease pathogenesis and bacterial persistence. We also explored a possible role for the CD73/adenosine pathway in regulating innate host defense function during infection.
A Monte Carlo method using octree structure in photon and electron transport
International Nuclear Information System (INIS)
Ogawa, K.; Maeda, S.
1995-01-01
Most of the early Monte Carlo calculations in medical physics were used to calculate absorbed dose distributions, and detector responses and efficiencies. Recently, data acquisition in Single Photon Emission CT (SPECT) has been simulated by a Monte Carlo method to evaluate scatter photons generated in a human body and a collimator. Monte Carlo simulations of SPECT data acquisition are generally based on the transport of photons only, because the photons being simulated are of low energy, and therefore bremsstrahlung production by the generated electrons is negligible. Since the transport calculation of photons without electrons is much simpler than that with electrons, high-speed simulation is possible in a simple object with one medium. Here, object description is important for performing photon and/or electron transport with a Monte Carlo method efficiently. The authors propose a new description method using an octree representation of an object. Thus, even if the boundaries of each medium are represented accurately, high-speed calculation of photon transport can be accomplished, because the number of octree nodes is much smaller than the number of cells in the voxel-based approach, which represents an object as a union of voxels of the same size. This Monte Carlo code using the octree representation of an object first establishes the simulation geometry by reading an octree string, which is produced by forming an octree structure from a set of serial sections of the object before the simulation; it then transports photons in this geometry. Using the code, if the user just prepares a set of serial sections of the object in which he or she wants to simulate photon trajectories, the simulation can be performed automatically using the suboptimal geometry simplified by the octree representation, without forming the optimal geometry by hand.
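The octree idea can be illustrated with a minimal sketch (a hypothetical structure, not the authors' code): homogeneous regions collapse into single leaves, so looking up the medium at a point costs one descent of at most the tree depth, instead of storing and indexing a full uniform voxel grid.

```python
class OctreeNode:
    """A node is either a leaf holding one medium ID, or an internal
    node with 8 children, each covering one octant of its cube."""
    def __init__(self, medium=None, children=None):
        self.medium = medium        # set for leaves (homogeneous region)
        self.children = children    # list of 8 OctreeNode for internal nodes

def medium_at(node, x, y, z, half=0.5, cx=0.5, cy=0.5, cz=0.5):
    """Return the medium ID at point (x, y, z) in the unit cube by
    descending the octree.  Octant index bits: 1 -> x >= cx,
    2 -> y >= cy, 4 -> z >= cz."""
    while node.children is not None:
        half /= 2.0                  # child cubes have half the width
        octant = (x >= cx) * 1 + (y >= cy) * 2 + (z >= cz) * 4
        cx += half if x >= cx else -half
        cy += half if y >= cy else -half
        cz += half if z >= cz else -half
        node = node.children[octant]
    return node.medium

# Unit cube: seven octants of air (medium 0); the octant with all
# coordinates >= 0.5 is subdivided again, with tissue (medium 1) only
# where all coordinates >= 0.75 -- 16 nodes instead of a voxel grid.
inner = OctreeNode(children=[OctreeNode(medium=0)] * 7 + [OctreeNode(medium=1)])
root = OctreeNode(children=[OctreeNode(medium=0)] * 7 + [inner])
print(medium_at(root, 0.9, 0.9, 0.9), medium_at(root, 0.1, 0.9, 0.9))  # prints: 1 0
```

A photon-tracking loop would call `medium_at` at each step to select the interaction cross sections; only the geometry lookup, the part the abstract is about, is sketched here.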
Isabettini, Stéphane; Baumgartner, Mirjam E; Reckey, Pernille Q; Kohlbrecher, Joachim; Ishikawa, Takashi; Fischer, Peter; Windhab, Erich J; Kuster, Simon
2017-06-27
Mixtures of 1,2-dimyristoyl-sn-glycero-3-phosphocholine (DMPC) and its lanthanide-ion (Ln3+) chelating phospholipid conjugate, 1,2-dimyristoyl-sn-glycero-3-phospho-ethanolamine-diethylene triaminepentaacetate (DMPE-DTPA), assemble into highly magnetically responsive polymolecular assemblies such as DMPC/DMPE-DTPA/Ln3+ (molar ratio 4:1:1) bicelles. Their geometry and magnetic alignability are enhanced by introducing cholesterol into the bilayer in DMPC/Cholesterol/DMPE-DTPA/Ln3+ (molar ratio 16:4:5:5). However, the reported fabrication procedures remain tedious and limit the generation of highly magnetically alignable species. Herein, a simplified procedure was developed in which freeze-thawing cycles and extrusion are replaced by gentle heating and cooling cycles for the hydration of the dry lipid film. Heating above the phase transition temperature Tm of the lipids composing the bilayer before cooling back below the Tm was essential to guarantee successful formation of the polymolecular assemblies composed of DMPC/DMPE-DTPA/Ln3+ (molar ratio 4:1:1). Planar polymolecular assemblies in the size range of hundreds of nanometers are achieved and deliver unprecedented gains in magnetic response. The proposed heating and cooling procedure also made it possible to regenerate the highly magnetically alignable DMPC/Cholesterol/DMPE-DTPA/Ln3+ (molar ratio 16:4:5:5) species after storage for one month frozen at -18 °C. The simplicity and viability of the proposed fabrication procedure offer a new set of highly magnetically responsive lanthanide-ion-chelating phospholipid polymolecular assemblies as building blocks for the smart soft materials of tomorrow.
Introduction to the Monte Carlo methods
International Nuclear Information System (INIS)
Uzhinskij, V.V.
1993-01-01
Codes illustrating the use of Monte Carlo methods in high energy physics, such as the inverse transformation method, the rejection method, particle propagation through the nucleus, particle interaction with the nucleus, etc., are presented. A set of useful random-number-generation algorithms is given (for the binomial, Poisson, β-, γ-, and normal distributions). 5 figs., 1 tab
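Two of the sampling techniques named above can be illustrated compactly; this is a hedged Python sketch, not the original code from the report.

```python
# Hedged sketch (not the original code): two standard sampling techniques.
import math
import random

def sample_exponential(lam):
    """Inverse-transformation method: F(x) = 1 - exp(-lam*x) => x = -ln(1-u)/lam."""
    u = random.random()
    return -math.log(1.0 - u) / lam

def sample_triangular():
    """Rejection method for the density p(x) = 2x on [0, 1] (p_max = 2)."""
    while True:
        x = random.random()
        if random.random() * 2.0 <= 2.0 * x:  # accept with probability p(x)/p_max
            return x

random.seed(0)
mean_exp = sum(sample_exponential(2.0) for _ in range(50000)) / 50000
# mean_exp should approach 1/lam = 0.5
```

The inverse transformation is exact but needs an invertible CDF; the rejection method only needs the density and an envelope, at the cost of discarded proposals.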
DEFF Research Database (Denmark)
Seeber, Isabella; Merz, Alexander B.; Maier, Ronald
2017-01-01
Social media allow crowds to generate many ideas to swiftly respond to events like crises, public policy discourse, or online town hall meetings. This allows organizations and governments to harness the innovative power of the crowd. As part of this setting, teams that process crowd ideas must engage in social exchange processes to converge on a few promising ideas. Traditionally, teams work on self-generated ideas. However, in a crowdsourcing scenario, such as public participation in crisis response, teams may have to process crowd-generated ideas. To better understand this new practice, it is important to investigate how converging on crowdsourced ideas affects the social exchange processes of teams and the resulting outcomes. We conducted a laboratory experiment in which small teams working in a crisis response setting converged on self-generated or crowdsourced ideas in an emergency response...
Rühm, W; Pioch, C; Agosteo, S; Endo, A; Ferrarini, M; Rakhno, I; Rollet, S; Satoh, D; Vincke, H
2014-01-01
Bonner Sphere Spectrometry in its high-energy extended version is an established method to quantify neutrons over a wide energy range, from several meV up to more than 1 GeV. To allow for quantitative measurements, the responses of the various spheres used in a Bonner Sphere Spectrometer (BSS) are usually simulated by Monte Carlo (MC) codes over the neutron energy range of interest. Because experimental cross-section data above 20 MeV are scarce, intra-nuclear cascade (INC) and evaporation models are applied in these MC codes. It was suspected that this lack of data above 20 MeV may translate into differences in simulated BSS response functions depending on the MC code and nuclear models used, which in turn may add to the uncertainty involved in Bonner Sphere Spectrometry, in particular for neutron energies above 20 MeV. In order to investigate this issue in a systematic way, EURADOS (European Radiation Dosimetry Group) initiated an exercise where six groups having experience in neutron transport calcula...
Computer system for Monte Carlo experimentation
International Nuclear Information System (INIS)
Grier, D.A.
1986-01-01
A new computer system for Monte Carlo experimentation is presented. The new system speeds and simplifies the process of coding and preparing a Monte Carlo experiment; it also encourages the proper design of Monte Carlo experiments and the careful analysis of the experimental results. A new functional language is the core of this system. Monte Carlo experiments, and their experimental designs, are programmed in this new language; those programs are compiled into Fortran output. The Fortran output is then compiled and executed. The experimental results are analyzed with a standard statistics package such as S, ISP, or Minitab, or with a user-supplied program. Both the experimental results and the experimental design may be loaded directly into the workspace of those packages. The new functional language frees programmers from many of the details of programming an experiment. Experimental designs such as factorial, fractional factorial, or Latin square are easily described by the control structures and expressions of the language. Specific mathematical models are generated by the routines of the language
International Nuclear Information System (INIS)
Wagner, J. C.; Blakeman, E. D.; Peplow, D. E.
2009-01-01
This paper presents a new hybrid (Monte Carlo/deterministic) method for increasing the efficiency of Monte Carlo calculations of distributions, such as flux or dose rate distributions (e.g., mesh tallies), as well as responses at multiple localized detectors and spectra. This method, referred to as Forward-Weighted CADIS (FW-CADIS), is a variation on the Consistent Adjoint Driven Importance Sampling (CADIS) method, which has been used for some time to very effectively improve the efficiency of Monte Carlo calculations of localized quantities, e.g., flux, dose, or reaction rate at a specific location. The basis of this method is the development of an importance function that represents the importance of particles to the objective of uniform Monte Carlo particle density in the desired tally regions. Implementation of this method utilizes the results from a forward deterministic calculation to develop a forward-weighted source for a deterministic adjoint calculation. The resulting adjoint function is then used to generate consistent space- and energy-dependent source biasing parameters and weight windows that are used in a forward Monte Carlo calculation to obtain approximately uniform statistical uncertainties in the desired tally regions. The FW-CADIS method has been implemented in the ADVANTG/MCNP framework and has been fully automated within the MAVRIC sequence of SCALE 6. Results of the application of the method to enabling the calculation of dose rates throughout an entire full-scale pressurized-water reactor facility are presented and discussed. (authors)
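As a rough illustration of the weight-window game that FW-CADIS-generated parameters ultimately drive (function and parameter names here are invented for the sketch, not the SCALE/ADVANTG API): particles whose weight exceeds the window are split, and low-weight particles play Russian roulette, which is what keeps the Monte Carlo particle density roughly uniform in the tally regions.

```python
# Sketch of weight-window mechanics (invented names, not the SCALE/ADVANTG API).
import random

def apply_weight_window(weight, w_low, w_high, w_survival):
    """Return the list of particles (as weights) that continue the history."""
    if weight > w_high:                       # overweight: split into n copies
        n = min(int(weight / w_high) + 1, 10)
        return [weight / n] * n
    if weight < w_low:                        # underweight: Russian roulette
        if random.random() < weight / w_survival:
            return [w_survival]               # survivor carries boosted weight
        return []                             # history terminated
    return [weight]                           # inside the window: unchanged
```

Both branches are unbiased: splitting conserves total weight exactly, and roulette preserves it in expectation, so the tally mean is unchanged while the particle population is steered toward important regions.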
Defining the next generation modeling of coastal ecotone dynamics in response to global change
Jiang, Jiang; DeAngelis, Donald L.; Teh, Su-Y; Krauss, Ken W.; Wang, Hongqing; Haidong, Li; Smith, Thomas; Koh, Hock L.
2016-01-01
Coastal ecosystems are especially vulnerable to global change; e.g., sea level rise (SLR) and extreme events. Over the past century, global change has resulted in salt-tolerant (halophytic) plant species migrating into upland salt-intolerant (glycophytic) dominated habitats along major rivers and large wetland expanses along the coast. While habitat transitions can be abrupt, modeling the specific drivers of abrupt change between halophytic and glycophytic vegetation is not a simple task. Correlative studies, which dominate the literature, are unlikely to establish ultimate causation for habitat shifts, and do not generate strong predictive capacity for coastal land managers and climate change adaptation exercises. In this paper, we first review possible drivers of ecotone shifts for coastal wetlands, our understanding of which has expanded rapidly in recent years. Any exogenous factor that increases growth or establishment of halophytic species will favor the ecotone boundary moving upslope. However, internal feedbacks between vegetation and the environment, through which vegetation modifies the local microhabitat (e.g., by changing salinity or surface elevation), can either help the system become resilient to future changes or strengthen ecotone migration. Following this idea, we review a succession of models that have provided progressively better insight into the relative importance of internal positive feedbacks versus external environmental factors. We end by developing a theoretical model to show that both abrupt environmental gradients and internal positive feedbacks can generate the sharp ecotonal boundaries that we commonly see, and we demonstrate that the responses to gradual global change (e.g., SLR) can be quite diverse.
Development and application of the automated Monte Carlo biasing procedure in SAS4
International Nuclear Information System (INIS)
Tang, J.S.; Broadhead, B.L.
1995-01-01
An automated approach for biasing Monte Carlo shielding calculations is described. In particular, adjoint fluxes from a one-dimensional discrete-ordinates calculation are used to generate biasing parameters for a three-dimensional Monte Carlo calculation. The automated procedure consisting of cross-section processing, adjoint flux determination, biasing parameter generation, and the initiation of a MORSE-SGC/S Monte Carlo calculation has been implemented in the SAS4 module of the SCALE computer code system. (author)
MCNP-REN: a Monte Carlo tool for neutron detector design
International Nuclear Information System (INIS)
Abhold, M.E.; Baker, M.C.
2002-01-01
The development of neutron detectors makes extensive use of the predictions of detector response through the use of Monte Carlo techniques in conjunction with the point reactor model. Unfortunately, the point reactor model fails to accurately predict detector response in common applications. For this reason, the general Monte Carlo code developed at Los Alamos National Laboratory, Monte Carlo N-Particle (MCNP), was modified to simulate the pulse streams that would be generated by a neutron detector and normally analyzed by a shift register. This modified code, MCNP-Random Exponentially Distributed Neutron Source (MCNP-REN), along with the Time Analysis Program, predicts neutron detector response without using the point reactor model, making it unnecessary for the user to decide whether or not the assumptions of the point model are met for their application. MCNP-REN is capable of simulating standard neutron coincidence counting as well as neutron multiplicity counting. Measurements of mixed oxide fresh fuel were taken with the Underwater Coincidence Counter, and measurements of highly enriched uranium reactor fuel were taken with the active neutron interrogation Research Reactor Fuel Counter and compared to calculation. Simulations completed for other detector design applications are described. The method used in MCNP-REN is demonstrated to be fundamentally sound and shown to eliminate the need to use the point model for detector performance predictions
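The shift-register analysis that MCNP-REN's simulated pulse streams feed can be sketched as follows; the predelay and gate values are hypothetical, and this simplified version returns only the (reals + accidentals) gate sum rather than the full multiplicity histogram.

```python
# Simplified shift-register coincidence logic (predelay/gate values hypothetical).
import bisect

def shift_register_counts(times, predelay, gate):
    """times: sorted pulse arrival times. For each triggering pulse, count the
    pulses inside the gate (t + predelay, t + predelay + gate] and return the
    total, i.e. the (reals + accidentals) sum."""
    total = 0
    for t in times:
        lo = bisect.bisect_right(times, t + predelay)
        hi = bisect.bisect_right(times, t + predelay + gate)
        total += hi - lo
    return total

# The pulse at t = 0 captures the correlated pulse at t = 1 us in its gate;
# the isolated pulse at t = 500 us contributes nothing.
pulses = [0.0, 1.0, 500.0]   # microseconds
ra_sum = shift_register_counts(pulses, predelay=0.5, gate=4.0)
```

Subtracting an accidentals estimate taken from a delayed gate would then yield the reals rate used in coincidence counting.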
International Nuclear Information System (INIS)
Eblin, K.E.; Bowen, M.E.; Cromey, D.W.; Bredfeldt, T.G.; Mash, E.A.; Lau, S.S.; Gandolfi, A.J.
2006-01-01
Arsenicals have commonly been seen to induce reactive oxygen species (ROS), which can lead to DNA damage and oxidative stress. At low levels, arsenicals still induce the formation of ROS, leading to DNA damage and protein alterations. UROtsa cells, an immortalized human urothelial cell line, were used to study the effects of arsenicals on the human bladder, a site of arsenical bioconcentration and carcinogenesis. Biotransformation of As(III) by UROtsa cells has been shown to produce methylated species, namely monomethylarsonous acid [MMA(III)], which has been shown to be 20 times more cytotoxic. Confocal fluorescence images of UROtsa cells treated with arsenicals and the ROS-sensing probe DCFDA showed an increase of intracellular ROS within five min after 1 μM and 10 μM As(III) treatments. In contrast, 50 and 500 nM MMA(III) required pretreatment for 30 min before inducing ROS. The increase in ROS was ameliorated by preincubation with either SOD or catalase. An interesting aspect of these ROS detection studies is the noticeable difference between the concentrations of As(III) and MMA(III) used, further supporting the increased cytotoxicity of MMA(III), as well as the increased amount of time required for MMA(III) to cause oxidative stress. These arsenical-induced ROS produced oxidative DNA damage as evidenced by an increase in 8-hydroxy-2'-deoxyguanosine (8-oxo-dG) with either 50 nM or 5 μM MMA(III) exposure. These findings provide support that MMA(III) causes a genotoxic response upon generation of ROS. Both As(III) and MMA(III) were also able to induce Hsp70 and MT protein levels above control, showing that the cells recognize the ROS and respond. As(III) rapidly induces the formation of ROS, possibly through its oxidation to As(V) and further metabolism to MMA(III)/(V). These studies provide evidence for a different mechanism of MMA(III) toxicity, one in which MMA(III) first interacts with cellular components before an ROS response is generated, taking longer to
He, Guochun; Tsutsumi, Tomoaki; Zhao, Bin; Baston, David S.; Zhao, Jing; Heath-Pagliuso, Sharon; Denison, Michael S.
2011-01-01
2,3,7,8-Tetrachlorodibenzo-p-dioxin (TCDD, dioxin) and related dioxin-like chemicals are widespread and persistent environmental contaminants that produce diverse toxic and biological effects through their ability to bind to and activate the Ah receptor (AhR) and AhR-dependent gene expression. The chemically activated luciferase expression (CALUX) system is an AhR-responsive recombinant luciferase reporter gene–based cell bioassay that has been used in combination with chemical extraction and cleanup methods for the relatively rapid and inexpensive detection and relative quantitation of dioxin and dioxin-like chemicals in a wide variety of sample matrices. Although the CALUX bioassay has been validated and used extensively for screening purposes, it has some limitations when screening samples with very low levels of dioxin-like chemicals or when there is only a small amount of sample matrix for analysis. Here, we describe the development of third-generation (G3) CALUX plasmids with increased numbers of dioxin-responsive elements, and stable transfection of these new plasmids into mouse hepatoma (Hepa1c1c7) cells has produced novel amplified G3 CALUX cell bioassays that respond to TCDD with a dramatically increased magnitude of luciferase induction and significantly lower minimal detection limit than existing CALUX-type cell lines. The new G3 CALUX cell lines provide a highly responsive and sensitive bioassay system for the detection and relative quantitation of very low levels of dioxin-like chemicals in sample extracts. PMID:21775728
Monte Carlo simulation of the HEGRA cosmic ray detector performance
Energy Technology Data Exchange (ETDEWEB)
Martinez, S. [Universidad Complutense de Madrid (Spain). Dept. de Fisica Atomica, Molecular y Nuclear; Arqueros, F. [Universidad Complutense de Madrid (Spain). Dept. de Fisica Atomica, Molecular y Nuclear; Fonseca, V. [Universidad Complutense de Madrid (Spain). Dept. de Fisica Atomica, Molecular y Nuclear; Karle, A. [Max-Planck-Institut fuer Physik, Foehringer Ring 6, D80805 Munich (Germany); Lorenz, E. [Max-Planck-Institut fuer Physik, Foehringer Ring 6, D80805 Munich (Germany); Plaga, R. [Max-Planck-Institut fuer Physik, Foehringer Ring 6, D80805 Munich (Germany); Rozanska, M. [Max-Planck-Institut fuer Physik, Foehringer Ring 6, D80805 Munich (Germany)]|[Institute of Nuclear Physics, ul.Kawiory 26a, PL30-055 Cracow (Poland)
1995-04-21
Models of the scintillator and wide-angle air Cherenkov (AIROBICC) arrays of the HEGRA experiment are described here. Their response to extensive air showers generated by cosmic rays in the 10 to 1000 TeV range has been assessed using a detailed Monte Carlo simulation of air-shower development and the associated Cherenkov emission. Protons, γ-rays, and oxygen and iron nuclei have been considered as primary particles. For both arrays, the angular resolution as determined from the Monte Carlo simulation is compared with experimental data. Shower size Ne can be reconstructed from the scintillator signals with an error ranging from 10% (Ne = 2×10⁵) to 35% (Ne = 3×10³). The energy threshold of AIROBICC is 14 TeV for primary gammas and 27 TeV for protons, and an angular resolution of 0.25° can be obtained. The measurement of the Cherenkov light at 90 m from the shower core provides an accurate determination of the primary energy E₀ as long as the nature of the primary particle is known. For gammas, an error in the energy prediction ranging from 8% (E₀ = 5×10¹⁴ eV) to 15% (E₀ = 2×10¹³ eV) is achieved. This detector is therefore a powerful tool for γ-ray astronomy. (orig.)
Directory of Open Access Journals (Sweden)
Azzouz I
2013-12-01
Inès Azzouz, Hamdi Trabelsi, Amel Hanini, Soumaya Ferchichi, Olfa Tebourbi, Mohsen Sakly, Hafedh Abdelmelek. Laboratory of Integrative Physiology, Faculty of Sciences of Bizerte, Carthage University, Tunisia. Abstract: The aim of the present study was to investigate the interaction of zinc chloride (3 mg/kg, intraperitoneally [ip]) in rat liver in terms of the biosynthesis of nanoparticles. Zinc treatment increased zinc content in rat liver. Analysis of fluorescence revealed the presence of red fluorescence in the liver following zinc treatment. Interestingly, co-exposure to zinc (3 mg/kg, ip) and selenium (0.20 mg/L, per os [by mouth]) led to a higher intensity of red fluorescence compared to zinc-treated rats. In addition, X-ray diffraction measurements carried out on liver fractions of zinc-treated rats point to the biosynthesis of zinc sulfide and/or selenide nanocomplexes of nearly 51.60 nm in size. Moreover, co-exposure led to nanocomplexes of about 72.60 nm in size. The interaction of zinc with other mineral elements (S, Se) generates several nanocomplexes, such as ZnS and/or ZnSe. The nanocomplex ZnX could interact directly with enzyme activity or indirectly through disruption of mineral elements' bioavailability in cells. Subacute zinc or selenium treatment decreased malondialdehyde levels, indicating a drop in lipid peroxidation. In addition, antioxidant enzyme assays showed that treatment with zinc or co-treatment with zinc and selenium increased the activities of glutathione peroxidase, catalase, and superoxide dismutase. Consequently, zinc complexation with sulfur and/or selenium at the nanoscale could enhance antioxidative responses, which is correlated to the ratio of the number of ZnX nanoparticles (X = sulfur or selenium) to malondialdehyde level in rat liver. Keywords: nanocomplexes biosynthesis, antioxidative responses, X-ray diffraction, fluorescence microscopy, liver
The effect of demand response on purchase intention of distributed generation: Evidence from Japan
International Nuclear Information System (INIS)
Nakada, Tatsuhiro; Shin, Kongjoo; Managi, Shunsuke
2016-01-01
Participation in demand response (DR) may affect a consumer's electric consumption pattern through consumption load curtailment, a shift in the consumption timing, or increased utilization of distributed generation (DG). This paper attempts to provide empirical evidence of DR's effect on DG adoption by household consumers. Using original Internet survey data from 5442 household respondents in Japan collected in January 2015, we focus on the effect of the time-of-use (TOU) tariff on the purchase intention for photovoltaic systems (PV). The empirical results show the following: 1) current TOU plan users have stronger PV purchase intentions than users of other plans, 2) respondents who are familiar with the DR program have relatively higher purchase intentions than their counterparts, and 3) when the respondents are asked to assume participation in the virtual TOU plan designed for the survey, which resembles plans currently available through major companies, 1.2% of the households decide to purchase PV. In addition, we provide calculations of TOU's impacts on the official PV adoption and emissions reduction targets, and discuss policy recommendations to increase recognition of and participation in TOU programs. - Highlights: •Studies the effect of demand response on purchase intention of PV. •Uses originally collected Internet Japanese household survey data in 2015. •Finds that the time-of-use (TOU) plan has a positive effect on PV purchase intentions. •Calculates latent TOU impacts on PV installations and emissions reduction targets. •Discusses policy recommendations to increase participation in TOU programs.
Studies of Monte Carlo Modelling of Jets at ATLAS
Kar, Deepak; The ATLAS collaboration
2017-01-01
The predictions of different Monte Carlo generators for QCD jet production, both in multijets and for jets produced in association with other objects, are presented. Recent improvements in showering Monte Carlos provide new tools for assessing systematic uncertainties associated with these jets. Studies of the dependence of physical observables on the choice of shower tune parameters and new prescriptions for assessing systematic uncertainties associated with the choice of shower model and tune are presented.
Timm, Jana; Schönwiesner, Marc; Schröger, Erich; SanMiguel, Iria
2016-07-01
Stimuli caused by our own movements are given special treatment in the brain. Self-generated sounds evoke a smaller brain response than externally generated ones. This attenuated response may reflect a predictive mechanism to differentiate the sensory consequences of one's own actions from other sensory input. It may also relate to the feeling of being the agent of the movement and its effects, but little is known about how sensory suppression of brain responses to self-generated sounds is related to judgments of agency. To address this question, we recorded event-related potentials in response to sounds initiated by button presses. In one condition, participants perceived agency over the production of the sounds, whereas in another condition, participants experience an illusory lack of agency caused by changes in the delay between actions and effects. We compared trials in which the timing of button press and sound was physically identical, but participants' agency judgment differed. Results show reduced amplitudes of the auditory N1 component in response to self-generated sounds irrespective of agency experience, whilst P2 effects correlate with the perception of agency. Our findings suggest that suppression of the auditory N1 component to self-generated sounds does not depend on adaptation to specific action-effect time delays, and does not determine agency judgments, however, the suppression of the P2 component might relate more directly to the experience of agency. Copyright © 2016 Elsevier Ltd. All rights reserved.
Jet Engine Fan Response to Inlet Distortions Generated by Ingesting Boundary Layer Flow
Giuliani, James Edward
Future civil transport designs may incorporate engines integrated into the body of the aircraft to take advantage of efficiency increases due to weight and drag reduction. Additional increases in engine efficiency are predicted if the inlets ingest the lower momentum boundary layer flow that develops along the surface of the aircraft. Previous studies have shown, however, that the efficiency benefits of Boundary Layer Ingesting (BLI) inlets are very sensitive to the magnitude of fan and duct losses, and blade structural response to the non-uniform flow field that results from a BLI inlet has not been studied in-depth. This project represents an effort to extend the modeling capabilities of TURBO, an existing rotating turbomachinery unsteady analysis code, to include the ability to solve the external and internal flow fields of a BLI inlet. The TURBO code has been a successful tool in evaluating fan response to flow distortions for traditional engine/inlet integrations. Extending TURBO to simulate the external and inlet flow field upstream of the fan will allow accurate pressure distortions that result from BLI inlet configurations to be computed and used to analyze fan aerodynamics and structural response. To validate the modifications for the BLI inlet flow field, an experimental NASA project to study flush-mounted S-duct inlets with large amounts of boundary layer ingestion was modeled. Results for the flow upstream and in the inlet are presented and compared to experimental data for several high Reynolds number flows to validate the modifications to the solver. Once the inlet modifications were validated, a hypothetical compressor fan was connected to the inlet, matching the inlet operating conditions so that the effect on the distortion could be evaluated. Although the total pressure distortion upstream of the fan was symmetrical for this geometry, the pressure rise generated by the fan blades was not, because of the velocity non-uniformity of the distortion
ATLAS Monte Carlo tunes for MC09
The ATLAS collaboration
2010-01-01
This note describes the ATLAS tunes of the underlying event and minimum bias description for the main Monte Carlo generators used in the MC09 production. For the main shower generators, pythia and herwig (with jimmy), the MRST LO* parton distribution functions (PDFs) were used for the first time in ATLAS. Special studies on the performance of these conceptually new PDFs for high-pT physics processes at LHC energies are presented. In addition, a tune of jimmy for CTEQ6.6 is presented, for use with MC@NLO.
MBR Monte Carlo Simulation in PYTHIA8
Ciesielski, R.
We present the MBR (Minimum Bias Rockefeller) Monte Carlo simulation of (anti)proton-proton interactions and its implementation in the PYTHIA8 event generator. We discuss the total, elastic, and total-inelastic cross sections, and three contributions from diffraction dissociation processes that contribute to the latter: single diffraction, double diffraction, and central diffraction or double-Pomeron exchange. The event generation follows a renormalized-Regge-theory model, successfully tested using CDF data. Based on the MBR-enhanced PYTHIA8 simulation, we present cross-section predictions for the LHC and beyond, up to collision energies of 50 TeV.
Subtil Lacerda, J.; van den Bergh, J.C.J.M.
2016-01-01
Policies to assure combatting climate change and realising energy security have stimulated a rapid growth in global installed capacity of renewable energy generation. The expansion of power generation from renewables, though, has so far lagged behind the growth in generation capacity. This indicates
International Nuclear Information System (INIS)
Zhang, Ning; Hu, Zhaoguang; Springer, Cecilia; Li, Yanning; Shen, Bo
2016-01-01
Highlights: • We put forward a novel bi-level integrated power system planning model. • Generation expansion planning and transmission expansion planning are combined. • The effects of two sorts of demand response in reducing peak load are considered. • Operation simulation is conducted to reflect the actual effects of demand response. • The interactions between the two levels can guarantee a reasonably optimal result. - Abstract: If all the resources on the power supply side, in the transmission system, and on the power demand side are considered together, the optimal expansion scheme from the perspective of the whole system can be achieved. In this paper, generation expansion planning and transmission expansion planning are combined into one model. Moreover, the effects of demand response in reducing peak load are taken into account in the planning model, which can cut back both the generation expansion capacity and the transmission expansion capacity. Existing approaches to considering demand response in planning tend to overestimate its impact on peak load reduction. These approaches usually focus on power reduction at the moment of peak load without considering situations in which load demand at another moment may unexpectedly become the new peak load due to demand response. Such situations are analyzed in this paper. Accordingly, a novel approach to incorporating demand response in a planning model is proposed. A modified unit commitment model with demand response is utilized. The planning model is thereby a bi-level model with interactions between generation-transmission expansion planning and operation simulation, to reflect the actual effects of demand response and find a reasonably optimal planning result.
International Nuclear Information System (INIS)
Evans, L.S.
1981-01-01
Experiments were performed with several plant species in natural environments, as well as in a greenhouse and/or tissue culture facilities, to establish dose-response functions of plant responses to simulated acidic rain, in order to assess the environmental risk posed by ambient levels of acidic rain. Response functions for foliar injury, biomass of leaves and seed of soybean and pinto beans, root yields of radishes and garden beets, and reproduction of bracken fern are considered. The dose-response function relating soybean seed yield to the hydrogen ion concentration of simulated acidic rainfalls was expressed by the equation y = 21.06 - 1.01 log x, where y = seed yield in grams per plant and x = the hydrogen ion concentration in μeq l⁻¹. The correlation coefficient of this relationship was -0.90. A similar dose-response function was generated for percent fertilization of ferns in a forest understory. When percent fertilization is plotted on a logarithmic scale against the hydrogen ion concentration of the simulated rain solution, the y-intercept is 51.18 and the slope -0.041, with a correlation coefficient of -0.98. Other dose-response functions were generated that contribute to a general knowledge of which plant species and which physiological processes are most impacted by acidic precipitation. Some responses did not produce convenient dose-response relationships. In such cases the responses may be altered by other environmental factors, or there may be no differences among treatment means
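The quoted soybean regression can be evaluated directly; this sketch only restates the fitted equation from the abstract (with a base-10 logarithm, as is conventional for such fits).

```python
# The fitted dose-response function quoted above: y = 21.06 - 1.01 * log10(x),
# with y in grams of seed per plant and x the H+ concentration in ueq/l.
import math

def soybean_seed_yield(h_ion_ueq_per_l):
    return 21.06 - 1.01 * math.log10(h_ion_ueq_per_l)

# Each tenfold increase in acidity (one pH unit) costs 1.01 g of seed per plant:
drop_per_decade = soybean_seed_yield(10.0) - soybean_seed_yield(100.0)
```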
Monte Carlo simulation models of breeding-population advancement.
J.N. King; G.R. Johnson
1993-01-01
Five generations of population improvement were modeled using Monte Carlo simulations. The model was designed to address questions that are important to the development of an advanced generation breeding population. Specifically, we addressed the effects on both gain and effective population size of different mating schemes when creating a recombinant population for...
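A minimal sketch of this kind of breeding-population simulation, with all parameter values invented for illustration (the paper's actual mating schemes and parameters are not reproduced here): truncation selection on phenotype over five generations under a simple additive model.

```python
# Toy Monte Carlo breeding simulation (all parameters invented for illustration).
import random

def advance_generations(n_gen=5, pop_size=400, select_frac=0.1, h2=0.3, seed=42):
    """Truncation selection on phenotype = breeding value + environmental noise;
    offspring take the midparent breeding value plus a Mendelian sampling term.
    Returns the population mean breeding value after each generation."""
    rng = random.Random(seed)
    breeding = [rng.gauss(0.0, h2 ** 0.5) for _ in range(pop_size)]
    means = []
    for _ in range(n_gen):
        scored = [(b + rng.gauss(0.0, (1.0 - h2) ** 0.5), b) for b in breeding]
        scored.sort(reverse=True)                      # rank by phenotype
        parents = [b for _, b in scored[: int(pop_size * select_frac)]]
        breeding = [
            (rng.choice(parents) + rng.choice(parents)) / 2.0
            + rng.gauss(0.0, (h2 / 2.0) ** 0.5)        # Mendelian sampling
            for _ in range(pop_size)
        ]
        means.append(sum(breeding) / pop_size)
    return means

gains = advance_generations()   # mean breeding value rises generation by generation
```

Tracking which parents are chosen would additionally allow an effective-population-size estimate, the other quantity the abstract mentions.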
No-compromise reptation quantum Monte Carlo
International Nuclear Information System (INIS)
Yuen, W K; Farrar, Thomas J; Rothstein, Stuart M
2007-01-01
Since its publication, the reptation quantum Monte Carlo algorithm of Baroni and Moroni (1999 Phys. Rev. Lett. 82 4745) has been applied to several important problems in physics, but its mathematical foundations are not well understood. We show that their algorithm is not of typical Metropolis-Hastings type, and we specify conditions required for the generated Markov chain to be stationary and to converge to the intended distribution. The time-step bias may add up, and in many applications it is only the middle of a reptile that is the most important. Therefore, we propose an alternative, 'no-compromise reptation quantum Monte Carlo' to stabilize the middle of the reptile. (fast track communication)
Directory of Open Access Journals (Sweden)
Carlos Perez Rostro
2012-11-01
The crayfish Procambarus (A.) acanthophorus is a crustacean relevant for regional fisheries in Veracruz, Mexico, with ideal aquaculture characteristics except for its small size. Thus, a study was conducted with the aim of evaluating the response to selection in the first generation (F1) and the heritability (h²) of the crayfish. A group of 2135 organisms with average weight (± S.D.) 4.1 ± 1.79 g was captured from the wild (G0), and the 10% of the population with the highest body weight was selected by gender (selection intensity i = 1.755): 140 females (5.62 ± 1.97 g) and 48 males (6.02 ± 1.9 g), forming the progenitors of the selection line (LS). The control line (LC) was formed from a batch obtained at random. Thirty full-sib families were obtained per line (F1) and cultured individually for five months in a recirculation system with mechanical and biological filtration under laboratory conditions, supplied with food twice a day (Camaronina, 35% protein). Monthly heritability (h²) in the broad sense was estimated using a full-sib design, based on the components of variance (ANOVA REML method), and growth was compared between lines in the F1. The mean h² estimates for weight after five months of culture were 0.27 ± 0.11 for LC and 0.34 ± 0.12 for LS, the LS in F1 being 9.6% heavier than the LC, with 84 and 88% survival at the end of the study. It is possible to implement a breeding program based on selection for growth in this species.
Directory of Open Access Journals (Sweden)
Christina N. Papadimitriou
2010-06-01
Full Text Available Storage devices are introduced in microgrids in order to secure power quality and regularity and to offer ancillary services during transient periods. In the transition of a low-voltage microgrid from the grid-connected mode of operation to the islanded mode, the power unbalance can be partly covered by the inertial energy of the existing power sources. This paper proposes fuzzy local controllers that exploit the inertia of a Wind Turbine (WT) with a Doubly Fed Induction Generator (DFIG), if such a machine exists in the microgrid, in order to reduce the necessary storage devices and the drawbacks that arise. The proposed controllers are based on fuzzy logic due to the nonlinear and stochastic behavior of the system. Two cases, differing in microgrid architecture and DFIG controller, are studied and compared during the transient period. In the first case, the microgrid under study includes a hybrid fuel cell system (FCS)-battery system and a WT with a DFIG. The DFIG local controller in this case is also based on fuzzy logic and follows the classical optimum power absorption scenario for the WT. The transition of the microgrid from the connected mode of operation to the islanded mode is evaluated and, especially, the battery contribution is estimated. In the second case, the battery is eliminated. The fuzzy controller of the DFIG provides primary frequency control and local bus voltage support during the transition by exploiting the WT inertia. The response of the system is estimated in both cases using the MATLAB/Simulink software package.
Zha, N.; Capaldi, D. P. I.; Pike, D.; McCormack, D. G.; Cunningham, I. A.; Parraga, G.
2015-03-01
Pulmonary x-ray computed tomography (CT) may be used to characterize emphysema and airways disease in patients with chronic obstructive pulmonary disease (COPD). One analysis approach - parametric response mapping (PRM) - utilizes registered inspiratory and expiratory CT image volumes and CT-density-histogram thresholds, but there is no consensus regarding the threshold values used, or their clinical meaning. Principal component analysis (PCA) of the CT density histogram can be exploited to quantify emphysema using data-driven CT-density-histogram thresholds. Thus, the objective of this proof-of-concept demonstration was to develop a PRM approach using PCA-derived thresholds in COPD patients and ex-smokers without airflow limitation. Methods: Fifteen COPD ex-smokers and 5 normal ex-smokers were evaluated. Thoracic CT images were acquired at full inspiration and full expiration, and these images were non-rigidly co-registered. PCA was performed on the CT density histograms, from which the components with eigenvalues greater than one were summed. Since the values of the principal-component curve correlate directly with the variability in the sample, the maximum and minimum points on the curve were used as threshold values for the PCA-adjusted PRM technique. Results: Significant correlations were determined between conventional and PCA-adjusted PRM with 3He MRI apparent diffusion coefficient (p<0.001), with CT RA950 (p<0.0001), as well as with 3He MRI ventilation defect percent, a measurement of both small airways disease (p=0.049 and p=0.06, respectively) and emphysema (p=0.02). Conclusions: PRM generated using PCA thresholds of the CT density histogram showed significant correlations with CT and 3He MRI measurements of emphysema, but not airways disease.
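The threshold-selection idea can be sketched roughly as follows. This is not the authors' pipeline, and the histogram data here are synthetic: the sketch sums the principal components whose eigenvalues exceed one and reads two density thresholds off the extrema of the summed component curve.

```python
import numpy as np

def pca_thresholds(histograms, bin_centers):
    """Derive two density thresholds from PCA of CT density histograms:
    sum the components with eigenvalue > 1 and take the bins where the
    summed curve is extremal (illustrative sketch, not the study's code)."""
    X = histograms - histograms.mean(axis=0)          # center each density bin
    cov = np.cov(X, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)                # eigenvalues ascending
    keep = evals > 1.0
    if not keep.any():                                # fall back to top component
        keep = evals == evals.max()
    curve = evecs[:, keep].sum(axis=1)
    return sorted((bin_centers[curve.argmin()], bin_centers[curve.argmax()]))

rng = np.random.default_rng(0)
bins = np.linspace(-1000.0, -500.0, 50)               # synthetic HU bin centers
hists = rng.dirichlet(np.ones(50), size=20) * 100.0   # 20 synthetic subjects
lo, hi = pca_thresholds(hists, bins)
```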
Directory of Open Access Journals (Sweden)
Yongli Wang
2018-03-01
Full Text Available With the application of distributed generation and the development of smart grid technology, the micro-grid, an economic and stable power grid, tends to play an important role in demand-side management. Because micro-grid technology and demand response have been widely applied, determining which demand response actions can realize the economic operation of a micro-grid has become an important issue for utilities. In this work, an operation optimization model for the micro-grid is built considering distributed generation, environmental factors and demand response. The main contribution of this model is to optimize cost while accounting for demand response and system operation. The presented optimization model can reduce the operation cost of the micro-grid without bringing discomfort to the users, thus effectively increasing the consumption of clean energy. To solve this operational optimization problem, a genetic algorithm is used to implement the objective function and the DR scheduling strategy. In addition, to validate the proposed model, it is employed on a smart micro-grid from Tianjin. The obtained numerical results clearly indicate the impact of demand response on the economic operation of the micro-grid and the development of distributed generation. Besides, a sensitivity analysis on the natural gas price is implemented according to the situation in China; the result shows that the natural gas price has a great influence on the operation cost of the micro-grid and on the effect of demand response.
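The genetic-algorithm step can be sketched generically. The toy cost function below — two generators plus one demand-response action serving a fixed load, with an infeasibility penalty — is hypothetical and merely stands in for the paper's operation-cost model:

```python
import random

def ga_minimize(cost, n_genes, pop=40, gens=60, pmut=0.1, seed=0):
    """Minimal real-coded genetic algorithm: tournament selection,
    uniform crossover, Gaussian mutation (illustrative sketch)."""
    rnd = random.Random(seed)
    P = [[rnd.uniform(0.0, 1.0) for _ in range(n_genes)] for _ in range(pop)]
    for _ in range(gens):
        def pick():
            a, b = rnd.sample(P, 2)            # binary tournament
            return a if cost(a) < cost(b) else b
        Q = []
        for _ in range(pop):
            pa, pb = pick(), pick()
            child = [pa[i] if rnd.random() < 0.5 else pb[i] for i in range(n_genes)]
            if rnd.random() < pmut:            # mutate one gene, clamp to [0, 1]
                i = rnd.randrange(n_genes)
                child[i] = min(1.0, max(0.0, child[i] + rnd.gauss(0.0, 0.1)))
            Q.append(child)
        P = Q
    return min(P, key=cost)

# Hypothetical dispatch: two generators (x0, x1) and a DR action (x2,
# shifting up to 0.5 p.u.) must jointly meet a 1.5 p.u. load.
def op_cost(x):
    supply = x[0] + x[1] + 0.5 * x[2]
    return 2.0 * x[0] + 3.0 * x[1] + 1.0 * x[2] + 100.0 * abs(supply - 1.5)

best = ga_minimize(op_cost, 3)
```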
International Nuclear Information System (INIS)
Silva, Hugo R.; Silva, Ademir X.; Rebello, Wilson F.; Silva, Maria G.
2011-01-01
This paper presents the results obtained by Monte Carlo simulation of the effect of a shielding against neutrons, called External Shielding, to be placed on the heads of linear accelerators used in radiotherapy. For this, the Monte Carlo radiation transport code MCNPX (Monte Carlo N-Particle eXtended) was used, in which a computational model of the head of the Varian 2300 C/D linear accelerator was developed. The equipment was simulated within a bunker, operating at energies of 10, 15 and 18 MV, considering the rotation of the gantry at eight different angles (0 deg, 45 deg, 90 deg, 135 deg, 180 deg, 225 deg, 270 deg and 315 deg). In all cases, the equipment was modeled both without and with the shielding attached to the bottom of the accelerator head. In each of these configurations, the neutron ambient dose equivalent H*(10)n was calculated at points situated in the region of the patient (the region of interest for evaluating undesirable neutron doses to the patient) and in the maze of the radiotherapy room (the region of interest for shielding the access door to the bunker). For all operating energies and all gantry angles, a significant reduction in the values of H*(10)n was observed when the equipment operated with the external shielding, both in the region of the patient and in the region of the maze. (author)
Monte Carlo techniques for analyzing deep penetration problems
International Nuclear Information System (INIS)
Cramer, S.N.; Gonnord, J.; Hendricks, J.S.
1985-01-01
A review of current methods and difficulties in Monte Carlo deep-penetration calculations is presented. Statistical uncertainty is discussed, and recent adjoint optimization of splitting, Russian roulette, and exponential transformation biasing is reviewed. Other aspects of the random walk and estimation processes are covered, including the relatively new DXANG angular biasing technique. Specific items summarized are albedo scattering, Monte Carlo coupling techniques with discrete ordinates and other methods, adjoint solutions, and multi-group Monte Carlo. The topic of code-generated biasing parameters is presented, including the creation of adjoint importance functions from forward calculations. Finally, current and future work in the area of computer learning and artificial intelligence is discussed in connection with Monte Carlo applications
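The splitting and Russian-roulette steps reviewed above can be sketched as a weight-window rule. The window bounds below are illustrative, not taken from any specific code; the key property is that the expected total weight is conserved:

```python
import random

def apply_weight_window(w, w_low=0.5, w_high=2.0, w_survive=1.0, rnd=random):
    """Weight-window variance reduction (sketch): split heavy particles,
    play Russian roulette on light ones. Returns the list of weights of
    the particle(s) that continue (possibly empty)."""
    if w > w_high:                       # splitting: n copies of equal weight
        n = int(w / w_survive + 0.999)
        return [w / n] * n
    if w < w_low:                        # roulette: kill, or boost to w_survive
        if rnd.random() < w / w_survive:
            return [w_survive]
        return []
    return [w]                           # inside the window: unchanged

random.seed(42)
# Unbiasedness check: rouletting a weight-0.1 particle many times should
# preserve the expected weight (0.1) on average.
total = sum(sum(apply_weight_window(0.1)) for _ in range(200000)) / 200000
```

Both branches leave the estimator unbiased: splitting conserves weight exactly, and roulette conserves it in expectation while cutting the time spent tracking low-weight histories.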
Exploring cluster Monte Carlo updates with Boltzmann machines
Wang, Lei
2017-11-01
Boltzmann machines are physics informed generative models with broad applications in machine learning. They model the probability distribution of an input data set with latent variables and generate new samples accordingly. Applying the Boltzmann machines back to physics, they are ideal recommender systems to accelerate the Monte Carlo simulation of physical systems due to their flexibility and effectiveness. More intriguingly, we show that the generative sampling of the Boltzmann machines can even give different cluster Monte Carlo algorithms. The latent representation of the Boltzmann machines can be designed to mediate complex interactions and identify clusters of the physical system. We demonstrate these findings with concrete examples of the classical Ising model with and without four-spin plaquette interactions. In the future, automatic searches in the algorithm space parametrized by Boltzmann machines may discover more innovative Monte Carlo updates.
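The classical baseline that such machine-learned updates generalize is the Wolff cluster algorithm. A compact sketch for the nearest-neighbour 2D Ising model (no plaquette term; lattice size and temperature are illustrative):

```python
import math
import random

def wolff_update(spins, L, beta, rnd):
    """One Wolff cluster update for the 2D Ising model on an L x L torus:
    grow a cluster with bond probability p = 1 - exp(-2*beta) between
    aligned neighbours, then flip the whole cluster."""
    p_add = 1.0 - math.exp(-2.0 * beta)
    seed = rnd.randrange(L * L)
    cluster, stack = {seed}, [seed]
    while stack:
        i = stack.pop()
        x, y = i % L, i // L
        for j in ((x + 1) % L + y * L, (x - 1) % L + y * L,
                  x + ((y + 1) % L) * L, x + ((y - 1) % L) * L):
            if j not in cluster and spins[j] == spins[i] and rnd.random() < p_add:
                cluster.add(j)
                stack.append(j)
    for i in cluster:
        spins[i] = -spins[i]
    return len(cluster)

rnd = random.Random(0)
L, beta = 16, 0.6                       # beta above critical: ordered phase
s = [rnd.choice((-1, 1)) for _ in range(L * L)]
for _ in range(200):
    wolff_update(s, L, beta, rnd)
m = abs(sum(s)) / (L * L)               # magnetization magnitude
```

In the ordered phase the cluster updates drive the random start to a strongly magnetized state within a few sweeps, which is exactly the regime where single-spin Metropolis updates slow down.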
Pillai, Rajesh S.; Brakenhoff, G. J.; Müller, M.
2006-09-01
The third harmonic generation (THG) axial response in the vicinity of an interface formed by two isotropic materials of normal dispersion is typically single peaked, with the maximum intensity at the interface position. Here it is shown experimentally that this THG z response may show anomalous behavior—being double peaked with a dip coinciding with the interface position—when the THG contributions from both materials are of similar magnitude. The observed anomalous behavior is explained, using paraxial Gaussian theory, by considering dispersion induced shape changes in the THG z response.
Ozdamar, Ozcan; Bohorquez, Jorge; Mihajloski, Todor; Yavuz, Erdem; Lachowska, Magdalena
2011-01-01
Electrophysiological indices of auditory binaural beat illusions are studied using late-latency evoked responses. Binaural beats are generated by continuous monaural FM tones with slightly different ascending and descending frequencies lasting about 25 ms, presented at 1 s intervals. Frequency changes are carefully adjusted to avoid creating any abrupt waveform changes. Binaural Interaction Component (BIC) analysis is used to separate the neural responses due to binaural involvement. The results show that transient auditory evoked responses can be obtained from the auditory illusion of binaural beats.
Nemilentsev, Mikhail
2010-01-01
In the following paper a conceptual framework of the owner’s responsibility is created in order to study the transgenerational legal-economic ownership in the family business. Responsible ownership involves a sense of accountability and entrepreneurship to some extent. However, legal and social responsibilities naturally supplement each other in the family firm. Owners by means of personal relationships and financial guarantees are responsible for carrying out daily business operations and ma...
Multi-generational responses of a marine polychaete to a rapid change in seawater pCO2.
Rodríguez-Romero, Araceli; Jarrold, Michael D; Massamba-N'Siala, Gloria; Spicer, John I; Calosi, Piero
2016-10-01
Little is known of the capacity that marine metazoans have to evolve under rapid pCO2 changes. Consequently, we reared a marine polychaete, Ophryotrocha labronica, previously cultured for approximately 33 generations under a low/variable pH regime, under elevated and low pCO2 for six generations. The strain used was found to be tolerant to elevated pCO2 conditions. In generations F1 and F2, females' fecundity was significantly lower in the low pCO2 treatment. However, from generation F3 onwards there were no differences between pCO2 treatments, indicating that trans-generational effects enabled the restoration and maintenance of reproductive output. Whilst the initial fitness recovery was likely driven by trans-generational plasticity (TGP), the results from reciprocal transplant assays, performed using F7 individuals, made it difficult to disentangle whether TGP had persisted across multiple generations or evolutionary adaptation had occurred. Nonetheless, both are important mechanisms for persistence under climate change. Overall, our study highlights the importance of multi-generational experiments in more accurately determining marine metazoans' responses to changes in pCO2, and strengthens the case for exploring their use in conservation, by creating specific pCO2-tolerant strains of keystone ecosystem species.
Early antihepatitis C virus response with second-generation C200/C22 ELISA
van der Poel, C. L.; Bresters, D.; Reesink, H. W.; Plaisier, A. A.; Schaasberg, W.; Leentvaar-Kuypers, A.; Choo, Q. L.; Quan, S.; Polito, A.; Houghton, M.
1992-01-01
Detection of early antibody to hepatitis C virus (HCV) by a new second-generation C200/C22 anti-HCV enzyme-linked immunosorbent assay (ELISA) and a four-antigen recombinant immunoblot assay (4-RIBA) was compared with the first-generation anti-HCV C100 ELISA using sequential serum samples of 9
Mixed Methods Case Study of Generational Patterns in Responses to Shame and Guilt
Ng, Tony
2013-01-01
Moral socialization and moral learning are antecedents of moral motivation. As many as 4 generations interact in workplace and education settings; hence, a deeper understanding of the moral motivation of members of those generations is needed. The purpose of this convergent mixed methods case study was to understand the moral motivation of 5…
Petrocelli, John V; Dowd, Keith
2009-09-01
Punitive responses to crime have been linked to a relatively low need for cognition (NFC). Sargent's (2004) findings suggest that this relationship is due to a relatively complex attributional system, employed by high-NFC individuals, which permits them to recognize potential external or situational causes of crime. However, high-NFC individuals may also be more likely to engage in counterfactual thinking, which has been linked to greater judgments of blame and responsibility. Three studies examine the relationship between trait and state NFC and punitiveness in light of counterfactual thinking. Results suggest that the ease of generating upward counterfactuals in response to an unfortunate crime moderates the NFC-punitiveness relationship, such that high-NFC individuals are less punitive than low-NFC individuals only when counterfactual thoughts are relatively difficult to generate. These findings are discussed in light of punishment theory and their possible implications with regard to the legal system.
PENENTUAN HARGA OPSI BELI TIPE ASIA DENGAN METODE MONTE CARLO-CONTROL VARIATE
Directory of Open Access Journals (Sweden)
NI NYOMAN AYU ARTANADI
2017-01-01
Full Text Available An option is a contract between the writer and the holder which entitles the holder to buy or sell an underlying asset at the maturity date for a specified price known as the exercise price. An Asian option is a type of financial derivative whose payoff depends on the average of the asset price over the time series. The aim of this study is to present Monte Carlo-Control Variate as an extension of standard Monte Carlo applied to the calculation of the Asian option price. Standard Monte Carlo with 10,000,000 simulations generates a standard error of 0.06 with the option price convergent at Rp 160.00, while Monte Carlo-Control Variate with 100,000 simulations generates a standard error of 0.01 with the option price convergent at Rp 152.00. This shows that Monte Carlo-Control Variate reaches a converged option price faster than standard Monte Carlo.
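The control-variate construction can be sketched as follows, assuming geometric Brownian motion dynamics and using the discrete geometric-average Asian call (which has a Black-Scholes-style closed form) as the control; the parameter values are illustrative, not the study's rupiah-denominated ones:

```python
import math
import random

def geo_asian_call(S0, K, r, sigma, T, n):
    """Closed-form discrete geometric-average Asian call (the control)."""
    sg2 = sigma ** 2 * (n + 1) * (2 * n + 1) / (6 * n ** 2)
    mu = (r - 0.5 * sigma ** 2) * (n + 1) / (2 * n) + 0.5 * sg2
    d1 = (math.log(S0 / K) + (mu + 0.5 * sg2) * T) / math.sqrt(sg2 * T)
    d2 = d1 - math.sqrt(sg2 * T)
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return math.exp(-r * T) * (S0 * math.exp(mu * T) * N(d1) - K * N(d2))

def asian_call_cv(S0, K, r, sigma, T, n, paths, seed=0):
    """Arithmetic Asian call by Monte Carlo, with the geometric-average
    payoff as control variate (coefficient fixed at 1 for brevity)."""
    rnd = random.Random(seed)
    dt, disc = T / n, math.exp(-r * T)
    drift, vol = (r - 0.5 * sigma ** 2) * dt, sigma * math.sqrt(dt)
    est = 0.0
    for _ in range(paths):
        log_s, arith, log_sum = math.log(S0), 0.0, 0.0
        for _ in range(n):
            log_s += drift + vol * rnd.gauss(0.0, 1.0)
            arith += math.exp(log_s)
            log_sum += log_s
        pa = disc * max(arith / n - K, 0.0)             # arithmetic payoff
        pg = disc * max(math.exp(log_sum / n) - K, 0.0)  # geometric payoff
        est += pa - pg                                   # CV-corrected sample
    return est / paths + geo_asian_call(S0, K, r, sigma, T, n)

price = asian_call_cv(100.0, 100.0, 0.05, 0.2, 1.0, 50, 2000)
```

Because the two payoffs are highly correlated, the residual pa - pg has a far smaller variance than pa itself, which is why the abstract's control-variate run needs two orders of magnitude fewer simulations for a smaller standard error.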
Parallelization of a Monte Carlo particle transport simulation code
Hadjidoukas, P.; Bousis, C.; Emfietzoglou, D.
2010-05-01
We have developed a high-performance version of the Monte Carlo particle transport simulation code MC4. The original application code, developed in Visual Basic for Applications (VBA) for Microsoft Excel, was first rewritten in the C programming language to improve code portability. Several pseudo-random number generators have also been integrated and studied. The new MC4 version was then parallelized for shared- and distributed-memory multiprocessor systems using the Message Passing Interface. Two parallel pseudo-random number generator libraries (SPRNG and DCMT) have been seamlessly integrated. The performance speedup of parallel MC4 has been studied on a variety of parallel computing architectures, including an Intel Xeon server with 4 dual-core processors, a Sun cluster consisting of 16 nodes of 2 dual-core AMD Opteron processors, and a 200 dual-processor HP cluster. For large problem sizes, which are limited only by the physical memory of the multiprocessor server, the speedup results are almost linear on all systems. We have validated the parallel implementation against the serial VBA and C implementations using the same random number generator. Our experimental results on the transport and energy loss of electrons in a water medium show that the serial and parallel codes are equivalent in accuracy. The present improvements allow for the study of higher particle energies with the use of more accurate physical models, and improve statistics as more particle tracks can be simulated in a short response time.
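The independent-stream requirement that libraries such as SPRNG and DCMT address can be sketched with NumPy's SeedSequence spawning. This is a serial stand-in for the MPI ranks, and the exponential "energy-loss" model is purely illustrative:

```python
import numpy as np

def worker(seed_seq, n):
    """One simulated 'rank': estimate a mean from its own RNG stream.
    The exponential draw is a toy stand-in for energy-loss sampling."""
    rng = np.random.default_rng(seed_seq)
    return rng.exponential(scale=2.0, size=n).mean()

# SeedSequence.spawn yields statistically independent child streams,
# the role the parallel RNG libraries play in the MPI code.
root = np.random.SeedSequence(1234)
children = root.spawn(8)                              # one stream per rank
partials = [worker(ss, 50_000) for ss in children]    # serial stand-in loop
estimate = float(np.mean(partials))                   # reduce, as MPI would
```

With independent streams, the partial results can be averaged exactly as an MPI reduction would, without correlated samples biasing the combined estimate.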
Demand Response Programs Design and Use Considering Intensive Penetration of Distributed Generation
Faria, Pedro; Vale, Zita; Baptista, José
2015-01-01
Further improvements in demand response program implementation are needed in order to take full advantage of this resource, namely for participation in energy and reserve market products, requiring adequate aggregation and remuneration of small-size resources. The present paper focuses on SPIDER, a demand response simulator that has been improved in order to simulate demand response, including realistic power system simulation. For illustration of the simulator’s capabilities, the prese...
Trans-generational plasticity in response to immune challenge is constrained by heat stress
Roth, Olivia; Landis, Susanne H.
2017-01-01
Trans-generational plasticity is the adjustment of phenotypes to changing habitat conditions that persist longer than the individual lifetime. Fitness benefits (adaptive TGP) are expected upon matching parent-offspring environments. In a global change scenario, several performance-related environmental factors are changing simultaneously. This lowers the predictability of offspring environmental conditions, potentially hampering the benefits of trans-generational plasticity. For the first tim...
A Monte Carlo study on event-by-event transverse momentum fluctuation at RHIC
International Nuclear Information System (INIS)
Xu Mingmei
2005-01-01
The experimental observation on the multiplicity dependence of event-by-event transverse momentum fluctuation in relativistic heavy ion collisions is studied using Monte Carlo simulation. It is found that the Monte Carlo generator HIJING is unable to describe the experimental phenomenon well. A simple Monte Carlo model is proposed, which can recover the data and thus shed some light on the dynamical origin of the multiplicity dependence of event-by-event transverse momentum fluctuation. (authors)
Advanced Multilevel Monte Carlo Methods
Jasra, Ajay; Law, Kody; Suciu, Carina
2017-01-01
This article reviews the application of advanced Monte Carlo techniques in the context of Multilevel Monte Carlo (MLMC). MLMC is a strategy employed to compute expectations which can be biased in some sense, for instance, by using the discretization of an associated probability law. The MLMC approach works with a hierarchy of biased approximations which become progressively more accurate and more expensive. Using a telescoping representation of the most accurate approximation, the method is able to reduce the computational cost for a given level of error versus i.i.d. sampling from this latter approximation. All of these ideas originated for cases where exact sampling from couples in the hierarchy is possible. This article considers the case where such exact sampling is not currently possible. We consider Markov chain Monte Carlo and sequential Monte Carlo methods which have been introduced in the literature, and we describe different strategies which facilitate the application of MLMC within these methods.
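The telescoping idea can be sketched for a toy problem where exact coupling is available: Euler discretization of geometric Brownian motion, estimating E[S_T] = S0*exp(rT). The level structure and sample sizes below are illustrative, not a tuned MLMC schedule:

```python
import math
import random

def euler_pair(rnd, S0, r, sig, T, level, M=2):
    """One coupled fine/coarse Euler path pair for GBM; the coarse path
    reuses the summed fine-level Brownian increments (exact coupling)."""
    nf = M ** level
    hf = T / nf
    sf = sc = S0
    dw_c = 0.0
    for k in range(nf):
        dw = rnd.gauss(0.0, math.sqrt(hf))
        sf += r * sf * hf + sig * sf * dw
        dw_c += dw
        if (k + 1) % M == 0:               # advance coarse path every M steps
            sc += r * sc * (M * hf) + sig * sc * dw_c
            dw_c = 0.0
    return sf, sc

def mlmc_mean(S0=100.0, r=0.05, sig=0.2, T=1.0, L=4, N0=20000, seed=7):
    """MLMC estimate of E[S_T] via the telescoping sum
    E[P_0] + sum_l E[P_l - P_{l-1}], with per-level sample sizes halving."""
    rnd = random.Random(seed)
    total = 0.0
    for level in range(L + 1):
        N = max(N0 >> level, 100)          # crude geometric sample decay
        acc = 0.0
        for _ in range(N):
            sf, sc = euler_pair(rnd, S0, r, sig, T, level)
            acc += sf if level == 0 else sf - sc
        total += acc / N
    return total

est = mlmc_mean()                          # true value: 100 * exp(0.05)
```

The variance of the coupled differences shrinks with the step size, so most samples can be spent on the cheap coarse levels, which is the source of MLMC's cost reduction.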
TARC: Carlo Rubbia's Energy Amplifier
Laurent Guiraud
1997-01-01
Transmutation by Adiabatic Resonance Crossing (TARC) is Carlo Rubbia's energy amplifier. This CERN experiment demonstrated that long-lived fission fragments, such as 99Tc, can be efficiently destroyed.
Monte Carlo simulation for IRRMA
International Nuclear Information System (INIS)
Gardner, R.P.; Liu Lianyan
2000-01-01
Monte Carlo simulation is fast becoming a standard approach for many radiation applications that were previously treated almost entirely by experimental techniques. This is certainly true for Industrial Radiation and Radioisotope Measurement Applications - IRRMA. The reasons for this include: (1) the increased cost and inadequacy of experimentation for design and interpretation purposes; (2) the availability of low cost, large memory, and fast personal computers; and (3) the general availability of general purpose Monte Carlo codes that are increasingly user-friendly, efficient, and accurate. This paper discusses the history and present status of Monte Carlo simulation for IRRMA including the general purpose (GP) and specific purpose (SP) Monte Carlo codes and future needs - primarily from the experience of the authors
Generating Global Brand Equity through Corporate Social Responsibility to Key Stakeholders
Torres Lacomba, Anna; Atribo, Jo; Bijmolt, Tammo H.A.
2010-01-01
In this paper we argue that socially responsible policies have positive short-term and long-term impact on equity of global brands. We find that corporate social responsibility towards all stakeholders, whether primary (customers, shareholders, employees and suppliers) or secondary (community), have
Monte Carlo simulation for the transport beamline
Energy Technology Data Exchange (ETDEWEB)
Romano, F.; Cuttone, G.; Jia, S. B.; Varisano, A. [INFN, Laboratori Nazionali del Sud, Via Santa Sofia 62, Catania (Italy); Attili, A.; Marchetto, F.; Russo, G. [INFN, Sezione di Torino, Via P.Giuria, 1 10125 Torino (Italy); Cirrone, G. A. P.; Schillaci, F.; Scuderi, V. [INFN, Laboratori Nazionali del Sud, Via Santa Sofia 62, Catania, Italy and Institute of Physics Czech Academy of Science, ELI-Beamlines project, Na Slovance 2, Prague (Czech Republic); Carpinelli, M. [INFN Sezione di Cagliari, c/o Dipartimento di Fisica, Università di Cagliari, Cagliari (Italy); Tramontana, A. [INFN, Laboratori Nazionali del Sud, Via Santa Sofia 62, Catania, Italy and Università di Catania, Dipartimento di Fisica e Astronomia, Via S. Sofia 64, Catania (Italy)
2013-07-26
In the framework of the ELIMED project, Monte Carlo (MC) simulations are widely used to study the physical transport of charged particles generated by laser-target interactions and to preliminarily evaluate fluence and dose distributions. An energy selection system and the experimental setup for the TARANIS laser facility in Belfast (UK) have already been simulated with the GEANT4 (GEometry ANd Tracking) MC toolkit. Preliminary results are reported here. Future developments are planned to implement an MC-based 3D treatment planning system in order to optimize the number of shots and the dose delivery.
Trans-generational plasticity in response to immune challenge is constrained by heat stress.
Roth, Olivia; Landis, Susanne H
2017-06-01
Trans-generational plasticity (TGP) is the adjustment of phenotypes to changing habitat conditions that persists longer than the individual lifetime. Fitness benefits (adaptive TGP) are expected upon matching parent-offspring environments. In a global change scenario, several performance-related environmental factors are changing simultaneously. This lowers the predictability of offspring environmental conditions, potentially hampering the benefits of TGP. For the first time, we here explore how the combination of an abiotic and a biotic environmental factor in the parental generation plays out as a trans-generational effect in the offspring. We fully reciprocally exposed the parental generation of the pipefish Syngnathus typhle to an immune challenge and to elevated temperatures simulating a naturally occurring heatwave. Upon mating and male pregnancy, offspring were kept in ambient or elevated temperature regimes combined with a heat-killed bacterial epitope treatment. Differential gene expression (immune genes and DNA- and histone-modification genes) suggests that the combined change of an abiotic and a biotic factor in the parental generation had interactive effects on offspring performance, with the temperature effect dominating over the immune-challenge impact. The benefits of certain parental environmental conditions for offspring performance did not sum up when abiotic and biotic factors were changed simultaneously, supporting the view that the resources that can be allocated to phenotypic trans-generational effects are limited. Temperature is the master regulator of trans-generational phenotypic plasticity, which potentially implies a conflict in the allocation of resources towards several environmental factors. This calls for a reassessment of TGP as a short-term option to buffer environmental variation in the light of climate change.
Adjoint electron Monte Carlo calculations
International Nuclear Information System (INIS)
Jordan, T.M.
1986-01-01
Adjoint Monte Carlo is the most efficient method for accurate analysis of space systems exposed to natural and artificially enhanced electron environments. Recent adjoint calculations for isotropic electron environments include: comparative data for experimental measurements on electronics boxes; benchmark problem solutions for comparing total dose prediction methodologies; preliminary assessment of sectoring methods used during space system design; and total dose predictions on an electronics package. Adjoint Monte Carlo, forward Monte Carlo, and experiment are in excellent agreement for electron sources that simulate space environments. For electron space environments, adjoint Monte Carlo is clearly superior to forward Monte Carlo, requiring one to two orders of magnitude less computer time for relatively simple geometries. The solid-angle sectoring approximations used for routine design calculations can err by more than a factor of 2 on dose in simple shield geometries. For critical space systems exposed to severe electron environments, these potential sectoring errors demand the establishment of large design margins and/or verification of shield design by adjoint Monte Carlo/experiment
Monte Carlo theory and practice
International Nuclear Information System (INIS)
James, F.
1987-01-01
Historically, the first large-scale calculations to make use of the Monte Carlo method were studies of neutron scattering and absorption, random processes for which it is quite natural to employ random numbers. Such calculations, a subset of Monte Carlo calculations, are known as direct simulation, since the 'hypothetical population' of the narrower definition above corresponds directly to the real population being studied. The Monte Carlo method may be applied wherever it is possible to establish equivalence between the desired result and the expected behaviour of a stochastic system. The problem to be solved may already be of a probabilistic or statistical nature, in which case its Monte Carlo formulation will usually be a straightforward simulation, or it may be of a deterministic or analytic nature, in which case an appropriate Monte Carlo formulation may require some imagination and may appear contrived or artificial. In any case, the suitability of the method chosen will depend on its mathematical properties and not on its superficial resemblance to the problem to be solved. The authors show how Monte Carlo techniques may be compared with other methods of solution of the same physical problem
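A minimal example of direct simulation in the sense described above: neutron transmission through a purely absorbing slab, chosen because the expected result has the closed form exp(-Σt·x) against which the stochastic estimate can be checked.

```python
import math
import random

def transmit_fraction(sigma_t, thickness, n, seed=0):
    """Direct simulation of a purely absorbing slab: sample exponential
    free-flight lengths and count particles that cross the slab. The
    expectation of the estimate is exp(-sigma_t * thickness)."""
    rnd = random.Random(seed)
    hits = 0
    for _ in range(n):
        # free-flight length from the exponential attenuation law
        path = -math.log(1.0 - rnd.random()) / sigma_t
        if path > thickness:
            hits += 1
    return hits / n

mc = transmit_fraction(sigma_t=1.0, thickness=2.0, n=100_000)
exact = math.exp(-2.0)
```

Here the 'hypothetical population' of the definition coincides with the real particle population: each history is a faithful (if simplified) stochastic replay of the physical process.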
A microculture system for generating haemolytic antibody responses from human tonsillar lymphocytes.
Booth, R J
1979-01-01
Small numbers of Ficoll-Hypaque purified human tonsillar lymphocytes were stimulated with PWM to produce SRBC-specific PFC in a microculture system. The magnitude of the response varied among different tonsils but was typically between 200 and 1000 PFC/10(6) cells cultured. Little or no response was observed in the absence of PWM. SRBC failed to stimulate a SRBC-specific response and the presence of this antigen in PWM-stimulated cultures depressed the response. The time of the maximum response was inversely related to the number of cells cultured. In addition, the duration of the response was limited by rapid depletion of critical medium requirements and/or build up of inhibitory factors especially when the cell concentration exceeded 5 x 10(5) cells/culture. This effect could be partially overcome by daily feeding of cultures with fresh medium. Fractionation studies indicated a requirement for both T and B cell populations. Constant efficiency of PFC production with respect to cell number could be achieved by the addition of inactivated autologous 'filler' cells. The significance of these results and applicability of the microculture system to a detailed analysis of human antibody responses will be discussed.
International Nuclear Information System (INIS)
Falsafi, Hananeh; Zakariazadeh, Alireza; Jadid, Shahram
2014-01-01
This paper focuses on using DR (Demand Response) as a means to provide reserve in order to cover uncertainty in wind power forecasting in an SG (Smart Grid) environment. The proposed stochastic model schedules energy and reserves provided by both generating units and responsive loads in power systems with high penetration of wind power. The model is formulated as a two-stage stochastic program, where the first stage is associated with the electricity market, its rules and constraints, and the second stage is related to the actual operation of the power system and its physical limitations in each scenario. The discrete retail customer responses to incentive-based DR programs are aggregated by DRPs (Demand Response Providers) and are submitted as a load-change price-and-amount offer package to the ISO (Independent System Operator). Also, price-based DR program behavior and the random nature of wind power are modeled by the price elasticity concept of demand and the normal probability distribution function, respectively. In the proposed model, DRPs can participate in the energy market as well as the reserve market and submit their offers to the wholesale electricity market. This approach is implemented on a modified IEEE 30-bus test system over a daily time horizon. The simulation results are analyzed in six different case studies. The cost, emission and multiobjective functions are optimized in cases both with and without DR. The multiobjective generation scheduling model is solved using the augmented epsilon constraint method, and the best solution can be chosen by the Entropy and TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) methods. The results indicate that demand side participation in energy and reserve scheduling reduces the total operation costs and emissions. - Highlights: • Simultaneous participation of loads in both energy and reserve scheduling. • Environmental/economical scheduling of energy and reserve. • Using demand response for covering wind generation forecast
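The two-stage structure can be sketched in miniature. The snippet below is a toy scenario-based scheduling problem with invented loads, wind scenarios, and costs (not the paper's market model or its solver): a first-stage day-ahead schedule is chosen to minimize expected cost over second-stage balancing actions, with demand response standing in for the reserve product.

```python
# Toy two-stage stochastic scheduling sketch. All names and numbers are
# invented for illustration; the paper solves a far larger model.
LOAD = 100.0                                          # MW to serve
SCENARIOS = [(0.3, 10.0), (0.5, 30.0), (0.2, 50.0)]   # (probability, wind MW)
C_GEN, C_RESERVE, C_SPILL = 20.0, 45.0, 5.0           # $/MWh

def expected_cost(g):
    """First stage: commit g MW day-ahead. Second stage: per scenario,
    cover any shortfall with reserve (e.g. DR) or spill any surplus."""
    cost = C_GEN * g
    for prob, wind in SCENARIOS:
        imbalance = LOAD - wind - g
        if imbalance > 0:
            cost += prob * C_RESERVE * imbalance   # shortfall -> reserve
        else:
            cost += prob * C_SPILL * -imbalance    # surplus -> spill penalty
    return cost

# Brute-force the (one-dimensional) first-stage decision on a 0.1 MW grid.
best_g = min((g / 10 for g in range(0, 1001)), key=expected_cost)
```

In a real model the first-stage search is an optimization over many units and DR offers, but the "here-and-now decision, then recourse per scenario" pattern is the same.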
Alternative implementations of the Monte Carlo power method
International Nuclear Information System (INIS)
Blomquist, R.N.; Gelbard, E.M.
2002-01-01
We compare nominal efficiencies, i.e. variances in power shapes for equal running time, of different versions of the Monte Carlo eigenvalue computation, as applied to criticality safety analysis calculations. The two main methods considered here are 'conventional' Monte Carlo and the superhistory method, and both are used in criticality safety codes. Within each of these major methods, different variants are available for the main steps of the basic Monte Carlo algorithm. Thus, for example, different treatments of the fission process may vary in the extent to which they follow, in analog fashion, the details of real-world fission, or may vary in details of the methods by which they choose next-generation source sites. In general the same options are available in both the superhistory method and conventional Monte Carlo, but there seems not to have been much examination of the special properties of the two major methods and their minor variants. We find, first, that the superhistory method is just as efficient as conventional Monte Carlo and, secondly, that use of different variants of the basic algorithms may, in special cases, have a surprisingly large effect on Monte Carlo computational efficiency.
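The conventional Monte Carlo power method being compared can be illustrated with a toy two-region fission-matrix iteration: each generation, offspring are sampled per source neutron and next-generation source sites are chosen from the offspring bank. The matrix entries, bank size, and generation counts below are invented, and real criticality codes track transport in continuous phase space rather than a 2x2 matrix:

```python
import random

# F[j][i]: expected next-generation fission neutrons born in region j per
# source neutron in region i (invented numbers; dominant eigenvalue ~1.337).
F = [[1.2, 0.3],
     [0.2, 0.9]]

def run_generations(n_source, n_gens, rng):
    source = [0] * (n_source // 2) + [1] * (n_source - n_source // 2)
    k_estimates = []
    for _ in range(n_gens):
        next_gen = []
        for region in source:
            for j in (0, 1):
                # Integer offspring count with mean F[j][region]:
                # floor plus a Bernoulli trial on the fractional part.
                mean = F[j][region]
                n = int(mean) + (1 if rng.random() < mean - int(mean) else 0)
                next_gen.extend([j] * n)
        k_estimates.append(len(next_gen) / len(source))
        # Choose next-generation source sites; renormalize bank size.
        source = [rng.choice(next_gen) for _ in range(n_source)]
    return k_estimates

rng = random.Random(1)
ks = run_generations(n_source=2000, n_gens=30, rng=rng)
k_eff = sum(ks[10:]) / len(ks[10:])   # average over active generations only
```

The "choose next-generation source sites" step is precisely where the variants discussed in the abstract (and the superhistory method, which runs several generations between renormalizations) differ.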
Generating cryptographic keys by radioactive decays
International Nuclear Information System (INIS)
Grupen, Claus; Maurer, Ingo; Schmidt, Dieter; Smolik, Ludek
2001-01-01
We present a new method for the generation of a statistically genuine random bitstream at very high frequency which can be employed for cryptographic purposes. The method uses the statistical unpredictability of radioactive decays as the source of randomness. The measured quantity is the time distance between the responses of a small ionisation chamber due to the recording of ionising decay products. This time measurement is converted into states representing 0 or 1. The data generated in our experiment successfully passed the FIPS PUB 140-1 and Diehard statistical tests. For the simulation of systematic effects, Monte Carlo techniques were used.
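One standard way to turn decay timing into bits is to compare the lengths of successive inter-arrival intervals; for an exponential (memoryless) source each comparison is a fair coin. The sketch below simulates this idea; it is an assumption-laden stand-in, since the experiment's actual converter circuit and bias handling are not described here, and `expovariate` replaces measured ionisation-chamber time distances:

```python
import random

def decay_bits(n_bits, rate, rng):
    """Compare pairs of simulated decay inter-arrival times: emit 0 if the
    first interval is shorter, 1 if longer, and discard exact ties. Real
    hardware would supply measured time distances instead of expovariate()."""
    bits = []
    while len(bits) < n_bits:
        t1 = rng.expovariate(rate)
        t2 = rng.expovariate(rate)
        if t1 != t2:
            bits.append(0 if t1 < t2 else 1)
    return bits

bits = decay_bits(10_000, rate=1e4, rng=random.Random(7))
mean = sum(bits) / len(bits)   # should hover near 0.5 for an unbiased stream
```

Comparing intervals rather than thresholding them is attractive because it cancels a constant dead-time offset, much as von Neumann debiasing cancels a biased coin.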
Directory of Open Access Journals (Sweden)
Hettne Kristina M
2013-01-01
Full Text Available Abstract Background Availability of chemical response-specific lists of genes (gene sets) for pharmacological and/or toxic effect prediction for compounds is limited. We hypothesize that more gene sets can be created by next-generation text mining (next-gen TM), and that these can be used with gene set analysis (GSA) methods for chemical treatment identification, for pharmacological mechanism elucidation, and for comparing compound toxicity profiles. Methods We created 30,211 chemical response-specific gene sets for human and mouse by next-gen TM, and derived 1,189 (human) and 588 (mouse) gene sets from the Comparative Toxicogenomics Database (CTD). We tested for significant differential expression (SDE) (false discovery rate-corrected p-values. Results Next-gen TM-derived gene sets matching the chemical treatment were significantly altered in three GE data sets, and the corresponding CTD-derived gene sets were significantly altered in five GE data sets. Six next-gen TM-derived and four CTD-derived fibrate gene sets were significantly altered in the PPARA knock-out GE dataset. None of the fibrate signatures in cMap scored significant against the PPARA GE signature. 33 environmental toxicant gene sets were significantly altered in the triazole GE data sets. 21 of these toxicants had a similar toxicity pattern as the triazoles. We confirmed embryotoxic effects, and discriminated triazoles from other chemicals. Conclusions Gene set analysis with next-gen TM-derived chemical response-specific gene sets is a scalable method for identifying similarities in gene responses to other chemicals, from which one may infer potential mode of action and/or toxic effect.
Directory of Open Access Journals (Sweden)
Tássio F. L. Matos
2007-12-01
Full Text Available Post-consumer polymeric wastes (plastic packaging) stand out among household solid wastes: their share of urban refuse is growing, and characteristics such as slow degradation and high volume shorten the useful life of sanitary landfills. They also hold economic potential for re-use and recycling. This work presents the composition, as percentage by mass, of the polymeric wastes of the municipality of São Carlos, SP. The composition was determined by sampling in the regular collection and from the total collected mass in the selective collection. In the regular collection, sampling covered all fifteen sectors, with the sample mass obtained by quartering; one characterization was carried out in winter and another in summer. In the regular collection, polymeric wastes accounted for 10.47% by mass, with PET the most abundant resin (35.96%) and food packaging the preferential use (56.42%); in the selective collection, polymeric wastes accounted for 20.60%, again with PET the most abundant resin (50.64%) and food packaging the preferential use (66.06%). The results show a growing trend in PET waste and the preferential use of thermoplastic polymers in food packaging.
International Nuclear Information System (INIS)
Aghajani, G.R.; Shayanfar, H.A.; Shayeghi, H.
2015-01-01
Highlights: • Using DRPs to cover the uncertainties resulting from power generation by WT and PV. • Proposing the use of price-offer packages and amounts of DR to implement DRPs. • Considering a multi-objective scheduling model and use of the MOPSO algorithm. - Abstract: In this paper, a multi-objective energy management system is proposed in order to optimize micro-grid (MG) performance in the short term in the presence of Renewable Energy Sources (RESs) for wind and solar energy generation with randomized natural behavior. Different types of customers, including residential, commercial, and industrial consumers, can participate in demand response programs by declaring their interruptible/curtailable demand rate or selecting from among different proposed prices, so as to assist the central micro-grid control in optimizing micro-grid operation and covering energy generation uncertainty from the renewable sources. To implement Demand Response (DR) schedules, incentive-based payment in the form of offered packages of price and DR quantity collected by Demand Response Providers (DRPs) is used. In the typical micro-grid, different technologies including Wind Turbine (WT), PhotoVoltaic (PV) cell, Micro-Turbine (MT), Fuel Cell (FC), battery hybrid power source and responsive loads are used. The simulation results are considered in six different cases in order to optimize operation cost and emission with/without DR. Considering the complexity and non-linearity of the proposed problem, Multi-Objective Particle Swarm Optimization (MOPSO) is utilized. Also, a fuzzy-based mechanism and non-linear sorting system are applied to determine the best compromise from the Pareto-front set of solutions. The numerical results demonstrate the effect of the proposed Demand Side Management (DSM) scheduling model on reducing the effect of uncertainty in the power generated by the WT and PV in a MG.
International Nuclear Information System (INIS)
Hobbs, B.F.; Rijkers, F.A.M.
2004-05-01
The conjectured supply function (CSF) model calculates an oligopolistic equilibrium among competing generating companies (GenCos), presuming that GenCos anticipate that rival firms will react to price increases by expanding their sales at an assumed rate. The CSF model is generalized here to include each generator's conjectures concerning how the price of transmission services (point-to-point service and constrained interfaces) will be affected by the amount of those services that the generator demands. This generalization reflects the market reality that large producers will anticipate that they can favorably affect transmission prices by their actions. The model simulates oligopolistic competition among generators while simultaneously representing a mixed transmission pricing system. This mixed system includes fixed transmission tariffs, congestion-based pricing of physical transmission constraints (represented as a linearized dc load flow), and auctions of interface capacity in a path-based pricing system. Pricing inefficiencies, such as export fees and no credit for counterflows, can be simulated. The model is formulated as a linear mixed complementarity problem, which enables very large market models to be solved. In the second paper of this two-paper series, the capabilities of the model are illustrated with an application to northwest Europe, where transmission pricing is based on such a mixture of approaches
Evidence of multipolar response of Bacteriorhodopsin by noncollinear second harmonic generation.
Bovino, F A; Larciprete, M C; Sibilia, C; Váró, G; Gergely, C
2012-06-18
Noncollinear second harmonic generation from a Bacteriorhodopsin (BR) oriented multilayer film was systematically investigated by varying the polarization state of both fundamental beams. Both experimental results and theoretical simulations show that the resulting polarization mapping is a useful tool to reveal the optical chirality of the investigated film as well as the corresponding multipolar contributions to the nonlinear response.
Responsive Social Agents: Feedback-Sensitive Behavior Generation for Social Interactions
Vroon, Jered Hendrik; Englebienne, Gwenn; Evers, Vanessa; Agah, Arvin; Cabibihan, John-John; Howard, Ayanna M.; Salichs, Miguel A.; He, Hongsheng
2016-01-01
How can we generate appropriate behavior for social artificial agents? A common approach is to (1) establish with controlled experiments which action is most appropriate in which setting, and (2) select actions based on this knowledge and an estimate of the setting. This approach faces challenges,
Amyloid-β secretion, generation, and lysosomal sequestration in response to proteasome inhibition
DEFF Research Database (Denmark)
Agholme, Lotta; Hallbeck, Martin; Benedikz, Eirikur
2012-01-01
, as the autophagosome has been suggested as a site of amyloid-β (Aβ) generation. In this study, we investigated the effect of proteasome inhibition on Aβ accumulation and secretion, as well as the processing of amyloid-β protein precursor (AβPP) in AβPP(Swe) transfected SH-SY5Y neuroblastoma cells. We show...
Trans-generational responses to low pH depend on parental gender in a calcifying tubeworm.
Lane, Ackley; Campanati, Camilla; Dupont, Sam; Thiyagarajan, Vengatesen
2015-06-03
The uptake of anthropogenic CO2 emissions by oceans has started decreasing pH and carbonate ion concentrations of seawater, a process called ocean acidification (OA). Occurring over centuries and many generations, evolutionary adaptation and epigenetic transfer will change species responses to OA over time. Trans-generational responses, via genetic selection or trans-generational phenotypic plasticity, differ depending on species and exposure time as well as differences between individuals such as gender. Males and females differ in reproductive investment and egg producing females may have less energy available for OA stress responses. By crossing eggs and sperm from the calcareous tubeworm Hydroides elegans (Haswell, 1883) raised in ambient (8.1) and low (7.8) pH environments, we observed that paternal and maternal low pH experience had opposite and additive effects on offspring. For example, when compared to offspring with both parents from ambient pH, growth rates of offspring of fathers or mothers raised in low pH were higher or lower respectively, but there was no difference when both parents were from low pH. Gender differences may result in different selection pressures for each gender. This may result in overestimates of species tolerance and missed opportunities of potentially insightful comparisons between individuals of the same species.
International Nuclear Information System (INIS)
Fitch, F.W.; Engers, H.D.; Cerottini, J.C.; Bruner, K.T.
1976-01-01
Irradiated cells obtained from MLC at the peak of the CTL response caused profound suppression of generation of CTL when added in small numbers at the initiation of primary MLC prepared with normal spleen cells. The inhibitory activity of the MLC cells was not affected by irradiation (1000 rads) but was abolished by treatment with anti-theta serum and complement. The suppression was immunologically specific. The response of A (H-2/sup a/) spleen cells toward C3H (H-2/sup k/) alloantigens was suppressed by irradiated MLC cells obtained from MLC prepared with A spleen cells and irradiated C3H-stimulating cells, whereas the response of A spleen cells toward DBA/2 (H-2/sup d/) alloantigens was affected relatively little. However, if irradiated C3H x DBA/2F1 hybrid spleen cells were used to stimulate A spleen cells in MLC, addition of irradiated MLC cells having cytotoxic activity toward C3H antigens abolished the response to both C3H and DBA/2 antigens. The response to DBA/2 antigens was much less affected when a mixture of irradiated C3H and DBA/2 spleen cells was used as stimulating cells. Thus, the presence of MLC cells having cytotoxic activity toward one alloantigen abolished the response to another non-cross-reacting antigen only when both antigens were present on the same F1 hybrid-stimulating cells. This suppression of generation of CTL by irradiated MLC cells apparently involves inactivation of alloantigen-bearing stimulating cells as a result of residual cytotoxic activity of the irradiated MLC cells. This mechanism may be active during the decline in CTL activity noted in the normal immune response in vivo and in vitro
Monte Carlo simulation of a gas-sampled hadron calorimeter
Energy Technology Data Exchange (ETDEWEB)
Chang, C Y; Kunori, S; Rapp, P; Talaga, R; Steinberg, P; Tylka, A J; Wang, Z M
1988-02-15
A prototype of the OPAL barrel hadron calorimeter, which is a gas-sampled calorimeter using plastic streamer tubes, was exposed to pions at energies between 1 and 7 GeV. The response of the detector was simulated using the CERN GEANT3 Monte Carlo program. By using the observed high energy muon signals to deduce details of the streamer formation, the Monte Carlo program was able to reproduce the observed calorimeter response. The behavior of the hadron calorimeter when placed behind a lead glass electromagnetic calorimeter was also investigated.
Competitive inhibition can linearize dose-response and generate a linear rectifier.
Savir, Yonatan; Tu, Benjamin P; Springer, Michael
2015-09-23
Many biological responses require a dynamic range that is larger than standard bi-molecular interactions allow, yet also the ability to remain off at low input. Here we mathematically show that an enzyme reaction system involving a combination of competitive inhibition, conservation of the total level of substrate and inhibitor, and positive feedback can behave like a linear rectifier, that is, a network motif with an input-output relationship that is linearly sensitive to substrate above a threshold but unresponsive below the threshold. We propose that the evolutionarily conserved yeast SAGA histone acetylation complex may possess the proper physiological response characteristics and molecular interactions needed to perform as a linear rectifier, and we suggest potential experiments to test this hypothesis. One implication of this work is that linear responses and linear rectifiers might be easier to evolve or synthetically construct than is currently appreciated.
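The threshold-linear behaviour that competitive binding with conserved totals can produce is easy to reproduce numerically. The sketch below uses a generic tight-binding titration formula with invented concentrations, not the SAGA-specific model from the paper: when the dissociation constant is small relative to the totals, the free substrate is approximately max(0, S_tot − I_tot), i.e. a linear rectifier with threshold I_tot.

```python
import math

def free_substrate(s_tot, i_tot, kd):
    """Exact free-substrate concentration when substrate and a stoichiometric
    inhibitor bind tightly with dissociation constant kd (quadratic solution
    of the binding equilibrium with conserved totals)."""
    b = s_tot - i_tot - kd
    return 0.5 * (b + math.sqrt(b * b + 4.0 * kd * s_tot))

# Tight binding (kd much smaller than the totals) gives the rectifier:
# output ~0 below the inhibitor threshold, linear in s_tot above it.
i_tot, kd = 10.0, 0.01
response = [free_substrate(s, i_tot, kd) for s in (2.0, 8.0, 12.0, 20.0)]
```

Inputs of 12.0 and 20.0 sit above the threshold of 10.0 and yield free substrate close to 2.0 and 10.0 respectively, while inputs below the threshold stay near zero.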
Investigation of Response of Several Neutron Surveymeters by a DT Neutron Generator
International Nuclear Information System (INIS)
Kim, Sang In; Jang, In Su; Kim, Jang Lyul; Lee, Jung IL; Kim, Bong Hwan
2012-01-01
Several neutron measuring devices were tested in neutron fields characterized by two distinct spectral components, thermal and fast. These neutron fields were constructed by mixing thermal and fast neutron fields. The thermal neutron field was constructed using a graphite pile with eight AmBe neutron sources. The fast neutron field of 14 MeV was produced by a DT neutron generator. In order to change the fraction of the fast neutron fluence rate in each neutron field, the neutron generator was placed in the thermal neutron field at 50 cm and 150 cm from the reference position. A polyethylene neutron collimator was used to produce a moderated 14 MeV neutron field. The neutron spectra were measured using a Bonner sphere system with an LiI scintillator, and the dosimetric quantities delivered to the neutron surveymeters were determined from these measurement results.
International Nuclear Information System (INIS)
Silva, J.O. da; Magalhaes, C.M.S. de; Santos, L.A.P. dos
2007-01-01
Commercial bipolar phototransistors have been used as detectors for low energy X-rays. However, when they are used in high energy X-ray beams, there is a certain loss of sensitivity to the ionizing radiation. This damage is cumulative and irreversible. Several factors produce variations in the phototransistor response under high energy radiation, such as its fabrication technology and its electrical characteristics. The aim of this work is to present experimental results that correlate the response curve of SMT (Surface-Mount Technology) bipolar phototransistors with their loss of sensitivity after irradiation by megavoltage beams from a linac (linear accelerator). (author)
The facilitation of wind generation in Ireland's electricity market using demand response.
Finn, Patrick M.
2011-01-01
peer-reviewed As part of a European Union climate change and energy package that aims to reduce greenhouse gases by 20%, reach 20% penetration of renewable energy, and improve energy efficiency by 20% by 2020, Ireland has committed to generating 40% of its electricity using indigenous renewable sources, primarily wind, by 2020. As wind is an intermittent energy source, a key challenge will be to increase the flexibility of the electricity system in order to maximise yields from th...
DEFF Research Database (Denmark)
Fahnøe, Ulrik; Orton, Richard; Höper, Dirk
Next Generation Sequencing (NGS) has rapidly become the preferred technology in nucleotide sequencing, and can be applied to unravel molecular adaptation of RNA viruses such as Classical Swine Fever Virus (CSFV). However, the detection of low frequency variants within viral populations by NGS...... is affected by errors introduced during sample preparation and sequencing, and so far no definitive solution to this problem has been presented....
Syed, Faisal M; Khan, Masood A; Nasti, Tahseen H; Ahmad, Nadeem; Mohammad, Owais
2003-06-02
In a previous study, we demonstrated the potential of Escherichia coli (E. coli) lipid liposomes (escheriosomes) to undergo membrane-membrane fusion with the cytoplasmic membrane of target cells, including professional antigen presenting cells. Our present study demonstrates that antigen encapsulated in escheriosomes could be successfully delivered simultaneously to the cytosolic as well as endosomal processing pathways of antigen presenting cells, leading to the generation of both CD4(+) T-helper and CD8(+) cytotoxic T cell responses. In contrast, encapsulation of the same antigen in egg phosphatidyl-choline (egg PC) liposomes, just like antigen-incomplete Freund's adjuvant (IFA) complex, has inefficient access to the cytosolic pathway of MHC I-dependent antigen presentation and failed to generate an antigen-specific CD8(+) cytotoxic T cell response. However, both egg PC liposome- as well as escheriosome-encapsulated antigen elicited strong humoral immune responses in immunized animals, but the antibody titre was significantly higher in the group of animals immunized with escheriosome-encapsulated antigen. These results imply usage of liposome-based adjuvant as a potential candidate vaccine capable of eliciting both cell-mediated as well as humoral immune responses. Furthermore, antigen entrapped in escheriosomes stimulates antigen-specific CD4(+) T cell proliferation and also enhances the levels of IL-2, IFN-gamma and IL-4 in the immunized animals.
Directory of Open Access Journals (Sweden)
Janet C Lindow
Full Text Available The four dengue virus serotypes (DENV-1-DENV-4) have a large impact on global health, causing 50-100 million cases of dengue fever annually. Herein, we describe the first kinetic T cell response to a low-dose DENV-1 vaccination study (10 PFU) in humans. Using flow cytometry, we found that proinflammatory cytokines, IFNγ, TNFα, and IL-2, were generated by DENV-1-specific CD4(+) cells 21 days post-DENV-1 exposure, and their production continued through the latest time-point, day 42 (p<0.0001 for all cytokines). No statistically significant changes were observed at any time-points for IL-10 (p = 0.19), a regulatory cytokine, indicating that the response to DENV-1 was primarily proinflammatory in nature. We also observed little T cell cross-reactivity to the other 3 DENV serotypes. The percentage of multifunctional T cells (T cells making ≥ 2 cytokines simultaneously) increased with time post-DENV-1 exposure (p<0.0001). The presence of multifunctional T cells together with neutralizing antibody data suggest that the immune response generated to the vaccine may be protective. This work provides an initial framework for defining primary T cell responses to each DENV serotype and will enhance the evaluation of a tetravalent DENV vaccine.
Pseudo-Random Number Generators
Howell, L. W.; Rheinfurth, M. H.
1984-01-01
Package features a comprehensive selection of probabilistic distributions. Monte Carlo simulations are resorted to whenever the systems studied are not amenable to deterministic analyses or when direct experimentation is not feasible. Random numbers having specified distribution characteristics are an integral part of such simulations. Package consists of a collection of "pseudorandom" number generators for use in Monte Carlo simulations.
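A minimal example of what such a package provides is inverse-transform sampling, which maps uniform pseudorandom numbers through an inverse CDF to obtain a specified distribution. This is an illustrative sketch of the standard recipe, not the package's actual code:

```python
import math
import random

def sample_exponential(rate, rng):
    """Inverse-transform sampling: if U ~ Uniform(0,1), then
    -ln(1-U)/rate follows an exponential distribution with the given rate."""
    u = rng.random()
    return -math.log(1.0 - u) / rate

rng = random.Random(0)
samples = [sample_exponential(2.0, rng) for _ in range(100_000)]
mean = sum(samples) / len(samples)   # should approach 1/rate = 0.5
```

The same pattern (uniform draw, then a distribution-specific transform) underlies most of the named distributions such a collection exposes; distributions without a closed-form inverse CDF typically use rejection or composition methods instead.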
Monte Carlo simulation of virtual compton scattering at MAMI
International Nuclear Information System (INIS)
D'Hose, N.; Ducret, J.E.; Gousset, TH.; Guichon, P.A.M.; Kerhoas, S.; Lhuillier, D.; Marchand, C.; Marchand, D.; Martino, J.; Mougey, J.; Roche, J.; Vanderhaeghen, M.; Vernin, P.; Bohm, H.; Distler, M.; Edelhoff, R.; Friedrich, J.M.; Geiges, R.; Jennewein, P.; Kahrau, M.; Korn, M.; Kramer, H.; Krygier, K.W.; Kunde, V.; Liesenfeld, A.; Merkel, H.; Merle, K.; Neuhausen, R.; Pospischil, TH.; Rosner, G.; Sauer, P.; Schmieden, H.; Schardt, S.; Tamas, G.; Wagner, A.; Walcher, TH.; Wolf, S.; Hyde-Wright, CH.; Boeglin, W.U.; Van de Wiele, J.
1996-01-01
The Monte Carlo simulation developed specially for the VCS experiments taking place at MAMI is fully described. This simulation can generate events according to the Bethe-Heitler + Born cross section behaviour and takes into account resolution-deteriorating effects. It is used to determine solid angles for the various experimental settings. (authors)
The Hybrid Monte Carlo (HMC) method and dynamic fermions
International Nuclear Information System (INIS)
Amaral, Marcia G. do
1994-01-01
Although the Monte Carlo method has been extensively used in the simulation of many types of theories, successful application has been established only for models containing boson fields. With the present computer generation, the development of faster and more efficient algorithms has become necessary and urgent. This paper studies the HMC method and dynamic fermions.
Reservoir Modeling Combining Geostatistics with Markov Chain Monte Carlo Inversion
DEFF Research Database (Denmark)
Zunino, Andrea; Lange, Katrine; Melnikova, Yulia
2014-01-01
We present a study on the inversion of seismic reflection data generated from a synthetic reservoir model. Our aim is to invert directly for rock facies and porosity of the target reservoir zone. We solve this inverse problem using a Markov chain Monte Carlo (McMC) method to handle the nonlinear...
Generating global brand equity through corporate social responsibility to key stakeholders
Torres, Anna; Bijmolt, Tammo H. A.; Tribo, Josep A.; Verhoef, Peter
In this paper, we argue that corporate social responsibility (CSR) to various stakeholders (customers, shareholders, employees, suppliers, and community) has a positive effect on global brand equity (BE). In addition, policies aimed at satisfying community interests help reinforce the credibility of
Coleman, Simon; Azouz, Aymen Ben; Schiphorst, Jeroen Ter; Saez, Janire; Whyte, Jeffrey; McCluskey, Peter; Kent, Nigel; Benito-Lopez, Fernando; Schenning, Albert; Diamond, Dermot
2016-01-01
The requirement of significant off-chip fluid manipulation using high-cost mechanical components has resulted in design limitations in microfluidic devices. We report the use of novel stimuli responsive polymer gel materials for a variety of bio-inspired processes to achieve in-situ microfluidic
Flexibility dynamics in clusters of residential demand response and distributed generation
MacDougall, P.A.; Kok, J.K.; Warmer, C.; Roossien, B.
2013-01-01
Supply and demand response is an untapped resource in the current electrical system. However, little work has been done to investigate the dynamics of utilizing such flexibility, as well as the potential effects it could have on the infrastructure. This paper provides a starting point to seeing the
An emergency response intercomparison exercise using a synthetically generated gamma-ray spectrum
DEFF Research Database (Denmark)
Dowdall, M.; Selnæs, O.G.; Standring, W.J.F.
2010-01-01
Although high resolution gamma ray spectrometry serves as the primary analytical technique in emergency response situations, chances for laboratories to practice analysing the type of spectra that may be expected in the early phase of such a situation are limited. This problem is more acute for l...
Hot pixel generation in active pixel sensors: dosimetric and micro-dosimetric response
Scheick, Leif; Novak, Frank
2003-01-01
The dosimetric response of an active pixel sensor is analyzed. Heavy ions are seen to damage the pixel in much the same way as gamma radiation. The probability of a hot pixel is seen to exhibit behavior that is not typical of other microdose effects.
Fernández-Toro, María; Furnborough, Concha
2014-01-01
Despite the potential benefits of assignment feedback, learners often fail to use it effectively. This study examines the ways in which adult distance learners engage with written feedback on one of their assignments. Participants were 10 undergraduates studying Spanish at the Open University, UK. Their responses to feedback were elicited by means…
Response of pressurized water reactor (PWR) to network power generation demands
International Nuclear Information System (INIS)
Schreiner, L.A.
1991-01-01
The flexibility of the PWR type reactor, in terms of response to variations in network power demands, is demonstrated. The factors that affect transient flexibility and some design prospects that allow the reactor to meet the requirements of network power demands are also discussed. (M.J.A.)
Multilevel sequential Monte Carlo samplers
Beskos, Alexandros; Jasra, Ajay; Law, Kody; Tempone, Raul; Zhou, Yan
2016-01-01
In this article we consider the approximation of expectations w.r.t. probability distributions associated to the solution of partial differential equations (PDEs); this scenario appears routinely in Bayesian inverse problems. In practice, one often has to solve the associated PDE numerically, using, for instance, finite element methods which depend on the step-size level hL. In addition, the expectation cannot be computed analytically and one often resorts to Monte Carlo methods. In the context of this problem, it is known that the introduction of the multilevel Monte Carlo (MLMC) method can reduce the amount of computational effort to estimate expectations, for a given level of error. This is achieved via a telescoping identity associated to a Monte Carlo approximation of a sequence of probability distributions with discretization levels ∞ > h0 > h1 > ⋯ > hL. In many practical problems of interest, one cannot achieve an i.i.d. sampling of the associated sequence and a sequential Monte Carlo (SMC) version of the MLMC method is introduced to deal with this problem. It is shown that under appropriate assumptions, the attractive property of a reduction of the amount of computational effort to estimate expectations, for a given level of error, can be maintained within the SMC context, that is, relative to exact sampling and Monte Carlo for the distribution at the finest level hL. The approach is numerically illustrated on a Bayesian inverse problem. © 2016 Elsevier B.V.
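The telescoping identity at the heart of MLMC can be sketched on a toy problem. Here the "PDE solve at step size h_l" is replaced by evaluating x² on a grid of spacing 2^(-l), and coupled samples (the same random input at both levels) drive each correction term. All numbers are invented for illustration, and the paper's SMC machinery is not included:

```python
import random

def f_level(x, level):
    """Level-l approximation: x**2 evaluated on a grid of spacing 2**-level,
    a cheap stand-in for a numerical solve at step size h_l."""
    h = 2.0 ** -level
    return (round(x / h) * h) ** 2

def mlmc_estimate(max_level, n_per_level, rng):
    # Coarsest level: plain Monte Carlo estimate of E[f_0].
    est = sum(f_level(rng.random(), 0)
              for _ in range(n_per_level[0])) / n_per_level[0]
    # Telescoping corrections E[f_l - f_{l-1}] with coupled samples.
    for lvl in range(1, max_level + 1):
        n = n_per_level[lvl]
        diff = 0.0
        for _ in range(n):
            x = rng.random()                   # same x drives both levels
            diff += f_level(x, lvl) - f_level(x, lvl - 1)
        est += diff / n
    return est

rng = random.Random(3)
# Many cheap samples at coarse levels, few at fine levels where each
# "solve" would be expensive; the corrections shrink with level.
est = mlmc_estimate(max_level=5,
                    n_per_level=[200_000, 50_000, 12_000, 3_000, 800, 200],
                    rng=rng)
# True value: E[X^2] = 1/3 for X ~ Uniform(0, 1).
```

The point of the telescoping sum is that the variance of each correction term decays with level, so accuracy comparable to sampling entirely at the finest level is obtained at a fraction of the cost.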
Combustion technology developments in power generation in response to environmental challenges
Energy Technology Data Exchange (ETDEWEB)
Beer, J.M. [Massachusetts Inst. of Technology, Dept. of Chemical Engineering, Cambridge, MA (United States)]
2000-07-01
Combustion system development in power generation is discussed ranging from the pre-environmental era, in which the objectives were complete combustion with a minimum of excess air and the capability of scale-up to increased boiler unit performances, through the environmental era (1970-), in which reduction of combustion-generated pollution was gaining increasing importance, to the present and near future, in which a combination of clean combustion and high thermodynamic efficiency is considered to be necessary to satisfy demands for CO{sub 2} emissions mitigation. From the 1970s on, attention has increasingly turned towards emission control technologies for the reduction of oxides of nitrogen and sulfur, the so-called acid rain precursors. By a better understanding of the NO{sub x} formation and destruction mechanisms in flames, it has become possible to significantly reduce their emissions via combustion process modifications, e.g. by maintaining sequentially fuel-rich and fuel-lean combustion zones in a burner flame or in the combustion chamber, or by injecting a hydrocarbon-rich fuel into the NO{sub x} bearing combustion products of a primary fuel such as coal. Sulfur capture in the combustion process proved to be more difficult because calcium sulfate, the reaction product of SO{sub 2} and additive lime, is unstable at the high temperature of pulverised coal combustion. It is possible to retain sulfur by the application of fluidised combustion in which coal burns at much reduced combustion temperatures. Fluidised bed combustion is, however, primarily intended for the utilisation of low grade, low volatile coals in smaller capacity units, which leaves the task of sulfur capture for the majority of coal fired boilers to flue gas desulfurisation. During the last decade, several new factors emerged which influenced the development of combustion for power generation. CO{sub 2} emission control is gaining increasing acceptance as a result of the international
Tool for generation of seismic floor response spectra for secondary system design
International Nuclear Information System (INIS)
Cardoso, Tarcisio F.; Almeida, Andreia A. Diniz de
2009-01-01
The spectral analysis is still a valuable method for seismic structure design, especially when one focuses on secondary systems in large industrial installations such as nuclear power plants. Two aspects of this situation recommend the use of this kind of analysis: the random character of the excitation, and the multiplicity and variability of the secondary systems. The first aspect can be managed if one assumes the site seismicity to be represented by a power spectral density function of the ground acceleration and then, through the systematic solution of a first passage problem, develops a uniformly probable response spectrum. The second also suggests a probabilistic approach to the response spectrum, so that it is representative of the extensive group of systems with different characteristics that can be installed in a plant. The present paper proposes a computational tool to obtain in-structure floor response spectra for secondary system design, which includes a probabilistic approach and considers coupling effects between primary and inelastic secondary systems. The analysis is performed in the frequency domain with the SASSI2000 system. A set of auxiliary programs is developed to consider three-dimensional models and their responses to a generic base excitation acting in three orthogonal directions. The ground excitation is transferred to a secondary system SDOF model conveniently attached to the primary system. Then, a uniformly probable coupled response spectrum is obtained using a first passage analysis. In this work, the ExeSASSI program is created to manage the several SASSI2000 modules and the set of auxiliary programs created to perform the probabilistic analyses. (author)
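The first-passage ingredient of such a spectrum can be sketched with a toy Monte Carlo: simulate many white-noise-driven single-degree-of-freedom responses and count how often the peak exceeds a given level. All parameter values here are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def first_passage_fraction(threshold, n_records=500, n_steps=1000, dt=0.01,
                           omega=2 * np.pi, zeta=0.05):
    """Toy first-passage estimate: fraction of white-noise-driven SDOF
    oscillator responses whose peak displacement exceeds `threshold`
    over the record duration (hypothetical oscillator parameters)."""
    x = np.zeros(n_records)
    v = np.zeros(n_records)
    peak = np.zeros(n_records)
    for _ in range(n_steps):
        a_g = rng.standard_normal(n_records)          # toy ground acceleration
        acc = a_g - 2 * zeta * omega * v - omega ** 2 * x
        v = v + dt * acc                              # semi-implicit Euler
        x = x + dt * v
        peak = np.maximum(peak, np.abs(x))
    return np.mean(peak > threshold)
```

Sweeping `threshold` until the exceedance fraction matches a chosen probability, and repeating over oscillator frequencies, is the basic mechanism behind a uniformly probable response spectrum.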
Monte Carlo simulation of MOSFET dosimeter for electron backscatter using the GEANT4 code.
Chow, James C L; Leung, Michael K K
2008-06-01
The aim of this study is to investigate the influence of the body of the metal-oxide-semiconductor field effect transistor (MOSFET) dosimeter in measuring the electron backscatter from lead. The electron backscatter factor (EBF), which is defined as the ratio of the dose at the tissue-lead interface to the dose at the same point without the presence of backscatter, was calculated by Monte Carlo simulation using the GEANT4 code. Electron beams with energies of 4, 6, 9, and 12 MeV were used in the simulation. It was found that in the presence of the MOSFET body, the EBFs were underestimated by about 2%-0.9% for electron beam energies of 4-12 MeV, respectively. The trend of decreasing EBF underestimation with increasing electron energy can be explained by the fact that the small MOSFET dosimeter, mainly made of epoxy and silicon, attenuates not only the electron fluence of the electron beam from upstream, but also the electron backscatter generated by the lead underneath the dosimeter. However, this variation of the EBF underestimation is within the same order as the statistical uncertainties of the Monte Carlo simulations, which ranged from 1.3% to 0.8% for electron energies of 4-12 MeV, due to the small dosimetric volume. Such a small EBF deviation is therefore insignificant when the uncertainty of the Monte Carlo simulation is taken into account. Corresponding measurements were carried out and agreed with the Monte Carlo results to within +/- 2%. Spectra of the energy deposited by backscattered electrons in dosimetric volumes with and without the lead and MOSFET were determined by Monte Carlo simulations. It was found that in both cases, whether the MOSFET body is present or absent in the simulation, the deviations of the electron energy spectra with and without the lead decrease with an increase of the electron beam energy. Moreover, the softer spectrum of the backscattered electrons when lead is present can result in a reduction of the MOSFET response due to stronger
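The EBF defined above is a simple dose ratio; its Monte Carlo bookkeeping can be sketched with a toy tally in which all deposition numbers and probabilities are made up for illustration only:

```python
import random

random.seed(42)

def simulate_dose(n_histories, backscatter_prob):
    """Toy dose tally in a small scoring volume: each history deposits a
    primary contribution, and with probability `backscatter_prob` a
    backscattered electron from the lead adds an extra deposit.
    All numerical values are hypothetical."""
    total = 0.0
    for _ in range(n_histories):
        total += random.gauss(1.0, 0.05)              # primary deposition
        if random.random() < backscatter_prob:        # backscatter hit
            total += random.uniform(0.1, 0.3)
    return total / n_histories

dose_with_lead = simulate_dose(20000, backscatter_prob=0.30)
dose_without = simulate_dose(20000, backscatter_prob=0.0)
ebf = dose_with_lead / dose_without   # ratio of dose with/without backscatter
```

A perturbing dosimeter body would enter such a model as an attenuation of both the primary and the backscattered contributions, which is exactly the 2%-0.9% underestimation effect quantified in the study.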
Calibration and Monte Carlo modelling of neutron long counters
Tagziria, H
2000-01-01
The Monte Carlo technique has become a very powerful tool in radiation transport as full advantage is taken of enhanced cross-section data, more powerful computers and statistical techniques, together with better characterisation of neutron and photon source spectra. At the National Physical Laboratory, calculations using the Monte Carlo radiation transport code MCNP-4B have been combined with accurate measurements to characterise two long counters routinely used to standardise monoenergetic neutron fields. New and more accurate response function curves have been produced for both long counters. A novel approach using Monte Carlo methods has been developed, validated and used to model the response function of the counters and determine more accurately their effective centres, which have always been difficult to establish experimentally. Calculations and measurements agree well, especially for the De Pangher long counter for which details of the design and constructional material are well known. The sensitivit...
Response of the steam generator VVER 1000 to a steam line break
International Nuclear Information System (INIS)
Novotny, J.; Novotny, J. Jr.
2003-01-01
Dynamic effects on the steam generator system of a steam line break at the weld of the steam pipe and the steam collector are analyzed. Modelling of a steam line break may concern two cases: the steam line without a restraint, and the steam line protected by a whip restraint with viscous elements applied at the postulated break cross-section. The second case is considered here. The SYSTUS programme offers a special element whose stiffness and viscous damping coefficients may be defined as functions of the relative displacement and velocity of its nodes, respectively. A circumferential crack is simulated by a sudden decrease of the longitudinal and lateral stiffness coefficients of these special SYSTUS elements to zero. The computation has shown that the pipe can be simulated as behaving like a completely broken one within a time interval of 0.0001 s or less. These elements are also used to model the whip restraint with viscous elements and viscous dampers of the GERB type. In the whip restraint model, the stiffness coefficient-displacement and damping coefficient-velocity relations are chosen to fit the given characteristics of the restraint. The special SYSTUS elements are used to constitute Maxwell elements modelling the elasto-plastic and viscous properties of the GERB dampers applied to the steam generator. It has been ascertained that a steam line break at the postulated weld crack between the steam pipe and the steam generator collector cannot endanger the integrity of the system, even in the absence of a whip restraint effect. (author)
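A Maxwell element of the kind used here to model the viscous dampers (a spring and dashpot in series) obeys dF/dt = k(v - F/c), since the total elongation rate splits between the two components. A minimal time-stepping sketch, with hypothetical parameter values unrelated to the actual GERB damper data:

```python
import math

k = 1.0e6        # spring stiffness, N/m (hypothetical)
c = 5.0e4        # dashpot constant, N*s/m (hypothetical)
dt = 1.0e-5      # time step, s
omega = 2.0 * math.pi * 10.0    # 10 Hz imposed end motion
amp = 1.0e-3     # displacement amplitude, m

# Spring and dashpot in series: elongation rate v = (dF/dt)/k + F/c,
# hence the force in the element evolves as dF/dt = k * (v - F / c).
F = 0.0
t = 0.0
peak_force = 0.0
for _ in range(int(0.5 / dt)):                  # 0.5 s of response
    v_end = amp * omega * math.cos(omega * t)   # imposed end velocity
    F += dt * k * (v_end - F / c)               # explicit Euler update
    peak_force = max(peak_force, abs(F))
    t += dt
```

The same first-order structure is what allows the special SYSTUS elements, with displacement- and velocity-dependent coefficients, to reproduce both the elasto-plastic restraint characteristics and the viscous damper behaviour.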
Monte Carlo work at Argonne National Laboratory
International Nuclear Information System (INIS)
Gelbard, E.M.; Prael, R.E.
1974-01-01
A simple model of the Monte Carlo process is described and a (nonlinear) recursion relation between fission sources in successive generations is developed. From the linearized form of these recursion relations, it is possible to derive expressions for the mean square coefficients of error modes in the iterates and for correlation coefficients between fluctuations in successive generations. First-order nonlinear terms in the recursion relation are analyzed. From these nonlinear terms an expression for the bias in the eigenvalue estimator is derived, and prescriptions for measuring the bias are formulated. Plans for the development of the VIM code are reviewed, and the proposed treatment of small sample perturbations in VIM is described. 6 references. (U.S.)
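The generation-to-generation recursion can be illustrated with a toy fission-matrix model (all numbers hypothetical): each neutron of the current source bank has an expected progeny count given by a column sum, and the next bank is resampled at fixed population size, which is the mechanism that introduces the correlations and small eigenvalue bias discussed above:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 2-region fission matrix: F[i, j] is the expected number of
# next-generation neutrons born in region i per neutron born in region j.
F = np.array([[0.5, 0.3],
              [0.4, 0.8]])

def run_generations(n_per_gen=500, n_gen=60):
    """Monte Carlo power iteration over fission source generations with a
    fixed bank size; returns a k-effective estimate from the later half."""
    source = rng.integers(0, 2, size=n_per_gen)       # birth regions of bank
    k_hist = []
    for _ in range(n_gen):
        col_sums = F[:, source].sum(axis=0)           # progeny per neutron
        k_hist.append(col_sums.mean())                # generation k estimate
        # Resample the next bank: each slot draws its birth region from the
        # normalized column of its parent (population kept constant).
        probs = F[:, source] / col_sums
        source = np.array([rng.choice(2, p=probs[:, j])
                           for j in range(n_per_gen)])
    return float(np.mean(k_hist[n_gen // 2:]))

k_eff = run_generations()   # near the dominant eigenvalue of F (about 1.03)
```

Because the bank is finite and renormalized each generation, successive k estimates are correlated and the long-run average carries a small population-size-dependent bias, exactly the effects analyzed in the recursion relations.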
International Nuclear Information System (INIS)
Coveyou, R.R.
1974-01-01
The subject of random number generation is currently controversial. Differing opinions on this subject seem to stem from implicit or explicit differences in philosophy; in particular, from differing ideas concerning the role of probability in the real world of physical processes, electronic computers, and Monte Carlo calculations. An attempt is made here to reconcile these views. The role of stochastic ideas in mathematical models is discussed. In illustration of these ideas, a mathematical model of the use of random number generators in Monte Carlo calculations is constructed. This model is used to set up criteria for the comparison and evaluation of random number generators. (U.S.)
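One concrete criterion of the kind such a model supports is an equidistribution test. A sketch using the classic Park-Miller multiplicative congruential generator, a standard textbook choice rather than any generator discussed in the article:

```python
def lcg(seed, n, a=16807, m=2 ** 31 - 1):
    """Yield n variates in (0, 1) from a multiplicative congruential
    generator with the Park-Miller 'minimal standard' constants."""
    x = seed
    for _ in range(n):
        x = (a * x) % m
        yield x / m

def chi_square_uniformity(samples, bins=10):
    """Chi-square statistic for the hypothesis that samples are U(0, 1);
    compare against chi-square quantiles for bins - 1 degrees of freedom."""
    counts = [0] * bins
    n = 0
    for u in samples:
        counts[min(int(u * bins), bins - 1)] += 1
        n += 1
    expected = n / bins
    return sum((c - expected) ** 2 / expected for c in counts)

stat = chi_square_uniformity(lcg(seed=12345, n=100000))
```

A statistic far above the 9-degree-of-freedom quantiles would flag the generator; repeating the test over many seeds and statistics turns this into the sort of probabilistic comparison criterion the article's model formalizes.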
An Integrated Multiperiod OPF Model with Demand Response and Renewable Generation Uncertainty
DEFF Research Database (Denmark)
Bukhsh, Waqquas Ahmed; Zhang, Chunyu; Pinson, Pierre
2015-01-01
Renewable energy sources such as wind and solar have received much attention in recent years, and large amounts of renewable generation are being integrated into electricity networks. A fundamental challenge in power system operation is to handle the intermittent nature of the renewable...... that with small flexibility on the demand side, substantial benefits in terms of re-dispatch costs can be achieved. The proposed approach is tested on all standard IEEE test cases up to 300 buses for a wide variety of scenarios.
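The re-dispatch cost benefit of demand-side flexibility can be sketched, in a much simplified single-bus form rather than the paper's multiperiod OPF, as a merit-order dispatch with illustrative costs and capacities:

```python
# Cover a renewable generation shortfall with the cheapest mix of extra
# conventional generation and voluntary demand reduction.
# All names, costs, and capacities below are hypothetical.
RESOURCES = [
    {"name": "gas_unit", "cost": 50.0, "capacity": 25.0},         # $/MWh, MW
    {"name": "demand_response", "cost": 80.0, "capacity": 10.0},
    {"name": "oil_unit", "cost": 120.0, "capacity": 40.0},
]

def redispatch(shortfall_mw):
    """Greedy merit-order dispatch: fill the shortfall from the cheapest
    resource upward; returns the dispatch plan and its total cost."""
    plan, total_cost = {}, 0.0
    for r in sorted(RESOURCES, key=lambda r: r["cost"]):
        take = min(r["capacity"], shortfall_mw)
        if take > 0:
            plan[r["name"]] = take
            total_cost += take * r["cost"]
            shortfall_mw -= take
    return plan, total_cost

plan, cost = redispatch(30.0)
```

With 10 MW of demand-side flexibility available, a 30 MW shortfall is covered by 25 MW of gas and 5 MW of demand response, avoiding the expensive oil unit entirely; removing the flexibility forces the costlier unit in, which is the re-dispatch cost saving in miniature.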
Economic and environmental balancing in response to NEPA for electric power generating plants
International Nuclear Information System (INIS)
Bender, M.
1976-01-01
Discussion of principles that can provide guidance in responding to the National Environmental Policy Act (NEPA) in the planning of electric power generating plants. The environmental assessment procedure described is initiated by considering alternative decisions in concern for environmental assessment. Having defined the decision paths, the assessment proceeds in a four-phase sequence: Correlation of the alternatives with resource and marketing restraints; screening the alternatives for environmental adequacy and specifying the needed technological refinement; examination of the economics in terms of energy costs; comparing the energy cost with the environmental index and selecting the combination that best reflects the current social preference. (Auth.)
Economic and environmental balancing in response to NEPA for electric power generating plants
Energy Technology Data Exchange (ETDEWEB)
Bender, M. [Oak Ridge National Lab., Tenn. (USA)]
1976-03-01
A discussion is given of principles that can provide guidance in responding to the National Environmental Policy Act (NEPA) in the planning of electric power generating plants. The environmental assessment procedure described is initiated by considering alternative decisions in concern for environmental assessment. Having defined the decision paths, the assessment proceeds in a four-phase sequence: correlation of the alternatives with resource and marketing restraints; screening the alternatives for environmental adequacy and specifying the needed technological refinement; examination of the economics in terms of energy costs; comparing the energy cost with the environmental index and selecting the combination that best reflects the current social preference.
Generation of shrimp waste-based dispersant for oil spill response.
Zhang, Kedong; Zhang, Baiyu; Song, Xing; Liu, Bo; Jing, Liang; Chen, Bing
2018-04-01
In this study, shrimp waste was enzymatically hydrolyzed to generate a green dispersant, and the product was tested for crude oil dispersion in seawater. The hydrolysis process was first optimized based on the dispersant effectiveness (DE) of the product. The functional properties of the product were identified, including stability, critical micelle concentration, and emulsification activity. Water was confirmed as a good solvent for dispersant generation when compared with three chemical solvents. The effects of salinity, mixing energy, and temperature on the dispersion of Alaska North Slope (ANS) crude oil were examined. A Microtox acute toxicity test was also conducted to evaluate the toxicity of the produced dispersant. In addition, the DE of the product on three different types of crude oil, namely ANS crude oil, Prudhoe Bay crude oil (PBC), and Arabian Light crude oil (ALC), was compared with that of Corexit 9500. The research output could lead to a promising green solution to the oil spill problem and might enable many other environmental applications.