WorldWideScience

Sample records for programmed sample calculations

  1. Program for TI programmable 59 calculator for calculation of 3H concentration of water samples

    International Nuclear Information System (INIS)

    Hussain, S.D.; Asghar, G.

    1982-09-01

    A program has been developed for the TI Programmable 59 calculator of Texas Instruments Inc. to calculate the ³H (tritium) concentration of water samples, processed with or without prior electrolytic enrichment, from observed parameters such as the count rate. The procedure for using the program is described in detail, and a brief description of the laboratory treatment of samples and of the mathematical equations used in the calculations is given. (orig./A.B.)
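
The abstract does not reproduce the program's equations; the generic relation behind such measurements is a net count rate divided by counting efficiency, electrolytic enrichment factor and a calibration factor, decay-corrected back to the sampling date. A minimal Python sketch under those assumptions (all names and the unit convention are illustrative, not taken from the TI-59 program):

```python
from math import exp, log

T_HALF_TRITIUM_Y = 12.32  # tritium half-life in years

def tritium_concentration(net_count_rate, counting_eff, enrichment_factor,
                          calib_rate_per_unit, years_since_sampling):
    """Tritium concentration at sampling time from a net count rate:
    divide out counting efficiency, electrolytic enrichment factor and a
    calibration factor, then decay-correct back to the sampling date.
    Illustrative generic relation; the TI-59 program's actual equations
    are not given in the abstract."""
    lam = log(2.0) / T_HALF_TRITIUM_Y
    measured = net_count_rate / (counting_eff * enrichment_factor * calib_rate_per_unit)
    return measured * exp(lam * years_since_sampling)
```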

  2. 10 CFR Appendix to Part 474 - Sample Petroleum-Equivalent Fuel Economy Calculations

    Science.gov (United States)

    2010-01-01

    ... 10 Energy 3 2010-01-01 2010-01-01 false Sample Petroleum-Equivalent Fuel Economy Calculations..., DEVELOPMENT, AND DEMONSTRATION PROGRAM; PETROLEUM-EQUIVALENT FUEL ECONOMY CALCULATION Pt. 474, App. Appendix to Part 474—Sample Petroleum-Equivalent Fuel Economy Calculations Example 1: An electric vehicle is...

  3. SNS Sample Activation Calculator Flux Recommendations and Validation

    Energy Technology Data Exchange (ETDEWEB)

    McClanahan, Tucker C. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Spallation Neutron Source (SNS); Gallmeier, Franz X. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Spallation Neutron Source (SNS); Iverson, Erik B. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Spallation Neutron Source (SNS); Lu, Wei [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Spallation Neutron Source (SNS)

    2015-02-01

    The Spallation Neutron Source (SNS) at Oak Ridge National Laboratory (ORNL) uses the Sample Activation Calculator (SAC) to calculate the activation of a sample after the sample has been exposed to the neutron beam in one of the SNS beamlines. The SAC webpage takes user inputs (choice of beamline, the mass, composition and area of the sample, irradiation time, decay time, etc.) and calculates the activation for the sample. In recent years, the SAC has been incorporated into the user proposal and sample handling process, and instrument teams and users have noticed discrepancies in the predicted activation of their samples. The Neutronics Analysis Team validated SAC by performing measurements on select beamlines and confirmed the discrepancies seen by the instrument teams and users. The conclusions were that the discrepancies were a result of a combination of faulty neutron flux spectra for the instruments, improper inputs supplied by SAC (1.12), and a mishandling of cross section data in the Sample Activation Program for Easy Use (SAPEU) (1.1.2). This report focuses on the conclusion that the SAPEU (1.1.2) beamline neutron flux spectra have errors and are a significant contributor to the activation discrepancies. The results of the analysis of the SAPEU (1.1.2) flux spectra for all beamlines will be discussed in detail. The recommendations for the implementation of improved neutron flux spectra in SAPEU (1.1.3) are also discussed.

  4. A semi-empirical approach to calculate gamma activities in environmental samples

    International Nuclear Information System (INIS)

    Palacios, D.; Barros, H.; Alfonso, J.; Perez, K.; Trujillo, M.; Losada, M.

    2006-01-01

    We propose a semi-empirical method to calculate radionuclide concentrations in environmental samples without the use of reference material and avoiding the typical complexity of Monte Carlo codes. The total efficiencies were calculated from a relative efficiency curve (obtained from the gamma spectrum data) together with the geometric (simulated by Monte Carlo), absorption, sample and intrinsic efficiencies at energies between 130 and 3000 keV. The absorption and sample efficiencies were determined from the mass absorption coefficients, obtained with the web program XCOM. Deviations between computed results and measured efficiencies for the RGTh-1 reference material are mostly within 10%. Radionuclide activities in marine sediment samples calculated by the proposed method and by the experimental relative method were in satisfactory agreement. The developed method can be used for routine environmental monitoring when efficiency uncertainties of 10% are sufficient. (Author)
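
Once a total efficiency has been assembled from the geometric, absorption, sample and intrinsic factors described above, the activity follows from the standard gamma-spectrometry relation A = N_net / (t_live · ε_total · P_γ · m). A sketch of that final step (the semi-empirical efficiency construction itself is not reproduced; all names are illustrative):

```python
def gamma_activity_per_kg(net_counts, live_time_s, total_eff,
                          gamma_intensity, sample_mass_kg):
    """Activity concentration (decays/s per kg) from a net photopeak area:
    A = N_net / (t_live * eff_total * P_gamma * m). The semi-empirical
    construction of eff_total described in the record is not reproduced;
    all names here are illustrative."""
    return net_counts / (live_time_s * total_eff * gamma_intensity * sample_mass_kg)
```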

  5. User's Guide to Handlens - A Computer Program that Calculates the Chemistry of Minerals in Mixtures

    Science.gov (United States)

    Eberl, D.D.

    2008-01-01

    HandLens is a computer program, written in Excel macro language, that calculates the chemistry of minerals in mineral mixtures (for example, in rocks, soils and sediments) for related samples from inputs of quantitative mineralogy and chemistry. For best results, the related samples should contain minerals having the same chemical compositions; that is, the samples should differ only in the proportions of minerals present. This manual describes how to use the program, discusses the theory behind its operation, and presents test results of the program's accuracy. Required input for HandLens includes quantitative mineralogical data, obtained, for example, by RockJock analysis of X-ray diffraction (XRD) patterns, and quantitative chemical data, obtained, for example, by X-ray fluorescence (XRF) analysis of the same samples. Other quantitative data, such as sample depth, temperature, and surface area, can also be entered. The minerals present in the samples are selected from a list, and the program is started. The results of the calculation include: (1) a table of linear coefficients of determination (r² values) which relate pairs of input data (for example, Si versus quartz weight percents); (2) a utility for plotting all input data, either as pairs of variables, or as sums of up to eight variables; (3) a table that presents the calculated chemical formulae for minerals in the samples; (4) a table that lists the calculated concentrations of major, minor, and trace elements in the various minerals; and (5) a table that presents chemical formulae for the minerals that have been corrected for possible systematic errors in the mineralogical and/or chemical analyses. In addition, the program contains a method for testing the assumption of constant chemistry of the minerals within a sample set.

  6. HP-67 calculator programs for thermodynamic data and phase diagram calculations

    International Nuclear Information System (INIS)

    Brewer, L.

    1978-01-01

    This report is a supplement to a tabulation of the thermodynamic and phase data for the 100 binary systems of Mo with the elements from H to Lr. The calculations of thermodynamic data and phase equilibria were carried out from 5000 K down to low temperatures. This report presents the methods of calculation used. The thermodynamics involved is rather straightforward and the reader is referred to any advanced thermodynamics text. The calculations were largely carried out using an HP-65 programmable calculator. In this report, those programs are reformulated for use with the HP-67 calculator, resulting in a great reduction in the number of programs required to carry out the calculations.

  7. TableSim--A program for analysis of small-sample categorical data.

    Science.gov (United States)

    David J. Rugg

    2003-01-01

    Documents a computer program for calculating correct P-values of 1-way and 2-way tables when sample sizes are small. The program is written in Fortran 90; the executable code runs in 32-bit Microsoft command-line environments.
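
The abstract does not state TableSim's algorithm; for a 2×2 table with small counts, the standard exact approach is Fisher's test, which sums hypergeometric probabilities over all tables as extreme as the one observed. A minimal Python sketch of that idea (the function name and the two-sided rule used here are illustrative, not taken from TableSim):

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher exact P-value for the 2x2 table [[a, b], [c, d]]:
    sum the hypergeometric probabilities of all tables with the same
    margins whose probability does not exceed that of the observed table."""
    r1, r2 = a + b, c + d          # row totals
    c1, n = a + c, a + b + c + d   # first column total, grand total
    def prob(x):                   # P(top-left cell = x) with fixed margins
        return comb(r1, x) * comb(r2, c1 - x) / comb(n, c1)
    p_obs = prob(a)
    lo, hi = max(0, c1 - r2), min(r1, c1)
    return sum(prob(x) for x in range(lo, hi + 1) if prob(x) <= p_obs + 1e-12)
```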

  8. Composition calculations by the KARATE code system for the spent-fuel samples from the Novovoronezh reactor

    International Nuclear Information System (INIS)

    Hordosy, G.

    2006-01-01

    KARATE is a code system developed at KFKI AERI. It is routinely used for core calculation. Its depletion module is now tested against radiochemical measurements of spent-fuel samples from Novovoronezh Unit IV, performed at RIAR, Dimitrovgrad. Due to insufficient knowledge of the operational history of the unit, the irradiation history of the samples was taken from previously published Russian calculations. The calculation of isotopic composition was performed by the MULTICEL module of the program system. The agreement between the calculated and measured concentrations of the most important actinides and fission products is investigated. (Authors)

  9. HEINBE; the calculation program for helium production in beryllium under neutron irradiation

    International Nuclear Information System (INIS)

    Shimakawa, Satoshi; Ishitsuka, Etsuo; Sato, Minoru

    1992-11-01

    HEINBE is a personal-computer program for calculating helium production in beryllium under neutron irradiation. The program can also calculate tritium production in beryllium. Taking into account many nuclear reactions and their multi-step reactions, helium and tritium production in beryllium materials irradiated in a fusion or fission reactor can be calculated with high accuracy. The calculation method, user's manual, calculated examples and comparisons with experimental data are described. This report also describes a neutronics simulation method to generate additional data on swelling of beryllium, in the 3,000-15,000 appm helium range, for the end-of-life of the proposed design for the fusion blanket of the ITER. The calculation results indicate that helium production for a beryllium sample doped with lithium, irradiated for 50 days in a fission reactor such as the JMTR, could reach 2,000-8,000 appm. (author)

  10. FORTRAN program for calculating liquid-phase and gas-phase thermal diffusion column coefficients

    International Nuclear Information System (INIS)

    Rutherford, W.M.

    1980-01-01

    A computer program (COLCO) was developed for calculating thermal diffusion column coefficients from theory. The program, which is written in FORTRAN IV, can be used for both liquid-phase and gas-phase thermal diffusion columns. Column coefficients for the gas phase can be based on gas properties calculated from kinetic theory using tables of omega integrals, or on tables of compiled physical properties as functions of temperature. Column coefficients for the liquid phase can be based on compiled physical property tables. Program listings, test data, sample output, and a users manual are supplied as appendices.

  11. Means and method of sampling flow related variables from a waterway in an accurate manner using a programmable calculator

    Science.gov (United States)

    Rand E. Eads; Mark R. Boolootian; Steven C. Hankin [Inventors]

    1987-01-01

    A programmable calculator is connected to a pumping sampler by an interface circuit board. The calculator has a sediment sampling program stored therein and includes a timer to periodically wake up the calculator. Sediment collection is controlled by a Selection At List Time (SALT) scheme in which the probability of taking a sample is proportional to its...
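
The SALT idea described above, sampling with probability proportional to an auxiliary variable so that high-transport periods are better represented, can be sketched as a per-wake-up decision. This is an illustrative probability-proportional-to-size rule with assumed names and scaling, not the patented implementation:

```python
import random

def salt_should_sample(aux_value, expected_aux_total, n_target, rng=random):
    """Decide, at one timer wake-up, whether to trigger the pumping sampler.

    Probability-proportional-to-size rule: the sampling probability for this
    interval is its auxiliary value (e.g. an estimated sediment transport
    rate) scaled so that about n_target samples are expected over the whole
    auxiliary total. Names and scaling are illustrative assumptions, not
    the patented SALT implementation.
    """
    p = min(1.0, n_target * aux_value / expected_aux_total)
    return rng.random() < p
```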

  12. Computer programs for lattice calculations

    International Nuclear Information System (INIS)

    Keil, E.; Reich, K.H.

    1984-01-01

    The aim of the workshop was to find out whether some standardisation could be achieved for future work in this field. A certain amount of useful information was unearthed, and desirable features of a ''standard'' program emerged. Progress is not expected to be breathtaking, although participants (practically from all interested US, Canadian and European accelerator laboratories) agreed that the mathematics of the existing programs is more or less the same. Apart from the NIH (not invented here) effect, there is a - to quite some extent understandable - tendency to stay with a program one knows and to add to it if unavoidable rather than to start using a new one. Users of the well supported program TRANSPORT (designed for beam line calculations) would prefer to have it fully extended for lattice calculations (to some extent already possible now), while SYNCH users wish to see that program provided with a user-friendly input, rather than spending time and effort for mastering a new program

  13. Development of sample size allocation program using hypergeometric distribution

    International Nuclear Information System (INIS)

    Kim, Hyun Tae; Kwack, Eun Ho; Park, Wan Soo; Min, Kyung Soo; Park, Chan Sik

    1996-01-01

    The objective of this research is the development of a sample allocation program using the hypergeometric distribution with an object-oriented method. When the IAEA (International Atomic Energy Agency) performs an inspection, it simply applies a standard binomial distribution, which describes sampling with replacement, instead of a hypergeometric distribution, which describes sampling without replacement, in allocating samples to up to three verification methods. The objective of an IAEA inspection is the timely detection of diversion of significant quantities of nuclear material, so game theory is applied to its sampling plan. It is necessary to use the hypergeometric distribution directly, or an approximate distribution, to secure statistical accuracy. The improved binomial approximation developed by J. L. Jaech and a correctly applied binomial approximation are closer to the hypergeometric distribution in sample size calculation than the simply applied binomial approximation of the IAEA. Object-oriented programs for (1) sample approximate-allocation with a correctly applied standard binomial approximation, (2) sample approximate-allocation with the improved binomial approximation, and (3) sample approximate-allocation with the hypergeometric distribution were developed with Visual C++, and corresponding programs were developed with EXCEL (using Visual Basic for Applications). 8 tabs., 15 refs. (Author)
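
The record's point, that the with-replacement (binomial) formula inflates sample sizes relative to the exact without-replacement (hypergeometric) calculation, can be illustrated with a small Python sketch. The detection criterion used here (probability of finding at least one defective item) and the function names are assumptions for illustration, not the IAEA's actual allocation scheme:

```python
from math import ceil, comb, log

def n_hypergeometric(N, D, beta):
    """Smallest sample size n, drawing without replacement, such that the
    probability of seeing at least one of the D defective items among N
    is at least beta (exact hypergeometric miss probability)."""
    for n in range(1, N + 1):
        p_miss = comb(N - D, n) / comb(N, n)
        if 1.0 - p_miss >= beta:
            return n
    return N

def n_binomial_approx(N, D, beta):
    """Same detection target under the simple with-replacement (binomial)
    approximation 1 - (1 - D/N)**n."""
    return ceil(log(1.0 - beta) / log(1.0 - D / N))
```

For N = 100 items with D = 5 defects and a 95% detection goal, the exact calculation needs 45 samples while the binomial approximation asks for 59, which is the kind of gap the program above is meant to close.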

  14. FORTRAN programs for transient eddy current calculations using a perturbation-polynomial expansion technique

    International Nuclear Information System (INIS)

    Carpenter, K.H.

    1976-11-01

    A description is given of FORTRAN programs for transient eddy current calculations in thin, non-magnetic conductors using a perturbation-polynomial expansion technique. Basic equations are presented as well as flow charts for the programs implementing them. The implementation is in two steps--a batch program to produce an intermediate data file and interactive programs to produce graphical output. FORTRAN source listings are included for all program elements, and sample inputs and outputs are given for the major programs

  15. The effect of clustering on lot quality assurance sampling: a probabilistic model to calculate sample sizes for quality assessments.

    Science.gov (United States)

    Hedt-Gauthier, Bethany L; Mitsunaga, Tisha; Hund, Lauren; Olives, Casey; Pagano, Marcello

    2013-10-26

    Traditional Lot Quality Assurance Sampling (LQAS) designs assume observations are collected using simple random sampling. Alternatively, randomly sampling clusters of observations and then individuals within clusters reduces costs but decreases the precision of the classifications. In this paper, we develop a general framework for designing the cluster(C)-LQAS system and illustrate the method with the design of data quality assessments for the community health worker program in Rwanda. To determine sample size and decision rules for C-LQAS, we use the beta-binomial distribution to account for inflated risk of errors introduced by sampling clusters at the first stage. We present general theory and code for sample size calculations. The C-LQAS sample sizes provided in this paper constrain misclassification risks below user-specified limits. Multiple C-LQAS systems meet the specified risk requirements, but numerous considerations, including per-cluster versus per-individual sampling costs, help identify optimal systems for distinct applications. We show the utility of C-LQAS for data quality assessments, but the method generalizes to numerous applications. This paper provides the necessary technical detail and supplemental code to support the design of C-LQAS for specific programs.
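
The beta-binomial distribution used above to account for cluster-induced overdispersion has a closed-form probability mass function; a minimal Python sketch (generic parameter names, not the paper's supplemental code):

```python
from math import comb, exp, lgamma

def log_beta(a, b):
    """log of the Beta function via log-gamma, to stay numerically safe."""
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def beta_binomial_pmf(k, n, alpha, beta):
    """P(K = k) for K ~ BetaBinomial(n, alpha, beta): binomial counts whose
    success probability varies across clusters as a Beta(alpha, beta)."""
    return comb(n, k) * exp(log_beta(k + alpha, n - k + beta) - log_beta(alpha, beta))
```

With alpha = beta = 1 the cluster-level probability is uniform and every count 0..n is equally likely, which is the sanity check used below.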

  16. Calculation methods in program CCRMN

    Energy Technology Data Exchange (ETDEWEB)

    Chonghai, Cai [Nankai Univ., Tianjin (China). Dept. of Physics; Qingbiao, Shen [Chinese Nuclear Data Center, Beijing, BJ (China)

    1996-06-01

    CCRMN is a program for calculating complex reactions of a medium-heavy nucleus with six light particles. In CCRMN, the incoming particles can be neutrons, protons, ⁴He, deuterons, tritons and ³He. The CCRMN code is constructed within the framework of the optical model, pre-equilibrium statistical theory based on the exciton model, and the evaporation model. CCRMN is valid in the 1~ MeV energy region and gives correct results for optical-model quantities and all kinds of reaction cross sections. The program has been applied in practical calculations and has given reasonable results.

  17. Isochronous cyclotron closed equilibrium orbit calculation program description

    International Nuclear Information System (INIS)

    Kiyan, I.N.; Vorozhtsov, S.B.; Tarashkevich, R.

    2003-01-01

    The Equilibrium Orbit Research Program - EORP, written in C++ with the use of Visual C++, is described. The program is intended for the calculation of the particle rotation frequency and particle kinetic energy in the closed equilibrium orbits of an isochronous cyclotron, where the closed equilibrium orbits are described through the radius and particle momentum angle: r_eo(θ) and φ_p(θ). The program algorithm was developed on the basis of articles, lecture notes and original analytic calculations. The results of calculations by the EORP were checked and confirmed against results obtained by numerical methods. The discrepancies between the EORP results and the numerical-method results for the particle rotation frequency and particle kinetic energy are within the limits of ±1·10⁻⁴. The EORP results and the numerical-method results for r_eo(θ) and φ_p(θ) practically coincide. All this proves the accuracy of calculations by the EORP for isochronous cyclotrons with azimuthally varied fields. As is evident from the results of calculations, the program can be used for the calculations of both straight-sector and spiral-sector isochronous cyclotrons. (author)

  18. Some calculator programs for particle physics

    International Nuclear Information System (INIS)

    Wohl, C.G.

    1982-01-01

    Seven calculator programs that do simple chores that arise in elementary particle physics are given. LEGENDRE evaluates the Legendre polynomial series ΣaₙPₙ(x) at a series of values of x. ASSOCIATED LEGENDRE evaluates the first-associated Legendre polynomial series ΣbₙPₙ¹(x) at a series of values of x. CONFIDENCE calculates confidence levels for χ², Gaussian, or Poisson probability distributions. TWO BODY calculates the c.m. energy, the initial- and final-state c.m. momenta, and the extreme values of t and u for a 2-body reaction. ELLIPSE calculates coordinates of points for drawing an ellipse plot showing the kinematics of a 2-body reaction or decay. DALITZ RECTANGULAR calculates coordinates of points on the boundary of a rectangular Dalitz plot. DALITZ TRIANGULAR calculates coordinates of points on the boundary of a triangular Dalitz plot. There are short versions of CONFIDENCE (EVEN N and POISSON) that calculate confidence levels for the even-degree-of-freedom χ² and Poisson cases, and there is a short version of TWO BODY (CM) that calculates just the c.m. energy and initial-state momentum. The programs are written for the HP-97 calculator.
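
As a rough illustration of what LEGENDRE computes, the series ΣaₙPₙ(x) can be evaluated with Bonnet's three-term recursion; this Python sketch is the same mathematics, not the HP-97 program:

```python
def legendre_series(coeffs, x):
    """Evaluate sum_n a_n P_n(x) using Bonnet's recursion
    (n+1) P_{n+1}(x) = (2n+1) x P_n(x) - n P_{n-1}(x)."""
    p_prev, p = 1.0, x                      # P_0(x), P_1(x)
    total = coeffs[0] * p_prev
    if len(coeffs) > 1:
        total += coeffs[1] * p
    for n in range(1, len(coeffs) - 1):
        p_prev, p = p, ((2 * n + 1) * x * p - n * p_prev) / (n + 1)
        total += coeffs[n + 1] * p
    return total
```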

  19. Calculation of the effective D-d neutron energy distribution incident on a cylindrical shell sample

    International Nuclear Information System (INIS)

    Gotoh, Hiroshi

    1977-07-01

    A method is proposed to calculate the effective energy distribution of neutrons incident on a cylindrical shell sample placed perpendicular to the direction of the deuteron beam bombarding a deuterium metal target. The Monte Carlo method is used, and a Fortran program is included. (auth.)

  20. Program for the surface muon spectra calculation

    International Nuclear Information System (INIS)

    Arkatov, Yu.M.; Voloshchuk, V.I.; Zolenko, V.A.; Prokhorets, I.M.; Soldatov, S.A.

    1987-01-01

    A program for the ''surface'' muon spectrum calculation is described. The algorithm is based on simulation of the coordinates of the π-meson birth point and the direction of its escape from the meson-forming target (MFT) according to the angular distribution, using the Monte Carlo method. Ionization losses of π(μ)-mesons in the target are taken into account in the program. The ''surface'' muon spectrum is calculated for electron energies from 150 MeV up to 1000 MeV. Spectra of π-mesons are calculated both with and without account of ionization losses in the target. Distributions over the lengths of π-meson paths in the MFT and the contributions of separate sections of the target to the pion flux at the outlet of the meson channel are calculated as well. The meson-forming target can be made of any material. The program provides for the use of the MFT itself as a photon converter, or for a photon converter located in front of the target. The program is composed of 13 subprograms; 2 of them are generators of pseudorandom numbers, distributed uniformly in the range from 0 to 1, and of numbers with a Gaussian distribution. An example calculation for a copper target of 3 cm length, electron beam current of 1 μA and energy of 300 MeV is presented.

  1. Preeminence and prerequisites of sample size calculations in clinical trials

    OpenAIRE

    Richa Singhal; Rakesh Rana

    2015-01-01

    The key components while planning a clinical study are the study design, study duration, and sample size. These features are an integral part of planning a clinical trial efficiently, ethically, and cost-effectively. This article describes some of the prerequisites for sample size calculation. It also explains that sample size calculation is different for different study designs. The article in detail describes the sample size calculation for a randomized controlled trial when the primary out...

  2. Fission product inventory calculation by a CASMO/ORIGEN coupling program

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Do Heon; Kim, Jong Kyung [Hanyang University, Seoul (Korea, Republic of); Choi, Hang Bok; Roh, Gyu Hong; Jung, In Ha [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1998-12-31

    A CASMO/ORIGEN coupling utility program was developed to predict the composition of all the fission products in spent PWR fuels. The coupling program reads the CASMO output file, modifies the ORIGEN cross section library and reconstructs the ORIGEN input file at each depletion step. In ORIGEN, the burnup equation is solved for actinides and fission products based on the fission reaction rates and depletion flux of CASMO. A sample calculation has been performed using a 14 x 14 PWR fuel assembly and the results are given in this paper. 3 refs., 1 fig., 1 tab. (Author)
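
The burnup equation ORIGEN solves at each depletion step is a coupled system of production and removal terms; its smallest instance, a single parent/daughter pair with constant rates, has the classic analytic (Bateman) solution. A toy Python sketch under that simplification (assumes distinct removal rates; names are illustrative, not ORIGEN's):

```python
from math import exp

def bateman_pair(n1_0, n2_0, a1, a2, branch, dt):
    """Analytic one-step solution for a parent/daughter pair over a burnup
    step of length dt: the parent is removed at total rate a1 (decay plus
    absorption), feeds the daughter at rate branch*a1, and the daughter is
    removed at rate a2 (requires a1 != a2). A two-nuclide toy version of
    what ORIGEN solves for full actinide and fission-product chains."""
    n1 = n1_0 * exp(-a1 * dt)
    n2 = (n2_0 * exp(-a2 * dt)
          + n1_0 * branch * a1 / (a2 - a1) * (exp(-a1 * dt) - exp(-a2 * dt)))
    return n1, n2
```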

  3. Fission product inventory calculation by a CASMO/ORIGEN coupling program

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Do Heon; Kim, Jong Kyung [Hanyang University, Seoul (Korea, Republic of); Choi, Hang Bok; Roh, Gyu Hong; Jung, In Ha [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1997-12-31

    A CASMO/ORIGEN coupling utility program was developed to predict the composition of all the fission products in spent PWR fuels. The coupling program reads the CASMO output file, modifies the ORIGEN cross section library and reconstructs the ORIGEN input file at each depletion step. In ORIGEN, the burnup equation is solved for actinides and fission products based on the fission reaction rates and depletion flux of CASMO. A sample calculation has been performed using a 14 x 14 PWR fuel assembly and the results are given in this paper. 3 refs., 1 fig., 1 tab. (Author)

  4. HEXANN-EVALU - a Monte Carlo program system for pressure vessel neutron irradiation calculation

    International Nuclear Information System (INIS)

    Lux, Ivan

    1983-08-01

    The Monte Carlo program HEXANN and the evaluation program EVALU are intended to calculate Monte Carlo estimates of reaction rates and currents in segments of concentric angular regions around a hexagonal reactor-core region. The report describes the theoretical basis, structure and activity of the programs. Input data preparation guides and a sample problem are also included. Theoretical considerations as well as numerical experimental results suggest the user a nearly optimum way of making use of the Monte Carlo efficiency increasing options included in the program

  5. A program to calculate pulse transmission responses through transversely isotropic media

    Science.gov (United States)

    Li, Wei; Schmitt, Douglas R.; Zou, Changchun; Chen, Xiwei

    2018-05-01

    We provide a program (AOTI2D) to model responses of ultrasonic pulse transmission measurements through arbitrarily oriented transversely isotropic rocks. The program is built with the distributed point source method that treats the transducers as a series of point sources. The response of each point source is calculated according to the ray-tracing theory of elastic plane waves. The program could offer basic wave parameters including phase and group velocities, polarization, anisotropic reflection coefficients and directivity patterns, and model the wave fields, static wave beam, and the observed signals for pulse transmission measurements considering the material's elastic stiffnesses and orientations, sample dimensions, and the size and positions of the transmitters and the receivers. The program could be applied to exhibit the ultrasonic beam behaviors in anisotropic media, such as the skew and diffraction of ultrasonic beams, and analyze its effect on pulse transmission measurements. The program would be a useful tool to help design the experimental configuration and interpret the results of ultrasonic pulse transmission measurements through either isotropic or transversely isotropic rock samples.
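
The program's exact ray-tracing formulas are not reproduced in the abstract; as a flavor of the phase-velocity anisotropy involved, Thomsen's weak-anisotropy approximation for P-waves in a transversely isotropic medium can be sketched (a standard textbook approximation, not necessarily what AOTI2D implements):

```python
from math import cos, radians, sin

def vp_phase_weak_vti(vp0, epsilon, delta, theta_deg):
    """Thomsen weak-anisotropy approximation for the P-wave phase velocity
    at angle theta from the symmetry axis of a VTI medium:
        Vp(theta) ~ Vp0 * (1 + delta*sin^2*cos^2 + epsilon*sin^4).
    Illustrative only; AOTI2D's exact ray-tracing formulas are not
    given in the abstract."""
    t = radians(theta_deg)
    s, c = sin(t), cos(t)
    return vp0 * (1.0 + delta * s * s * c * c + epsilon * s ** 4)
```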

  6. Sample size calculations for case-control studies

    Science.gov (United States)

    This R package can be used to calculate the required samples size for unconditional multivariate analyses of unmatched case-control studies. The sample sizes are for a scalar exposure effect, such as binary, ordinal or continuous exposures. The sample sizes can also be computed for scalar interaction effects. The analyses account for the effects of potential confounder variables that are also included in the multivariate logistic model.

  7. CREST : a computer program for the calculation of composition dependent self-shielded cross-sections

    International Nuclear Information System (INIS)

    Kapil, S.K.

    1977-01-01

    A computer program, CREST, for the calculation of composition- and temperature-dependent self-shielded cross-sections using the shielding-factor approach is described. The code includes the editing and formation of the data library, calculation of the effective shielding factors and cross-sections, and a fundamental-mode calculation to generate the neutron spectrum for the system, which is further used to calculate the effective elastic removal cross-sections. Studies of the sensitivity of reactor parameters to changes in group cross-sections can also be carried out using the facility available in the code to temporarily change the desired constants. The final self-shielded and transport-corrected group cross-sections can be dumped on cards or magnetic tape in a form suitable for direct use in a transport or diffusion theory code for detailed reactor calculations. The program is written in FORTRAN and can be accommodated in a computer with 32 K of memory. Input preparation details, a sample problem and a listing of the program are given. (author)

  8. Isochronous Cyclotron Closed Equilibrium Orbit Calculation Program Description

    CERN Document Server

    Kian, I N; Tarashkevich, R

    2003-01-01

    The Equilibrium Orbit Research Program - EORP, written in C++ with the use of Visual C++ is described. The program is intended for the calculation of the particle rotation frequency and particle kinetic energy in the closed equilibrium orbits of an isochronous cyclotron, where the closed equilibrium orbits are described through the radius and particle momentum angle: r_{eo}(\\theta) and \\varphi_{p}(\\theta). The program algorithm was developed on the basis of articles, lecture notes and original analytic calculations. The results of calculations by the EORP were checked and confirmed by using the results of calculations by the numerical methods. The discrepancies between the EORP results and the numerical method results for the calculations of the particle rotation frequency and particle kinetic energy are within the limits of \\pm1\\cdot10^{-4}. The EORP results and the numerical method results for the calculations of r_{eo}(\\theta) and \\varphi_{p}(\\theta) practically coincide. All this proves the accuracy of ca...

  9. Preeminence and prerequisites of sample size calculations in clinical trials

    Directory of Open Access Journals (Sweden)

    Richa Singhal

    2015-01-01

    The key components while planning a clinical study are the study design, study duration, and sample size. These features are an integral part of planning a clinical trial efficiently, ethically, and cost-effectively. This article describes some of the prerequisites for sample size calculation. It also explains that sample size calculation is different for different study designs. The article describes in detail the sample size calculation for a randomized controlled trial when the primary outcome is a continuous variable and when it is a proportion or a qualitative variable.
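
The article's formulas are not reproduced in the abstract; for the proportion case it describes, a widely used normal-approximation sample-size formula can be sketched in Python (the function name and defaults are illustrative, and other variants, e.g. with pooled variance or continuity correction, give slightly different numbers):

```python
from math import ceil
from statistics import NormalDist

def n_per_group_two_proportions(p1, p2, alpha=0.05, power=0.80):
    """Per-group sample size for a two-arm randomized trial comparing
    proportions p1 and p2 with a two-sided test at level alpha and the
    given power. A common normal-approximation formula with unpooled
    variances; the article's exact formula is not in the abstract."""
    z_a = NormalDist().inv_cdf(1.0 - alpha / 2.0)
    z_b = NormalDist().inv_cdf(power)
    numerator = (z_a + z_b) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2))
    return ceil(numerator / (p1 - p2) ** 2)
```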

  10. Computer Programs for Calculating and Plotting the Stability Characteristics of a Balloon Tethered in a Wind

    Science.gov (United States)

    Bennett, R. M.; Bland, S. R.; Redd, L. T.

    1973-01-01

    Computer programs for calculating the stability characteristics of a balloon tethered in a steady wind are presented. Equilibrium conditions, characteristic roots, and modal ratios are calculated for a range of discrete values of velocity for a fixed tether-line length. Separate programs are used: (1) to calculate longitudinal stability characteristics, (2) to calculate lateral stability characteristics, (3) to plot the characteristic roots versus velocity, (4) to plot the characteristic roots in root-locus form, (5) to plot the longitudinal modes of motion, and (6) to plot the lateral modes of motion. The basic equations, program listings, and the input and output data for sample cases are presented, with a brief discussion of the overall operation and limitations. The programs are based on a linearized, stability-derivative type of analysis, including balloon aerodynamics, apparent mass, buoyancy effects, and static forces which result from the tether line.
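
The stability criterion behind a linearized analysis like the one above is that every characteristic root has a negative real part. A single-mode Python sketch of that test (the balloon's coupled longitudinal and lateral equations yield higher-order polynomials, but the stability check is the same; names are illustrative):

```python
import cmath

def characteristic_roots(m, c, k):
    """Roots of the single-mode characteristic equation m*s^2 + c*s + k = 0.
    One-mode illustration only; the tethered-balloon programs handle
    coupled multi-degree-of-freedom equations of the same kind."""
    disc = cmath.sqrt(c * c - 4 * m * k)
    return (-c + disc) / (2 * m), (-c - disc) / (2 * m)

def mode_is_stable(roots):
    """Stable when every characteristic root lies in the left half-plane."""
    return all(r.real < 0 for r in roots)
```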

  11. 46 CFR 280.11 - Example of calculation and sample report.

    Science.gov (United States)

    2010-10-01

    ... 46 Shipping 8 2010-10-01 2010-10-01 false Example of calculation and sample report. 280.11 Section... OPERATORS § 280.11 Example of calculation and sample report. (a) Example of calculation. The provisions of this part may be illustrated by the following example: Company A operates several vessels engaged in...

  12. 7 CFR 51.308 - Methods of sampling and calculation of percentages.

    Science.gov (United States)

    2010-01-01

    ..., CERTIFICATION, AND STANDARDS) United States Standards for Grades of Apples Methods of Sampling and Calculation of Percentages § 51.308 Methods of sampling and calculation of percentages. (a) When the numerical... 7 Agriculture 2 2010-01-01 2010-01-01 false Methods of sampling and calculation of percentages. 51...

  13. A nonproprietary, nonsecret program for calculating Stirling cryocoolers

    Science.gov (United States)

    Martini, W. R.

    1985-01-01

    A design program for an integrated Stirling cycle cryocooler was written on an IBM-PC computer. The program is easy to use, shows the trends, and itemizes the losses. The calculated results were compared with some measured performance values. The program predicts somewhat optimistic performance and needs further calibration against experimental measurements. Adding a multiplier to the friction factor can bring the calculated results in line with the limited test results so far available. The program is offered as a good framework on which to build a truly useful design program for all types of cryocoolers.

  14. 40 CFR Appendix II to Part 600 - Sample Fuel Economy Calculations

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 29 2010-07-01 2010-07-01 false Sample Fuel Economy Calculations II... FUEL ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF MOTOR VEHICLES Pt. 600, App. II Appendix II to Part 600—Sample Fuel Economy Calculations (a) This sample fuel economy calculation is applicable to...

  15. Probabilistic Requirements (Partial) Verification Methods Best Practices Improvement. Variables Acceptance Sampling Calculators: Empirical Testing. Volume 2

    Science.gov (United States)

    Johnson, Kenneth L.; White, K. Preston, Jr.

    2012-01-01

    The NASA Engineering and Safety Center was requested to improve on the Best Practices document produced for the NESC assessment, Verification of Probabilistic Requirements for the Constellation Program, by giving a recommended procedure for using acceptance sampling by variables techniques as an alternative to the potentially resource-intensive acceptance sampling by attributes method given in the document. In this paper, the results of empirical tests intended to assess the accuracy of acceptance sampling plan calculators implemented for six variable distributions are presented.

  16. ROBOT3: a computer program to calculate the in-pile three-dimensional bowing of cylindrical fuel rods (AWBA Development Program)

    International Nuclear Information System (INIS)

    Kovscek, S.E.; Martin, S.E.

    1982-10-01

    ROBOT3 is a FORTRAN computer program which is used in conjunction with the CYGRO5 computer program to calculate the time-dependent inelastic bowing of a fuel rod using an incremental finite element method. The fuel rod is modeled as a viscoelastic beam whose material properties are derived as perturbations of the CYGRO5 axisymmetric model. Fuel rod supports are modeled as displacement, force, or spring-type nodal boundary conditions. The program input is described and a sample problem is given

  17. Newnes circuit calculations pocket book with computer programs

    CERN Document Server

    Davies, Thomas J

    2013-01-01

    Newnes Circuit Calculations Pocket Book: With Computer Programs presents equations, examples, and problems in circuit calculations. The text includes 300 computer programs that help solve the problems presented. The book is comprised of 20 chapters that tackle different aspects of circuit calculation. The coverage of the text includes dc voltage, dc circuits, and network theorems. The book also covers oscillators, phasors, and transformers. The text will be useful to electrical engineers and other professionals whose work involves electronic circuitry.

  18. An adaptive sampling scheme for deep-penetration calculation

    International Nuclear Information System (INIS)

    Wang, Ruihong; Ji, Zhicheng; Pei, Lucheng

    2013-01-01

    The deep-penetration problem has long been one of the important and difficult problems in shielding calculations with the Monte Carlo method. In this paper, an adaptive Monte Carlo method for shielding calculations, which uses the emission point as a sampling station, is investigated. The numerical results show that the adaptive method may improve the efficiency of shielding calculations and, to some degree, overcome the under-estimation problem that commonly arises in deep-penetration calculations

  19. Elementary function calculation programs for the central processor-6

    International Nuclear Information System (INIS)

    Dobrolyubov, L.V.; Ovcharenko, G.A.; Potapova, V.A.

    1976-01-01

    Subprograms for the calculation of elementary functions are given for the central processor (CP AS-6). A procedure is described for obtaining the calculation formulae that represent the elementary functions as polynomials. Standard programs for random numbers are also considered. All the programs described are based on the algorithms of the corresponding programs for the BESM computer

  20. Air and smear sample calculational tool for Fluor Hanford Radiological control

    International Nuclear Information System (INIS)

    BAUMANN, B.L.

    2003-01-01

    A spreadsheet calculation tool was developed to automate the calculations performed for determining the concentration of airborne radioactivity and for smear counting, as outlined in HNF-13536, Section 5.2.7, ''Analyzing Air and Smear Samples''. This document reports on the design and testing of the calculation tool. Radiological Control Technicians (RCTs) will save time and reduce handwriting and calculation errors by using an electronic form for documenting and calculating workplace air samples. Currently, RCTs collect an air-sample filter or perform a smear for surface contamination, survey the filter for gross alpha and beta/gamma radioactivity, and then use either a hand-calculation method or a calculator to determine the activity on the filter from the gross counts. The electronic form allows the RCT, with a few keystrokes, to document the individual's name, payroll number, gross counts, and instrument identifiers, and to produce an error-free record. This productivity gain is realized by the enhanced ability to perform mathematical calculations electronically (reducing errors) while at the same time documenting the air sample
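
As an illustration of the kind of arithmetic such a tool automates, a minimal sketch follows; the conversion constants and function names here are generic health-physics practice, assumed for illustration, and not necessarily the exact procedure of HNF-13536:

```python
def filter_activity_dpm(gross_cpm, bkg_cpm, efficiency):
    """Activity on a filter in disintegrations per minute, from the
    net count rate and the counting efficiency (counts per decay)."""
    net_cpm = max(gross_cpm - bkg_cpm, 0.0)
    return net_cpm / efficiency

def air_concentration_uci_per_ml(gross_cpm, bkg_cpm, efficiency,
                                 flow_lpm, sample_minutes):
    """Airborne concentration in microcuries per millilitre of air,
    given the sampler flow rate (L/min) and sampling time (min)."""
    dpm = filter_activity_dpm(gross_cpm, bkg_cpm, efficiency)
    volume_ml = flow_lpm * sample_minutes * 1000.0   # litres -> mL
    return dpm / (2.22e6 * volume_ml)                # 2.22e6 dpm per uCi
```

A spreadsheet cell would carry the same chain: net counts, divide by efficiency, divide by sampled air volume and the dpm-per-microcurie constant.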

  1. Monte Carlo sampling on technical parameters in criticality and burn-up-calculations

    International Nuclear Information System (INIS)

    Kirsch, M.; Hannstein, V.; Kilger, R.

    2011-01-01

    The increase in computing power over the recent years allows for the introduction of Monte Carlo sampling techniques for sensitivity and uncertainty analyses in criticality safety and burn-up calculations. With these techniques it is possible to assess the influence of a variation of the input parameters within their measured or estimated uncertainties on the final value of a calculation. The probabilistic result of a statistical analysis can thus complement the traditional method of figuring out both the nominal (best estimate) and the bounding case of the neutron multiplication factor (keff) in criticality safety analyses, e.g. by calculating the uncertainty of keff or tolerance limits. Furthermore, the sampling method provides a possibility to derive sensitivity information, i.e. it allows figuring out which of the uncertain input parameters contribute the most to the uncertainty of the system. The application of Monte Carlo sampling methods has become a common practice in both industry and research institutes. Within this approach, two main paths are currently under investigation: the variation of nuclear data used in a calculation and the variation of technical parameters such as manufacturing tolerances. This contribution concentrates on the latter case. The newly developed SUnCISTT (Sensitivities and Uncertainties in Criticality Inventory and Source Term Tool) is introduced. It defines an interface to the well-established GRS tool for sensitivity and uncertainty analyses, SUSA, which provides the necessary statistical methods for sampling-based analyses. The interfaced codes are programs that are used to simulate aspects of the nuclear fuel cycle, such as the criticality safety analysis sequence CSAS5 of the SCALE code system, developed by Oak Ridge National Laboratories, or the GRS burn-up system OREST. In the following, first the implementation of the SUnCISTT will be presented, then results of its application in an exemplary evaluation of the neutron
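
The tolerance limits mentioned above are commonly obtained via Wilks' formula, which fixes how many sampled code runs are needed so that the largest result bounds a given population fraction at a given confidence. The contribution does not spell the formula out, so the following is a standard-practice sketch rather than a description of SUnCISTT internals:

```python
from math import ceil, log

def wilks_one_sided(beta=0.95, gamma=0.95):
    """Smallest sample size n such that the maximum of n independent
    runs is a one-sided upper tolerance limit covering fraction `beta`
    of the output population with confidence `gamma`, i.e. the smallest
    n with 1 - beta**n >= gamma."""
    return ceil(log(1.0 - gamma) / log(beta))
```

The classic 95%/95% case gives n = 59, which is why sampling-based criticality and burn-up uncertainty studies so often run exactly 59 (or a multiple of 59) perturbed calculations.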

  2. Sampling of Stochastic Input Parameters for Rockfall Calculations and for Structural Response Calculations Under Vibratory Ground Motion

    International Nuclear Information System (INIS)

    M. Gross

    2004-01-01

    The purpose of this scientific analysis is to define the sampled values of stochastic (random) input parameters for (1) rockfall calculations in the lithophysal and nonlithophysal zones under vibratory ground motions, and (2) structural response calculations for the drip shield and waste package under vibratory ground motions. This analysis supplies: (1) Sampled values of ground motion time history and synthetic fracture pattern for analysis of rockfall in emplacement drifts in nonlithophysal rock (Section 6.3 of ''Drift Degradation Analysis'', BSC 2004 [DIRS 166107]); (2) Sampled values of ground motion time history and rock mechanical properties category for analysis of rockfall in emplacement drifts in lithophysal rock (Section 6.4 of ''Drift Degradation Analysis'', BSC 2004 [DIRS 166107]); (3) Sampled values of ground motion time history and metal-to-metal and metal-to-rock friction coefficient for analysis of waste package and drip shield damage due to vibratory motion in ''Structural Calculations of Waste Package Exposed to Vibratory Ground Motion'' (BSC 2004 [DIRS 167083]) and in ''Structural Calculations of Drip Shield Exposed to Vibratory Ground Motion'' (BSC 2003 [DIRS 163425]). The sampled values are indices representing the number of ground motion time histories, number of fracture patterns, and rock mass properties categories. These indices are translated into actual values within the respective analysis and model reports or calculations. This report identifies the uncertain parameters and documents the sampled values for these parameters. The sampled values are determined by GoldSim V6.04.007 [DIRS 151202] calculations using appropriate distribution types and parameter ranges. No software development or model development was required for these calculations. The calculation of the sampled values allows parameter uncertainty to be incorporated into the rockfall and structural response calculations that support development of the seismic scenario for the

  3. GoSam. A program for automated one-loop calculations

    Energy Technology Data Exchange (ETDEWEB)

    Cullen, G. [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany); Greiner, N.; Heinrich, G.; Reiter, T. [Max-Planck-Institut fuer Physik, Muenchen (Germany); Luisoni, G. [Durham Univ. (United Kingdom). Inst. for Particle Physics Phenomenology; Mastrolia, P. [Max-Planck-Institut fuer Physik, Muenchen (Germany); Padua Univ. (Italy). Dipt. di Fisica; Ossola, G. [City Univ. of New York, NY (United States). New York City College of Technology; Tramontano, F. [European Organization for Nuclear Research (CERN), Geneva (Switzerland)

    2011-11-15

    The program package GoSam is presented which aims at the automated calculation of one-loop amplitudes for multi-particle processes. The amplitudes are generated in terms of Feynman diagrams and can be reduced using either D-dimensional integrand-level decomposition or tensor reduction, or a combination of both. GoSam can be used to calculate one-loop corrections to both QCD and electroweak theory, and model files for theories Beyond the Standard Model can be linked as well. A standard interface to programs calculating real radiation is also included. The flexibility of the program is demonstrated by various examples. (orig.)

  4. GoSam. A program for automated one-loop calculations

    International Nuclear Information System (INIS)

    Cullen, G.; Greiner, N.; Heinrich, G.; Reiter, T.; Luisoni, G.

    2011-11-01

    The program package GoSam is presented which aims at the automated calculation of one-loop amplitudes for multi-particle processes. The amplitudes are generated in terms of Feynman diagrams and can be reduced using either D-dimensional integrand-level decomposition or tensor reduction, or a combination of both. GoSam can be used to calculate one-loop corrections to both QCD and electroweak theory, and model files for theories Beyond the Standard Model can be linked as well. A standard interface to programs calculating real radiation is also included. The flexibility of the program is demonstrated by various examples. (orig.)

  5. GoSam: A program for automated one-loop calculations

    International Nuclear Information System (INIS)

    Cullen, G; Greiner, N; Heinrich, G; Mastrolia, P; Reiter, T; Luisoni, G; Ossola, G; Tramontano, F

    2012-01-01

    The program package GoSam is presented which aims at the automated calculation of one-loop amplitudes for multi-particle processes. The amplitudes are generated in terms of Feynman diagrams and can be reduced using either D-dimensional integrand-level decomposition or tensor reduction, or a combination of both. GoSam can be used to calculate one-loop corrections to both QCD and electroweak theory, and model files for theories Beyond the Standard Model can be linked as well. A standard interface to programs calculating real radiation is also included. The flexibility of the program is demonstrated by various examples.

  6. An improved correlated sampling method for calculating correction factor of detector

    International Nuclear Information System (INIS)

    Wu Zhen; Li Junli; Cheng Jianping

    2006-01-01

    For a small detector lying inside a bulk medium, there are two problems in calculating detector correction factors: the detector is too small for enough particles to reach it and collide within it, and the ratio of the two quantities is not accurate enough. The method discussed in this paper, which combines correlated sampling with modified particle-collision auto-importance sampling and has been implemented on the MCNP-4C platform, can solve both problems. In addition, three other variance reduction techniques are each combined with correlated sampling to calculate a simple model of the detector correction factors. The results prove that, although all the variance reduction techniques combined with correlated sampling improve the calculating efficiency, the combination of modified particle-collision auto-importance sampling with correlated sampling is the most efficient one. (authors)

  7. A program for the Calculation of the Correlated Colour Temperature. Application for Characterising Colour Changes in Glasses

    International Nuclear Information System (INIS)

    Garcia Rosillo, F.; Balenzategui, J. L.

    2000-01-01

    The purpose of this work is to present a program for the calculation of the Correlated Colour Temperature (CCT) of any source of radiation. The methodology for calculating the colour coordinates and the corresponding CCT value of any light source is briefly reviewed. Sample program codes, including one to obtain the colour coordinates of blackbody radiators at different temperatures, have also been listed. This will allow engineers and researchers to calculate and obtain adequate solutions for their own illuminance problems. As an application example, the change in CCT values and colour coordinates of a reference spectrum when passing through semitransparent solar photovoltaic modules designed for building-integration applications has been studied. This is used to evaluate the influence on the visual comfort of the building's inner rooms. Several samples of different glass models used as covers in photovoltaic modules have been tested. Results show that none of the samples tested substantially modifies the initial characteristics of the sunlight, as otherwise expected. (Author) 5 refs
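
A widely used shortcut for the CCT computation is McCamy's cubic approximation from CIE 1931 chromaticity coordinates. The article does not state which method its program uses, so this is only an illustrative stand-in, valid roughly between 2000 K and 12500 K:

```python
def mccamy_cct(x, y):
    """Correlated colour temperature (K) from CIE 1931 chromaticity
    coordinates (x, y), using McCamy's cubic approximation
    CCT = -449 n^3 + 3525 n^2 - 6823.3 n + 5520.33,
    with n = (x - 0.3320) / (y - 0.1858)."""
    n = (x - 0.3320) / (y - 0.1858)
    return -449.0 * n**3 + 3525.0 * n**2 - 6823.3 * n + 5520.33
```

Feeding in the chromaticity of illuminant D65 (x = 0.31271, y = 0.32902) recovers a CCT near the nominal 6504 K, and illuminant A (x = 0.44757, y = 0.40745) comes out near 2856 K.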

  8. A versatile program for the calculation of linear accelerator room shielding.

    Science.gov (United States)

    Hassan, Zeinab El-Taher; Farag, Nehad M; Elshemey, Wael M

    2018-03-22

    This work aims at designing a computer program to calculate the necessary amount of shielding for a given or proposed linear accelerator room design in radiotherapy. The program (Shield Calculation in Radiotherapy, SCR) has been developed using Microsoft Visual Basic. It applies the treatment room shielding calculations of NCRP report no. 151 to calculate proper shielding thicknesses for a given linear accelerator treatment room design. The program is composed of six main user-friendly interfaces. The first enables the user to upload their choice of treatment room design and to measure the distances required for shielding calculations. The second interface enables the user to calculate the primary barrier thickness in case of three-dimensional conventional radiotherapy (3D-CRT), intensity modulated radiotherapy (IMRT) and total body irradiation (TBI). The third interface calculates the required secondary barrier thickness due to both scattered and leakage radiation. The fourth and fifth interfaces provide a means to calculate the photon dose equivalent for low and high energy radiation, respectively, in door and maze areas. The sixth interface enables the user to calculate the skyshine radiation for photons and neutrons. The SCR program has been successfully validated, precisely reproducing all of the calculated examples presented in NCRP report no. 151 in a simple and fast manner. Moreover, it easily performed the same calculations for a test design that was also calculated manually, and produced the same results. The program includes a new and important feature that is the ability to calculate required treatment room thickness in case of IMRT and TBI. It is characterised by simplicity, precision, data saving, printing and retrieval, in addition to providing a means for uploading and testing any proposed treatment room shielding design. The SCR program provides comprehensive, simple, fast and accurate room shielding calculations in radiotherapy.
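
The primary-barrier calculation of NCRP report no. 151 that the SCR program implements reduces to a required transmission factor and a count of tenth-value layers. A minimal sketch follows; the variable names and the worked numbers in the test are illustrative, not taken from the article:

```python
from math import log10

def primary_barrier_thickness(P, d, W, U, T, tvl1, tvle):
    """Primary-barrier thickness by the NCRP 151 transmission method.
    P: shielding design goal (Sv/wk), d: target-to-point distance (m),
    W: workload (Gy/wk), U: use factor, T: occupancy factor,
    tvl1/tvle: first and equilibrium tenth-value layers of the barrier
    material (same unit as the returned thickness)."""
    B = P * d**2 / (W * U * T)      # required transmission factor
    n = log10(1.0 / B)              # number of tenth-value layers
    return tvl1 + (n - 1.0) * tvle
```

For instance, a controlled-area goal of 0.02 mSv/wk at 6 m from a 6 MV machine with W = 450 Gy/wk, U = 0.25, T = 1 and concrete TVLs of 37/33 cm requires roughly 1.75 m of concrete, in line with typical bunker walls.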

  9. Finite difference program for calculating hydride bed wall temperature profiles

    International Nuclear Information System (INIS)

    Klein, J.E.

    1992-01-01

    A QuickBASIC finite difference program was written for calculating one-dimensional temperature profiles in up to two media with flat, cylindrical, or spherical geometries. The development of the program was motivated by the need to calculate maximum temperature differences across the walls of the tritium metal hydride beds for thermal fatigue analysis. The purpose of this report is to document the equations and the computer program used to calculate transient wall temperatures in stainless steel hydride vessels
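
The explicit finite-difference update underlying this kind of one-dimensional wall-temperature calculation can be sketched as follows, for the flat-wall case with fixed surface temperatures (an assumption for illustration; the report's QuickBASIC program also handles cylindrical and spherical geometries and two media):

```python
def step_slab(T, alpha, dx, dt, t_left, t_right):
    """One explicit finite-difference time step of 1-D heat conduction
    in a flat wall with fixed surface temperatures. T is the list of
    nodal temperatures; alpha is thermal diffusivity. The explicit
    scheme is stable only for Fourier number Fo = alpha*dt/dx**2 <= 0.5."""
    fo = alpha * dt / dx**2
    assert fo <= 0.5, "explicit scheme unstable for Fo > 0.5"
    new = T[:]
    new[0], new[-1] = t_left, t_right
    for i in range(1, len(T) - 1):
        new[i] = T[i] + fo * (T[i + 1] - 2.0 * T[i] + T[i - 1])
    return new
```

Marching such steps until the profile stops changing reproduces the steady-state linear profile across the wall, and the largest transient difference between the two surfaces is what feeds a thermal fatigue analysis.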

  10. Program for calculating multi-component high-intense ion beam transport

    International Nuclear Information System (INIS)

    Kazarinov, N.Yu.; Prejzendorf, V.A.

    1985-01-01

    The CANAL program for calculating the transport of high-intensity beams containing ions with different charges in a channel consisting of dipole magnets and quadrupole lenses is described. The calculation is based on equations, derived by the method of distribution-function moments, that describe the variations in the coordinates of the local mass centres and the r.m.s. transverse sizes of beams with different charges. The program is adapted for the CDC-6500 and SM-4 computers. It operates in an interactive mode, permitting the user to vary the parameters of any channel element and quickly choose the optimum version in the course of the calculation. The calculation time on the CDC-6500 computer for a 30-40 m channel at an integration step of 1 cm is about 1 min. The program is used for calculating the channel for injection of a uranium ion beam from the collective accelerator into the heavy-ion synchrotron

  11. Failure Probability Calculation Method Using Kriging Metamodel-based Importance Sampling Method

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Seunggyu [Korea Aerospace Research Institute, Daejeon (Korea, Republic of); Kim, Jae Hoon [Chungnam Nat’l Univ., Daejeon (Korea, Republic of)

    2017-05-15

    The kernel density was determined from sampling points obtained in a Markov chain simulation and was taken as the importance sampling function. A Kriging metamodel was constructed in more detail in the vicinity of the limit state. The failure probability was then calculated by importance sampling performed on the Kriging metamodel. A pre-existing method was modified to obtain more sampling points for the kernel density in the vicinity of the limit state, and a stable numerical method was proposed to find a parameter of the kernel density. To assess the completeness of the Kriging metamodel, the possible change in the calculated failure probability due to the uncertainty of the Kriging metamodel was also calculated.
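
The importance-sampling step at the core of this approach, stripped of the Kriging metamodel and the kernel density, can be illustrated on a toy reliability problem with a known answer, P(X > 3) for a standard normal variable (the shifted sampling density here is an assumption chosen for the illustration):

```python
import random
from statistics import NormalDist

def failure_prob_importance(n=100_000, seed=1):
    """Importance-sampling estimate of Pf = P(X > 3) for X ~ N(0,1).
    Samples are drawn from the shifted density h = N(3,1), centred on
    the limit state, and each failing sample is reweighted by the
    likelihood ratio f(x)/h(x) to keep the estimator unbiased."""
    rng = random.Random(seed)
    f, h = NormalDist(0.0, 1.0), NormalDist(3.0, 1.0)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(3.0, 1.0)
        if x > 3.0:                      # indicator of failure g(x) <= 0
            total += f.pdf(x) / h.pdf(x)
    return total / n
```

Plain Monte Carlo would need millions of samples to see this rare event; the shifted density puts about half the samples in the failure region, which is exactly the efficiency gain the Kriging-plus-kernel-density construction pursues for expensive limit-state functions.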

  12. Acceleration of intensity-modulated radiotherapy dose calculation by importance sampling of the calculation matrices

    International Nuclear Information System (INIS)

    Thieke, Christian; Nill, Simeon; Oelfke, Uwe; Bortfeld, Thomas

    2002-01-01

    In inverse planning for intensity-modulated radiotherapy, the dose calculation is a crucial element limiting both the maximum achievable plan quality and the speed of the optimization process. One way to integrate accurate dose calculation algorithms into inverse planning is to precalculate the dose contribution of each beam element to each voxel for unit fluence. These precalculated values are stored in a big dose calculation matrix. The dose calculation during the iterative optimization process then consists merely of matrix look-up and multiplication with the actual fluence values. However, because the dose calculation matrix can become very large, this ansatz requires a lot of computer memory and is still very time consuming, making it impractical for clinical routine without further modifications. In this work we present a new method to significantly reduce the number of entries in the dose calculation matrix. The method utilizes the fact that a photon pencil beam has a rapid radial dose falloff and has very small dose values for the most part. In this low-dose part of the pencil beam, the dose contribution to a voxel is only integrated into the dose calculation matrix with a certain probability. Normalization with the reciprocal of this probability preserves the total energy, even though many matrix elements are omitted. Three probability distributions were tested to find the most accurate one for a given memory size. The sampling method is compared with the use of a fully filled matrix and with the well-known method of just cutting off the pencil beam at a certain lateral distance. A clinical example of a head and neck case is presented. It turns out that a sampled dose calculation matrix with only 1/3 of the entries of the fully filled matrix does not sacrifice the quality of the resulting plans, whereas the cutoff method results in a suboptimal treatment plan.
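
The probabilistic thinning idea, keep a low-dose entry with probability p and scale the kept value by 1/p so the expected total energy is preserved, can be sketched as follows (a simplified illustration with a flat keep-probability, not the authors' code or their tested probability distributions):

```python
import random

def thin_pencil_beam(doses, threshold, p, seed=0):
    """Sparsify a list of per-voxel dose contributions from one pencil
    beam: entries at or above `threshold` are always kept; entries
    below it are kept only with probability p and scaled by 1/p, so
    the expected sum (total energy) equals that of the full list."""
    rng = random.Random(seed)
    kept = []
    for d in doses:
        if d >= threshold:
            kept.append(d)
        elif rng.random() < p:
            kept.append(d / p)
    return kept
```

With p = 1/4 for the low-dose tail, roughly three quarters of the small matrix entries disappear while the summed dose stays unbiased, which is the memory saving the paper exploits.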

  13. Calculation of the counting efficiency for extended sources

    International Nuclear Information System (INIS)

    Korun, M.; Vidmar, T.

    2002-01-01

    A computer program for the calculation of efficiency calibration curves for extended samples counted on gamma- and X-ray spectrometers is described. The program calculates efficiency calibration curves for homogeneous cylindrical samples placed coaxially with the symmetry axis of the detector. The method of calculation is based on integration, over the sample volume, of the efficiencies for point sources measured in free space on an equidistant grid of points. The attenuation of photons within the sample is taken into account using the self-attenuation function calculated with a two-dimensional detector model. (author)

  14. Calculation of sample problems related to two-phase flow blowdown transients in pressure relief piping of a PWR pressurizer

    International Nuclear Information System (INIS)

    Shin, Y.W.; Wiedermann, A.H.

    1984-02-01

    A method was published, based on the integral method of characteristics, by which the junction and boundary conditions needed in the computation of a flow in a piping network can be accurately formulated. This formulation of the junction and boundary conditions, together with the two-step Lax-Wendroff scheme, is used in a computer program; the program, in turn, is used here to calculate sample problems related to the blowdown transient of a two-phase flow in the piping network downstream of a PWR pressurizer. Independent, nearly exact analytical solutions are also obtained for the sample problems. Comparison of the results obtained by the hybrid numerical technique with the analytical solutions showed generally good agreement. The good numerical accuracy shown by the results suggests that the hybrid numerical technique is suitable for both benchmark and design calculations of PWR pressurizer blowdown transients

  15. Calculation of cosmic ray induced single event upsets: Program CRUP (Cosmic Ray Upset Program)

    Science.gov (United States)

    Shapiro, P.

    1983-09-01

    This report documents PROGRAM CRUP, COSMIC RAY UPSET PROGRAM. The computer program calculates cosmic ray induced single-event error rates in microelectronic circuits exposed to several representative cosmic-ray environments.

  16. RADSHI: shielding calculation program for different geometries sources

    International Nuclear Information System (INIS)

    Gelen, A.; Alvarez, I.; Lopez, H.; Manso, M.

    1996-01-01

    A computer code written in Pascal for the IBM PC is described. The program calculates the optimum thickness of a slab shield for sources of different geometries. The Point Kernel Method is employed, which enables the ionizing radiation flux density to be obtained. The calculation takes into account the possibility of self-absorption in the source. The air kerma rate for gamma radiation is determined, and the shield is obtained from the concept of attenuation length through the equivalent attenuation length. Scattering and exponential attenuation inside the shield material are considered in the program. The shield materials can be concrete, water, iron, or lead. The program also calculates the shield for a point isotropic neutron source, using paraffin, concrete, or water as shield materials. (authors). 13 refs

  17. A PC-program for the calculation of neutron flux and element contents using the k0-method of neutron activation analysis

    International Nuclear Information System (INIS)

    Boulyga, E.G.; Boulyga, S.F.

    2000-01-01

    A computer program is described which calculates the induced activities of isotopes after irradiation in a known neutron field, the thermal and epithermal neutron fluxes from the measured induced activities and the nuclear data of 2-4 monitor nuclides, as well as the element concentrations in samples irradiated together with the monitors. The program was developed for operation in Windows 3.1 (or higher). The application of the program to neutron activation analysis allows the experimental procedure to be simplified and the analysis time to be reduced. The program was tested by measuring different types of standard reference materials at the FRJ-2 (Research Centre Juelich, Germany) and Triga (University of Mainz, Germany) reactors. A comparison of the neutron flux parameters calculated by this program with those calculated by a VAX program developed at the Research Centre Juelich was performed. The results of the testing appear satisfactory. (author)

  18. Mass: Fortran program for calculating mass-absorption coefficients

    International Nuclear Information System (INIS)

    Nielsen, Aa.; Svane Petersen, T.

    1980-01-01

    Determination of mass-absorption coefficients is an important and time-consuming part of the arithmetic in the x-ray analysis of trace elements, and in the course of time different methods have been used. The program MASS calculates the mass-absorption coefficients from a given major-element analysis at the x-ray wavelengths normally used in trace element determinations, and lists the chemical analysis together with the mass-absorption coefficients. The program is coded in FORTRAN IV and is operational on the IBM 370/165, the UNIVAC 1110, and the PDP 11/05. (author)
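
The core of such a calculation is the standard mass-fraction mixture rule, (mu/rho)_mix = sum_i w_i (mu/rho)_i, applied at each analytical wavelength. A sketch follows; the element symbols and coefficient values in the usage test are illustrative placeholders, not tabulated data:

```python
def mass_attenuation_mixture(weight_fractions, mu_rho):
    """Mass-attenuation coefficient of a mixture (cm^2/g) from the
    mass fractions w_i of its constituent elements and their
    elemental coefficients (mu/rho)_i at one wavelength.
    Fractions are renormalized in case they do not sum exactly to 1."""
    total_w = sum(weight_fractions.values())
    return sum(w * mu_rho[el]
               for el, w in weight_fractions.items()) / total_w
```

A program like MASS simply repeats this sum for every wavelength of interest and prints the resulting table alongside the chemical analysis.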

  19. Weighted curve-fitting program for the HP 67/97 calculator

    International Nuclear Information System (INIS)

    Stockli, M.P.

    1983-01-01

    The HP 67/97 calculator provides in its standard equipment a curve-fit program for linear, logarithmic, exponential and power functions that is quite useful and popular. However, in more sophisticated applications, proper weights for data are often essential. For this purpose a program package was created which is very similar to the standard curve-fit program but which includes the weights of the data for proper statistical analysis. This allows accurate calculation of the uncertainties of the fitted curve parameters as well as the uncertainties of interpolations or extrapolations, or optionally the uncertainties can be normalized with chi-square. The program is very versatile and allows one to perform quite difficult data analysis in a convenient way with the pocket calculator HP 67/97
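
The weighted linear fit with parameter uncertainties that such a package computes corresponds to the standard weighted least-squares formulas; the following is a sketch under that assumption, for the linear case only (the HP package also covers logarithmic, exponential, and power functions via the usual variable transformations):

```python
from math import sqrt

def weighted_line_fit(x, y, sigma):
    """Weighted least-squares fit of y = a + b*x with weights
    w_i = 1/sigma_i^2. Returns (a, b, sigma_a, sigma_b), where the
    parameter uncertainties follow from the normal-equation sums."""
    w = [1.0 / s**2 for s in sigma]
    S   = sum(w)
    Sx  = sum(wi * xi for wi, xi in zip(w, x))
    Sy  = sum(wi * yi for wi, yi in zip(w, y))
    Sxx = sum(wi * xi * xi for wi, xi in zip(w, x))
    Sxy = sum(wi * xi * yi for wi, xi, yi in zip(w, x, y))
    delta = S * Sxx - Sx * Sx
    a = (Sxx * Sy - Sx * Sxy) / delta    # intercept
    b = (S * Sxy - Sx * Sy) / delta      # slope
    return a, b, sqrt(Sxx / delta), sqrt(S / delta)
```

Points with large error bars get small weights and barely pull the line, which is precisely what the unweighted standard curve-fit program cannot do; the returned sigma_a and sigma_b also propagate directly into uncertainties of interpolated or extrapolated values.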

  20. Cell verification of parallel burnup calculation program MCBMPI based on MPI

    International Nuclear Information System (INIS)

    Yang Wankui; Liu Yaoguang; Ma Jimin; Wang Guanbo; Yang Xin; She Ding

    2014-01-01

    The parallel burnup calculation program MCBMPI was developed. The program was modularized: the parallel MCNP5 program MCNP5MPI was employed as the neutron transport calculation module, and a composite of three solution methods was used to solve the burnup equations, i.e. the matrix exponential technique, the TTA analytical solution, and Gauss-Seidel iteration. An MPI parallel zone-decomposition strategy was adopted in the program. The program system consists only of MCNP5MPI and the burnup subroutine; the latter performs three main functions, i.e. zone decomposition, nuclide transfer and decay, and data exchange with MCNP5MPI. The program was verified with the pressurized water reactor (PWR) cell burnup benchmark. The results show that the program is applicable to burnup calculations of multiple zones, and that the computational efficiency can be significantly improved with the development of computer hardware. (authors)

  1. Simple Calculation Programs for Biology Methods in Molecular ...

    Indian Academy of Sciences (India)

    Simple Calculation Programs for Biology Methods in Molecular Biology. GMAP: a program for mapping potential restriction sites. RE sites in ambiguous and non-ambiguous DNA sequences; minimum number of silent mutations required for introducing an RE site; Set ...

  2. Building an IDE for the Calculational Derivation of Imperative Programs

    Directory of Open Access Journals (Sweden)

    Dipak L. Chaudhari

    2015-08-01

    In this paper, we describe an IDE called CAPS (Calculational Assistant for Programming from Specifications) for the interactive, calculational derivation of imperative programs. In building CAPS, our aim has been to make the IDE accessible to non-experts while retaining the overall flavor of the pen-and-paper calculational style. We discuss the overall architecture of the CAPS system, the main features of the IDE, the GUI design, and the trade-offs involved.

  3. Calculation program development for spinning reserve

    International Nuclear Information System (INIS)

    1979-01-01

This study concerns the optimal holding and optimal operation of spinning reserve. It covers the purpose and contents of the study, an introduction to spinning reserve electricity, the special characteristics of spinning reserve power, the calculation results, an analysis of the method for limiting optimum load, the calculation of spinning reserve requirements, and an analysis of system stability measurement, together with a summary, the purpose of the analysis, the causes and impacts of accidents, the basics of spinning reserve measurement, and conclusions. References are included on the design of the spinning reserve power program and on the use of, and trends in, spinning reserve power in Korea.

  4. Evaluation of analytical results on DOE Quality Assessment Program Samples

    International Nuclear Information System (INIS)

    Jaquish, R.E.; Kinnison, R.R.; Mathur, S.P.; Sastry, R.

    1985-01-01

Criteria were developed for evaluating the participants' analytical results in the DOE Quality Assessment Program (QAP). Historical data from previous QAP studies were analyzed using descriptive statistical methods to determine the interlaboratory precision that had been attained. Performance criteria used in other, similar programs were also reviewed. Using these data, precision values and control limits were recommended for each type of analysis performed in the QA program. Results of the analyses performed by the QAP participants on the November 1983 samples were statistically analyzed and evaluated. The Environmental Measurements Laboratory (EML) values were used as the known values, and 3-sigma precision values were used as control limits. Results were submitted by 26 participating laboratories for 49 different radionuclide-media combinations. The participants reported 419 results, of which 350, or 84%, were within control limits. Special attention was given to the data from gamma spectral analysis of air filters and water samples. Both normal probability and box plots were prepared for each nuclide to help evaluate the distribution of the data. Results that were outside the expected range were identified, and it was suggested that laboratories check the calculations and procedures behind these results
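The 3-sigma control-limit check described above reduces to a comparison of each reported result against the EML known value. A minimal sketch (the data tuples are hypothetical, not actual QAP results):

```python
def within_control(reported, known, sigma, k=3.0):
    """True if a reported result lies inside known +/- k*sigma."""
    return abs(reported - known) <= k * sigma

# hypothetical (reported, EML known value, 1-sigma precision) triples
results = [(10.2, 10.0, 0.2), (12.9, 10.0, 0.5), (9.1, 10.0, 0.4)]
flags = [within_control(r, m, s) for r, m, s in results]
in_control = sum(flags)
print(in_control, len(results))  # 2 3  (two of three within limits)
```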

  5. MONO: A program to calculate synchrotron beamline monochromator throughputs

    International Nuclear Information System (INIS)

    Chapman, D.

    1989-01-01

A set of Fortran programs has been developed to calculate the expected throughput of x-ray monochromators with a filtered synchrotron source; it is applicable to bending magnet and wiggler beamlines. These programs calculate the normalized throughput and filtered synchrotron spectrum passed by multiple-element, flat, unfocussed monochromator crystals of the Bragg or Laue type as a function of incident beam divergence, energy and polarization. The reflected and transmitted beam of each crystal is calculated using the dynamical theory of diffraction. Multiple crystal arrangements in the dispersive and non-dispersive modes are allowed, as well as crystal asymmetry and energy or angle offsets. Filters or windows of arbitrary elemental composition may be used to filter the incident synchrotron beam. This program should be useful for predicting the intensities available from many beamline configurations, as well as for assisting in the design of new monochromator and analyzer systems. 6 refs., 3 figs
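Monochromator throughput calculations of this kind start from Bragg's law, n*lambda = 2d*sin(theta). A minimal sketch relating photon energy to Bragg angle; the Si(111) d-spacing is an assumed illustrative constant, not a value taken from MONO:

```python
import math

HC_KEV_A = 12.39842   # h*c in keV * Angstrom
D_SI111 = 3.1356      # Si(111) d-spacing in Angstrom (assumed value)

def bragg_angle_deg(energy_kev, d=D_SI111, n=1):
    """First-order Bragg angle (degrees) for a given photon energy,
    from n*lambda = 2*d*sin(theta) with lambda = hc/E."""
    s = n * HC_KEV_A / (2.0 * d * energy_kev)
    return math.degrees(math.asin(s))  # raises ValueError if E is too low

print(round(bragg_angle_deg(10.0), 3))  # ~11.403 degrees at 10 keV
```

Higher energies give smaller Bragg angles, which is why a scan in energy is a scan in crystal angle.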

  6. 50 CFR 222.404 - Observer program sampling.

    Science.gov (United States)

    2010-10-01

    ... 50 Wildlife and Fisheries 7 2010-10-01 2010-10-01 false Observer program sampling. 222.404 Section 222.404 Wildlife and Fisheries NATIONAL MARINE FISHERIES SERVICE, NATIONAL OCEANIC AND ATMOSPHERIC... Requirement § 222.404 Observer program sampling. (a) During the program design, NMFS would be guided by the...

  7. Calculation of Complexity Costs – An Approach for Rationalizing a Product Program

    DEFF Research Database (Denmark)

    Hansen, Christian Lindschou; Mortensen, Niels Henrik; Hvam, Lars

    2012-01-01

This paper proposes an operational method for rationalizing a product program based on the calculation of complexity costs. The method takes its starting point in the calculation of complexity costs at the product program level. This is done throughout the value chain, ranging from component inventories at the factory sites all the way to the distribution of finished goods from distribution centers to the customers. The method proposes a step-wise approach including the analysis, quantification and allocation of product program complexity costs by means of identifying a number ... of a product program. These findings represent an improved decision basis for the planning of reactive and proactive initiatives for rationalizing a product program.

  8. TEMP-M program for thermal-hydraulic calculation of fast reactor fuel assemblies

    International Nuclear Information System (INIS)

    Bogoslovskaya, C.P.; Sorokin, A.P.; Tikhomirov, B.B.; Titov, P.A.; Ushakov, P.A.

    1983-01-01

The TEMP-M program (Fortran, BESM-6 computer) for thermal-hydraulic calculation of fast reactor fuel assemblies is described. As an example, results of the calculation of the temperature field in a 127-fuel-element assembly of the BN-600 reactor, performed with the TEMP-M program, are considered. The algorithm realized in the program enables calculation of the distributions of coolant heating, fuel element temperature (over perimeter and length) and assembly shell temperature. The distribution of coolant heating in the assembly channels is determined from the solution of a system of balance equations which accounts for interchannel exchange and nonadiabatic conditions on the assembly shell. The TEMP-M program provides the information necessary for calculating the strength and serviceability of fast reactor core elements, and serves as an effective instrument for design calculations of reactor cores and for analyzing the thermal-hydraulic characteristics of fuel assemblies in operating reactors

  9. Dose Rate Calculations for Rotary Mode Core Sampling Exhauster

    CERN Document Server

    Foust, D J

    2000-01-01

    This document provides the calculated estimated dose rates for three external locations on the Rotary Mode Core Sampling (RMCS) exhauster HEPA filter housing, per the request of Characterization Field Engineering.

  10. Dose Rate Calculations for Rotary Mode Core Sampling Exhauster

    International Nuclear Information System (INIS)

    FOUST, D.J.

    2000-01-01

    This document provides the calculated estimated dose rates for three external locations on the Rotary Mode Core Sampling (RMCS) exhauster HEPA filter housing, per the request of Characterization Field Engineering

  11. Development and benchmark verification of a parallelized Monte Carlo burnup calculation program MCBMPI

    International Nuclear Information System (INIS)

    Yang Wankui; Liu Yaoguang; Ma Jimin; Yang Xin; Wang Guanbo

    2014-01-01

MCBMPI, a parallelized burnup calculation program, was developed. The program is modularized. The neutron transport calculation module employs the parallelized MCNP5 program MCNP5MPI, and the burnup calculation module employs ORIGEN2, with an MPI parallel zone decomposition strategy. The program system consists only of MCNP5MPI and an interface subroutine. The interface subroutine performs three main functions, i.e. zone decomposition, nuclide transfer and decay, and data exchange with MCNP5MPI. The program was verified with the Pressurized Water Reactor (PWR) cell burnup benchmark; the results showed that the program can be applied to burnup calculations of multiple zones, and that computation efficiency can be significantly improved with the development of computer hardware. (authors)
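The TTA analytical solution used in burnup codes of this kind is the classical Bateman solution of a linear decay/transmutation chain. A minimal sketch under the assumption of distinct decay constants (the function name and interface are illustrative, not MCBMPI's):

```python
import math

def bateman(n1_0, lambdas, t):
    """Analytical (TTA-style) solution of a linear chain
    N1 -> N2 -> ... with distinct decay constants `lambdas` (1/s).
    Returns the amount of the LAST chain member at time t,
    given n1_0 atoms of the first member at t = 0."""
    n = len(lambdas)
    coeff = n1_0
    for lam in lambdas[:-1]:       # product of the feeding rates
        coeff *= lam
    total = 0.0
    for i in range(n):
        denom = 1.0
        for j in range(n):
            if j != i:
                denom *= lambdas[j] - lambdas[i]
        total += math.exp(-lambdas[i] * t) / denom
    return coeff * total

# a one-member chain reduces to plain exponential decay
print(bateman(1.0, [0.1], 5.0))  # = exp(-0.5) ~ 0.6065
```

A full burnup solver sums such chain solutions over all linearized paths through the nuclide network; the matrix exponential route solves the same system dN/dt = A*N in one step.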

  12. 40 CFR Appendix III to Part 600 - Sample Fuel Economy Label Calculation

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 29 2010-07-01 2010-07-01 false Sample Fuel Economy Label Calculation...) ENERGY POLICY FUEL ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF MOTOR VEHICLES Pt. 600, App. III Appendix III to Part 600—Sample Fuel Economy Label Calculation Suppose that a manufacturer called Mizer...

  13. Radioactive cloud dose calculations

    International Nuclear Information System (INIS)

    Healy, J.W.

    1984-01-01

Radiological dosage principles, as well as methods for calculating external and internal dose rates following dispersion and deposition of radioactive materials in the atmosphere, are described. Emphasis has been placed on analytical solutions that are appropriate for hand calculations. In addition, methods for calculating dose rates from ingestion are discussed. Brief descriptions of several computer programs are included for information on radionuclides. There has been no attempt to be comprehensive; only a sampling of programs has been selected to illustrate the variety available
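The analytical hand-calculation solutions referred to above are typically of the Gaussian plume type. A minimal sketch of the standard ground-reflection form (symbols, units and the example numbers are generic assumptions, not taken from the report):

```python
import math

def plume_conc(q, u, sigma_y, sigma_z, h, y=0.0, z=0.0):
    """Gaussian plume concentration with ground reflection.
    q: source strength, u: wind speed, h: effective release height;
    sigma_y/sigma_z: dispersion parameters at the receptor distance;
    y: crosswind offset, z: receptor height."""
    lateral = math.exp(-y**2 / (2.0 * sigma_y**2))
    vertical = (math.exp(-(z - h)**2 / (2.0 * sigma_z**2)) +
                math.exp(-(z + h)**2 / (2.0 * sigma_z**2)))
    return q / (2.0 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

# ground-level centerline concentration for an assumed release
c = plume_conc(q=1.0, u=5.0, sigma_y=50.0, sigma_z=25.0, h=30.0)
```

At ground level on the centerline this collapses to Q/(pi*u*sigma_y*sigma_z) * exp(-h^2/(2*sigma_z^2)), which is the usual hand-calculation form.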

  14. Computer program for calculation of ideal gas thermodynamic data

    Science.gov (United States)

    Gordon, S.; Mc Bride, B. J.

    1968-01-01

Computer program calculates ideal gas thermodynamic properties for any species for which molecular constant data are available. Partition functions and their derivatives, from formulas based on statistical mechanics, are provided by the program, which is written in FORTRAN 4 and MAP.
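The statistical-mechanics route from molecular constants to thermodynamic properties can be sketched for a single harmonic vibrational mode, whose partition function yields the standard contribution Cv/R = x^2 * e^x / (e^x - 1)^2 with x = theta_vib/T. The function below is illustrative, not the NASA program:

```python
import math

def cv_vib_over_R(theta_vib, T):
    """Vibrational heat-capacity contribution Cv/R of one harmonic
    oscillator mode, derived from its partition function
    q = 1/(1 - exp(-theta_vib/T)); theta_vib = h*nu/k in kelvin."""
    x = theta_vib / T
    return x * x * math.exp(x) / (math.exp(x) - 1.0)**2

# N2 has theta_vib ~ 3395 K: the mode is mostly frozen out at 300 K
print(round(cv_vib_over_R(3395.0, 300.0), 4))   # 0.0016
print(round(cv_vib_over_R(3395.0, 3000.0), 4))  # 0.8998
```

As T grows the contribution approaches the classical limit Cv/R = 1 per mode.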

  15. Computer program 'TRIO' for third order calculation of ion trajectory

    International Nuclear Information System (INIS)

    Matsuo, Takekiyo; Matsuda, Hisashi; Fujita, Yoshitaka; Wollnik, H.

    1976-01-01

A computer program for the calculation of ion trajectories is described. This program ''TRIO'' (Third Order Ion Optics) is applicable to any ion-optical system consisting of drift spaces, cylindrical or toroidal electric sector fields, homogeneous or inhomogeneous magnetic sector fields, and magnetic and electrostatic Q-lenses. The influence of the fringing field is taken into consideration. A special device is introduced into the matrix multiplication method to shorten the calculation time; as a result, the required time proves to be about 40 times shorter than with the ordinary method. The trajectory calculation can be executed with accuracy up to third order. Any one of three dispersion bases (momentum; energy; or mass and energy) can be selected. A full listing of the computer program and an example are given. (auth.)

  16. GENGTC-JB: a computer program to calculate temperature distribution for cylindrical geometry capsule

    International Nuclear Information System (INIS)

    Someya, Hiroyuki; Kobayashi, Toshiki; Niimi, Motoji; Hoshiya, Taiji; Harayama, Yasuo

    1987-09-01

For the design of JMTR irradiation capsules containing specimens, a program (named GENGTC) has generally been used to evaluate temperature distributions in the capsules. The program was originally compiled by ORNL (U.S.A.) and consisted of very simple calculation methods. Because of these simple methods, the program is easy to use and has many applications in capsule design. However, when the program was checked against current computer capabilities, it was considered desirable to replace the original computing methods with advanced ones, and the data input was also found to be cumbersome. Therefore, the program was upgraded with the aim of improving both the calculations and the input method. The present report describes the revised calculation methods and the input/output guide of the upgraded program. (author)

  17. [Sample size calculation in clinical post-marketing evaluation of traditional Chinese medicine].

    Science.gov (United States)

    Fu, Yingkun; Xie, Yanming

    2011-10-01

In recent years, as the Chinese government and people pay more attention to post-marketing research on Chinese medicine, some traditional Chinese medicine products have begun, or are about to begin, post-marketing evaluation studies. In post-marketing evaluation design, sample size calculation plays a decisive role. It not only ensures the accuracy and reliability of the post-marketing evaluation, but also assures that the intended trials will have the desired power for correctly detecting a clinically meaningful difference between the medicines under study, if such a difference truly exists. Up to now, there has been no systematic method of sample size calculation tailored to traditional Chinese medicine. In this paper, based on the basic methods of sample size calculation and the characteristics of clinical evaluation of traditional Chinese medicine, sample size calculation methods for the efficacy and safety of Chinese medicine are discussed respectively. We hope the paper will be beneficial to medical researchers and pharmaceutical scientists who are engaged in Chinese medicine research.
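As a concrete instance of the basic sample size methods such papers build on, here is the classical normal-approximation formula for comparing two proportions (a generic sketch, not the paper's TCM-specific method; the inverse-normal helper is stdlib-only):

```python
import math

def z(p):
    """Inverse standard normal CDF via bisection (stdlib-only)."""
    lo, hi = -10.0, 10.0
    for _ in range(100):
        mid = (lo + hi) / 2.0
        if 0.5 * (1.0 + math.erf(mid / math.sqrt(2.0))) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def n_per_group(p1, p2, alpha=0.05, power=0.8):
    """Sample size per arm for a two-sided test of two proportions
    (classical pooled normal-approximation formula)."""
    za, zb = z(1.0 - alpha / 2.0), z(power)
    pbar = (p1 + p2) / 2.0
    num = (za * math.sqrt(2.0 * pbar * (1.0 - pbar)) +
           zb * math.sqrt(p1 * (1.0 - p1) + p2 * (1.0 - p2)))**2
    return math.ceil(num / (p1 - p2)**2)

print(n_per_group(0.70, 0.85))  # 121 per group
```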

  18. Injection Molding Parameters Calculations by Using Visual Basic (VB) Programming

    Science.gov (United States)

    Tony, B. Jain A. R.; Karthikeyen, S.; Alex, B. Jeslin A. R.; Hasan, Z. Jahid Ali

    2018-03-01

Nowadays the manufacturing industry plays a vital role in production sectors. To fabricate a component, many design calculations have to be done, and there is a chance of human error during these calculations. The aim of this project is to create a special module using Visual Basic (VB) programming to calculate injection molding parameters and so avoid human errors. To create an injection mold for a spur gear component, the following parameters have to be calculated: cooling capacity, cooling channel diameter, cooling channel length, runner length and runner diameter, and gate diameter and gate pressure. To calculate these injection molding parameters, a separate module has been created using Visual Basic (VB) programming to reduce human errors. The output of the module is the dimensions of the injection molding components, such as the mold cavity and core design and the ejector plate design.

  19. Simple Calculation Programs for Biology Immunological Methods

    Indian Academy of Sciences (India)

    First page Back Continue Last page Overview Graphics. Simple Calculation Programs for Biology Immunological Methods. Computation of Ab/Ag Concentration from EISA data. Graphical Method; Raghava et al., 1992, J. Immuno. Methods 153: 263. Determination of affinity of Monoclonal Antibody. Using non-competitive ...

  20. SAMPL5: 3D-RISM partition coefficient calculations with partial molar volume corrections and solute conformational sampling

    Science.gov (United States)

    Luchko, Tyler; Blinov, Nikolay; Limon, Garrett C.; Joyce, Kevin P.; Kovalenko, Andriy

    2016-11-01

Implicit solvent methods for classical molecular modeling are frequently used to provide fast, physics-based hydration free energies of macromolecules. Less commonly considered is the transferability of these methods to other solvents. The Statistical Assessment of Modeling of Proteins and Ligands 5 (SAMPL5) distribution coefficient dataset and the accompanying explicit solvent partition coefficient reference calculations provide a direct test of solvent model transferability. Here we use the 3D reference interaction site model (3D-RISM) statistical-mechanical solvation theory, with a well tested water model and a new united atom cyclohexane model, to calculate partition coefficients for the SAMPL5 dataset. The cyclohexane model performed well in training and testing (R=0.98 for amino acid neutral side chain analogues) but only if a parameterized solvation free energy correction was used. In contrast, the same protocol, using single solute conformations, performed poorly on the SAMPL5 dataset, obtaining R=0.73 compared to the reference partition coefficients, likely due to the much larger solute sizes. Including solute conformational sampling through molecular dynamics coupled with 3D-RISM (MD/3D-RISM) improved agreement with the reference calculation to R=0.93. Since our initial calculations only considered partition coefficients and not distribution coefficients, solute sampling provided little benefit comparing against experiment, where ionized and tautomer states are more important. Applying a simple pK_a correction improved agreement with experiment from R=0.54 to R=0.66, despite a small number of outliers. Better agreement is possible by accounting for tautomers and improving the ionization correction.
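The link between the calculated solvation free energies and a partition coefficient is the standard relation log P = (dG_water - dG_solvent) / (2.303 RT). A minimal sketch with hypothetical free-energy values (not SAMPL5 data):

```python
import math

R_KCAL = 1.98720425e-3  # gas constant in kcal/(mol*K)

def log_p(dg_water, dg_cyclohexane, T=298.15):
    """Cyclohexane/water partition coefficient from solvation free
    energies in kcal/mol: logP = (dG_water - dG_chx) / (ln(10)*R*T).
    Positive logP means the solute prefers cyclohexane."""
    return (dg_water - dg_cyclohexane) / (math.log(10.0) * R_KCAL * T)

# hypothetical solvation free energies (kcal/mol)
print(round(log_p(-5.0, -7.0), 2))  # 1.47
```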

  1. SAMPL5: 3D-RISM partition coefficient calculations with partial molar volume corrections and solute conformational sampling.

    Science.gov (United States)

    Luchko, Tyler; Blinov, Nikolay; Limon, Garrett C; Joyce, Kevin P; Kovalenko, Andriy

    2016-11-01

Implicit solvent methods for classical molecular modeling are frequently used to provide fast, physics-based hydration free energies of macromolecules. Less commonly considered is the transferability of these methods to other solvents. The Statistical Assessment of Modeling of Proteins and Ligands 5 (SAMPL5) distribution coefficient dataset and the accompanying explicit solvent partition coefficient reference calculations provide a direct test of solvent model transferability. Here we use the 3D reference interaction site model (3D-RISM) statistical-mechanical solvation theory, with a well tested water model and a new united atom cyclohexane model, to calculate partition coefficients for the SAMPL5 dataset. The cyclohexane model performed well in training and testing (R=0.98 for amino acid neutral side chain analogues) but only if a parameterized solvation free energy correction was used. In contrast, the same protocol, using single solute conformations, performed poorly on the SAMPL5 dataset, obtaining R=0.73 compared to the reference partition coefficients, likely due to the much larger solute sizes. Including solute conformational sampling through molecular dynamics coupled with 3D-RISM (MD/3D-RISM) improved agreement with the reference calculation to R=0.93. Since our initial calculations only considered partition coefficients and not distribution coefficients, solute sampling provided little benefit comparing against experiment, where ionized and tautomer states are more important. Applying a simple pK_a correction improved agreement with experiment from R=0.54 to R=0.66, despite a small number of outliers. Better agreement is possible by accounting for tautomers and improving the ionization correction.

  2. Simple Calculation Programs for Biology Other Methods

    Indian Academy of Sciences (India)

    First page Back Continue Last page Overview Graphics. Simple Calculation Programs for Biology Other Methods. Hemolytic potency of drugs. Raghava et al., (1994) Biotechniques 17: 1148. FPMAP: methods for classification and identification of microorganisms 16SrRNA. graphical display of restriction and fragment map of ...

  3. Development of a program for calculating the cells of heavy water

    International Nuclear Information System (INIS)

    Calabrese, R.; Lerner, A.M.; Notari, C.

    1991-01-01

We describe here a methodology to solve the transport equation in cluster-type fuel cells found in PHWRs. The general approach is inspired by the English lattice code WIMS-D4 and its associated library, although we have introduced innovations in both structure and contents. The steps involved are: the resonance calculation and the subsequent construction of microscopic self-shielded cross sections for each isotope; the calculation of macroscopic cross sections per material and condensation to a broader energy structure; and the solution of the two-dimensional discretized transport equation for the whole cell. The next step is the inclusion of a burnup routine. A program, ALFIN, was written in FORTRAN 77 with a modular structure. A sample problem was tested and the ALFIN results compared to those produced by WIMS-D4. The discrepancies observed are negligible, except for the resonance region, where the methods differ and in some respects WIMS is clearly in error. (author)

  4. Data calculation program for RELAP 5 code

    International Nuclear Information System (INIS)

    Silvestre, Larissa J.B.; Sabundjian, Gaiane

    2015-01-01

As the criteria and requirements for a nuclear power plant are extremely rigid, computer programs for simulation and safety analysis are required for certifying and licensing a plant. Based on this scenario, sophisticated computational tools have been used, such as the Reactor Excursion and Leak Analysis Program (RELAP5), which is the code most used for the thermo-hydraulic analysis of accidents and transients in nuclear reactors. A major difficulty in simulation using the RELAP5 code is the amount of information required for the simulation of thermal-hydraulic accidents or transients. The preparation of the input data leads to a very large number of mathematical operations for calculating the geometry of the components. Therefore, a user-friendly mathematical preprocessor was developed in order to perform these calculations and prepare RELAP5 input data. Visual Basic for Applications (VBA) combined with Microsoft Excel demonstrated to be an efficient tool for performing a number of tasks in the development of the program. Due to the absence of necessary information about some RELAP5 components, this work aims to make improvements to the Mathematic Preprocessor for RELAP5 code (PREREL5). For the new version of the preprocessor, new screens for some components that were not programmed in the original version were designed; moreover, screens for pre-existing components were redesigned to improve the program. In addition, an English version was provided for the new version of PREREL5. The new design of PREREL5 contributes to saving time and minimizing mistakes made by users of the RELAP5 code. The final version of this preprocessor will be applied to Angra 2. (author)

  5. Data calculation program for RELAP 5 code

    Energy Technology Data Exchange (ETDEWEB)

    Silvestre, Larissa J.B.; Sabundjian, Gaiane, E-mail: larissajbs@usp.br, E-mail: gdjian@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2015-07-01

As the criteria and requirements for a nuclear power plant are extremely rigid, computer programs for simulation and safety analysis are required for certifying and licensing a plant. Based on this scenario, sophisticated computational tools have been used, such as the Reactor Excursion and Leak Analysis Program (RELAP5), which is the code most used for the thermo-hydraulic analysis of accidents and transients in nuclear reactors. A major difficulty in simulation using the RELAP5 code is the amount of information required for the simulation of thermal-hydraulic accidents or transients. The preparation of the input data leads to a very large number of mathematical operations for calculating the geometry of the components. Therefore, a user-friendly mathematical preprocessor was developed in order to perform these calculations and prepare RELAP5 input data. Visual Basic for Applications (VBA) combined with Microsoft Excel demonstrated to be an efficient tool for performing a number of tasks in the development of the program. Due to the absence of necessary information about some RELAP5 components, this work aims to make improvements to the Mathematic Preprocessor for RELAP5 code (PREREL5). For the new version of the preprocessor, new screens for some components that were not programmed in the original version were designed; moreover, screens for pre-existing components were redesigned to improve the program. In addition, an English version was provided for the new version of PREREL5. The new design of PREREL5 contributes to saving time and minimizing mistakes made by users of the RELAP5 code. The final version of this preprocessor will be applied to Angra 2. (author)

  6. Method and program for complex calculation of heterogeneous reactor

    International Nuclear Information System (INIS)

    Kalashnikov, A.G.; Glebov, A.P.; Elovskaya, L.F.; Kuznetsova, L.I.

    1988-01-01

An algorithm and the GITA program for complex one-dimensional calculation of a heterogeneous reactor, which permit calculations for the reactor and its cell to be conducted simultaneously using the same algorithm, are described. Multigroup macro cross sections for reactor zones in the thermal energy range are determined according to the technique for calculating a cell with a complicated structure, and then the continuous multigroup calculation of the reactor in the thermal energy range and in the neutron thermalization range is made. The kinetic equation is solved using the Pi- and DSn-approximations

  7. Calculator Programming Engages Visual and Kinesthetic Learners

    Science.gov (United States)

    Tabor, Catherine

    2014-01-01

    Inclusion and differentiation--hallmarks of the current educational system--require a paradigm shift in the way that educators run their classrooms. This article enumerates the need for techno-kinesthetic, visually based activities and offers an example of a calculator-based programming activity that addresses that need. After discussing the use…

  8. MP.EXE Microphone pressure sensitivity calibration calculation program

    DEFF Research Database (Denmark)

    Rasmussen, Knud

    1999-01-01

MP.EXE is a program which calculates the pressure sensitivity of LS1 microphones as defined in IEC 61094-1, based on measurement results obtained as laid down in IEC 61094-2. A very early program was developed and written by K. Rasmussen. The code of the present, heavily extended version is written by E.S. Olsen. The present manual is written by K. Rasmussen and E.S. Olsen.

  9. Program GWPROB: Calculation of inflow to groundwater measuring points during sampling

    International Nuclear Information System (INIS)

    Kaleris, V.

    1990-01-01

The program GWPROB was developed by the DFG task group for modelling of large-area heat and pollutant transport in groundwater at the Institute of Hydrological Engineering, Hydraulics and Groundwater Department. The project was funded by the Deutsche Forschungsgemeinschaft. (BBR)

  10. NLOM - a program for nonlocal optical model calculations

    International Nuclear Information System (INIS)

    Kim, B.T.; Kyum, M.C.; Hong, S.W.; Park, M.H.; Udagawa, T.

    1992-01-01

    A FORTRAN program NLOM for nonlocal optical model calculations is described. It is based on a method recently developed by Kim and Udagawa, which utilizes the Lanczos technique for solving integral equations derived from the nonlocal Schroedinger equation. (orig.)

  11. Calculating the dim light melatonin onset: the impact of threshold and sampling rate.

    Science.gov (United States)

    Molina, Thomas A; Burgess, Helen J

    2011-10-01

The dim light melatonin onset (DLMO) is the most reliable circadian phase marker in humans, but the cost of assaying samples is relatively high. Therefore, the authors examined differences between DLMOs calculated from hourly versus half-hourly sampling, and differences between DLMOs calculated with two recommended thresholds (a fixed threshold of 3 pg/mL and a variable "3k" threshold equal to the mean plus two standard deviations of the first three low daytime points). The authors calculated these DLMOs from salivary dim light melatonin profiles collected from 122 individuals (64 women) at baseline. DLMOs derived from hourly sampling occurred on average only 6-8 min earlier than the DLMOs derived from half-hourly saliva sampling, and they were highly correlated with each other (r ≥ 0.89, p < .001), although in a minority of cases the hourly-sampling DLMO differed by more than 30 min from the DLMO derived from half-hourly sampling. The 3 pg/mL threshold produced significantly less variable DLMOs than the 3k threshold. However, the 3k threshold was significantly lower than the 3 pg/mL threshold (p < .001). The DLMOs calculated with the 3k method were significantly earlier (by 22-24 min) than the DLMOs calculated with the 3 pg/mL threshold, regardless of sampling rate. These results suggest that in large research studies and clinical settings, the more affordable and practical option of hourly sampling is adequate for a reasonable estimate of circadian phase. Although the 3 pg/mL fixed threshold is less variable than the 3k threshold, it produces estimates of the DLMO that are further from the initial rise of melatonin.
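The DLMO itself is simply the interpolated clock time at which the melatonin profile first crosses the chosen threshold from below. A minimal sketch using the fixed 3 pg/mL threshold (the sample profile is hypothetical):

```python
def dlmo(times, melatonin, threshold=3.0):
    """Clock time (decimal hours) at which melatonin first crosses
    `threshold` from below, linearly interpolated between samples.
    Returns None if the profile never reaches the threshold."""
    pairs = list(zip(times, melatonin))
    for (t0, m0), (t1, m1) in zip(pairs, pairs[1:]):
        if m0 < threshold <= m1:
            return t0 + (threshold - m0) / (m1 - m0) * (t1 - t0)
    return None

# half-hourly saliva samples from 20:00 (hypothetical values, pg/mL)
times = [20.0, 20.5, 21.0, 21.5, 22.0]
mel = [0.8, 1.5, 2.4, 4.2, 7.9]
print(round(dlmo(times, mel), 2))  # 21.17
```

The variable "3k" threshold would be computed first (mean plus two standard deviations of the three low daytime points) and passed in as `threshold`.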

  12. Determination of the burn-up of TRIGA fuel elements by calculation with new TRIGLAV program

    International Nuclear Information System (INIS)

    Zagar, T.; Ravnik, M.

    1996-01-01

The results of fuel element burn-up calculations with the new TRIGLAV program are presented. The TRIGLAV program uses a two-dimensional model. The results are compared to results calculated with a program which uses a one-dimensional model. The results of fuel element burn-up measurements with the reactivity method are also presented and compared with the calculated results. (author)

  13. How to Deal with FFT Sampling Influences on ADEV Calculations

    National Research Council Canada - National Science Library

    Chang, Po-Cheng

    2007-01-01

    ...) values while the numerical integration is used for the time and frequency (T&F) conversion. These ADEV errors occur because parts of the FFT sampling have no contributions to the ADEV calculation...

  14. Wilsonville wastewater sampling program. Final report

    Energy Technology Data Exchange (ETDEWEB)

    None

    1983-10-01

As part of its contract to design, build and operate the SRC-1 Demonstration Plant in cooperation with the US Department of Energy (DOE), International Coal Refining Company (ICRC) was required to collect and evaluate data related to wastewater streams and wastewater treatment procedures at the SRC-1 Pilot Plant facility. The pilot plant is located at Wilsonville, Alabama and is operated by Catalytic, Inc. under the direction of Southern Company Services. The plant is funded in part by the Electric Power Research Institute and the DOE. ICRC contracted with Catalytic, Inc. to conduct wastewater sampling. Tasks 1 through 5 included sampling and analysis of various wastewater sources and points at different steps in the biological treatment facility at the plant. The sampling program ran from May 1 to July 31, 1982. Also included in the sampling program was the generation and analysis of leachate from SRC product using standard laboratory leaching procedures. For Task 6, available plant wastewater data covering the period from February 1978 to December 1981 were analyzed to gain information that might be useful as a demonstration plant design basis. This report contains a tabulation of the analytical data, a summary tabulation of the historical operating data that were evaluated, and comments concerning the data. The procedures used during the sampling program are also documented.

  15. ANA - a program for evaluation of gamma spectra from environmental samples

    International Nuclear Information System (INIS)

    Mishev, P.

    1993-01-01

The program is intended for the evaluation of gamma spectra collected on different multichannel analyzers. It provides file format conversion from the most popular spectrum file formats. The program includes: spectra visualization; energy and shape calibration; efficiency calibration; automatic peak search; resolving of multiplets and peak calculations based on the program KATOK; an isotope library; and isotope identification and activity calculations. Three types of efficiency calibration are possible: spline approximation; two-branch logarithmic approximation; and polynomial approximation based on orthonormal polynomials. The recommendations of the International Atomic Energy Agency were taken into account in developing the algorithms. The program allows batch spectra processing, appropriate for routine tasks, as well as user-controlled evaluations. Calculation of lower detection limits for user-defined isotopes is also possible. The program precisely calculates the statistical uncertainties of the final results. The error sources taken into account are: standard source activity errors, efficiency approximation errors and current measurement errors. (author)

  16. Unconstrained Enhanced Sampling for Free Energy Calculations of Biomolecules: A Review

    Science.gov (United States)

    Miao, Yinglong; McCammon, J. Andrew

    2016-01-01

    Free energy calculations are central to understanding the structure, dynamics and function of biomolecules. Yet insufficient sampling of biomolecular configurations is often regarded as one of the main sources of error. Many enhanced sampling techniques have been developed to address this issue. Notably, enhanced sampling methods based on biasing collective variables (CVs), including the widely used umbrella sampling, adaptive biasing force and metadynamics, have been discussed in a recent excellent review (Abrams and Bussi, Entropy, 2014). Here, we aim to review enhanced sampling methods that do not require predefined system-dependent CVs for biomolecular simulations and as such do not suffer from the hidden energy barrier problem encountered in the CV-biasing methods. These methods include, but are not limited to, replica exchange/parallel tempering, self-guided molecular/Langevin dynamics, essential energy space random walk and accelerated molecular dynamics. While it would be impractical to describe every detail of each method, we provide a summary of the methods along with their applications and offer our perspectives. We conclude with challenges and prospects of the unconstrained enhanced sampling methods for accurate biomolecular free energy calculations. PMID:27453631
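
    As a minimal illustration of the swap move at the heart of replica exchange/parallel tempering, one of the methods reviewed: two replicas at inverse temperatures beta_i and beta_j exchange configurations with Metropolis probability min(1, exp[(beta_i - beta_j)(E_i - E_j)]). The function name and the injectable random source are our own conventions, not from the paper:

```python
import math
import random

def exchange_accepted(beta_i, beta_j, e_i, e_j, rng=random.random):
    """Metropolis criterion for swapping replicas i and j in parallel
    tempering: accept with probability
    min(1, exp[(beta_i - beta_j) * (E_i - E_j)])."""
    delta = (beta_i - beta_j) * (e_i - e_j)
    return delta >= 0.0 or rng() < math.exp(delta)

# Colder replica (beta=1.0) holds the higher-energy state -> always swap.
print(exchange_accepted(1.0, 0.5, -5.0, -10.0))
```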

  17. Nomogram for sample size calculation on a straightforward basis for the kappa statistic.

    Science.gov (United States)

    Hong, Hyunsook; Choi, Yunhee; Hahn, Seokyung; Park, Sue Kyung; Park, Byung-Joo

    2014-09-01

    Kappa is a widely used measure of agreement. However, it may not be straightforward to use in some situations, such as sample size calculation, because of the kappa paradox: high agreement but low kappa. Hence, in sample size calculation it seems reasonable to express the level of agreement under a given marginal prevalence as a simple proportion of agreement rather than as a kappa value. Therefore, sample size formulae and nomograms using a simple proportion of agreement rather than a kappa under given marginal prevalences are proposed. A sample size formula was derived using the kappa statistic under the common correlation model and a goodness-of-fit statistic. The nomogram for the sample size formula was developed using SAS 9.3. Sample size formulae using a simple proportion of agreement instead of a kappa statistic, together with nomograms that eliminate the inconvenience of using a mathematical formula, were produced. A nomogram for sample size calculation with a simple proportion of agreement should be useful in the planning stages when the focus of interest is on testing the hypothesis of interobserver agreement involving two raters and nominal outcome measures. Copyright © 2014 Elsevier Inc. All rights reserved.
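
    The identity behind the kappa paradox the abstract invokes can be sketched directly. Assuming two raters sharing a common marginal prevalence p for a binary outcome, chance agreement is pe = p^2 + (1-p)^2 and kappa = (po - pe)/(1 - pe); this is the standard relation, not the paper's sample-size formula:

```python
def kappa_from_agreement(po, prevalence):
    """Cohen's kappa for two raters and a binary outcome, assuming both
    raters share the same marginal prevalence."""
    pe = prevalence ** 2 + (1.0 - prevalence) ** 2  # chance agreement
    return (po - pe) / (1.0 - pe)

# The "kappa paradox": identical observed agreement (90%), very different kappas.
print(kappa_from_agreement(0.90, 0.50))  # balanced prevalence -> kappa = 0.8
print(kappa_from_agreement(0.90, 0.95))  # extreme prevalence -> kappa near zero
```

    This is why the authors argue for planning on the proportion of agreement at a stated prevalence rather than on a kappa value alone.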

  18. Programming PHREEQC calculations with C++ and Python: a comparative study

    Science.gov (United States)

    Charlton, Scott R.; Parkhurst, David L.; Muller, Mike

    2011-01-01

    The new IPhreeqc module provides an application programming interface (API) to facilitate coupling of other codes with the U.S. Geological Survey geochemical model PHREEQC. Traditionally, loose coupling of PHREEQC with other applications required methods to create PHREEQC input files, start external PHREEQC processes, and process PHREEQC output files. IPhreeqc eliminates most of this effort by providing direct access to PHREEQC capabilities through a component object model (COM), a library, or a dynamically linked library (DLL). Input and calculations can be specified through internally programmed strings, and all data exchange between an application and the module can occur in computer memory. This study compares simulations programmed in C++ and Python that are tightly coupled with IPhreeqc modules to the traditional simulations that are loosely coupled to PHREEQC. The study compares performance, quantifies effort, and evaluates lines of code and the complexity of the design. The comparisons show that IPhreeqc offers a more powerful and simpler approach for incorporating PHREEQC calculations into transport models and other applications that need to perform PHREEQC calculations. The IPhreeqc module facilitates the design of coupled applications and significantly reduces run times. Even a moderate knowledge of one of the supported programming languages allows more efficient use of PHREEQC than the traditional loosely coupled approach.

  19. PTOLEMY, a program for heavy-ion direct-reaction calculations

    International Nuclear Information System (INIS)

    Gloeckner, D.H.; Macfarlane, M.H.; Pieper, S.C.

    1976-03-01

    Ptolemy is an IBM/360 program for the computation of nuclear elastic and direct-reaction cross sections. It carries out both optical-model fits to elastic-scattering data at one or more energies, and DWBA calculations for nucleon-transfer reactions. Ptolemy has been specifically designed for heavy-ion calculations. It is fast and does not require large amounts of core. The input is exceptionally flexible and easy to use. This report outlines the types of calculation that Ptolemy can carry out, summarizes the formulas used, and gives a detailed description of its input

  20. GRUCAL, a computer program for calculating macroscopic group constants

    International Nuclear Information System (INIS)

    Woll, D.

    1975-06-01

    Nuclear reactor calculations require material- and composition-dependent, energy-averaged nuclear data to describe the interaction of neutrons with individual isotopes in the material compositions of reactor zones. The code GRUCAL calculates these macroscopic group constants for given compositions from the material-dependent data of the group constant library GRUBA. The instructions for calculating group constants are not fixed in the program but are read at execution time from a separate instruction file. This makes it possible to adapt GRUCAL to various problems or different group constant concepts. (orig.) [de
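
    The core operation such a code performs, forming a macroscopic cross section as a number-density-weighted sum of microscopic data, Sigma = sum_i N_i * sigma_i, can be sketched for a single group as follows; the composition and the one-group cross sections are illustrative round numbers, and nothing here reproduces the GRUBA library format:

```python
AVOGADRO = 6.02214076e23  # atoms/mol

def macroscopic_xs(density_g_cm3, composition, micro_xs_barn):
    """Macroscopic cross section Sigma = sum_i N_i * sigma_i, in 1/cm.

    composition: {isotope: (weight fraction, atomic mass in g/mol)}
    micro_xs_barn: {isotope: one-group microscopic cross section in barns}
    """
    sigma = 0.0
    for iso, (w, a) in composition.items():
        n_i = density_g_cm3 * AVOGADRO * w / a      # atoms per cm^3
        sigma += n_i * micro_xs_barn[iso] * 1e-24   # barn -> cm^2
    return sigma

# Illustrative: light water with rough one-group scattering cross sections.
water = {"H1": (2 * 1.008 / 18.015, 1.008), "O16": (15.999 / 18.015, 15.999)}
xs = {"H1": 20.0, "O16": 3.8}  # barns, illustrative values
sigma_t = macroscopic_xs(1.0, water, xs)
print(sigma_t)
```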

  1. GENMOD - A program for internal dosimetry calculations

    International Nuclear Information System (INIS)

    Dunford, D.W.; Johnson, J.R.

    1987-12-01

    The computer code GENMOD was created to calculate the retention, excretion, and integrated retention for selected radionuclides under a variety of exposure conditions. Since the creation of GENMOD, new models have been developed and interfaced to it. This report describes the models now included in GENMOD and the dosimetry factors database, and gives a brief description of the GENMOD program
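
    Retention and integrated retention of the kind GENMOD computes are commonly built from multi-exponential compartment models. A generic sketch; the two-compartment coefficients below are invented for illustration and are not GENMOD's biokinetic models:

```python
import math

def retention(t_days, terms):
    """Multi-exponential retention R(t) = sum_i a_i * exp(-lambda_i * t)."""
    return sum(a * math.exp(-lam * t_days) for a, lam in terms)

def integrated_retention(t_days, terms):
    """Integral of R from 0 to t:
    sum_i (a_i / lambda_i) * (1 - exp(-lambda_i * t))."""
    return sum(a / lam * (1.0 - math.exp(-lam * t_days)) for a, lam in terms)

# Hypothetical two-compartment retention: half-lives of 10 and 100 days.
terms = [(0.7, math.log(2) / 10.0), (0.3, math.log(2) / 100.0)]
print(retention(0.0, terms))             # 1.0 at t = 0
print(integrated_retention(50.0, terms))
```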

  2. PCRELAP5: data calculation program for RELAP 5 code

    International Nuclear Information System (INIS)

    Silvestre, Larissa Jacome Barros

    2016-01-01

    Nuclear accidents around the world have led to the establishment of rigorous criteria and requirements for nuclear power plant operation by the international regulatory bodies. Certifying and licensing a nuclear power plant requires the simulation, with specific computer programs, of the various accidents and transients likely to occur at the plant. In this context, sophisticated computational tools are used, such as the Reactor Excursion and Leak Analysis Program (RELAP5), the most widely used code for the thermal-hydraulic analysis of accidents and transients in nuclear reactors in Brazil and worldwide. A major difficulty in simulation with the RELAP5 code is the amount of information required. The preparation of the input data requires a great number of mathematical operations to calculate the geometry of the components. Thus, to perform those calculations and prepare the RELAP5 input data, a friendly mathematical preprocessor was designed to meet the needs of RELAP5 users: the RELAP5 Calculation Program (Programa de Calculo do RELAP5 - PCRELAP5). Visual Basic for Applications (VBA) for Microsoft Excel proved to be an effective tool for a number of tasks in the development of the program. The components of the code were codified, and all entry cards, including the optional cards of each one, were programmed. In addition, an English version of PCRELAP5 was provided. Furthermore, a friendly design was developed in order to minimize the preparation time of the input data and the errors committed by users. In this work, the final version of this preprocessor was successfully applied to the Safety Injection System (SIS) of Angra 2. (author)

  3. Caution regarding the choice of standard deviations to guide sample size calculations in clinical trials.

    Science.gov (United States)

    Chen, Henian; Zhang, Nanhua; Lu, Xiaosun; Chen, Sophie

    2013-08-01

    The method used to determine the choice of standard deviation (SD) is inadequately reported in clinical trials. Underestimation of the population SD may result in underpowered clinical trials. This study demonstrates how using the wrong method to determine the population SD can lead to inaccurate sample sizes and underpowered studies, and offers recommendations to maximize the likelihood of achieving adequate statistical power. We review the practice of reporting sample size and its effect on the power of trials published in major journals. Simulated clinical trials were used to compare the effects of different methods of determining SD on power and sample size calculations. Prior to 1996, sample size calculations were reported in just 1%-42% of clinical trials. This proportion increased from 38% to 54% after the initial Consolidated Standards of Reporting Trials (CONSORT) statement was published in 1996, and from 64% to 95% after the revised CONSORT was published in 2001. Nevertheless, underpowered clinical trials are still common. Our simulated data showed that all minimal and 25th-percentile SDs fell below 44 (the population SD), regardless of sample size (from 5 to 50). For sample sizes of 5 and 50, the minimum sample SDs underestimated the population SD by 90.7% and 29.3%, respectively. If only one sample was available, there was less than 50% chance that the actual power equaled or exceeded the planned power of 80% for detecting a median effect size (Cohen's d = 0.5) when using the sample SD to calculate the sample size. The proportions of studies with actual power of at least 80% were about 95%, 90%, 85%, and 80% when we used the larger SD, 80% upper confidence limit (UCL) of SD, 70% UCL of SD, and 60% UCL of SD to calculate the sample size, respectively. When more than one sample was available, the weighted average SD resulted in about 50% of trials being underpowered; the proportion of trials with power of 80% increased from 90% to 100% when the 75th percentile and the
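
    The underestimation the authors describe is easy to reproduce: for small samples from a normal population, the sample SD falls below the population SD more often than not (for n = 5 the probability is about 0.59). A minimal simulation, not the paper's code:

```python
import random
import statistics

random.seed(1)
POP_SD = 44.0        # population SD used in the abstract
reps, n = 2000, 5    # many small trials of size n

below = 0
for _ in range(reps):
    sample = [random.gauss(0.0, POP_SD) for _ in range(n)]
    if statistics.stdev(sample) < POP_SD:   # sample SD (n-1 denominator)
        below += 1

frac = below / reps
print(frac)  # noticeably above 0.5: the sample SD usually underestimates
```

    Planning a trial on a single observed SD therefore tends to bias the sample size downward, which is the paper's motivation for using an upper confidence limit of the SD instead.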

  4. Radiation damage calculations for the APT materials test program

    International Nuclear Information System (INIS)

    Corzine, R.K.; Wechsler, M.S.; Dudziak, D.J.; Ferguson, P.D.; James, M.R.

    1999-01-01

    A materials irradiation was performed at the Los Alamos Neutron Science Center (LANSCE) in the fall of 1996 and spring of 1997 in support of the Accelerator Production of Tritium (APT) program. Testing of the irradiated materials is underway. In the proposed APT design, materials in the target and blanket are to be exposed to protons and neutrons over a wide range of energies. The irradiation and testing program was undertaken to enlarge the very limited direct knowledge presently available of the effects of medium-energy protons (∼1 GeV) on the properties of engineering materials. APT candidate materials were placed in or near the LANSCE accelerator 800-MeV, 1-mA proton beam and received roughly the same proton current density in the center of the beam as would be the case for the APT facility. As a result, the proton fluences achieved in the irradiation were expected to approach the APT prototypic full-power-year values. To predict accurately the performance of materials in APT, radiation damage parameters for the materials experiment must be determined. By modeling the experiment, calculations of atomic displacement, helium and hydrogen production cross sections and of proton and neutron fluences were done for representative samples in the 17A, 18A, and 18C areas. The LAHET code system (LCS) was used to model the irradiation program; LAHET 2.82 within LCS transports protons >1 MeV and neutrons >20 MeV. A modified version of MCNP for use in LCS, HMCNP 4A, was employed to tally neutrons with energies <20 MeV

  5. Calculation code of heterogeneity effects for analysis of small sample reactivity worth

    International Nuclear Information System (INIS)

    Okajima, Shigeaki; Mukaiyama, Takehiko; Maeda, Akio.

    1988-03-01

    The discrepancy between experimental and calculated central reactivity worths has been one of the most significant issues in the analysis of fast reactor critical experiments. Two effects have been pointed out as possible causes of the discrepancy that should be taken into account in the calculation: one is the local heterogeneity effect, which is associated with the measurement geometry; the other is the heterogeneity effect on the distribution of the intracell adjoint flux. In order to evaluate these effects in the analysis of FCA actinide sample reactivity worths, a calculation code based on the collision probability method was developed. The code can handle the sample size effect, which is one of the local heterogeneity effects, as well as the intracell adjoint heterogeneity effect. (author)

  6. Ptolemy: a program for heavy-ion direct-reaction calculations

    International Nuclear Information System (INIS)

    Macfarlane, M.H.; Pieper, S.C.

    1978-04-01

    Ptolemy is an IBM/360 program for the computation of nuclear elastic and direct-reaction cross sections. It carries out optical-model fits to elastic-scattering data at one or more energies and for one or more combinations of projectile and target, collective model DWBA calculations of excitation processes, and finite-range DWBA calculations of nucleon-transfer reactions. It is fast and does not require large amounts of core. The input is exceptionally flexible and easy to use. The types of calculations that Ptolemy can carry out are outlined, the formulas used are summarized, and a detailed description of its input is given

  7. Sample size calculation for comparing two negative binomial rates.

    Science.gov (United States)

    Zhu, Haiyuan; Lakkis, Hassan

    2014-02-10

    The negative binomial model has been increasingly used to model count data in recent clinical trials. It is frequently chosen over the Poisson model in cases of overdispersed count data, which are commonly seen in clinical trials. One of the challenges of applying the negative binomial model in clinical trial design is sample size estimation. In practice, simulation methods have been frequently used for sample size estimation. In this paper, an explicit formula is developed to calculate sample size based on the negative binomial model. Depending on different approaches to estimating the variance under the null hypothesis, three variations of the sample size formula are proposed and discussed. Important characteristics of the formula include its accuracy and its ability to explicitly incorporate the dispersion parameter and exposure time. The performance of each variation of the formula is assessed using simulations. Copyright © 2013 John Wiley & Sons, Ltd.
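
    As a hedged sketch of the kind of explicit formula involved, consider a generic normal approximation on the log rate ratio (NOT the paper's three variations): with rate mu, exposure time t and NB shape k (dispersion 1/k), the per-subject variance of the log rate estimate is roughly 1/(t*mu) + 1/k, giving:

```python
import math

Z_A = 1.959964  # z_{1-alpha/2} for two-sided alpha = 0.05
Z_B = 0.841621  # z_{1-beta} for power = 0.80

def nb_sample_size(mu1, mu2, k, t):
    """Per-group sample size for detecting a rate ratio mu1/mu2 between
    two negative binomial arms, via a normal approximation on the log
    rate ratio. Generic textbook-style sketch, not the paper's formulae."""
    var = (1.0 / (t * mu1) + 1.0 / k) + (1.0 / (t * mu2) + 1.0 / k)
    return math.ceil((Z_A + Z_B) ** 2 * var / math.log(mu1 / mu2) ** 2)

print(nb_sample_size(1.0, 0.7, k=2.0, t=1.0))
```

    The formula behaves as expected: a smaller effect (rate ratio nearer 1) or a shorter exposure time inflates the required sample size, which is the dependence the paper's formula captures exactly.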

  8. Calculation of parameters of the original state of material radiation damage

    International Nuclear Information System (INIS)

    Krasnoshtanov, V.F.; Kevorkyan, Yu.R.; Eremin, Yu.P.; Belousov, G.G.

    1974-01-01

    The program ''Sample'' for evaluating the parameters of the initial state of radiation damage in samples irradiated by neutrons of different energies is described. The program calculates the spectrum and density of initially knocked-on atoms in cylinder- and parallelepiped-shaped samples, as well as in plates of various thicknesses. The model incorporated into the program is based on the Monte Carlo method: the algorithm of the neutron's walk through the medium forms the basis of the program, with sequential sampling of random values from predetermined distribution laws. In considering the neutron-atom interaction, account is taken of elastic scattering anisotropy and of the inelastic scattering process. For a particular sample geometry, the program calculates the density and spectrum of initially knocked-on atoms and the density of displacements due to monoenergetic neutrons isotropically incident on the sample surface, together with the statistical error of the computation. The program is used to study the radiation damage states in iron samples irradiated by neutrons of different energies. The block diagram of the ''Sample'' program and its text, written in FORTRAN, are presented. Also given is the dependence of the displacement density, normalized with respect to unit flux, on neutron energy for a parallelepiped-shaped sample; the neutron flux is determined by the number of collisions. The contribution of various energy groups of initially knocked-on atoms to the radiation damage of a sample, depending on the neutron energy, is shown

  9. SHIELD 1.0: development of a shielding calculator program in diagnostic radiology

    International Nuclear Information System (INIS)

    Santos, Romulo R.; Real, Jessica V.; Luz, Renata M. da; Friedrich, Barbara Q.; Silva, Ana Maria Marques da

    2013-01-01

    In the shielding calculation of radiological facilities, several parameters are required, such as occupancy, use factor, number of patients, source-barrier distance, area type (controlled or uncontrolled), radiation type (primary or secondary) and the material used in the barrier. Shielding design optimization requires a review of several options for the physical facility design and, mainly, the achievement of the best cost-benefit relationship for the shielding material. To facilitate this kind of design, a program to calculate shielding in diagnostic radiology was implemented, based on data and limits established by National Council on Radiation Protection and Measurements (NCRP) Report 147 and SVS-MS 453/98. The program was developed in the C# language and presents a graphical interface for user data input and reporting capabilities. The module initially implemented, called SHIELD 1.0, calculates barriers for conventional X-ray rooms. The program was validated by comparison with the results of the shielding calculation examples presented in NCRP Report 147.
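
    Barrier thickness in NCRP 147-style calculations is commonly obtained by inverting the Archer transmission model B(x) = [(1 + b/a) e^{a g x} - b/a]^{-1/g}. A sketch of that inversion follows; the fitting parameters are round illustrative numbers, not the NCRP 147 coefficients, and this is not SHIELD 1.0 code:

```python
import math

def transmission(x_mm, a, b, g):
    """Archer model for broad-beam transmission through x mm of shielding:
    B(x) = [(1 + b/a) * exp(a*g*x) - b/a] ** (-1/g)."""
    return ((1 + b / a) * math.exp(a * g * x_mm) - b / a) ** (-1.0 / g)

def thickness(B, a, b, g):
    """Invert the Archer model for the thickness giving transmission B:
    x = ln[(B**-g + b/a) / (1 + b/a)] / (a*g)."""
    return math.log((B ** -g + b / a) / (1 + b / a)) / (a * g)

# Illustrative per-mm fitting parameters (NOT NCRP 147 values for any beam).
a, b, g = 2.5, 15.0, 0.9
x = thickness(1e-3, a, b, g)   # barrier for a required transmission of 0.001
print(x, transmission(x, a, b, g))
```

    In a real design, B itself comes first from the shielding goal P, the distance d, and the workload-weighted kerma per patient, per the NCRP 147 methodology the abstract cites.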

  10. Integration of auto analysis program of gamma spectrum and software and determination of element content in sample by k-zero method

    International Nuclear Information System (INIS)

    Trinh Quang Vinh; Truong Thi Hong Loan; Mai Van Nhon; Huynh Truc Phuong

    2014-01-01

    Integrating a gamma spectrum auto-analysis program with elemental analysis software based on the k-zero method has been the objective of many researchers. This work is the first step in building such an auto-analysis program, which includes modules for reading spectra, displaying spectra, calibrating peak energies, smoothing spectra, calculating peak areas and determining the content of elements in a sample. The results from measurements of standard samples with a low-level spectrometer using an HPGe detector are then compared to those of other gamma spectrum auto-analysis programs. (author)
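
    The peak-area module in such a program typically computes a net area by subtracting a background estimated from channels flanking the peak, with Poisson counting uncertainty. A simplified Covell-style sketch; the region boundaries and the synthetic spectrum are invented:

```python
import math

def net_peak_area(counts, peak, bkg_left, bkg_right):
    """Net area of a peak region with the background estimated from two
    flanking regions (a simplified Covell-style estimate).

    peak, bkg_left, bkg_right: (start, stop) channel slices, stop exclusive.
    Returns (net area, 1-sigma counting uncertainty)."""
    gross = sum(counts[peak[0]:peak[1]])
    n_peak = peak[1] - peak[0]
    b_counts = (sum(counts[bkg_left[0]:bkg_left[1]])
                + sum(counts[bkg_right[0]:bkg_right[1]]))
    n_bkg = (bkg_left[1] - bkg_left[0]) + (bkg_right[1] - bkg_right[0])
    scale = n_peak / n_bkg
    net = gross - scale * b_counts
    sigma = math.sqrt(gross + scale ** 2 * b_counts)  # Poisson statistics
    return net, sigma

# Synthetic spectrum: flat background of 100 counts/channel, peak on top.
counts = [100] * 5 + [300] * 5 + [100] * 5
net, sigma = net_peak_area(counts, (5, 10), (0, 5), (10, 15))
print(net, sigma)  # 1000.0, sqrt(1750)
```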

  11. Controlled sample program publication No. 1: characterization of rock samples

    International Nuclear Information System (INIS)

    Ames, L.L.

    1978-10-01

    A description is presented of the methodology used and the geologic parameters measured on several rocks which are being used in round-robin laboratory and nuclide adsorption methodology experiments. Presently, investigators from various laboratories are determining nuclide distribution coefficients utilizing numerous experimental techniques. Unfortunately, it appears that often the resultant data are dependent not only on the type of groundwater and rock utilized, but also on the experimenter or method used. The Controlled Sample Program is a WISAP (Waste Isolation Safety Assessment Program) attempt to resolve the apparent method dependencies and to identify individual experimenters' bias. The rock samples characterized in an interlaboratory Kd methodology comparison program include Westerly granite, Argillaceous shale, Oolitic limestone, Sentinel Gap basalt, Conasauga shale, Climax Stock granite, anhydrite, Magenta dolomite and Culebra dolomite. Techniques used in the characterization include whole rock chemical analysis, X-ray diffraction, optical examination, electron microprobe elemental mapping, and chemical analysis of specific mineral phases. Surface areas were determined by the B.E.T. and ethylene glycol sorption methods. Cation exchange capacities were determined with 85Sr, but were of questionable value for the high-calcium rocks. A quantitative mineralogy was also estimated for each rock. Characteristics which have the potential of strongly affecting radionuclide Kd values, such as the presence of sulfides, water-soluble, pH-buffering carbonates, glass, and ferrous iron, were listed for each rock sample

  12. The quality of the reported sample size calculations in randomized controlled trials indexed in PubMed.

    Science.gov (United States)

    Lee, Paul H; Tse, Andy C Y

    2017-05-01

    There are limited data on the quality of reporting of information essential for replication of sample size calculations, as well as on the accuracy of the reported calculations. We examined the current quality of reporting of sample size calculations in randomized controlled trials (RCTs) published in PubMed, as well as the variation in reporting across study design, study characteristics, and journal impact factor. We also reviewed the targeted sample sizes reported in trial registries. We reviewed and analyzed all RCTs published in December 2014 in journals indexed in PubMed. The 2014 impact factors of the journals were used as proxies for their quality. Of the 451 analyzed papers, 58.1% reported an a priori sample size calculation. Nearly all papers provided the level of significance (97.7%) and desired power (96.6%), and most reported the minimum clinically important effect size (73.3%). The median percentage difference between the reported and recalculated sample sizes was 0.0% (inter-quartile range -4.6% to 3.0%). The accuracy of the reported sample size was better for studies published in journals that endorsed the CONSORT statement and in journals with an impact factor. A total of 98 papers provided a targeted sample size in trial registries, and about two-thirds of these papers (n=62) reported a sample size calculation, but only 25 (40.3%) had no discrepancy with the number reported in the trial registries. The reporting of sample size calculations in RCTs published in PubMed-indexed journals and trial registries was poor. The CONSORT statement should be more widely endorsed. Copyright © 2016 European Federation of Internal Medicine. Published by Elsevier B.V. All rights reserved.

  13. The WIPP Water Quality Sampling Program

    International Nuclear Information System (INIS)

    Uhland, D.; Morse, J.G.; Colton, D.

    1986-01-01

    The Waste Isolation Pilot Plant (WIPP), a Department of Energy facility, will be used for the underground disposal of wastes. The Water Quality Sampling Program (WQSP) is designed to obtain representative and reproducible water samples to provide accurate water composition data for characterization and monitoring programs in the vicinity of the WIPP. The WQSP provides input data for four major programs of the WIPP project: Geochemical Site Characterization, Radiological Baseline, Environmental Baseline, and Performance Assessment. The water-bearing units of interest are the Culebra and Magenta Dolomite Members of the Rustler Formation, units in the Dewey Lake Redbeds, and the Bell Canyon Formation. At least two chemically distinct types of water occur in the Culebra, one being a sodium/potassium chloride water and the other a calcium/magnesium sulfate water. Water from the Culebra wells to the south of the WIPP site is distinctly fresher and tends to be of the calcium/magnesium sulfate type. Water in the Culebra to the north of and around the WIPP site tends to be of the sodium/potassium chloride type and is much higher in total dissolved solids. The program, which is currently 1 year old, will continue throughout the life of the facility as part of the Environmental Monitoring Program

  14. Calculations in cytogenetic dosimetry by means of the dosgen program

    International Nuclear Information System (INIS)

    Garcia Lima, O.; Zerquera, J.T.

    1996-01-01

    The DOSGEN program brings together the calculation routines most often used in cytogenetic dosimetry. It can be run on an IBM-compatible PC by cytogenetics experts having a basic knowledge of computing. The program has been successfully applied to experimental data, and its advantages have been acknowledged by Latin American and Asian laboratories working in this field. The program is written in the Pascal language and requires 42 Kbytes
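
    A typical routine in cytogenetic dosimetry, and presumably among those collected in DOSGEN, is inverting a linear-quadratic dose-response Y = c + alpha*D + beta*D^2 for the observed dicentric yield Y to estimate the absorbed dose. A sketch with invented coefficients (DOSGEN itself is written in Pascal; Python is used here only for illustration):

```python
import math

def dose_from_yield(y, c, alpha, beta):
    """Invert the linear-quadratic dose response Y = c + alpha*D + beta*D^2
    for the absorbed dose D (Gy), taking the positive root of the quadratic."""
    return (-alpha + math.sqrt(alpha ** 2 + 4.0 * beta * (y - c))) / (2.0 * beta)

# Illustrative calibration coefficients (dicentrics/cell), not DOSGEN's:
c, alpha, beta = 0.001, 0.03, 0.06
d = 2.0
y = c + alpha * d + beta * d ** 2   # forward model: expected yield at 2 Gy
print(dose_from_yield(y, c, alpha, beta))
```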

  15. Use of the 'DRAGON' program for the calculation of reactivity devices

    International Nuclear Information System (INIS)

    Mollerach, Ricardo; Fink, Jose

    2003-01-01

    DRAGON is a computer program developed at the Ecole Polytechnique of the University of Montreal and adopted by AECL for the transport calculations associated with reactivity devices. This report presents aspects of the implementation of the DRAGON program at NASA. Some cases of interest were evaluated, with comparisons against the results of well-known programs such as WIMS-D5 and against experiments. a) Embalse (CANDU 6) cell without burnup and leakage: calculations of macroscopic cross sections with WIMS and DRAGON show very good agreement, with only small differences in the thermal constants. b) Embalse fresh cell with different leakage options. c) Embalse cell with leakage and burnup: a comparison of k-infinity and k-effective from WIMS and DRAGON as a function of burnup shows that the differences ((D-W)/D) for fresh fuel are about -0.17%, roughly constant up to about 2500 MWd/tU, and then decrease to -0.06% at 8500 MWd/tU. Experiments made in 1977 in the ZED-2 critical facility, reported in [3], were used as a benchmark for the cell and supercell DRAGON calculations; calculated fluxes were compared with experimental values and the agreement is good. d) ZED-2 cell calculation: the measured buckling was used as the geometric buckling, so this case can be considered an experimental verification. The reactivity calculated with DRAGON is about 2 mk, which can be considered satisfactory; the WIMS k-effective value is about one mk higher. e) Supercell calculations for the ZED-2 vertical and horizontal tube and rod adjusters using 2D and 3D models, with comparisons between measured and calculated fluxes in the vicinity of the adjuster rods; incremental cross sections for these adjusters were calculated using different options. f) ZED-2 reactor calculations with PUMA show good agreement with the critical heights measured in the experiments. The report also describes particular features of the code and recommendations regarding its use that may be useful to new users. (author)

  16. Blow.MOD2: a program for blowdown transient calculations

    International Nuclear Information System (INIS)

    Doval, A.

    1990-01-01

    The BLOW.MOD2 program has been developed to calculate the blowdown phase in a pressurized vessel after a break/valve is opened. It is a one-volume model in which the break height and flow area are specified. The Moody critical flow model was adopted, under saturation conditions, for the flow calculation through the break. Heat transfer from structures and internals has been taken into account. Long-term depressurization results compare satisfactorily with those of a more complex model. (Author)

  17. Adaptive sampling program support for expedited site characterization

    International Nuclear Information System (INIS)

    Johnson, R.

    1993-01-01

    Expedited site characterizations offer substantial savings in time and money when assessing hazardous waste sites. Key to some of these savings is the ability to adapt a sampling program to the ''real-time'' data generated by an expedited site characterization. This paper presents a two-prong approach to supporting adaptive sampling programs: a specialized object-oriented database/geographical information system for data fusion, management and display; and combined Bayesian/geostatistical methods for contamination extent estimation and sample location selection
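
    The Bayesian prong of this two-prong approach can be caricatured as maintaining a contamination probability per candidate location, updating it with each ''real-time'' result, and sampling next where the outcome is most uncertain. All probabilities and test characteristics below are invented, and the real system couples this with geostatistical extent estimation:

```python
def posterior(prior, detected, sens=0.95, spec=0.90):
    """Bayes update of the probability that a location is contaminated,
    given one sampling result and assumed test sensitivity/specificity."""
    if detected:
        num = sens * prior
        den = num + (1.0 - spec) * (1.0 - prior)
    else:
        num = (1.0 - sens) * prior
        den = num + spec * (1.0 - prior)
    return num / den

def next_location(probs):
    """Pick the location whose outcome is most uncertain (closest to 0.5)."""
    return min(range(len(probs)), key=lambda i: abs(probs[i] - 0.5))

probs = [0.1, 0.45, 0.9]          # current beliefs at three candidate spots
i = next_location(probs)
print(i)                          # sample the most uncertain spot
print(posterior(probs[i], True))  # belief after a positive detection there
```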

  18. TRING: a computer program for calculating radionuclide transport in groundwater

    International Nuclear Information System (INIS)

    Maul, P.R.

    1984-12-01

    The computer program TRING is described which enables the transport of radionuclides in groundwater to be calculated for use in long term radiological assessments using methods described previously. Examples of the areas of application of the program are activity transport in groundwater associated with accidental spillage or leakage of activity, the shutdown of reactors subject to delayed decommissioning, shallow land burial of intermediate level waste and geologic disposal of high level waste. Some examples of the use of the program are given, together with full details to enable users to run the program. (author)
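
    A minimal stand-in for the kind of calculation TRING automates is the 1-D advection-dispersion equation with first-order radioactive decay; for an instantaneous release it has the closed-form solution sketched below. The parameter values are illustrative, not a TRING case:

```python
import math

def pulse_concentration(x, t, m, v, d, lam):
    """1-D advection-dispersion of an instantaneous release of mass m
    (per unit cross-sectional area) with first-order decay:

    C(x,t) = m / sqrt(4*pi*D*t) * exp(-(x - v*t)**2 / (4*D*t)) * exp(-lam*t)
    """
    return (m / math.sqrt(4.0 * math.pi * d * t)
            * math.exp(-(x - v * t) ** 2 / (4.0 * d * t))
            * math.exp(-lam * t))

# Illustrative groundwater values: v in m/yr, D in m^2/yr, lambda in 1/yr.
v, d, lam = 1.0, 0.5, math.log(2) / 30.0
print(pulse_concentration(10.0, 10.0, 1.0, v, d, lam))  # plume centre at t=10
```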

  19. Monteray Mark-I: Computer program (PC-version) for shielding calculation with Monte Carlo method

    International Nuclear Information System (INIS)

    Pudjijanto, M.S.; Akhmad, Y.R.

    1998-01-01

    A computer program for gamma-ray shielding calculations using the Monte Carlo method has been developed. The program is written in the WATFOR77 language. MONTERAY MARK-I was originally developed by James Wood; the program was modified by the authors so that the modified version is easily executed. Applying the Monte Carlo method, the program follows gamma-photon transport in an infinite planar shield of various thicknesses. A photon is followed until it escapes from the shield or its energy falls below the cut-off energy. The pair production process is treated as a pure absorption process, i.e. the annihilation photons generated in the process are neglected in the calculation. The output data calculated by the program are the total albedo, the build-up factor, and photon spectra. Calculated build-up factors for slab lead and water media with a 6 MeV parallel-beam gamma source are in agreement with published data. Hence the program is adequate as a shielding design tool for studying gamma radiation transport in various media
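
    The transport loop the abstract describes, following each photon until it escapes or is absorbed, can be caricatured in a few lines. This toy version uses a one-group slab in mean-free-path units with a fixed absorption probability and isotropic scattering, and tallies leakage fractions rather than the energy-dependent albedo and build-up factors of the real code:

```python
import math
import random

def slab_monte_carlo(thickness_mfp, p_absorb=0.3, n_photons=20000, seed=7):
    """Toy Monte Carlo for a normally incident pencil beam on a slab.

    Free paths are sampled from exp(-s); at each collision the photon is
    absorbed with probability p_absorb, otherwise re-emitted isotropically
    (energy changes ignored). Returns (transmitted, reflected, absorbed)
    fractions."""
    rng = random.Random(seed)
    transmitted = reflected = absorbed = 0
    for _ in range(n_photons):
        x, mu = 0.0, 1.0                              # depth, direction cosine
        while True:
            x += mu * -math.log(1.0 - rng.random())   # sample free path
            if x >= thickness_mfp:
                transmitted += 1; break               # escapes far side
            if x < 0.0:
                reflected += 1; break                 # escapes entry side
            if rng.random() < p_absorb:
                absorbed += 1; break                  # terminates history
            mu = 2.0 * rng.random() - 1.0             # isotropic scatter
    n = float(n_photons)
    return transmitted / n, reflected / n, absorbed / n

print(slab_monte_carlo(2.0))
```

    Because scattered photons also leak out, the transmitted fraction exceeds the uncollided attenuation exp(-2); the ratio of the two is the one-group analogue of a build-up factor.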

  20. NASA Lunar and Meteorite Sample Disk Program

    Science.gov (United States)

    Foxworth, Suzanne

    2017-01-01

    The Lunar and Meteorite Sample Disk Program is designed for K-12 classroom educators who work in K-12 schools, museums, libraries, or planetariums. Educators have to be certified to borrow the Lunar and Meteorite Sample Disks by attending a NASA Certification Workshop provided by a NASA Authorized Sample Disk Certifier.

  1. Study on the output from programs in calculating lattice with transverse coupling

    International Nuclear Information System (INIS)

    Xu Jianming

    1994-01-01

    SYNCH and MAD outputs for lattice calculations with coordinate rotation have been studied. The results show that the four dispersion functions given by the SYNCH output in this case are wrong, and that there are large discrepancies between the Twiss parameters given by the two programs. One has to be careful when using these programs to calculate or match lattices with coordinate rotations (coupling between the two transverse motions) in order to avoid wrong results

  2. MP.EXE, a Calculation Program for Pressure Reciprocity Calibration of Microphones

    DEFF Research Database (Denmark)

    Rasmussen, Knud

    1998-01-01

    A computer program is described which calculates the pressure sensitivity of microphones based on measurements of the electrical transfer impedance in a reciprocity calibration set-up. The calculations are performed according to the International Standard IEC 61094-2. In addition a number of options...

  3. Enhanced Sampling in Free Energy Calculations: Combining SGLD with the Bennett's Acceptance Ratio and Enveloping Distribution Sampling Methods.

    Science.gov (United States)

    König, Gerhard; Miller, Benjamin T; Boresch, Stefan; Wu, Xiongwu; Brooks, Bernard R

    2012-10-09

    One of the key requirements for the accurate calculation of free energy differences is proper sampling of conformational space. Especially in biological applications, molecular dynamics simulations are often confronted with rugged energy surfaces and high energy barriers, leading to insufficient sampling and, in turn, poor convergence of the free energy results. In this work, we address this problem by employing enhanced sampling methods. We explore the possibility of using self-guided Langevin dynamics (SGLD) to speed up the exploration process in free energy simulations. To obtain improved free energy differences from such simulations, it is necessary to account for the effects of the bias due to the guiding forces. We demonstrate how this can be accomplished for the Bennett's acceptance ratio (BAR) and the enveloping distribution sampling (EDS) methods. While BAR is considered among the most efficient methods available for free energy calculations, the EDS method developed by Christ and van Gunsteren is a promising development that reduces the computational costs of free energy calculations by simulating a single reference state. To evaluate the accuracy of both approaches in connection with enhanced sampling, EDS was implemented in CHARMM. For testing, we employ benchmark systems with analytical reference results and the mutation of alanine to serine. We find that SGLD with reweighting can provide accurate results for BAR and EDS where conventional molecular dynamics simulations fail. In addition, we compare the performance of EDS with other free energy methods. We briefly discuss the implications of our results and provide practical guidelines for conducting free energy simulations with SGLD.
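
    The BAR step of such a workflow reduces to solving a one-dimensional self-consistency equation for the free energy difference. Below is a minimal sketch, assuming equal numbers of forward and reverse work values in reduced units; the function names and the plain bisection solver are illustrative assumptions, not the CHARMM implementation.

```python
import math

def _fermi(x):
    # 1 / (1 + e^x), guarded against overflow for large positive x
    if x > 700.0:
        return 0.0
    return 1.0 / (1.0 + math.exp(x))

def bar_free_energy(w_forward, w_reverse, beta=1.0,
                    lo=-100.0, hi=100.0, tol=1e-10):
    """Solve the BAR self-consistency equation for dF by bisection.

    w_forward : work values for the 0 -> 1 switching direction
    w_reverse : work values for the 1 -> 0 direction
    Equal sample sizes are assumed (the ln(n_f/n_r) offset is dropped).
    """
    def residual(df):
        # monotonically increasing in df, so bisection is safe
        fwd = sum(_fermi(beta * (w - df)) for w in w_forward)
        rev = sum(_fermi(beta * (w + df)) for w in w_reverse)
        return fwd - rev

    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if residual(mid) >= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

    As a sanity check, identical forward and reverse work sets give dF = 0, and shifting the two sets by +c and -c shifts the answer to exactly c.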

  4. Sample Size Calculation for Controlling False Discovery Proportion

    Directory of Open Access Journals (Sweden)

    Shulian Shang

    2012-01-01

    Full Text Available The false discovery proportion (FDP), the proportion of incorrect rejections among all rejections, is a direct measure of the abundance of false positive findings in multiple testing. Many methods have been proposed to control the FDP, but they are too conservative to be useful for power analysis. Study designs controlling the mean of the FDP, which is the false discovery rate, have been commonly used. However, there has been little attempt to design studies with direct FDP control to achieve a certain level of efficiency. We provide a sample size calculation method using the variance formula of the FDP under weak-dependence assumptions to achieve the desired overall power. The relationship between design parameters and sample size is explored. The adequacy of the procedure is assessed by simulation. We illustrate the method using estimated correlations from a prostate cancer dataset.

  5. Calculation of Collective Variable-based PMF by Combining WHAM with Umbrella Sampling

    International Nuclear Information System (INIS)

    Xu Wei-Xin; Li Yang; Zhang, John Z. H.

    2012-01-01

    Potential of mean force (PMF) with respect to localized reaction coordinates (RCs) such as distance is often applied to evaluate the free energy profile along the reaction pathway for complex molecular systems. However, calculation of PMF as a function of global RCs is still a challenging and important problem in computational biology. We examine the combined use of the weighted histogram analysis method and the umbrella sampling method for the calculation of PMF as a function of a global RC from the coarse-grained Langevin dynamics simulations for a model protein. The method yields the folding free energy profile projected onto a global RC, which is in accord with benchmark results. With this method rare global events would be sufficiently sampled because the biased potential can be used for restricting the global conformation to specific regions during free energy calculations. The strategy presented can also be utilized in calculating the global intra- and intermolecular PMF at more detailed levels. (cross-disciplinary physics and related areas of science and technology)
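
    The WHAM/umbrella-sampling combination described here amounts to iterating two coupled equations: one giving the unbiased bin probabilities from the biased histograms, and one giving each window's free-energy shift from those probabilities. A minimal sketch under simplifying assumptions (pre-binned data, a fixed iteration count, no error analysis); all names are illustrative.

```python
import math

def wham(histograms, biases, n_samples, beta=1.0, iters=2000):
    """Minimal WHAM iteration for umbrella-sampling histograms.

    histograms[i][k] : counts in bin k from window i
    biases[i][k]     : bias energy w_i(x_k) at the centre of bin k
    n_samples[i]     : total samples in window i
    Returns the unbiased bin probabilities P_k (normalised).
    """
    n_win, n_bin = len(histograms), len(histograms[0])
    total = [sum(h[k] for h in histograms) for k in range(n_bin)]
    f = [0.0] * n_win  # window free-energy shifts F_i
    for _ in range(iters):
        # P_k = (sum_i n_ik) / (sum_i N_i exp(beta * (F_i - w_ik)))
        p = []
        for k in range(n_bin):
            denom = sum(n_samples[i] * math.exp(beta * (f[i] - biases[i][k]))
                        for i in range(n_win))
            p.append(total[k] / denom)
        # exp(-beta F_i) = sum_k P_k exp(-beta * w_ik)
        f = [-math.log(sum(p[k] * math.exp(-beta * biases[i][k])
                           for k in range(n_bin))) / beta
             for i in range(n_win)]
    norm = sum(p)
    return [pk / norm for pk in p]
```

    Fed with histograms that are exactly consistent with a flat underlying distribution under two harmonic biases, the iteration recovers a flat PMF, which makes a convenient self-test.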

  6. Validation experience with the core calculation program karate

    International Nuclear Information System (INIS)

    Hegyi, Gy.; Hordosy, G.; Kereszturi, A.; Makai, M.; Maraczy, Cs.

    1995-01-01

    A relatively fast and easy-to-handle modular code system named KARATE-440 has been elaborated for steady-state operational calculations of VVER-440 type reactors. It is built up from cell, assembly and global calculations. Within the framework of the program, the neutron-physical and thermohydraulic processes of the core at normal startup, steady state and slow transients can be simulated. The verification and validation of the global code have recently been performed. The test cases include mathematical benchmarks and measurements on operating VVER-440 units. A summary of the results, such as startup parameters, boron letdown curves, and radial and axial power distributions for several cycles of the Paks NPP, is presented. (author)

  7. Program for photon shielding calculations. Examination of approximations on irradiation geometries

    International Nuclear Information System (INIS)

    Isozumi, Yasuhito; Ishizuka, Fumihiko; Miyatake, Hideo; Kato, Takahisa; Tosaki, Mitsuo

    2004-01-01

    Penetration factors and related numerical data in the 'Manual of Practical Shield Calculation of Radiation Facilities (2000)', which correspond to the irradiation geometries of a point isotropic source in an infinitely thick material (PI), a point isotropic source in a finitely thick material (PF) and vertical incidence on a finitely thick material (VF), have been carefully examined. The shield calculation based on the PI geometry is usually performed with the effective dose penetration factors of radioisotopes given in the 'manual'. The present work clearly shows that such a calculation may lead to an overestimate of more than a factor of two, especially for thick shields of concrete and water. Employing the numerical data in the 'manual', we have developed a simple computer program for the estimation of penetration factors and effective doses of radioisotopes in the different irradiation geometries, i.e., PI, PF and VF. The program can also calculate the effective dose from a set of radioisotopes in different positions, which is necessary for the γ-ray shielding of radioisotope facilities. (author)

  8. Glass sampling program during DWPF Integrated Cold Runs

    International Nuclear Information System (INIS)

    Plodinec, M.J.

    1990-01-01

    The described glass sampling program is designed to achieve two objectives: to demonstrate the Defense Waste Processing Facility's (DWPF) ability to control and verify the radionuclide release properties of the glass product; and to confirm DWPF's readiness to obtain glass samples during production, and SRL's readiness to analyze and test those samples remotely. The DWPF strategy for controlling the radionuclide release properties of the glass product, and for verifying its acceptability, is described in this report. The basic approach of the test program is then defined

  9. Time-dependent importance sampling in semiclassical initial value representation calculations for time correlation functions.

    Science.gov (United States)

    Tao, Guohua; Miller, William H

    2011-07-14

    An efficient time-dependent importance sampling method is developed for the Monte Carlo calculation of time correlation functions via the initial value representation (IVR) of semiclassical (SC) theory. A prefactor-free time-dependent sampling function weights the importance of a trajectory based on the magnitude of its contribution to the time correlation function, and global trial moves are used to facilitate efficient sampling of the phase space of initial conditions. The method can be applied generally to sample rare events efficiently while avoiding becoming trapped in a local region of phase space. Results presented in the paper for two system-bath models demonstrate the efficiency of this new importance sampling method for full SC-IVR calculations.

  10. Thermal Hydraulic Fortran Program for Steady State Calculations of Plate Type Fuel Research Reactors

    International Nuclear Information System (INIS)

    Khedr, H.

    2008-01-01

    The safety assessment of research and power reactors is a continuous process over their lifetime and requires verified and validated codes. Power reactor codes all over the world are well established, having been qualified against real measured data and qualified experimental facilities. These codes are usually sophisticated, require special skills and consume considerable running time. On the other hand, most research reactor codes still require more data for validation and qualification. It is therefore beneficial for a regulatory body, and for companies working in the area of research reactor assessment and design, to have their own program that gives them a quick judgment. The present paper introduces a simple one-dimensional Fortran program called THDSN for steady-state, best-estimate thermal hydraulic (TH) calculations of plate-type fuel research reactors. Besides calculating the fuel and coolant temperature distributions and the pressure gradient in an average and a hot channel, the program calculates the safety limits and margins against the critical phenomena encountered in research reactors, such as the burnout heat flux and the onset of flow instability. Well-known TH correlations for calculating the safety parameters are used. The THDSN program is verified by comparing its results for 2 and 10 MW benchmark reactors with those published in IAEA publications, and good agreement is found. The program's results are also compared with those published for other programs such as PARET and TERMIC. An extension of the program to cover transient TH calculations is underway
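
    The average-channel part of such a steady-state calculation is, at heart, an axial energy balance plus Newton's law of cooling for the clad surface. The sketch below assumes a uniform heat flux and constant properties, and is not the THDSN code; the function name and the lumped "heated width" parameter are illustrative assumptions.

```python
def channel_temperatures(t_in, q_flux, heated_width, length,
                         m_dot, cp, h, n=10):
    """Axial coolant and clad-surface temperatures for one plate-type
    fuel channel with a uniform surface heat flux (toy steady state).

    t_in         : inlet coolant temperature [C]
    q_flux       : surface heat flux [W/m^2], axially uniform
    heated_width : heated perimeter of the channel [m]
    length       : channel length [m]
    m_dot, cp    : coolant mass flow [kg/s] and specific heat [J/kg.K]
    h            : heat transfer coefficient [W/m^2.K]
    Returns a list of (z, T_coolant, T_wall) tuples.
    """
    profile = []
    for i in range(n + 1):
        z = length * i / n
        # energy balance: all heat released up to z heats the coolant
        t_cool = t_in + q_flux * heated_width * z / (m_dot * cp)
        # Newton's law of cooling gives the clad surface temperature
        t_wall = t_cool + q_flux / h
        profile.append((z, t_cool, t_wall))
    return profile
```

    The outlet coolant temperature then equals the inlet value plus total power divided by (m_dot * cp), and the wall runs a constant q''/h above the local coolant temperature, which is an easy hand check.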

  11. BUCKL: a program for rapid calculation of x-ray deposition

    International Nuclear Information System (INIS)

    Cole, R.K. Jr.

    1970-07-01

    A computer program is described which has the fast execution time of exponential codes but also evaluates the effects of fluorescence and scattering. The program makes use of diffusion calculations with a buckling correction included to approximate the effects of finite transverse geometry. Theory and derivations necessary for the BUCKL code are presented, and the code results are compared with those of earlier codes for a variety of problems. Inputs and outputs of the program are described, and a FORTRAN listing is provided. Shortcomings of the program are discussed and suggestions are provided for possible future improvement. (U.S.)

  12. Development of HyPEP, A Hydrogen Production Plant Efficiency Calculation Program

    International Nuclear Information System (INIS)

    Lee, Young Jin; Park, Ji Won; Lee, Won Jae; Shin, Young Joon; Kim, Jong Ho; Hong, Sung Deok; Lee, Seung Wook; Hwang, Moon Kyu

    2007-12-01

    Development of the HyPEP program for assessing the steady-state hydrogen production efficiency of nuclear hydrogen production facilities was carried out. The main development aims of the HyPEP program are extensive use of a GUI for enhanced user friendliness and a fast numerical solution scheme. These features make the program suitable for calculations such as optimisation calculations. HyPEP was developed with object-oriented programming techniques: the components of the facility were modelled as objects in a hierarchical structure in which the inheritance property of object-oriented programs was extensively applied. The Delphi programming language, which is based on Object Pascal, was used for the HyPEP development. The conservation equations for the thermal hydraulic flow network were set up, and the numerical solution scheme was developed and implemented in the HyPEP beta version, which has a working GUI. Due to the premature end of this project, a fully working version of HyPEP was not produced

  13. Sample size calculations for cluster randomised crossover trials in Australian and New Zealand intensive care research.

    Science.gov (United States)

    Arnup, Sarah J; McKenzie, Joanne E; Pilcher, David; Bellomo, Rinaldo; Forbes, Andrew B

    2018-06-01

    The cluster randomised crossover (CRXO) design provides an opportunity to conduct randomised controlled trials to evaluate low risk interventions in the intensive care setting. Our aim is to provide a tutorial on how to perform a sample size calculation for a CRXO trial, focusing on the meaning of the elements required for the calculations, with application to intensive care trials. We use all-cause in-hospital mortality from the Australian and New Zealand Intensive Care Society Adult Patient Database clinical registry to illustrate the sample size calculations. We show sample size calculations for a two-intervention, two 12-month period, cross-sectional CRXO trial. We provide the formulae, and examples of their use, to determine the number of intensive care units required to detect a risk ratio (RR) with a designated level of power between two interventions for trials in which the elements required for sample size calculations remain constant across all ICUs (unstratified design); and in which there are distinct groups (strata) of ICUs that differ importantly in the elements required for sample size calculations (stratified design). The CRXO design markedly reduces the sample size requirement compared with the parallel-group, cluster randomised design for the example cases. The stratified design further reduces the sample size requirement compared with the unstratified design. The CRXO design enables the evaluation of routinely used interventions that can bring about small, but important, improvements in patient care in the intensive care setting.
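
    For orientation, the mechanics of inflating an individually randomised sample size by a design effect can be sketched as below. This is a generic two-proportion calculation with unpooled variance, not the CRXO formulae from the tutorial, whose design effect is built from within- and between-period correlations; treat the ICC-style inflation shown as a placeholder assumption.

```python
import math
from statistics import NormalDist

def n_per_arm(p1, p2, alpha=0.05, power=0.8, design_effect=1.0):
    """Sample size per arm for comparing two proportions (unpooled
    variance, two-sided alpha), inflated by a design effect.

    NOTE: a generic illustration only; a CRXO trial uses its own
    design effect involving within- and between-period correlations.
    """
    z_a = NormalDist().inv_cdf(1.0 - alpha / 2.0)
    z_b = NormalDist().inv_cdf(power)
    var = p1 * (1.0 - p1) + p2 * (1.0 - p2)
    n = (z_a + z_b) ** 2 * var / (p1 - p2) ** 2
    return math.ceil(n * design_effect)

# e.g. n_per_arm(0.10, 0.08) for an individually randomised trial, vs
# n_per_arm(0.10, 0.08, design_effect=1 + (200 - 1) * 0.01) as a
# crude inflation for clusters of 200 patients with an ICC of 0.01.
```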

  14. SHARDA - a program for sample heat, activity, reactivity and dose analysis

    International Nuclear Information System (INIS)

    Shukla, V.K.; Bajpai, Anil

    1985-01-01

    A computer program SHARDA (Sample Heat, Activity, Reactivity and Dose Analysis) has been developed for the safety evaluation of Pile Irradiation Requests (PIR) for various nonfissile materials in the research reactor CIRUS. The code can also be used, with minor modifications, for PIR safety evaluations for the research reactor DHRUVA, now being commissioned. Most of the data needed for such analysis, like isotopic abundances, the various nuclear cross-sections, gamma radiation and shielding data, have been built into the code for all nonfissile naturally occurring elements. PIR safety evaluations can be readily carried out using this code for any sample in elemental, compound or mixture form irradiated in any location of the reactor. This report describes the calculational model and the input/output details of the code. Some earlier irradiations carried out in CIRUS have been analysed using this code and the results have been compared with available operational measurements. (author)

  15. Code Betal to calculation Alpha/Beta activities in environmental samples

    International Nuclear Information System (INIS)

    Romero, L.; Travesi, A.

    1983-01-01

    A code, BETAL, written in FORTRAN IV, was developed to automate the calculation and presentation of the results of total alpha-beta activity measurements in environmental samples. The code performs the calculations necessary to transform the measured activities from total counts to pCi/l, taking into account the efficiency of the detector used and the other necessary parameters. Furthermore, it evaluates the standard deviation of the result and calculates the lower limit of detection for each measurement. The code operates interactively through a screen-operator dialogue, requesting via screen prompts the data needed to perform the activity calculation in each case. The code can be executed from any screen-and-keyboard terminal (whose computer accepts Fortran IV) with a printer connected to the said computer. (Author) 5 refs
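
    The arithmetic such a code automates, from net count rate to concentration, a 1-sigma counting uncertainty, and a Currie-style lower limit of detection, can be sketched as follows. This is not the BETAL source; the 4.66-sigma LLD approximation and all names are stated assumptions (2.22 dpm per pCi).

```python
import math

def activity_pci_per_l(gross_cpm, bkg_cpm, count_min, bkg_min,
                       efficiency, volume_l):
    """Net activity concentration of a counted sample in pCi/L, with a
    1-sigma counting uncertainty and an approximate detection limit.

    gross_cpm, bkg_cpm   : sample and background count rates [cpm]
    count_min, bkg_min   : respective counting times [min]
    efficiency           : detector counting efficiency (0..1)
    volume_l             : sample volume [L]
    """
    net_cpm = gross_cpm - bkg_cpm
    k = efficiency * 2.22 * volume_l          # cpm per (pCi/L)
    activity = net_cpm / k
    # counting statistics: variance of a rate = rate / counting time
    sigma = math.sqrt(gross_cpm / count_min + bkg_cpm / bkg_min) / k
    # widely used Currie-style approximation: LLD ~ 4.66 * sigma_bkg / k
    lld = 4.66 * math.sqrt(bkg_cpm / bkg_min) / k
    return activity, sigma, lld
```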

  16. SCMAG series of programs for calculating superconducting dipole and quadrupole magnets

    International Nuclear Information System (INIS)

    Green, M.A.

    1974-10-01

    Programs SCMAG1, SCMAG2, SCMAG3, and SCMAG4 are a group of programs used to design and calculate the characteristics of conductor dominated superconducting dipole and quadrupole magnets. These magnets are used to bend and focus beams of high energy particles and are being used to design the superconducting magnets for the LBL ESCAR accelerator. The four programs are briefly described. (TFD)

  17. ELIPGRID-PC: A PC program for calculating hot spot probabilities

    International Nuclear Information System (INIS)

    Davidson, J.R.

    1994-10-01

    ELIPGRID-PC, a new personal computer program has been developed to provide easy access to Singer's 1972 ELIPGRID algorithm for hot-spot detection probabilities. Three features of the program are the ability to determine: (1) the grid size required for specified conditions, (2) the smallest hot spot that can be sampled with a given probability, and (3) the approximate grid size resulting from specified conditions and sampling cost. ELIPGRID-PC also provides probability of hit versus cost data for graphing with spread-sheets or graphics software. The program has been successfully tested using Singer's published ELIPGRID results. An apparent error in the original ELIPGRID code has been uncovered and an appropriate modification incorporated into the new program
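
    The geometry behind hot-spot probabilities can be checked numerically: for a circular hot spot with radius no larger than half the grid spacing, the chance that a square sampling grid hits it is simply the area ratio pi*r^2/G^2. Below is a toy Monte Carlo check of that fact, not Singer's ELIPGRID algorithm (which also handles elliptical, tilted targets and other grid shapes); names are illustrative.

```python
import math
import random

def hit_probability(radius, spacing, n_trials=20_000, seed=7):
    """Monte Carlo estimate of the chance that a circular hot spot is
    hit by at least one node of a square sampling grid.

    Valid comparison with pi*r^2/G^2 requires radius <= spacing / 2.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_trials):
        # place the hot-spot centre uniformly inside one grid cell
        cx = rng.random() * spacing
        cy = rng.random() * spacing
        # the nearest grid node is one of the four cell corners
        near = min(math.hypot(cx - gx, cy - gy)
                   for gx in (0.0, spacing) for gy in (0.0, spacing))
        hits += near <= radius
    return hits / n_trials
```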

  18. SCMAG series of programs for calculating superconducting dipole and quadrupole magnets

    International Nuclear Information System (INIS)

    Green, M.A.

    1974-01-01

    A general description is given of four computer programs for calculating the characteristics of superconducting magnets used in the bending and focusing of high-energy particle beams. The programs are being used in the design of magnets for the LBL ESCAR (Experimental Superconducting Accelerator Ring) accelerator. (U.S.)

  19. Experimental verification of photon: A program for use in x-ray shielding calculations

    International Nuclear Information System (INIS)

    Brauer, E.; Thomlinson, W.

    1987-01-01

    At the National Synchrotron Light Source, a computer program named PHOTON has been developed to calculate radiation dose values around a beam line. The output from the program must be an accurate guide to beam line shielding. To test the program, a series of measurements of radiation dose were carried out using existing beam lines; the results were compared to the theoretical calculations of PHOTON. Several different scattering geometries, scattering materials, and sets of walls and shielding materials were studied. Results of the measurements allowed many advances to be made in the program, ultimately resulting in good agreement between the theory and experiment. 3 refs., 6 figs

  20. 40 CFR 600.211-08 - Sample calculation of fuel economy values for labeling.

    Science.gov (United States)

    2010-07-01

    ... AGENCY (CONTINUED) ENERGY POLICY FUEL ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF MOTOR VEHICLES Fuel Economy Regulations for 1977 and Later Model Year Automobiles-Procedures for Calculating Fuel Economy... 40 Protection of Environment 29 2010-07-01 2010-07-01 false Sample calculation of fuel economy...

  1. Reflections on early Monte Carlo calculations

    International Nuclear Information System (INIS)

    Spanier, J.

    1992-01-01

    Monte Carlo methods for solving various particle transport problems developed in parallel with the evolution of increasingly sophisticated computer programs implementing diffusion theory and low-order moments calculations. In these early years, Monte Carlo calculations and high-order approximations to the transport equation were seen as too expensive to use routinely for nuclear design but served as invaluable aids and supplements to design with less expensive tools. The earliest Monte Carlo programs were quite literal; i.e., neutron and other particle random walk histories were simulated by sampling from the probability laws inherent in the physical system without distortion. Use of such analogue sampling schemes resulted in a good deal of time being spent in examining the possibility of lowering the statistical uncertainties in the sample estimates by replacing simple, and intuitively obvious, random variables by those with identical means but lower variances

  2. Effects of sample size on estimates of population growth rates calculated with matrix models.

    Directory of Open Access Journals (Sweden)

    Ian J Fiske

    Full Text Available BACKGROUND: Matrix models are widely used to study the dynamics and demography of populations. An important but overlooked issue is how the number of individuals sampled influences estimates of the population growth rate (lambda calculated with matrix models. Even unbiased estimates of vital rates do not ensure unbiased estimates of lambda-Jensen's Inequality implies that even when the estimates of the vital rates are accurate, small sample sizes lead to biased estimates of lambda due to increased sampling variance. We investigated if sampling variability and the distribution of sampling effort among size classes lead to biases in estimates of lambda. METHODOLOGY/PRINCIPAL FINDINGS: Using data from a long-term field study of plant demography, we simulated the effects of sampling variance by drawing vital rates and calculating lambda for increasingly larger populations drawn from a total population of 3842 plants. We then compared these estimates of lambda with those based on the entire population and calculated the resulting bias. Finally, we conducted a review of the literature to determine the sample sizes typically used when parameterizing matrix models used to study plant demography. CONCLUSIONS/SIGNIFICANCE: We found significant bias at small sample sizes when survival was low (survival = 0.5, and that sampling with a more-realistic inverse J-shaped population structure exacerbated this bias. However our simulations also demonstrate that these biases rapidly become negligible with increasing sample sizes or as survival increases. For many of the sample sizes used in demographic studies, matrix models are probably robust to the biases resulting from sampling variance of vital rates. However, this conclusion may depend on the structure of populations or the distribution of sampling effort in ways that are unexplored. We suggest more intensive sampling of populations when individual survival is low and greater sampling of stages with high

  3. Effects of sample size on estimates of population growth rates calculated with matrix models.

    Science.gov (United States)

    Fiske, Ian J; Bruna, Emilio M; Bolker, Benjamin M

    2008-08-28

    Matrix models are widely used to study the dynamics and demography of populations. An important but overlooked issue is how the number of individuals sampled influences estimates of the population growth rate (lambda) calculated with matrix models. Even unbiased estimates of vital rates do not ensure unbiased estimates of lambda-Jensen's Inequality implies that even when the estimates of the vital rates are accurate, small sample sizes lead to biased estimates of lambda due to increased sampling variance. We investigated if sampling variability and the distribution of sampling effort among size classes lead to biases in estimates of lambda. Using data from a long-term field study of plant demography, we simulated the effects of sampling variance by drawing vital rates and calculating lambda for increasingly larger populations drawn from a total population of 3842 plants. We then compared these estimates of lambda with those based on the entire population and calculated the resulting bias. Finally, we conducted a review of the literature to determine the sample sizes typically used when parameterizing matrix models used to study plant demography. We found significant bias at small sample sizes when survival was low (survival = 0.5), and that sampling with a more-realistic inverse J-shaped population structure exacerbated this bias. However our simulations also demonstrate that these biases rapidly become negligible with increasing sample sizes or as survival increases. For many of the sample sizes used in demographic studies, matrix models are probably robust to the biases resulting from sampling variance of vital rates. However, this conclusion may depend on the structure of populations or the distribution of sampling effort in ways that are unexplored. We suggest more intensive sampling of populations when individual survival is low and greater sampling of stages with high elasticities.
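
    The mechanism described, sampling variance in estimated vital rates propagating into lambda, can be reproduced with a toy two-stage model: estimate two rates from n marked individuals and recompute the dominant eigenvalue each time. The sketch below uses invented parameter values and is not the authors' simulation; all names are illustrative.

```python
import math
import random

def dominant_eigenvalue(m):
    """Dominant eigenvalue (lambda) of a 2x2 projection matrix,
    via the quadratic formula on the characteristic polynomial."""
    tr = m[0][0] + m[1][1]
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return (tr + math.sqrt(tr * tr - 4.0 * det)) / 2.0

def sampled_lambdas(survival, fecundity, growth, n_marked,
                    reps=400, seed=3):
    """Re-estimate lambda after estimating the two rates from only
    n_marked individuals per class (binomial sampling), showing how
    sampling variance propagates into lambda (toy 2-stage model:
    juvenile growth g, adult survival s, fecundity F)."""
    rng = random.Random(seed)
    out = []
    for _ in range(reps):
        g_hat = sum(rng.random() < growth
                    for _ in range(n_marked)) / n_marked
        s_hat = sum(rng.random() < survival
                    for _ in range(n_marked)) / n_marked
        out.append(dominant_eigenvalue([[0.0, fecundity],
                                        [g_hat, s_hat]]))
    return out
```

    Running this with, say, 10 versus 500 marked individuals makes the shrinking spread of the lambda estimates with sample size immediately visible.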

  4. COMPUTER PROGRAM FOR CALCULATION MICROCHANNEL HEAT EXCHANGERS FOR AIR CONDITIONING SYSTEMS

    Directory of Open Access Journals (Sweden)

    Olga V. Olshevska

    2016-08-01

    Full Text Available A computer program was created to calculate microchannel air condensers, reducing design time and enabling variant calculations. Software packages for the thermophysical properties of the working substance and the coolant, correlation equations for calculating heat transfer, aerodynamics and hydrodynamics, and the thermodynamic equations for irreversible losses and their minimization in the heat exchanger were used in the development. Borland Delphi 7 was used to create the software package.

  5. Development of internal dose calculation programing via food ingestion

    International Nuclear Information System (INIS)

    Kim, H. J.; Lee, W. K.; Lee, M. S.

    1998-01-01

    Most dose calculations for the public via the ingestion pathway consider several pathways, starting from the release of radioactive material from a nuclear power plant through its diffusion and migration. To model these complicated pathways mathematically, some assumptions are essential, and a large amount of input data related to the pathways is required. Since these assumptions and input data carry environmental uncertainty, the accuracy of the calculated dose is not reliable. To reduce the reliance on such uncertain assumptions and inputs, this paper presents a method of calculating the exposure dose from the activity of environmental samples detected at any point along the pathway. The dose calculation is applied to the population around the KORI nuclear power plant, using the dose conversion factors recommended in ICRP Publ. 60
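
    The sample-based dose estimate described reduces to summing concentration x consumption x dose coefficient over the measured nuclides and foodstuffs. A sketch, not the authors' program; the consumption figures in the example are invented for illustration, while the Cs-137 coefficient shown is the published adult ingestion value of about 1.3e-8 Sv/Bq.

```python
def ingestion_dose_sv(intakes):
    """Committed effective dose (Sv) from measured activity in foods:
    dose = sum_i C_i * U_i * DCF_i, where C_i is the measured
    concentration (Bq/kg), U_i the annual consumption (kg) and DCF_i
    the ingestion dose coefficient (Sv/Bq) for the nuclide."""
    return sum(c * u * dcf for (c, u, dcf) in intakes)

# e.g. Cs-137 measured in two foodstuffs, with illustrative annual
# consumptions of 40 kg and 60 kg and the adult ingestion coefficient:
# ingestion_dose_sv([(50.0, 40.0, 1.3e-8), (20.0, 60.0, 1.3e-8)])
```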

  6. Program system for calculating streaming neutron radiation field in reactor cavity

    International Nuclear Information System (INIS)

    He Zhongliang; Zhao Shu.

    1986-01-01

    The A23 neutron albedo data base, based on the Monte Carlo method, agrees well with the SAIL albedo data base. The RSCAM program system, using the Monte Carlo method with an albedo approach, is used to calculate the streaming neutron radiation field in the reactor cavity and the containment operating hall. The dose rate distributions calculated with RSCAM in a square concrete duct agree well with experiments

  7. Activity computer program for calculating ion irradiation activation

    Science.gov (United States)

    Palmer, Ben; Connolly, Brian; Read, Mark

    2017-07-01

    A computer program, Activity, was developed to predict the activity and gamma lines of materials irradiated with an ion beam. It uses the TENDL (Koning and Rochman, 2012) [1] proton reaction cross section database, the Stopping and Range of Ions in Matter (SRIM) (Biersack et al., 2010) code, a Nuclear Data Services (NDS) radioactive decay database (Sonzogni, 2006) [2] and an ENDF gamma decay database (Herman and Chadwick, 2006) [3]. An extended version of Bateman's equation is used to calculate the activity at time t, and this equation is solved analytically, with the option to also solve by numeric inverse Laplace Transform as a failsafe. The program outputs the expected activity and gamma lines of the activated material.
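
    For the simplest chain (parent, daughter, stable granddaughter) the extended Bateman solution the abstract mentions collapses to a well-known closed form, sketched below. Names are illustrative, and the two decay constants are assumed distinct.

```python
import math

def two_member_chain(n0, lam1, lam2, t):
    """Closed-form Bateman solution for parent -> daughter -> stable.

    Returns (N1, N2, N3) atom counts at time t for an initially pure
    parent population of n0 atoms. Requires lam1 != lam2.
    """
    n1 = n0 * math.exp(-lam1 * t)
    n2 = (n0 * lam1 / (lam2 - lam1)
          * (math.exp(-lam1 * t) - math.exp(-lam2 * t)))
    n3 = n0 - n1 - n2          # stable end member, by conservation
    return n1, n2, n3

# Activities follow as A_i = lam_i * N_i, and decay heat as the sum of
# A_i times the energy released per decay.
```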

  8. Reliability calculations

    International Nuclear Information System (INIS)

    Petersen, K.E.

    1986-03-01

    Risk and reliability analysis is increasingly being used in evaluations of plant safety and plant reliability. The analysis can be performed either during the design process or during operation, with the purpose of improving the safety or the reliability. Due to plant complexity and safety and availability requirements, sophisticated tools, which are flexible and efficient, are needed. Such tools have been developed in the last 20 years and have to be continuously refined to meet the growing requirements. Two different areas of application were analysed. In structural reliability, probabilistic approaches have been introduced in some cases for the calculation of the reliability of structures or components; a new computer program has been developed based upon numerical integration in several variables. In systems reliability, Monte Carlo simulation programs are used especially in the analysis of very complex systems. In order to increase the applicability of the programs, variance reduction techniques can be applied to speed up the calculation process. Variance reduction techniques have been studied and procedures for the implementation of importance sampling are suggested. (author)
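
    The importance-sampling idea mentioned at the end can be illustrated on a standard toy problem: estimating a small failure-type probability by sampling from a shifted distribution and reweighting each sample by the likelihood ratio. A sketch unrelated to the specific programs discussed; all names are illustrative.

```python
import math
import random

def tail_prob_naive(threshold, n, rng):
    """Direct Monte Carlo: fraction of N(0,1) draws past threshold.
    Wasteful for rare events, since almost no samples hit the tail."""
    return sum(rng.gauss(0.0, 1.0) > threshold for _ in range(n)) / n

def tail_prob_importance(threshold, n, rng):
    """Importance sampling for P(X > threshold), X ~ N(0,1): draw from
    N(threshold, 1), where the event is common, and reweight by the
    likelihood ratio phi(y)/phi(y - c) = exp(c^2/2 - c*y)."""
    c = threshold
    acc = 0.0
    for _ in range(n):
        y = rng.gauss(c, 1.0)
        if y > c:
            acc += math.exp(c * c / 2.0 - c * y)
    return acc / n
```

    For a threshold of 4 the true probability is about 3.2e-5; a naive run of 20,000 samples typically sees zero or one hit, while the importance-sampled estimate lands within a few percent.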

  9. New sampling method in continuous energy Monte Carlo calculation for pebble bed reactors

    International Nuclear Information System (INIS)

    Murata, Isao; Takahashi, Akito; Mori, Takamasa; Nakagawa, Masayuki.

    1997-01-01

    A pebble bed reactor generally has double heterogeneity consisting of two kinds of spherical fuel element. In the core, many fuel balls are piled up randomly at a high packing fraction, and each fuel ball contains many small fuel particles that are also randomly distributed. In this study, to realize precise neutron transport calculations for such reactors with the continuous energy Monte Carlo method, a new sampling method has been developed. The new method has been implemented in the general purpose Monte Carlo code MCNP to develop a modified version, MCNP-BALL. This method was validated by calculating the inventory of spherical fuel elements arranged successively by sampling during the transport calculation, and also by performing criticality calculations in ordered packing models. From the results, it was confirmed that the inventory of spherical fuel elements could be reproduced using MCNP-BALL within a sufficient accuracy of 0.2%. The comparison of criticality calculations in ordered packing models between MCNP-BALL and the reference method shows excellent agreement in neutron spectrum as well as multiplication factor. MCNP-BALL enables us to analyze pebble bed type cores such as PROTEUS precisely with the continuous energy Monte Carlo method. (author)

  10. Development, application and also modern condition of the calculated program Imitator of a reactor

    International Nuclear Information System (INIS)

    Aver'yanova, S.P.; Kovel', A.I.; Mamichev, V.V.; Filimonov, P.E.

    2008-01-01

    Features of the calculational program 'Imitator of a Reactor' (IR) for simulating WWER-1000 operation are discussed. It is noted that using IR at an NPP allows the project program (BIPR-7) to work on-line. This offers a new means, on the one hand, for efficient prediction and informational support of the operator and, on the other, for verification and development of the calculational scheme and neutron-physical model of the WWER-1000 design program

  11. [Formal sample size calculation and its limited validity in animal studies of medical basic research].

    Science.gov (United States)

    Mayer, B; Muche, R

    2013-01-01

    Animal studies are highly relevant for basic medical research, although their use is publicly controversial. Thus, from a biometrical point of view, an optimal sample size should be aimed at in these projects. Statistical sample size calculation is usually the appropriate methodology when planning medical research projects. However, the required information is often not valid, or only becomes available during the course of an animal experiment. This article critically discusses the validity of formal sample size calculation for animal studies. Within the discussion, some requirements are formulated to fundamentally regulate the process of sample size determination for animal experiments.
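    As a concrete illustration of the formal approach under discussion, a minimal two-group calculation based on the normal approximation can be sketched (effect size, significance level and power are assumed for illustration, not taken from the article):

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_group(delta, sigma, alpha=0.05, power=0.80):
    """Normal-approximation sample size for a two-sample comparison of means."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # two-sided significance
    z_beta = z.inv_cdf(power)            # desired power
    return ceil(2 * ((z_alpha + z_beta) * sigma / delta) ** 2)

# Detecting a difference of one standard deviation with 80% power:
n = sample_size_per_group(delta=1.0, sigma=1.0)
print(n)  # 16 animals per group
```

    The article's point is precisely that the inputs delta and sigma are often not reliably known before an animal experiment, which is what limits the validity of this formula in practice.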

  12. DWPI: a computer program to calculate the inelastic scattering of pions from nuclei

    Energy Technology Data Exchange (ETDEWEB)

    Eisenstein, R A; Miller, G A [Carnegie-Mellon Univ., Pittsburgh, Pa. (USA). Dept. of Physics

    1976-02-01

    Angular distributions for the inelastic scattering of pions are generated using the distorted wave impulse approximation (DWIA). The cross section for a given transition is calculated by summing a partial wave expansion. The T-matrix elements are calculated using distorted pion waves from the program PIRK, and therefore include elastic scattering to all orders. The excitation is treated in first order only. Several optical potentials and nuclear densities are available in the program. The transition form factor may be uncoupled from the ground-state density. Coulomb excitation, which interferes coherently with the strong interaction, is a program option.
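    The partial-wave summation described above can be sketched as follows; the T-matrix elements here are hypothetical stand-ins for the PIRK-derived values:

```python
from math import cos

def legendre(lmax, x):
    """P_0..P_lmax at x via the Bonnet recurrence."""
    p = [1.0, x]
    for l in range(1, lmax):
        p.append(((2 * l + 1) * x * p[l] - l * p[l - 1]) / (l + 1))
    return p[:lmax + 1]

def dsigma_domega(tmat, theta):
    """|f(theta)|^2 with f = sum_l (2l+1) T_l P_l(cos theta)."""
    p = legendre(len(tmat) - 1, cos(theta))
    f = sum((2 * l + 1) * t * p[l] for l, t in enumerate(tmat))
    return abs(f) ** 2

# Toy complex T-matrix elements for l = 0..2 (hypothetical values):
tmat = [0.3 + 0.1j, 0.2 - 0.05j, 0.05 + 0.02j]
print(dsigma_domega(tmat, 0.0))  # forward peak, ≈ 1.325
```

    In the actual code the T_l carry the distortion from the elastic channel; here only the summation structure is shown.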

  13. WAD, a program to calculate the heat produced by alpha decay

    International Nuclear Information System (INIS)

    Jarvis, R.G.; Bretzlaff, C.I.

    1982-09-01

    The FORTRAN program WAD (Watts from Alpha Decay) deals with the alpha and beta decay chains to be encountered in advanced fuel cycles for CANDU reactors. The data library covers all necessary alpha-emitting and beta-emitting nuclides and the program calculates the heat produced by alpha decay. Any permissible chain can be constructed very simply
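    The arithmetic behind such a calculation — decay heat as decay rate times energy release per decay — can be sketched with well-known Pu-238 data (not WAD's own library):

```python
from math import log

N_A = 6.02214e23          # Avogadro's number, 1/mol
MEV_TO_J = 1.60218e-13    # joules per MeV
YEAR = 3.1557e7           # seconds per year

def alpha_decay_watts(grams, atomic_mass, half_life_years, q_mev):
    """Heat output (W) of a pure alpha emitter: P = lambda * N * Q."""
    n_atoms = grams * N_A / atomic_mass
    lam = log(2) / (half_life_years * YEAR)   # decay constant, 1/s
    return lam * n_atoms * q_mev * MEV_TO_J

# 1 g of Pu-238 (half-life 87.7 a, Q_alpha = 5.593 MeV):
print(alpha_decay_watts(1.0, 238.05, 87.7, 5.593))  # ≈ 0.57 W
```

    A full WAD-style calculation repeats this for every alpha emitter in the chain and sums the contributions.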

  14. ParShield: A computer program for calculating attenuation parameters of the gamma rays and the fast neutrons

    International Nuclear Information System (INIS)

    Elmahroug, Y.; Tellili, B.; Souga, C.; Manai, K.

    2015-01-01

    Highlights: • Description of the theoretical method used by the ParShield program. • Description of the ParShield program. • Test and validation of the ParShield program. - Abstract: This study presents a new computer program called ParShield which determines neutron and gamma-ray shielding parameters. The program can calculate the total mass attenuation coefficients (μ_t), the effective atomic numbers (Z_eff) and the effective electron densities (N_eff) for gamma rays, and it can also calculate the effective removal cross-sections (Σ_R) for fast neutrons, for mixtures and compounds. The gamma-ray results obtained with ParShield were compared with results calculated by the WinXcom program and with measured results. The obtained values of Σ_R were tested by comparing them with measured results, manually calculated results, and results obtained with the MERCSFN program, and excellent agreement was found between them. ParShield can be used as a fast and effective tool to choose and compare shielding materials, especially for the determination of Z_eff and N_eff, for which no other program in the literature provides calculations.
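    The mixture rule underlying such mass attenuation calculations can be sketched as follows (elemental μ/ρ values assumed from standard tables):

```python
def mixture_mass_attenuation(weight_fractions, mu_rho):
    """Mixture rule: (mu/rho)_mix = sum_i w_i * (mu/rho)_i  [cm^2/g]."""
    assert abs(sum(weight_fractions.values()) - 1.0) < 1e-6
    return sum(w * mu_rho[el] for el, w in weight_fractions.items())

# Water at 1 MeV; elemental mu/rho in cm^2/g taken from standard tables:
mu_rho = {"H": 0.1263, "O": 0.0636}
w = {"H": 2 * 1.008 / 18.015, "O": 15.999 / 18.015}
print(mixture_mass_attenuation(w, mu_rho))  # ≈ 0.0706 cm^2/g
```

    The same weighted-sum structure applies to the fast-neutron removal cross-sections Σ_R, with mass removal coefficients in place of μ/ρ.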

  15. FISPRO: a simplified computer program for general fission product formation and decay calculations

    International Nuclear Information System (INIS)

    Jiacoletti, R.J.; Bailey, P.G.

    1979-08-01

    This report describes a computer program that solves a general form of the fission product formation and decay equations over given time steps for arbitrary decay chains composed of up to three nuclides. All fission product data and operational history data are input through user-defined input files. The program is very useful in the calculation of fission product activities of specific nuclides for various reactor operational histories and accident consequence calculations

  16. GRUCAL: a program system for the calculation of macroscopic group constants

    International Nuclear Information System (INIS)

    Woll, D.

    1984-01-01

    Nuclear reactor calculations require material- and composition-dependent, energy-averaged neutron physics data in order to describe the interaction between neutrons and isotopes. The multigroup cross section code GRUCAL calculates these macroscopic group constants for given material compositions from the material-dependent data of the group constant library GRUBA. The instructions for calculating group constants are not fixed in the program, but are read in from an instruction file. This makes it possible to adapt GRUCAL to various problems or different group constant concepts
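    The core operation — building macroscopic group constants from microscopic cross sections and number densities — can be sketched as follows (thermal 2200 m/s absorption values assumed from standard tables):

```python
N_A = 6.02214e23  # Avogadro's number, 1/mol
BARN = 1e-24      # cm^2 per barn

def macroscopic_xs(density, composition):
    """Sigma = sum_i N_i * sigma_i for a compound.

    composition: list of (weight_fraction, atomic_mass, sigma_barns);
    N_i = rho * w_i * N_A / A_i is the nuclide number density.
    """
    return sum(density * w * N_A / A * sig * BARN
               for w, A, sig in composition)

# Thermal absorption of light water (2200 m/s values from standard tables):
water = [(2 * 1.008 / 18.015, 1.008, 0.332),   # hydrogen
         (15.999 / 18.015, 15.999, 0.00019)]   # oxygen
print(macroscopic_xs(1.0, water))  # ≈ 0.022 cm^-1
```

    GRUCAL performs this summation group by group, with the group-averaged microscopic data supplied by the GRUBA library.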

  17. Sample size calculation while controlling false discovery rate for differential expression analysis with RNA-sequencing experiments.

    Science.gov (United States)

    Bi, Ran; Liu, Peng

    2016-03-31

    RNA-Sequencing (RNA-seq) experiments have been popularly applied to transcriptome studies in recent years. Such experiments are still relatively costly. As a result, RNA-seq experiments often employ a small number of replicates. Power analysis and sample size calculation are challenging in the context of differential expression analysis with RNA-seq data. One challenge is that there are no closed-form formulae to calculate power for the popularly applied tests for differential expression analysis. In addition, false discovery rate (FDR), instead of family-wise type I error rate, is controlled for the multiple testing error in RNA-seq data analysis. So far, there are very few proposals on sample size calculation for RNA-seq experiments. In this paper, we propose a procedure for sample size calculation while controlling FDR for RNA-seq experimental design. Our procedure is based on the weighted linear model analysis facilitated by the voom method which has been shown to have competitive performance in terms of power and FDR control for RNA-seq differential expression analysis. We derive a method that approximates the average power across the differentially expressed genes, and then calculate the sample size to achieve a desired average power while controlling FDR. Simulation results demonstrate that the actual power of several popularly applied tests for differential expression is achieved and is close to the desired power for RNA-seq data with sample size calculated based on our method. Our proposed method provides an efficient algorithm to calculate sample size while controlling FDR for RNA-seq experimental design. We also provide an R package ssizeRNA that implements our proposed method and can be downloaded from the Comprehensive R Archive Network ( http://cran.r-project.org ).
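    The general idea — translating an FDR target into a per-test significance level and then sizing the experiment for a desired average power — can be sketched with a crude normal approximation (this is a simplification, not the authors' voom-based procedure; gene counts and effect size are assumed):

```python
from math import ceil
from statistics import NormalDist

def fdr_sample_size(m, m1, fdr=0.05, power=0.80, effect=1.0):
    """Per-group sample size for m tests of which m1 carry true effects.

    Uses the approximation FDR ~ m0*alpha / (m0*alpha + m1*power)
    to back out a per-test alpha, then a two-sample normal formula
    with a common standardized effect size.
    """
    m0 = m - m1
    alpha = fdr * m1 * power / ((1 - fdr) * m0)  # per-test level
    z = NormalDist()
    z_a, z_b = z.inv_cdf(1 - alpha / 2), z.inv_cdf(power)
    return ceil(2 * ((z_a + z_b) / effect) ** 2)

# 10,000 genes, 100 differentially expressed, standardized effect 1:
print(fdr_sample_size(10000, 100))
```

    The published method replaces both approximations with the voom weighted linear model and an average over gene-specific effect sizes, but the FDR-to-alpha conversion plays the same role.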

  18. Computer program FPIP-REV calculates fission product inventory for U-235 fission

    Science.gov (United States)

    Brown, W. S.; Call, D. W.

    1967-01-01

    Computer program calculates fission product inventories and source strengths associated with the operation of U-235 fueled nuclear power reactor. It utilizes a fission-product nuclide library of 254 nuclides, and calculates the time dependent behavior of the fission product nuclides formed by fissioning of U-235.

  19. Calculating Program for Decommissioning Work Productivity based on Decommissioning Activity Experience Data

    Energy Technology Data Exchange (ETDEWEB)

    Song, Chan-Ho; Park, Seung-Kook; Park, Hee-Seong; Moon, Jei-kwon [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2014-10-15

    KAERI is performing research to calculate decommissioning work-unit productivity coefficients, in order to estimate the time and cost of decommissioning work on the basis of decommissioning activity experience data from KRR-2. KAERI calculates decommissioning costs and manages decommissioning activity experience data through systems such as the Decommissioning Information Management System (DECOMMIS), the Decommissioning Facility Characterization DB System (DEFACS) and the Decommissioning Work-unit Productivity Calculation System (DEWOCS). In particular, the data underlying the cost calculations are organized in a coded work breakdown structure (WBS) based on the KRR-2 decommissioning experience, and the defined WBS codes are used by each system in the cost calculation. In this paper, we present a program that calculates decommissioning costs using the decommissioning experience of KRR-2, UCP and other countries, by mapping similar target facilities between NPPs and KRR-2. The paper is organized as follows: Chapter 2 discusses the decommissioning work productivity calculation method, and the method of mapping decommissioning target facilities is described within the productivity calculation program. At KAERI, research on various decommissioning methodologies for domestic NPPs will be conducted in the near future. Determining the cost of decommissioning facilities such as NPPs is difficult because many variables exist, such as the material, size and radiological conditions of the target facility.
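    The notion of a work-unit productivity coefficient scaled to a mapped target facility can be sketched as follows (all quantities and rates are hypothetical, not KRR-2 data):

```python
def unit_productivity(man_hours, quantity):
    """Work-unit productivity: effort spent per unit dismantled (h per unit)."""
    return man_hours / quantity

def estimate_cost(quantity, productivity, labor_rate):
    """Scale experience-based productivity to a similar, mapped target facility."""
    hours = quantity * productivity
    return hours, hours * labor_rate

# Hypothetical experience record: 1200 man-hours to dismantle 30 t of piping.
p = unit_productivity(1200.0, 30.0)         # 40 h/t
hours, cost = estimate_cost(45.0, p, 60.0)  # mapped facility: 45 t at $60/h
print(hours, cost)  # 1800.0 108000.0
```

    The real program draws the productivity coefficients from the WBS-coded experience database rather than from a single record.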

  20. Calculating Program for Decommissioning Work Productivity based on Decommissioning Activity Experience Data

    International Nuclear Information System (INIS)

    Song, Chan-Ho; Park, Seung-Kook; Park, Hee-Seong; Moon, Jei-kwon

    2014-01-01

    KAERI is performing research to calculate decommissioning work-unit productivity coefficients, in order to estimate the time and cost of decommissioning work on the basis of decommissioning activity experience data from KRR-2. KAERI calculates decommissioning costs and manages decommissioning activity experience data through systems such as the Decommissioning Information Management System (DECOMMIS), the Decommissioning Facility Characterization DB System (DEFACS) and the Decommissioning Work-unit Productivity Calculation System (DEWOCS). In particular, the data underlying the cost calculations are organized in a coded work breakdown structure (WBS) based on the KRR-2 decommissioning experience, and the defined WBS codes are used by each system in the cost calculation. In this paper, we present a program that calculates decommissioning costs using the decommissioning experience of KRR-2, UCP and other countries, by mapping similar target facilities between NPPs and KRR-2. The paper is organized as follows: Chapter 2 discusses the decommissioning work productivity calculation method, and the method of mapping decommissioning target facilities is described within the productivity calculation program. At KAERI, research on various decommissioning methodologies for domestic NPPs will be conducted in the near future. Determining the cost of decommissioning facilities such as NPPs is difficult because many variables exist, such as the material, size and radiological conditions of the target facility

  1. Slicken 1.0: Program for calculating the orientation of shear on reactivated faults

    Science.gov (United States)

    Xu, Hong; Xu, Shunshan; Nieto-Samaniego, Ángel F.; Alaniz-Álvarez, Susana A.

    2017-07-01

    The slip vector on a fault is an important parameter in the study of a fault's movement history and faulting mechanism. Although many graphical programs exist to represent shear stress (or slickenline) orientations on faults, programs that quantitatively calculate the orientation of fault slip for a given stress field are scarce. We therefore developed Slicken 1.0, a software program that rapidly calculates the orientation of maximum shear stress on any fault plane. For this direct method of calculating the resolved shear stress on a planar surface, the input data are the unit vector normal to the involved plane, the unit vectors of the three principal stress axes, and the stress ratio. The advantage of this program is that vertical or horizontal principal stresses are not required. Owing to its lightweight design in Java SE 8.0, it runs on most operating systems with the corresponding Java VM. The software will be practical for geoscience students, geologists and engineers, and will help address a deficiency in field, structural and engineering geology.
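    The direct method described — resolving the stress tensor onto the fault plane and removing the normal component — can be sketched as follows (a minimal re-implementation in Python, not the Java program itself):

```python
from math import sqrt

def resolved_shear_direction(n, e1, e2, e3, phi):
    """Unit vector of maximum shear stress on a plane with unit normal n.

    e1, e2, e3: unit vectors of the principal stress axes;
    phi = (s2 - s3) / (s1 - s3) is the stress ratio, so the
    relative principal magnitudes can be taken as 1, phi, 0.
    """
    mags = (1.0, phi, 0.0)
    axes = (e1, e2, e3)
    # traction t = sigma . n, built from the spectral decomposition
    t = [sum(m * sum(a[i] * n[i] for i in range(3)) * a[j]
             for m, a in zip(mags, axes)) for j in range(3)]
    tn = sum(t[i] * n[i] for i in range(3))      # normal component
    s = [t[i] - tn * n[i] for i in range(3)]     # shear (tangential) component
    norm = sqrt(sum(c * c for c in s))
    return [c / norm for c in s]

# Plane whose normal bisects the sigma1 (x) and sigma3 (z) axes:
n = (1 / sqrt(2), 0.0, 1 / sqrt(2))
d = resolved_shear_direction(n, (1, 0, 0), (0, 1, 0), (0, 0, 1), phi=0.5)
print(d)  # ≈ [0.707, 0.0, -0.707]: slip toward the sigma1 axis
```

    Note that only the relative magnitudes enter: the shear *direction* depends on the principal axes and the stress ratio, which is why absolute stress values are not needed as input.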

  2. Guidance for establishment and implementation of field sample management programs in support of EM environmental sampling and analysis activities

    International Nuclear Information System (INIS)

    1994-01-01

    The role of the National Sample Management Program (NSMP) proposed by the Department of Energy's Office of Environmental Management (EM) is to be a resource for EM programs and for local Field Sample Management Programs (FSMPs). It will be a source of information on sample analysis and data collection within the DOE complex. The purpose of this document is to establish the suggested scope of the FSMP activities to be performed under each Operations Office, list the drivers under which the program will operate, define terms and list references. This guidance will apply only to EM sampling and analysis activities associated with project planning, contracting, laboratory selection, sample collection, sample transportation, laboratory analysis and data management

  3. Poker-camp: a program for calculating detector responses and phantom organ doses in environmental gamma fields

    International Nuclear Information System (INIS)

    Koblinger, L.

    1981-09-01

    This report gives a general description, user's manual and sample problem for the POKER-CAMP adjoint Monte Carlo photon transport program. The code simulates gamma fields from different environmental sources: sources distributed uniformly or exponentially, or plane sources, in the air, in the soil, or in an intermediate layer placed between them. Calculations can be made of the flux, kerma and spectra of photons at any point; of the responses of point-like, cylindrical or spherical detectors; and of doses absorbed in anthropomorphic phantoms. (author)

  4. Measurement assurance program for LSC analyses of tritium samples

    International Nuclear Information System (INIS)

    Levi, G.D. Jr.; Clark, J.P.

    1997-01-01

    Liquid Scintillation Counting (LSC) for Tritium is done on 600 to 800 samples daily as part of a contamination control program at the Savannah River Site's Tritium Facilities. The tritium results from the LSCs are used: to release items as radiologically clean; to establish radiological control measures for workers; and to characterize waste. The following is a list of the sample matrices that are analyzed for tritium: filter paper smears, aqueous, oil, oily rags, ethylene glycol, ethyl alcohol, freon and mercury. Routine and special causes of variation in standards, counting equipment, environment, operators, counting times, samples, activity levels, etc. produce uncertainty in the LSC measurements. A comprehensive analytical process measurement assurance program such as JTIPMAP™ has been implemented. The process measurement assurance program is being used to quantify and control many of the sources of variation and provide accurate estimates of the overall measurement uncertainty associated with the LSC measurements. The paper will describe LSC operations, process improvements, quality control and quality assurance programs along with future improvements associated with the implementation of the process measurement assurance program

  5. Programmable calculator programs to solve softwood volume and value equations.

    Science.gov (United States)

    Janet K. Ayer. Sachet

    1982-01-01

    This paper presents product value and product volume equations as programs for handheld calculators. These tree equations are for inland Douglas-fir, young-growth Douglas-fir, western white pine, ponderosa pine, and western larch. Operating instructions and an example are included.

  6. DCHAIN: A user-friendly computer program for radioactive decay and reaction chain calculations

    International Nuclear Information System (INIS)

    East, L.V.

    1994-05-01

    A computer program for calculating the time-dependent daughter populations in radioactive decay and nuclear reaction chains is described. Chain members can have non-zero initial populations and be produced from the preceding chain member as the result of radioactive decay, a nuclear reaction, or both. As presently implemented, chains can contain up to 15 members. Program input can be supplied interactively or read from ASCII data files. Time units for half-lives, etc. can be specified during data entry. Input values are verified and can be modified if necessary, before used in calculations. Output results can be saved in ASCII files in a format suitable for including in reports or other documents. The calculational method, described in some detail, utilizes a generalized form of the Bateman equations. The program is written in the C language in conformance with current ANSI standards and can be used on multiple hardware platforms
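    The generalized Bateman solution at the heart of such codes can be sketched for a chain with distinct decay constants:

```python
from math import exp, prod

def bateman(lambdas, n0, t):
    """Population of the last member of a decay chain at time t.

    lambdas: distinct decay constants of chain members 1..n (1/s);
    n0: initial population of member 1 (members 2..n start empty).
    N_n(t) = n0 * prod(lambda_1..lambda_{n-1})
           * sum_j exp(-lambda_j t) / prod_{k!=j}(lambda_k - lambda_j)
    """
    n = len(lambdas)
    coeff = n0 * prod(lambdas[:-1])
    return coeff * sum(
        exp(-lambdas[j] * t)
        / prod(lambdas[k] - lambdas[j] for k in range(n) if k != j)
        for j in range(n))

# Two-member chain, lambda1 = 1, lambda2 = 2: N2(t) = e^-t - e^-2t
print(bateman([1.0, 2.0], 1.0, 0.5))  # ≈ 0.2387
```

    DCHAIN generalizes this by allowing non-zero initial populations of every member and production by nuclear reactions as well as decay; the formula above covers only the classic single-parent case with distinct decay constants.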

  7. An application program for fission product decay heat calculations

    International Nuclear Information System (INIS)

    Pham, Ngoc Son; Katakura, Jun-ichi

    2007-10-01

    The precise knowledge of decay heat is one of the most important factors in safety design and operation of nuclear power facilities. Furthermore, decay heat data also play an important role in design of fuel discharges, fuel storage and transport flasks, and in spent fuel management and processing. In this study, a new application program, called DHP (Decay Heat Power program), has been developed for exact decay heat summation calculations, uncertainty analysis, and for determination of the individual contribution of each fission product. The analytical methods were applied in the program without any simplification or approximation, in which all of linear and non-linear decay chains, and 12 decay modes, including ground state and meta-stable states, are automatically identified, and processed by using a decay data library and a fission yield data file, both in ENDF/B-VI format. The window interface of the program is designed with optional properties which is very easy for users to run the code. (author)

  8. The study of importance sampling in Monte-carlo calculation of blocking dips

    International Nuclear Information System (INIS)

    Pan Zhengying; Zhou Peng

    1988-01-01

    Angular blocking dips around the axis in an Al single crystal for α-particles of about 2 MeV produced at a depth of 0.2 μm are calculated by Monte Carlo simulation. The influence of particle emission into a small solid angle, and of importance sampling over the emission solid angle, has been investigated. By means of importance sampling, more reasonable results with high accuracy are obtained
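    The variance-reduction idea can be illustrated on a generic rare-event problem — a standard-normal tail probability rather than the blocking geometry itself:

```python
import random
from math import exp, sqrt, erf

random.seed(12345)

def tail_importance_sampling(c, n):
    """Estimate P(X > c), X ~ N(0,1), sampling from N(c,1) instead.

    The weight is the density ratio phi(x) / phi(x - c) = exp(c^2/2 - c*x);
    concentrating samples in the region of interest shrinks the variance.
    """
    total = 0.0
    for _ in range(n):
        x = random.gauss(c, 1.0)             # proposal centered in the tail
        if x > c:
            total += exp(c * c / 2 - c * x)  # likelihood-ratio weight
    return total / n

exact = 0.5 * (1 - erf(3 / sqrt(2)))  # P(X > 3) = 1.3499e-3
est = tail_importance_sampling(3.0, 20000)
print(est, exact)
```

    Naive sampling would see only ~27 hits in 20,000 draws; the weighted estimator reaches percent-level accuracy with the same budget, which is the same gain the authors exploit for emission into a small solid angle.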

  9. Programs for data processing in radioimmunoassay using the HP-41C programmable calculator

    International Nuclear Information System (INIS)

    1981-09-01

    The programs described provide for analysis, with the Hewlett Packard HP-41C calculator, of counting data collected in radioimmunoassays or other related in-vitro assays. The immediate reason for their development was to assist laboratories having limited financial resources and serious problems of quality control. The programs are structured both for ''off-line'' use, with manual entry of counting data into the calculator through the keyboard, and, in a slightly altered version, for ''on-line'' use, with automatic data entry from an automatic well scintillation counter originally designed at the IAEA. Only the off-line variant of the programs is described. The programs determine from appropriate counting data the concentration of analyte in unknown specimens, and provide supplementary information about the reliability of these results and the consistency of current and past assay performance

  10. Computer program for calculation of complex chemical equilibrium compositions and applications. Part 1: Analysis

    Science.gov (United States)

    Gordon, Sanford; Mcbride, Bonnie J.

    1994-01-01

    This report presents the latest in a number of versions of chemical equilibrium and applications programs developed at the NASA Lewis Research Center over more than 40 years. These programs have changed over the years to include additional features and improved calculation techniques and to take advantage of constantly improving computer capabilities. The minimization-of-free-energy approach to chemical equilibrium calculations has been used in all versions of the program since 1967. The two principal purposes of this report are presented in two parts. The first purpose, which is accomplished here in part 1, is to present in detail a number of topics of general interest in complex equilibrium calculations. These topics include mathematical analyses and techniques for obtaining chemical equilibrium; formulas for obtaining thermodynamic and transport mixture properties and thermodynamic derivatives; criteria for inclusion of condensed phases; calculations at a triple point; inclusion of ionized species; and various applications, such as constant-pressure or constant-volume combustion, rocket performance based on either a finite- or infinite-chamber-area model, shock wave calculations, and Chapman-Jouguet detonations. The second purpose of this report, to facilitate the use of the computer code, is accomplished in part 2, entitled 'Users Manual and Program Description'. Various aspects of the computer code are discussed, and a number of examples are given to illustrate its versatility.

  11. ptchg: A FORTRAN program for point-charge calculations of electric field gradients (EFGs)

    Science.gov (United States)

    Spearing, Dane R.

    1994-05-01

    ptchg, a FORTRAN program, has been developed to calculate electric field gradients (EFG) around an atomic site in crystalline solids using the point-charge direct-lattice summation method. It uses output from the crystal structure generation program Atoms as its input. As an application of ptchg, a point-charge calculation of the EFG quadrupolar parameters around the oxygen site in SiO 2 cristobalite is demonstrated. Although point-charge calculations of electric field gradients generally are limited to ionic compounds, the computed quadrupolar parameters around the oxygen site in SiO 2 cristobalite, a highly covalent material, are in good agreement with the experimentally determined values from nuclear magnetic resonance (NMR) spectroscopy.
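    The point-charge direct-lattice summation reduces to accumulating the classical EFG tensor; a minimal sketch over a finite set of charges (a real calculation sums over many lattice shells until convergence):

```python
def efg_tensor(charges):
    """Point-charge EFG at the origin: V_ab = sum q (3 r_a r_b - r^2 d_ab) / r^5.

    charges: list of (q, (x, y, z)) in consistent units.
    """
    V = [[0.0] * 3 for _ in range(3)]
    for q, r in charges:
        r2 = sum(c * c for c in r)
        r5 = r2 ** 2.5
        for a in range(3):
            for b in range(3):
                V[a][b] += q * (3 * r[a] * r[b] - (r2 if a == b else 0.0)) / r5
    return V

# Single unit charge on the z axis: axially symmetric, traceless EFG.
V = efg_tensor([(1.0, (0.0, 0.0, 1.0))])
print(V[2][2], V[0][0])  # 2.0 -1.0  (Vzz = 2q/d^3, Vxx = Vyy = -q/d^3)
```

    Diagonalizing V and ordering |Vzz| >= |Vyy| >= |Vxx| then yields the quadrupolar coupling and asymmetry parameter compared against NMR in the paper.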

  12. Are the program packages for molecular structure calculations really black boxes?

    Directory of Open Access Journals (Sweden)

    ANA MRAKOVIC

    2007-12-01

    In this communication it is shown that the widely held opinion that compact program packages for quantum-mechanical calculations of molecular structure can safely be used as black boxes is completely wrong. In order to illustrate this, the results of computations of equilibrium bond lengths, vibrational frequencies and dissociation energies for all homonuclear diatomic molecules involving atoms from the first two rows of the Periodic Table, performed using the Gaussian program package, are presented. It is demonstrated that sensible use of the program requires a solid knowledge of quantum chemistry.

  13. 'PRIZE': A program for calculating collision probabilities in R-Z geometry

    International Nuclear Information System (INIS)

    Pitcher, H.H.W.

    1964-10-01

    PRIZE is an IBM7090 program which computes collision probabilities for systems with axial symmetry and outputs them on cards in suitable format for the PIP1 program. Its method of working, data requirements, output, running time and accuracy are described. The program has been used to compute non-escape (self-collision) probabilities of finite circular cylinders, and a table is given by which non-escape probabilities of slabs, finite and infinite circular cylinders, infinite square cylinders, cubes, spheres and hemispheres may quickly be calculated to 1/2% or better. (author)

  14. Calculation of pressure distribution in vacuum systems using a commercial finite element program

    International Nuclear Information System (INIS)

    Howell, J.; Wehrle, B.; Jostlein, H.

    1991-01-01

    The finite element method has proven to be a very useful tool for calculating pressure distributions in complex vacuum systems. A number of finite element programs have been developed for this specific task. For those who do not have access to one of these specialized programs and do not wish to develop their own program, another option is available. Any commercial finite element program with heat transfer analysis capabilities can be used to calculate pressure distributions. The approach uses an analogy between thermal conduction and gas conduction with the quantity temperature substituted for pressure. The thermal analogies for pumps, gas loads and tube conductances are described in detail. The method is illustrated for an example vacuum system. A listing of the ANSYS data input file for this example is included. 2 refs., 4 figs., 1 tab
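    The analogy maps pressure to temperature, throughput to heat flow, tube conductance to thermal conductance, and a pump to a heat sink at zero temperature. A small node-balance solve illustrates what the finite element program computes (geometry and values are hypothetical):

```python
def solve_pressures(n_nodes, conductance, pump_speed, gas_load):
    """Steady-state node pressures in a chain of tube segments.

    Node balance (the 'heat conduction' equations with p in place of T):
      C*(p[i-1]-p[i]) + C*(p[i+1]-p[i]) + Q[i] - S[i]*p[i] = 0
    conductance C and pump speeds S in l/s, gas loads Q in mbar*l/s.
    """
    # Build the linear system A p = b and solve by Gaussian elimination.
    A = [[0.0] * n_nodes for _ in range(n_nodes)]
    b = [0.0] * n_nodes
    for i in range(n_nodes):
        A[i][i] = pump_speed[i]
        b[i] = gas_load[i]
        for j in (i - 1, i + 1):
            if 0 <= j < n_nodes:
                A[i][i] += conductance
                A[i][j] -= conductance
    for i in range(n_nodes):                  # naive elimination, no pivoting
        for k in range(i + 1, n_nodes):
            f = A[k][i] / A[i][i]
            b[k] -= f * b[i]
            for j in range(n_nodes):
                A[k][j] -= f * A[i][j]
    p = [0.0] * n_nodes
    for i in reversed(range(n_nodes)):
        p[i] = (b[i] - sum(A[i][j] * p[j] for j in range(i + 1, n_nodes))) / A[i][i]
    return p

# Pump (100 l/s) at node 0, outgassing 1e-6 mbar*l/s at the far end,
# two tube segments of 10 l/s conductance each:
p = solve_pressures(3, 10.0, [100.0, 0.0, 0.0], [0.0, 0.0, 1e-6])
print(p)  # rises away from the pump, ≈ [1e-8, 1.1e-7, 2.1e-7] mbar
```

    A commercial thermal solver assembles and solves exactly this kind of system, which is why substituting pressure for temperature works without modifying the code.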

  15. Air sampling program at the Portsmouth Gaseous Diffusion Plant

    International Nuclear Information System (INIS)

    Hulett, S.H.

    1975-01-01

    An extensive air sampling program has been developed at the Portsmouth Gaseous Diffusion Plant for monitoring the concentrations of radioactive aerosols present in the atmosphere on plantsite as well as in the environs. The program is designed to minimize exposures of employees and the environment to airborne radioactive particulates. Five different air sampling systems, utilizing either filtration or impaction, are employed for measuring airborne alpha and beta-gamma activity produced from 235 U and 234 Th, respectively. Two of the systems have particle selection capabilities: a personal sampler with a 10-mm nylon cyclone eliminates most particles larger than about 10 microns in diameter; and an Annular Kinetic Impactor collects particulates greater than 0.4 microns in diameter which have a density greater than 12-15 gm/cm 3 . A Hi-Volume Air Sampler and an Eberline Model AIM-3 Scintillation Air Monitor are used in collecting short-term samples for assessing compliance with ''ceiling'' standards or peak concentration limits. A film-sort aperture IBM card system is utilized for continuous 8-hour samples. This sampling program has proven to be both practical and effective for assuring accurate monitoring of the airborne activity associated with plant operations

  16. The Navy/NASA Engine Program (NNEP89): Interfacing the program for the calculation of complex Chemical Equilibrium Compositions (CEC)

    Science.gov (United States)

    Gordon, Sanford

    1991-01-01

    The NNEP is a general computer program for calculating aircraft engine performance. NNEP has been used extensively to calculate the design and off-design (matched) performance of a broad range of turbine engines, ranging from subsonic turboprops to variable cycle engines for supersonic transports. Recently, however, there has been increased interest in applications that NNEP is not capable of simulating, such as the use of alternate fuels, including cryogenic fuels, and the inclusion of chemical dissociation effects at high temperatures. To overcome these limitations, NNEP was extended by including a general chemical equilibrium method. This permits consideration of any propellant system and the calculation of performance with dissociation effects. The new extended program is referred to as NNEP89.

  17. Gamma self-shielding correction factors calculation for aqueous bulk sample analysis by PGNAA technique

    International Nuclear Information System (INIS)

    Nasrabadi, M.N.; Mohammadi, A.; Jalali, M.

    2009-01-01

    In this paper bulk sample prompt gamma neutron activation analysis (BSPGNAA) was applied to aqueous sample analysis using a relative method. For elemental analysis of an unknown bulk sample, gamma self-shielding coefficient was required. Gamma self-shielding coefficient of unknown samples was estimated by an experimental method and also by MCNP code calculation. The proposed methodology can be used for the determination of the elemental concentration of unknown aqueous samples by BSPGNAA where knowledge of the gamma self-shielding within the sample volume is required.
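    For a simple slab geometry the gamma self-shielding factor has a closed form, which can serve as a quick check on a Monte Carlo estimate (an assumed slab model, not the paper's actual sample geometry):

```python
from math import exp

def slab_self_shielding(mu, thickness):
    """Average gamma transmission factor of a uniformly emitting slab.

    f = (1 - exp(-mu*t)) / (mu*t) for linear attenuation coefficient mu
    (1/cm) and thickness t (cm); f -> 1 for thin or weakly absorbing samples.
    """
    x = mu * thickness
    return 1.0 if x < 1e-12 else (1.0 - exp(-x)) / x

# Aqueous sample, mu ≈ 0.07 /cm at ~1 MeV, 2 cm thick:
print(slab_self_shielding(0.07, 2.0))  # ≈ 0.93
```

    For the cylindrical or irregular bulk volumes used in BSPGNAA no such closed form exists, which is why the coefficient is obtained experimentally or from an MCNP calculation.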

  18. Preparation of a program for the independent verification of the brachytherapy planning systems calculations

    International Nuclear Information System (INIS)

    V Carmona, V.; Perez-Calatayud, J.; Lliso, F.; Richart Sancho, J.; Ballester, F.; Pujades-Claumarchirant, M.C.; Munoz, M.

    2010-01-01

    In this work a program is presented that independently checks, for each patient, the treatment planning system calculations in low dose rate, high dose rate and pulsed dose rate brachytherapy. The treatment planning system output text files are loaded automatically into the program in order to obtain the source coordinates, the desired calculation point coordinates and, where applicable, the dwell times. The source strength and the reference dates are entered by the user. The program allows the recommendations on independent verification of clinical brachytherapy dosimetry to be implemented in a simple and accurate way, in a few minutes. (Author).
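    A minimal sketch of the kind of check such a program automates, here reduced to a bare inverse-square sum over dwell positions. It ignores the TG-43 radial dose and anisotropy functions, so it is an order-of-magnitude sanity check only (all values hypothetical):

```python
def point_source_check(sk, dwells, point):
    """Crude dose-rate surrogate: sum of S_k * t_i / r_i^2 over dwell positions.

    sk: source strength; dwells: list of (time_s, (x, y, z)) in cm.
    Ignores anisotropy, scatter and attenuation -- a sanity check only.
    """
    total = 0.0
    for t, pos in dwells:
        r2 = sum((p - c) ** 2 for p, c in zip(point, pos))
        total += sk * t / r2
    return total

# Two dwell positions along a catheter, point 1 cm off the first dwell:
dwells = [(10.0, (0.0, 0.0, 0.0)), (10.0, (0.0, 0.0, 0.5))]
d = point_source_check(1.0, dwells, (1.0, 0.0, 0.0))
print(d)  # 10/1 + 10/1.25 = 18.0
```

    The published program instead reproduces the full clinical dose formalism from the planning system's exported coordinates, times and source strength; the value of either approach is that it is computed independently of the planning system itself.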

  19. Comparison of the results of radiation transport calculation obtained by means of different programs

    International Nuclear Information System (INIS)

    Gorbatkov, D.V.; Kruchkov, V.P.

    1995-01-01

    Calculated results for radiation transport obtained with well-known programs and constant libraries (MCNP+ENDF/B, ANISN+HILO, FLUKA92) are verified by comparing them with precision calculations performed with the ROZ-6N+Sadko program-and-constant complex and with experimental data. Satisfactory agreement is shown with the MCNP+ENDF/B package for the energy range E<14 MeV. Deviations of the results obtained with the ANISN+HILO package for E<400 MeV and with the FLUKA92 program for E<200 GeV are analyzed. 25 refs., 12 figs., 3 tabs

  20. Complex of programs for calculating radiation fields outside plane protecting shields, bombarded by high-energy nucleons

    International Nuclear Information System (INIS)

    Gel'fand, E.K.; Man'ko, B.V.; Serov, A.Ya.; Sychev, B.S.

    1979-01-01

    A complex of programs for modelling various radiation situations at high-energy proton accelerators is considered. The programs are divided into three main groups according to their purpose. The first group includes programs for preparing constants describing the processes of interaction of different particles with matter. The second group calculates the complete particle distribution functions arising in shields irradiated by high-energy nucleons. Concrete radiation situations arising at high-energy proton accelerators are calculated by means of the programs of the third group. A list of the programs and their brief characteristics are given

  1. MCFT: a program for calculating fast and thermal neutron multigroup constants

    International Nuclear Information System (INIS)

    Yang Shunhai; Sang Xinzeng

    1993-01-01

MCFT is a program for calculating fast and thermal neutron multigroup constants, redesigned from earlier codes for the generation of thermal and fast neutron multigroup constants and adapted to the CYBER 825 computer. It accepts as basic input evaluated nuclear data from the ENDF/B (US), KEDAK (Germany) and UK (United Kingdom) libraries. The code includes a section devoted to the generation of resonant Doppler-broadened cross sections in the framework of the single- or multi-level Breit-Wigner formalism. The program can take the thermal neutron scattering law S(α, β, T) as input in tabular, free-gas or diffusive-motion form. It can treat up to 200 energy groups and Legendre moments up to P5. The output consists of the various reaction multigroup constants over the whole neutron energy range needed in nuclear reactor design and calculation. Three options can be selected in the input file. The output format is arbitrary and defined by the user with a minimum of program modification. The program comprises about 15,000 cards and 184 subroutines. It is written in FORTRAN 5 and runs under the NOS 2 operating system on the CYBER 825

  2. TRIGLAV-W a Windows computer program package with graphical users interface for TRIGA reactor core management calculations

    International Nuclear Information System (INIS)

    Zagar, T.; Zefran, B.; Slavic, S.; Snoj, L.; Ravnik, M.

    2006-01-01

TRIGLAV-W is a program package for reactor calculations of TRIGA Mark II research reactor cores. The package runs under the Microsoft Windows operating system and has a new, user-friendly graphical user interface (GUI). The main part of the package is the TRIGLAV code, based on a two-dimensional diffusion approximation for the flux distribution calculation. The GUI helps the user prepare the input files, runs the main code and displays the output files. TRIGLAV-W also offers a user-friendly GUI for the visualisation of the calculation results, which can be displayed as 2D and 3D coloured graphs for easy presentation and analysis. In the paper the many options of the new GUI are presented along with the results of extensive testing of the program. The results of the TRIGLAV-W program package were compared with the results of the WIMS-D and MCNP codes for calculations of a TRIGA benchmark. The TRIGLAV-W program was also tested using several libraries developed under the IAEA WIMS-D Library Update Project. Additional literature and an application form for TRIGLAV-W beta testing can be found at http://www.rcp.ijs.si/triglav/. (author)

  3. A computer program to calculate the committed dose equivalent after the inhalation of radioactivity

    International Nuclear Information System (INIS)

    Van der Woude, S.

    1989-03-01

    A growing number of people are, as part of their occupation, at risk of being exposed to radiation originating from sources inside their bodies. The quantification of this exposure is an important part of health physics. The International Commission on Radiological Protection (ICRP) developed a first-order kinetics compartmental model to determine the transport of radioactive material through the human body. The model and the parameters involved in its use, are discussed. A versatile computer program was developed to do the following after the in vivo measurement of either the organ- or whole-body activity: calculate the original amount of radioactive material which was inhaled (intake) by employing the ICRP compartmental model of the human body; compare this intake to calculated reference levels and state any action to be taken for the case under consideration; calculate the committed dose equivalent resulting from this intake. In the execution of the above-mentioned calculations, the computer program makes provision for different aerosol particle sizes and the effect of previous intakes. Model parameters can easily be changed to take the effects of, for instance, medical intervention into account. The computer program and the organization of the data in the input files are such that the computer program can be applied to any first-order kinetics compartmental model. The computer program can also conveniently be used for research on problems related to the application of the ICRP model. 18 refs., 25 figs., 5 tabs
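The intake back-calculation described above can be sketched as follows. The two-compartment retention function and its parameters below are invented for illustration and are not actual ICRP values:

```python
import numpy as np

# Hypothetical two-compartment retention model; the fractions and biological
# half-lives are invented for illustration, not actual ICRP parameters
fractions = np.array([0.6, 0.4])         # fraction of intake per compartment
half_lives_d = np.array([0.5, 50.0])     # biological half-lives (days)
lam = np.log(2.0) / half_lives_d         # first-order rate constants (1/day)

def retained_fraction(t_days):
    """Fraction of a single inhalation intake still present after t_days."""
    return float(np.sum(fractions * np.exp(-lam * t_days)))

# Back-calculate the intake from a whole-body measurement
measured_bq = 2000.0                     # measured whole-body activity (Bq)
t = 7.0                                  # days since the assumed single intake
intake_bq = measured_bq / retained_fraction(t)
```

The estimated intake can then be compared to reference levels and multiplied by a committed-dose coefficient, as the program does.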

  4. Thermal-hydraulic Fortran program for steady-state calculations of plate-type fuel research reactors

    Directory of Open Access Journals (Sweden)

    Khedr Ahmed

    2008-01-01

Full Text Available The safety assessment of research and power reactors is a continuous process covering their lifespan and requiring verified and validated codes. Power reactor codes all over the world are well established and qualified against real measured data and qualified experimental facilities. These codes are usually sophisticated, require special skills and consume a lot of running time. On the other hand, most research reactor codes still require much more data for validation and qualification. It is, therefore, of benefit to any regulatory body to develop its own codes for the review and assessment of research reactors. The present paper introduces a simple, one-dimensional Fortran program called THDSN for steady-state thermal-hydraulic calculations of plate-type fuel research reactors. Besides calculating the fuel and coolant temperature distributions and pressure gradients in an average and a hot channel, the program calculates the safety limits and margins against the critical phenomena encountered in research reactors, such as the onset of nucleate boiling, critical heat flux and flow instability. Well-known thermal-hydraulic correlations for calculating the safety parameters and several formulas for the heat transfer coefficient have been used. The THDSN program was verified by comparing its results for 2 and 10 MW benchmark reactors with those published in IAEA publications, and good agreement was found. The results of the program are also compared with those published for other programs, such as PARET and TERMIC.
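A minimal sketch of the kind of hot-channel arithmetic such a program performs: an energy balance for the coolant exit temperature plus a Dittus-Boelter heat transfer coefficient. All channel data and water properties below are illustrative assumptions, not the THDSN benchmark values:

```python
# Hypothetical plate-channel data (illustrative, not the THDSN benchmarks)
q_w = 5.0e4        # heat input to one channel (W)
m_dot = 0.25       # coolant mass flow rate (kg/s)
cp = 4180.0        # water specific heat (J/kg.K)
t_in = 38.0        # inlet temperature (C)

# Channel exit temperature from a simple energy balance
t_out = t_in + q_w / (m_dot * cp)

# Heat transfer coefficient from the Dittus-Boelter correlation
rho, mu, k_w = 992.0, 6.5e-4, 0.63   # water properties (kg/m3, Pa.s, W/m.K)
d_h = 4.0e-3                         # hydraulic diameter (m)
v = 3.0                              # coolant velocity (m/s)
re = rho * v * d_h / mu              # Reynolds number
pr = mu * cp / k_w                   # Prandtl number
nu = 0.023 * re**0.8 * pr**0.4       # Nusselt number (turbulent heating)
h = nu * k_w / d_h                   # heat transfer coefficient (W/m2.K)
```

With `h` and the local heat flux, the clad surface temperature follows, and margins such as the onset of nucleate boiling can be checked against it.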

  5. BASIC Program for the calculation of radioactive activities

    International Nuclear Information System (INIS)

    Cortes P, A.; Tejera R, A.; Becerril V, A.

    1990-04-01

When one measures radioactive activity with a detection system that uses a gamma radiation detector (Ge or NaI(Tl)), it is necessary to take into account parameters and correction factors that make the calculations difficult and tedious, consuming considerable time of the person who carries out the measurements and frequently leading to erroneous results. In this work a computer program in BASIC language that solves this problem is presented. (Author)
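The kind of correction arithmetic the abstract alludes to can be sketched as follows; the detector efficiency, gamma yield and counting data are invented for illustration:

```python
import math

# Invented counting data for illustration
net_counts = 12500.0        # net peak counts (background subtracted)
live_time_s = 3600.0        # counting live time (s)
efficiency = 0.025          # full-energy-peak efficiency at this energy
gamma_yield = 0.85          # gamma emission probability per decay

# Activity at counting time (Bq)
activity = net_counts / (live_time_s * efficiency * gamma_yield)

# Decay-correct back to a reference date (Co-60 as the example nuclide)
half_life_s = 5.27 * 365.25 * 86400.0
elapsed_s = 30.0 * 86400.0  # time between reference date and counting
activity_ref = activity * math.exp(math.log(2.0) * elapsed_s / half_life_s)
```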

  6. A microcomputer program for coupled cycle burnup calculations

    International Nuclear Information System (INIS)

    Driscoll, M.J.; Downar, T.J.; Taylor, E.L.

    1986-01-01

    A program, designated BRACC (Burnup, Reactivity, And Cycle Coupling), has been developed for fuel management scoping calculations, and coded in the BASIC language in an interactive format for use with microcomputers. BRACC estimates batch and cycle burnups for sequential reloads for a variety of initial core conditions, and permits the user to specify either reload batch properties (enrichment, burnable poison reactivity) or the target cycle burnup. Most important fuel management tactics (out-in or low-leakage loading, coastdown, variation in number of assemblies charged) can be simulated
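A sketch of the batch/cycle coupling idea, using the linear reactivity model (a standard scoping approximation, not necessarily the exact formulation coded in BRACC); `rho_0` and the burnup slope are illustrative:

```python
def cycle_and_discharge_burnup(rho_0, a_slope, n_batches):
    """Linear reactivity model: each batch loses reactivity linearly with
    burnup, rho(B) = rho_0 - a_slope * B, and end of cycle occurs when the
    core-average reactivity reaches zero. With B1 = rho_0 / a_slope, the
    cycle burnup is Bc = 2 * B1 / (n + 1) and discharge burnup Bd = n * Bc."""
    b1 = rho_0 / a_slope
    bc = 2.0 * b1 / (n_batches + 1)
    return bc, n_batches * bc

# Illustrative numbers: fuel worth 25 MWd/kg in single-batch operation,
# managed in a three-batch out-in scheme
bc, bd = cycle_and_discharge_burnup(rho_0=0.25, a_slope=0.01, n_batches=3)
```

Increasing the number of batches raises the discharge burnup extracted from the same fuel, which is the trade-off such scoping tools let the user explore.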

  7. Parallelization for first principles electronic state calculation program

    International Nuclear Information System (INIS)

    Watanabe, Hiroshi; Oguchi, Tamio.

    1997-03-01

In this report we study the parallelization of a first-principles electronic state calculation program. The target machines are the NEC SX-4 for shared-memory parallelization and the FUJITSU VPP300 for distributed-memory parallelization. The features of each parallel machine are surveyed, and parallelization methods suitable for each are proposed. It is shown that a speedup of 1.60 is achieved with 2-CPU parallelization on the SX-4 and a speedup of 4.97 with 12-PE parallelization on the VPP300. (author)

  8. A computer program for unilateral renal clearance calculation by a modified Oberhausen method

    International Nuclear Information System (INIS)

    Brueggemann, G.

    1980-01-01

A FORTRAN program is presented which, on the basis of data obtained with the NUKLEOPAN M, calculates the glomerular filtration rate with 99mTc-DTPA, the unilateral effective renal plasma flow with 131 I-hippuran, and the parameters describing the isotope nephrogram (ING) with 131 I-hippuran. The results are calculated fully automatically upon entry of the data, then processed and printed out. The theoretical fundamentals of the ING and of whole-body clearance calculation are presented, as well as the methods available for unilateral clearance calculation, and the FORTRAN program is described in detail. The standard values of the method are documented, as well as a comparative gamma camera study of 48 patients to determine the accuracy of unilateral imaging with the NUKLEOPAN M instrument, a comparison of unilateral clearances by the Oberhausen and Taplin methods, and a comparison between 7/17' plasma clearance and whole-body clearance. Problems and findings of the method are discussed. (orig./MG) [de

  9. A comparison of fitness-case sampling methods for genetic programming

    Science.gov (United States)

    Martínez, Yuliana; Naredo, Enrique; Trujillo, Leonardo; Legrand, Pierrick; López, Uriel

    2017-11-01

    Genetic programming (GP) is an evolutionary computation paradigm for automatic program induction. GP has produced impressive results but it still needs to overcome some practical limitations, particularly its high computational cost, overfitting and excessive code growth. Recently, many researchers have proposed fitness-case sampling methods to overcome some of these problems, with mixed results in several limited tests. This paper presents an extensive comparative study of four fitness-case sampling methods, namely: Interleaved Sampling, Random Interleaved Sampling, Lexicase Selection and Keep-Worst Interleaved Sampling. The algorithms are compared on 11 symbolic regression problems and 11 supervised classification problems, using 10 synthetic benchmarks and 12 real-world data-sets. They are evaluated based on test performance, overfitting and average program size, comparing them with a standard GP search. Comparisons are carried out using non-parametric multigroup tests and post hoc pairwise statistical tests. The experimental results suggest that fitness-case sampling methods are particularly useful for difficult real-world symbolic regression problems, improving performance, reducing overfitting and limiting code growth. On the other hand, it seems that fitness-case sampling cannot improve upon GP performance when considering supervised binary classification.
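Of the four methods compared, Lexicase Selection is the easiest to sketch: fitness cases are considered in random order, and only individuals that are elite on each successive case survive. A minimal, self-contained version (the toy population and error matrix are invented):

```python
import random

def lexicase_select(population, errors, rng=None):
    """Return one individual chosen by lexicase selection.

    population: list of candidate programs (any objects)
    errors[i][j]: error of individual i on fitness case j
    """
    rng = rng or random.Random()
    candidates = list(range(len(population)))
    cases = list(range(len(errors[0])))
    rng.shuffle(cases)                 # consider fitness cases in random order
    for case in cases:
        best = min(errors[i][case] for i in candidates)
        candidates = [i for i in candidates if errors[i][case] == best]
        if len(candidates) == 1:
            break
    return population[rng.choice(candidates)]

# Toy example: p0 is elite on case 0, p1 on case 1; p2 is elite on neither
pop = ["p0", "p1", "p2"]
errs = [[0.0, 5.0], [3.0, 0.0], [4.0, 4.0]]
winner = lexicase_select(pop, errs, random.Random(0))
```

Because selection pressure comes from individual cases rather than an aggregated fitness, specialists like `p0` and `p1` can win even though their average error is no better than `p2`'s.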

  10. Examinations on Applications of Manual Calculation Programs on Lung Cancer Radiation Therapy Using Analytical Anisotropic Algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jung Min; Kim, Dae Sup; Hong, Dong Ki; Back, Geum Mun; Kwak, Jung Won [Dept. of Radiation Oncology, , Seoul (Korea, Republic of)

    2012-03-15

There was a problem in using MU verification programs based on the Pencil Beam Convolution (PBC) algorithm with radiation treatment plans around the lung calculated using the Analytical Anisotropic Algorithm (AAA), because MU errors arose. In this study, we examined methods to verify treatment plans calculated with the AAA. Using the Eclipse treatment planning system (Version 8.9, Varian, USA), each of the 57 fields of 7 lung Stereotactic Body Radiation Therapy (SBRT) cases was calculated with both the PBC algorithm and the AAA. Deriving the MU of the established plans, we compared and analyzed them against the MU of the manual calculation programs. We analyzed the relationship between the errors and 4 variables that can affect the errors produced by the PBC algorithm and the AAA in commonly used programs: field size, lung path distance of the beam, tumor path distance of the beam, and effective depth. Errors of the PBC algorithm were 0.2±1.0% and errors of the AAA were 3.5±2.8%. Furthermore, analysis of the 4 variables showed that the error increased with the lung path distance (correlation coefficient 0.648, P=0.000), from which we derived an MU correction factor, A.E=L.P 0.00903+0.02048; applying it in the manual calculation program reduced the errors from 3.5±2.8% to within 0.4±2.0%. In this study, we learned that the errors of the manual calculation program increase as the lung path distance of the beam increases, and that the MU of the AAA can be verified with a simple method, the MU correction factor.

  11. Examinations on Applications of Manual Calculation Programs on Lung Cancer Radiation Therapy Using Analytical Anisotropic Algorithm

    International Nuclear Information System (INIS)

    Kim, Jung Min; Kim, Dae Sup; Hong, Dong Ki; Back, Geum Mun; Kwak, Jung Won

    2012-01-01

There was a problem in using MU verification programs based on the Pencil Beam Convolution (PBC) algorithm with radiation treatment plans around the lung calculated using the Analytical Anisotropic Algorithm (AAA), because MU errors arose. In this study, we examined methods to verify treatment plans calculated with the AAA. Using the Eclipse treatment planning system (Version 8.9, Varian, USA), each of the 57 fields of 7 lung Stereotactic Body Radiation Therapy (SBRT) cases was calculated with both the PBC algorithm and the AAA. Deriving the MU of the established plans, we compared and analyzed them against the MU of the manual calculation programs. We analyzed the relationship between the errors and 4 variables that can affect the errors produced by the PBC algorithm and the AAA in commonly used programs: field size, lung path distance of the beam, tumor path distance of the beam, and effective depth. Errors of the PBC algorithm were 0.2±1.0% and errors of the AAA were 3.5±2.8%. Furthermore, analysis of the 4 variables showed that the error increased with the lung path distance (correlation coefficient 0.648, P=0.000), from which we derived an MU correction factor, A.E=L.P 0.00903+0.02048; applying it in the manual calculation program reduced the errors from 3.5±2.8% to within 0.4±2.0%. In this study, we learned that the errors of the manual calculation program increase as the lung path distance of the beam increases, and that the MU of the AAA can be verified with a simple method, the MU correction factor.
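Taking the fitted relation quoted in the abstract, A.E = L.P × 0.00903 + 0.02048, at face value, the correction can be applied as below. The unit of the lung path distance and the sign convention of the correction are assumptions, since the abstract does not state them:

```python
def aaa_mu_correction(mu_manual, lung_path_cm):
    """Correct a PBC-based manual MU toward the AAA plan using the fitted
    relation quoted in the abstract: A.E = L.P * 0.00903 + 0.02048,
    where A.E is taken as a fractional anticipated error and L.P as the
    lung path distance. The L.P unit (assumed cm here) and the sign of
    the correction (assumed additive here) are not given in the abstract."""
    anticipated_error = lung_path_cm * 0.00903 + 0.02048
    return mu_manual * (1.0 + anticipated_error)

# 100 MU field traversing 5 cm of lung (illustrative values)
mu = aaa_mu_correction(100.0, 5.0)
```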

  12. Radioimmunoassay evaluation and quality control by use of a simple computer program for a low cost desk top calculator

    International Nuclear Information System (INIS)

    Schwarz, S.

    1980-01-01

A simple computer program for the data processing and quality control of radioimmunoassays is presented. It is written for a low-cost programmable desk-top calculator (Hewlett Packard 97), which can be afforded by smaller laboratories. The raw counts from the scintillation spectrometer are entered manually; the printout gives the following results: initial data, logit-log transformed calibration points, parameters of the goodness of fit and of the position of the standard curve, and dose estimates for control and unknown samples (mean value from single-dose interpolations and scatter of replicates), together with the automatic calculation of the within-assay variance and, by use of magnetic cards holding the control parameters of all previous assays, the between-assay variance. (orig.) [de
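The logit-log processing mentioned above can be sketched as follows: fit logit(B/B0) against log(dose) for the calibration standards, then interpolate unknowns from the fitted line. The standard-curve data below are invented for illustration:

```python
import math

def logit(p):
    return math.log(p / (1.0 - p))

# Invented standards: dose (ng/mL) vs bound fraction B/B0
doses = [0.5, 1.0, 2.0, 4.0, 8.0]
b_b0 = [0.80, 0.65, 0.48, 0.32, 0.18]

# Least-squares fit of logit(B/B0) against log(dose)
xs = [math.log(d) for d in doses]
ys = [logit(b) for b in b_b0]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx

def dose_from_response(b):
    """Interpolate an unknown sample's dose from its bound fraction."""
    return math.exp((logit(b) - intercept) / slope)

unknown = dose_from_response(0.40)   # between the 2 and 4 ng/mL standards
```

Replicate scatter around the fitted line then feeds the within-assay variance, and the stored curve parameters the between-assay variance.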

  13. Sample triage : an overview of Environment Canada's program

    Energy Technology Data Exchange (ETDEWEB)

    Lambert, P.; Goldthorp, M.; Fingas, M. [Environment Canada, Ottawa, ON (Canada). Emergencies Science and Technology Division, Environmental Technology Centre, Science and Technology Branch

    2006-07-01

The Chemical, biological and radiological/nuclear Research and Technology Initiative (CRTI) is a program led by Canada's Department of National Defence in an effort to improve the capability of providing technical and analytical support in the event of a terrorist-related event. This paper summarized the findings of the CRTI Sample Triage Working Group and reviewed information on Environment Canada's triage program and its mobile sample inspection facility, which was designed to help examine samples of hazardous materials in a controlled environment and to minimize the risk of exposure. A sample triage program is designed to deal with administrative, health and safety issues by facilitating the safe transfer of samples to an analytical laboratory. It refers to the collation of all results, including field screening information, intelligence and observations, for the purpose of prioritizing and directing the sample to the appropriate laboratory for analysis. A central component of Environment Canada's Emergency Response Program has been its capacity to respond on site during an oil or chemical spill. As such, the Emergencies Science and Technology Division acquired a new mobile sample inspection facility in 2004. It is constructed to work with a custom-designed decontamination unit and a Ford F450 tow vehicle. The criteria and general design of the trailer facility were described. This paper also outlined the steps taken following a spill of hazardous materials into the environment so that potentially dangerous samples can be safely assessed. Several field trials will be carried out in order to develop standard operating procedures for the mobile sample inspection facility. 6 refs., 6 figs., 4 appendices.

  14. Calculator: A Hardware Design, Math and Software Programming Project Base Learning

    Directory of Open Access Journals (Sweden)

    F. Criado

    2015-03-01

Full Text Available This paper presents the implementation by the students of a complex calculator in hardware. The project meets hardware design goals and also strongly motivates students to apply competences learned in other subjects. The learning process associated with system design is hard enough, because the students have to deal with parallel execution, signal delay and synchronization. To strengthen their knowledge of hardware design, a project-based learning (PBL) methodology is proposed. Moreover, it is also used to reinforce cross-curricular subjects such as mathematics and software programming. This methodology creates a course dynamic that is closer to a professional environment, where students work with software and mathematics to solve hardware design problems. The students design the functionality of the calculator from scratch: they make the decisions about the mathematical operations it is able to solve, the operand format, and how a complex equation is entered into the calculator, which increases their intrinsic motivation. In addition, since these choices may have consequences for the reliability of the calculator, students are encouraged to implement in software the decisions about the selected mathematical algorithm. Although mathematics and hardware design are two tough subjects for students, the perception they have at the end of the course is quite positive.

  15. Aerosol sampling and Transport Efficiency Calculation (ASTEC) and application to surtsey/DCH aerosol sampling system: Code version 1.0: Code description and user's manual

    International Nuclear Information System (INIS)

    Yamano, N.; Brockmann, J.E.

    1989-05-01

This report describes the features and use of the Aerosol Sampling and Transport Efficiency Calculation (ASTEC) code. The ASTEC code has been developed to assess aerosol transport efficiency in source term experiments at Sandia National Laboratories. The code also has broad application to aerosol sampling and transport efficiency calculations in general, as well as to aerosol transport considerations in nuclear reactor safety issues. 32 refs., 31 figs., 7 tabs

  16. DNAStat, version 2.1--a computer program for processing genetic profile databases and biostatistical calculations.

    Science.gov (United States)

    Berent, Jarosław

    2010-01-01

    This paper presents the new DNAStat version 2.1 for processing genetic profile databases and biostatistical calculations. The popularization of DNA studies employed in the judicial system has led to the necessity of developing appropriate computer programs. Such programs must, above all, address two critical problems, i.e. the broadly understood data processing and data storage, and biostatistical calculations. Moreover, in case of terrorist attacks and mass natural disasters, the ability to identify victims by searching related individuals is very important. DNAStat version 2.1 is an adequate program for such purposes. The DNAStat version 1.0 was launched in 2005. In 2006, the program was updated to 1.1 and 1.2 versions. There were, however, slight differences between those versions and the original one. The DNAStat version 2.0 was launched in 2007 and the major program improvement was an introduction of the group calculation options with the potential application to personal identification of mass disasters and terrorism victims. The last 2.1 version has the option of language selection--Polish or English, which will enhance the usage and application of the program also in other countries.

  17. 24 CFR 4001.203 - Calculation of upfront and annual mortgage insurance premiums for Program mortgages.

    Science.gov (United States)

    2010-04-01

    ... mortgage insurance premiums for Program mortgages. 4001.203 Section 4001.203 Housing and Urban Development... HOMEOWNERS PROGRAM HOPE FOR HOMEOWNERS PROGRAM Rights and Obligations Under the Contract of Insurance § 4001.203 Calculation of upfront and annual mortgage insurance premiums for Program mortgages. (a...

  18. Free Energy Calculations using a Swarm-Enhanced Sampling Molecular Dynamics Approach.

    Science.gov (United States)

    Burusco, Kepa K; Bruce, Neil J; Alibay, Irfan; Bryce, Richard A

    2015-10-26

    Free energy simulations are an established computational tool in modelling chemical change in the condensed phase. However, sampling of kinetically distinct substates remains a challenge to these approaches. As a route to addressing this, we link the methods of thermodynamic integration (TI) and swarm-enhanced sampling molecular dynamics (sesMD), where simulation replicas interact cooperatively to aid transitions over energy barriers. We illustrate the approach by using alchemical alkane transformations in solution, comparing them with the multiple independent trajectory TI (IT-TI) method. Free energy changes for transitions computed by using IT-TI grew increasingly inaccurate as the intramolecular barrier was heightened. By contrast, swarm-enhanced sampling TI (sesTI) calculations showed clear improvements in sampling efficiency, leading to more accurate computed free energy differences, even in the case of the highest barrier height. The sesTI approach, therefore, has potential in addressing chemical change in systems where conformations exist in slow exchange. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
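The TI side of the method reduces to a quadrature of ⟨dU/dλ⟩ over the coupling parameter. A minimal sketch, with mock window averages standing in for the quantities accumulated from MD trajectories:

```python
import numpy as np

# Lambda windows and mock trajectory averages of dU/dlambda (kJ/mol);
# in a real IT-TI or sesTI run these come from the MD simulations themselves
lambdas = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
du_dlambda = np.array([12.0, 8.5, 5.1, 2.4, 0.9])

# Free energy difference by trapezoidal quadrature over lambda:
# Delta G = integral_0^1 <dU/dlambda> dlambda
delta_g = float(np.sum(0.5 * (du_dlambda[1:] + du_dlambda[:-1])
                       * np.diff(lambdas)))
```

The sesTI improvement reported above changes how the per-window averages are sampled (cooperating replicas crossing barriers), not this final quadrature step.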

  19. GUIDE TO CALCULATING TRANSPORT EFFICIENCY OF AEROSOLS IN OCCUPATIONAL AIR SAMPLING SYSTEMS

    Energy Technology Data Exchange (ETDEWEB)

    Hogue, M.; Hadlock, D.; Thompson, M.; Farfan, E.

    2013-11-12

    This report will present hand calculations for transport efficiency based on aspiration efficiency and particle deposition losses. Because the hand calculations become long and tedious, especially for lognormal distributions of aerosols, an R script (R 2011) will be provided for each element examined. Calculations are provided for the most common elements in a remote air sampling system, including a thin-walled probe in ambient air, straight tubing, bends and a sample housing. One popular alternative approach would be to put such calculations in a spreadsheet, a thorough version of which is shared by Paul Baron via the Aerocalc spreadsheet (Baron 2012). To provide greater transparency and to avoid common spreadsheet vulnerabilities to errors (Burns 2012), this report uses R. The particle size is based on the concept of activity median aerodynamic diameter (AMAD). The AMAD is a particle size in an aerosol where fifty percent of the activity in the aerosol is associated with particles of aerodynamic diameter greater than the AMAD. This concept allows for the simplification of transport efficiency calculations where all particles are treated as spheres with the density of water (1g cm-3). In reality, particle densities depend on the actual material involved. Particle geometries can be very complicated. Dynamic shape factors are provided by Hinds (Hinds 1999). Some example factors are: 1.00 for a sphere, 1.08 for a cube, 1.68 for a long cylinder (10 times as long as it is wide), 1.05 to 1.11 for bituminous coal, 1.57 for sand and 1.88 for talc. Revision 1 is made to correct an error in the original version of this report. The particle distributions are based on activity weighting of particles rather than based on the number of particles of each size. Therefore, the mass correction made in the original version is removed from the text and the calculations. Results affected by the change are updated.
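Two of the hand calculations described, the settling velocity of an AMAD-sized unit-density sphere and a gravitational-loss estimate for horizontal tubing, can be sketched as follows. The well-mixed penetration formula is a rough screening estimate, not the full laminar-flow expression used in the report, and the sampling-line dimensions are invented:

```python
import math

def settling_velocity(d_um, rho_p=1000.0):
    """Terminal settling velocity (m/s) of a sphere of aerodynamic
    diameter d_um (micrometres): Stokes law with slip correction."""
    d = d_um * 1.0e-6                  # diameter (m)
    mu = 1.81e-5                       # air viscosity (Pa.s)
    mfp = 0.066e-6                     # air mean free path (m), near sea level
    kn = 2.0 * mfp / d                 # Knudsen number
    cc = 1.0 + kn * (1.257 + 0.4 * math.exp(-1.1 / kn))  # slip correction
    return rho_p * d**2 * 9.81 * cc / (18.0 * mu)

def gravitational_penetration(d_um, q_lpm, length_m, tube_d_m):
    """Well-mixed screening estimate of penetration through a horizontal
    tube: P = exp(-Vts * d * L / Q), d*L being the projected floor area."""
    q = q_lpm / 60000.0                # L/min -> m3/s
    return math.exp(-settling_velocity(d_um) * tube_d_m * length_m / q)

vts = settling_velocity(5.0)           # unit-density 5 um AMAD particle
p = gravitational_penetration(5.0, q_lpm=2.0, length_m=1.0, tube_d_m=0.01)
```

Treating every particle as a unit-density sphere of the AMAD, as the report describes, is what lets a whole lognormal activity distribution collapse to a single such calculation.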

  20. Package of programs for calculating accidents involving melting of the materials in a fast-reactor vessel

    International Nuclear Information System (INIS)

    Vlasichev, G.N.

    1994-01-01

    Methods for calculating one-dimensional nonstationary temperature distribution in a system of physically coupled materials are described. Six computer programs developed for calculating accident processes for fast reactor core melt are described in the article. The methods and computer programs take into account melting, solidification, and, in some cases, vaporization of materials. The programs perform calculations for heterogeneous systems consisting of materials with arbitrary but constant composition and heat transfer conditions at material boundaries. Additional modules provide calculations of specific conditions of heat transfer between materials, the change in these conditions and configuration of the materials as a result of coolant boiling, melting and movement of the fuel and structural materials, temperature dependences of thermophysical properties of the materials, and heat release in the fuel. 11 refs., 3 figs
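The melting/solidification treatment such programs need can be sketched with a one-dimensional explicit enthalpy method, where temperature is recovered from volumetric enthalpy across the melting plateau. The material data and boundary conditions below are illustrative, not those of any specific reactor material:

```python
import numpy as np

nx, dx, dt = 50, 1e-3, 0.01            # nodes, node size (m), time step (s)
rho, cp, k = 8000.0, 500.0, 20.0       # density, heat capacity, conductivity
t_melt, latent = 1700.0, 2.7e5         # melting point (K), latent heat (J/kg)

h_sol = rho * cp * t_melt              # volumetric enthalpy at solidus (J/m3)
h_liq = h_sol + rho * latent           # volumetric enthalpy at liquidus (J/m3)

def temp_from_enthalpy(h):
    """Invert the enthalpy-temperature relation across the melting plateau."""
    t = np.where(h < h_sol, h / (rho * cp), t_melt)
    return np.where(h > h_liq, t_melt + (h - h_liq) / (rho * cp), t)

# Explicit-scheme stability: alpha * dt / dx^2 <= 0.5
alpha = k / (rho * cp)
assert alpha * dt / dx**2 <= 0.5

enthalpy = rho * cp * np.full(nx, 300.0)          # start at 300 K everywhere
h_hot = h_liq + rho * cp * (2500.0 - t_melt)      # molten boundary at 2500 K

for _ in range(2000):                             # 20 s of transient
    enthalpy[0] = h_hot
    temp = temp_from_enthalpy(enthalpy)
    lap = np.zeros(nx)
    lap[1:-1] = temp[2:] - 2.0 * temp[1:-1] + temp[:-2]
    enthalpy += dt * k * lap / dx**2              # far end stays adiabatic

enthalpy[0] = h_hot
temp = temp_from_enthalpy(enthalpy)
```

Advancing enthalpy rather than temperature lets nodes sit on the melting plateau while latent heat is absorbed, without tracking the front explicitly.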

  1. Sample results from the interim salt disposition program macrobatch 9 tank 21H qualification samples

    Energy Technology Data Exchange (ETDEWEB)

    Peters, T. B. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2015-11-01

    Savannah River National Laboratory (SRNL) analyzed samples from Tank 21H in support of qualification of Macrobatch (Salt Batch) 9 for the Interim Salt Disposition Program (ISDP). This document reports characterization data on the samples of Tank 21H.

  2. DEVELOPMENT OF PROGRAM MODULE FOR CALCULATING SPEED OF TITANIC PLASMA SEDIMENTATION IN ENVIRONMENT OF TECHNOLOGICAL GAS

    Directory of Open Access Journals (Sweden)

    S. A. Ivaschenko

    2006-01-01

Full Text Available The program module has been developed on the basis of the MATLAB package of applied programs; it allows the speed of coating sedimentation to be calculated over the cross-section of the plasma stream, taking into account the influence of the magnetic field of a stabilizing coil, and also corrects the obtained sedimentation speed for the values of the negative accelerating potential, the arc current and the technological gas pressure. The program provides visualization of the calculation results.

  3. Operational air sampling report

    International Nuclear Information System (INIS)

    Lyons, C.L.

    1994-03-01

    Nevada Test Site vertical shaft and tunnel events generate beta/gamma fission products. The REECo air sampling program is designed to measure these radionuclides at various facilities supporting these events. The current testing moratorium and closure of the Decontamination Facility has decreased the scope of the program significantly. Of the 118 air samples collected in the only active tunnel complex, only one showed any airborne fission products. Tritiated water vapor concentrations were very similar to previously reported levels. The 206 air samples collected at the Area-6 decontamination bays and laundry were again well below any Derived Air Concentration calculation standard. Laboratory analyses of these samples were negative for any airborne fission products

  4. Abstract of programs for nuclear reactor calculation and kinetic equations solution

    International Nuclear Information System (INIS)

    Marakazov, A.A.

    1977-01-01

The collection includes about 50 annotations of programmes developed at the Kurchatov Atomic Energy Institute in 1971-1976. The programmes are intended for calculating the neutron flux, solving systems of multigroup equations in the P3 approximation, calculating the reactor cell, analysing system stability, the breeding ratio, etc. The programme annotations are compiled according to the following scheme: 1. Programme title. 2. Computer type. 3. Physical problem. 4. Solution method. 5. Calculation limitations. 6. Characteristic computer time. 7. Characteristic programme features. 8. Related programmes. 9. Programme status. 10. Literature references in the programme. 11. Required memory resources. 12. Programming language. 13. Operating system. 14. Names of the authors and the place where the programme was adjusted

  5. Magnetic particle movement program to calculate particle paths in flow and magnetic fields

    International Nuclear Information System (INIS)

    Inaba, Toru; Sakazume, Taku; Yamashita, Yoshihiro; Matsuoka, Shinya

    2014-01-01

    We developed an analysis program for predicting the movement of magnetic particles in flow and magnetic fields. This magnetic particle movement simulation was applied to a capturing process in a flow cell and a magnetic separation process in a small vessel of an in-vitro diagnostic system. The distributions of captured magnetic particles on a wall were calculated and compared with experimentally obtained distributions. The calculations involved evaluating not only the drag, pressure gradient, gravity, and magnetic force in a flow field but also the friction force between the particle and the wall, and the calculated particle distributions were in good agreement with the experimental distributions. Friction force was simply modeled as static and kinetic friction forces. The coefficients of friction were determined by comparing the calculated and measured results. This simulation method for solving multiphysics problems is very effective at predicting the movements of magnetic particles and is an excellent tool for studying the design and application of devices. - Highlights: ●We developed magnetic particles movement program in flow and magnetic fields. ●Friction force on wall is simply modeled as static and kinetic friction force. ●This program was applied for capturing and separation of an in-vitro diagnostic system. ●Predicted particle distributions on wall were agreed with experimental ones. ●This method is very effective at predicting movements of magnetic particles
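The path calculation described (drag, magnetic force, and a capture condition at the wall) can be sketched in the inertialess, drag-balance limit. The flow field, magnetic force law and particle data below are mock values, not the paper's device geometry:

```python
import numpy as np

radius = 1.5e-6                        # particle radius (m)
mu_f = 1.0e-3                          # fluid viscosity (Pa.s)
drag_c = 6.0 * np.pi * mu_f * radius   # Stokes drag coefficient

def fluid_velocity(pos):
    return np.array([1.0e-3, 0.0])     # uniform 1 mm/s flow in x (mock)

def magnetic_force(pos):
    # Mock field: pull toward the wall at y=0, decaying away from it
    return np.array([0.0, -2.0e-12 * np.exp(-pos[1] / 1.0e-4)])

pos = np.array([0.0, 2.0e-4])          # start 0.2 mm above the wall
dt, path = 1.0e-3, [pos.copy()]
captured = False
for _ in range(20000):
    # Inertia is negligible at this scale, so drag balances the magnetic
    # force: the particle moves with the fluid plus a drift F / (6 pi mu r)
    vel = fluid_velocity(pos) + magnetic_force(pos) / drag_c
    pos = pos + vel * dt
    if pos[1] <= 0.0:                  # particle captured on the wall
        captured = True
        break
    path.append(pos.copy())
```

Repeating this for many starting positions gives the captured-particle distribution along the wall, which is the quantity the paper compares with experiment; the friction model described above would act after this first wall contact.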

  6. ORBITALES. A program for the calculation of wave functions with an analytical central potential

    International Nuclear Information System (INIS)

    Yunta Carretero; Rodriguez Mayquez, E.

    1974-01-01

    This paper describes the objective, basis, FORTRAN implementation, and use of the program ORBITALES. The program calculates atomic wave functions in the case of an analytical central potential (Author) 8 refs

  7. The Weak Link HP-41C hand-held calculator program

    Science.gov (United States)

    Ross A. Phillips; Penn A. Peters; Gary D. Falk

    1982-01-01

    The Weak Link hand-held calculator program (HP-41C) quickly analyzes a system for logging production and costs. The production equations model conventional chain saw, skidder, loader, and tandem-axle truck operations in eastern mountain areas. Production of each function of the logging system may be determined so that the system may be balanced for minimum cost. The...

  8. Program realization of mathematical model of kinetostatical calculation of flat lever mechanisms

    Directory of Open Access Journals (Sweden)

    M. A. Vasechkin

    2016-01-01

    Full Text Available Global computerization has established the dominant position of analytical methods in the study of mechanisms. As a result, kinetostatic analysis of mechanisms using software packages is an important part of the scientific and practical work of engineers and designers. Software implementation of mathematical models for the kinetostatic calculation of mechanisms is therefore of practical interest. The mathematical model was obtained in [1]. A computer procedure was developed in Turbo Pascal that calculates the forces in kinematic pairs in Assur groups (AG) and the balancing force at the primary link. Before using the corresponding computational procedures it is necessary to know all external forces and moments acting on the AG and to determine the inertial forces and moments of inertia forces. The process of calculating and constructing the positions of the mechanism can be summarized as follows. A cycle is organized in which the position of the initial link of the mechanism is calculated. The positions of the remaining links are calculated by calling the relevant procedures of the DIADA module for the AG [2,3]. Using the graphics mode of the computer, the position of the mechanism is displayed. The inertial forces and moments of inertia forces are computed. By calling the corresponding procedures of the module, all forces in kinematic pairs and the balancing force at the primary link are calculated. In each kinematic pair the forces and their directions are drawn with the help of simple graphical procedures. The magnitudes of these forces and their directions are displayed in a special text-mode window. This work contains listings of the test program MyTest, an example of using the computing capabilities of the developed module. As a check on the calculation procedures of the module, the program reproduces an example of calculating the balancing force according to the method of Zhukovsky (the Zhukovsky lever).

  9. A finite element computer program for the calculation of the resonant frequencies of anisotropic materials

    International Nuclear Information System (INIS)

    Fleury, W.H.; Rosinger, H.E.; Ritchie, I.G.

    1975-09-01

    A set of computer programs for the calculation of the flexural and torsional resonant frequencies of rectangular-section bars of materials of orthotropic or higher symmetry is described. The calculations are used in the experimental determination and verification of the elastic constants of anisotropic materials. The simple finite element technique employed separates the inertial and elastic properties of the beam element into station and field transfer matrices respectively. It includes the Timoshenko beam corrections for flexure and Lekhnitskii's theory for torsion-flexure coupling. The programs also calculate the vibration shapes and surface nodal contours or Chladni figures of the vibration modes. (author)
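As a rough companion to such resonance calculations, the classical Euler-Bernoulli free-free beam gives closed-form flexural frequencies that a finite element result (before Timoshenko and coupling corrections) should approach for slender isotropic bars. A sketch, with all names ours:

```python
import math

def free_free_flexural_frequencies(E, rho, L, b, h, n_modes=3):
    """Flexural resonant frequencies [Hz] of a slender free-free
    rectangular bar (Euler-Bernoulli theory).
    E: Young's modulus [Pa], rho: density [kg/m^3],
    L: length, b: width, h: thickness [m]."""
    A = b * h                 # cross-sectional area
    I = b * h**3 / 12.0       # second moment of area
    # (beta_n * L) roots of cos(bL)*cosh(bL) = 1 for a free-free beam
    betaL = [4.7300408, 7.8532046, 10.9956078][:n_modes]
    return [(bl / L) ** 2 * math.sqrt(E * I / (rho * A)) / (2.0 * math.pi)
            for bl in betaL]
```

For an anisotropic bar the measured frequencies would instead be fitted against the transfer-matrix model to recover the elastic constants.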

  10. Effective Dose Calculation Program (EDCP) for the usage of NORM-added consumer product.

    Science.gov (United States)

    Yoo, Do Hyeon; Lee, Jaekook; Min, Chul Hee

    2018-04-09

    The aim of this study is to develop the Effective Dose Calculation Program (EDCP) for the usage of Naturally Occurring Radioactive Material (NORM) added consumer products. The EDCP was developed based on a database of effective dose conversion coefficients and the Matrix Laboratory (MATLAB) program to incorporate a Graphic User Interface (GUI) for ease of use. To validate EDCP, the effective dose calculated with EDCP by manually determining the source region by using the GUI and that by using the reference mathematical algorithm were compared for a pillow, waist supporter, eye-patch and sleeping mattress. The results show that the annual effective dose calculated with EDCP was almost identical to that calculated using the reference mathematical algorithm in most of the assessment cases. With the assumption of a gamma energy of 1 MeV and activity of 1 MBq, the annual effective doses of the pillow, waist supporter, sleeping mattress, and eye-patch determined using the reference algorithm were 3.444 mSv year⁻¹, 2.770 mSv year⁻¹, 4.629 mSv year⁻¹, and 3.567 mSv year⁻¹, respectively, while those calculated using EDCP were 3.561 mSv year⁻¹, 2.630 mSv year⁻¹, 4.740 mSv year⁻¹, and 3.780 mSv year⁻¹, respectively. The differences in the annual effective doses were less than 5%, despite the different calculation methods employed. The EDCP can therefore be effectively used for radiation protection management in the context of the usage of NORM-added consumer products. Additionally, EDCP can be used by members of the public through the GUI for various studies in the field of radiation protection, thus facilitating easy access to the program. Copyright © 2018. Published by Elsevier Ltd.

  11. The transition equation of the state intensities for exciton model and the calculation program

    International Nuclear Information System (INIS)

    Yu Xian; Zheng Jiwen; Liu Guoxing; Chen Keliang

    1995-01-01

    An equation set of the exciton model is given and a calculation program is developed. The process of approaching the equilibrium state has been investigated with the program for the ¹²C + ⁶⁴Ni reaction at an energy of 72 MeV

  12. Sample Size Calculations for Population Size Estimation Studies Using Multiplier Methods With Respondent-Driven Sampling Surveys.

    Science.gov (United States)

    Fearon, Elizabeth; Chabata, Sungai T; Thompson, Jennifer A; Cowan, Frances M; Hargreaves, James R

    2017-09-14

    While guidance exists for obtaining population size estimates using multiplier methods with respondent-driven sampling surveys, we lack specific guidance for making sample size decisions. We aimed to guide the design of multiplier method population size estimation studies using respondent-driven sampling surveys so as to reduce the random error around the estimate obtained. The population size estimate is obtained by dividing the number of individuals receiving a service or the number of unique objects distributed (M) by the proportion of individuals in a representative survey who report receipt of the service or object (P). We have developed an approach to sample size calculation, interpreting methods to estimate the variance around estimates obtained using multiplier methods in conjunction with research into design effects and respondent-driven sampling. We describe an application to estimate the number of female sex workers in Harare, Zimbabwe. There is high variance in estimates. Random error around the size estimate reflects uncertainty from M and P, particularly when the estimate of P in the respondent-driven sampling survey is low. As expected, sample size requirements are higher when the design effect of the survey is assumed to be greater. We suggest a method for investigating the effects of sample size on the precision of a population size estimate obtained using multiplier methods and respondent-driven sampling. Uncertainty in the size estimate is high, particularly when P is small, so balancing against other potential sources of bias, we advise researchers to consider longer service attendance reference periods and to distribute more unique objects, which is likely to result in a higher estimate of P in the respondent-driven sampling survey. ©Elizabeth Fearon, Sungai T Chabata, Jennifer A Thompson, Frances M Cowan, James R Hargreaves. Originally published in JMIR Public Health and Surveillance (http://publichealth.jmir.org), 14.09.2017.
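The multiplier estimator and the uncertainty structure described above can be sketched as follows (a delta-method illustration in our own notation, not the authors' exact formulas):

```python
import math

def multiplier_estimate(M, p_hat, n, deff=2.0, z=1.96):
    """Multiplier-method population size estimate N = M / P with a
    delta-method confidence interval.
    M: count of unique objects distributed (or service recipients),
    p_hat: proportion of survey respondents reporting receipt,
    n: survey sample size, deff: assumed RDS design effect."""
    N_hat = M / p_hat
    # variance of p_hat, inflated by the design effect
    var_p = deff * p_hat * (1.0 - p_hat) / n
    # delta method: Var(M/p) ~ (M / p^2)^2 * Var(p)
    se_N = (M / p_hat**2) * math.sqrt(var_p)
    return N_hat, (N_hat - z * se_N, N_hat + z * se_N)
```

As the abstract notes, the interval widens sharply when `p_hat` is small or the design effect is large, which is what drives the sample size requirements.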

  13. ERATO - a computer program for the calculation of induced eddy-currents in three-dimensional conductive structures

    International Nuclear Information System (INIS)

    Benner, J.

    1985-10-01

    The computer code ERATO is used for the calculation of eddy currents in three-dimensional conductive structures and their secondary magnetic field. ERATO is a revised version of the code FEDIFF, developed at IPP Garching. For the calculation the Finite-Element-Network (FEN) method is used, in which the structure is simulated by an equivalent electric network. In the ERATO code, the calculation of the finite-element discretization, the eddy-current analysis, and the final evaluation of the results are done in separate programs, so the eddy-current analysis, as the central step, is completely independent of any particular geometry. For the finite-element discretization there are two so-called preprocessors, which treat a torus segment and a rectangular flat plate. For the final evaluation, postprocessors are used, by which the current distributions can be printed and plotted. In the report, the theoretical foundation of the FEN method is discussed, the structure and the application of the programs (preprocessors, analysis program, postprocessors, supporting programs) are shown, and two example calculations are presented. (orig.) [de

  14. DEPDOSE: An interactive, microcomputer based program to calculate doses from exposure to radionuclides deposited on the ground

    International Nuclear Information System (INIS)

    Beres, D.A.; Hull, A.P.

    1991-12-01

    DEPDOSE is an interactive, menu driven, microcomputer based program designed to rapidly calculate committed dose from radionuclides deposited on the ground. The program is designed to require little or no computer expertise on the part of the user. The program consists of a dose calculation section and a library maintenance section, both available to the user from the main menu. The dose calculation section provides the user with the ability to calculate committed doses, determine the decay time needed to reach a particular dose, cross-compare deposition data from separate locations, and approximate a committed dose based on a measured exposure rate. The library maintenance section allows the user to review and update dose modifier data as well as to build and maintain libraries of radionuclide data, dose conversion factors, and default deposition data. The program is structured to provide the user easy access to review data prior to running the calculation. Deposition data can either be entered by the user or imported from other databases. Results can either be displayed on the screen or sent to the printer
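One of the DEPDOSE tasks described above, finding the decay time needed to reach a particular dose, reduces to a simple logarithm for a single nuclide. A hedged sketch (our own minimal version, not DEPDOSE itself):

```python
import math

def decay_time_to_reach_dose(dose_now, dose_target, half_life):
    """Time (same units as half_life) until a dose governed by a single
    nuclide's exponential decay falls from dose_now to dose_target."""
    if dose_target >= dose_now:
        return 0.0                       # already at or below the target
    lam = math.log(2.0) / half_life      # decay constant
    return math.log(dose_now / dose_target) / lam
```

A multi-nuclide deposit would instead require summing exponentials per nuclide and solving numerically.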

  15. CRYOCOL a computer program to calculate the cryogenic distillation of hydrogen isotopes

    International Nuclear Information System (INIS)

    Douglas, S.R.

    1993-02-01

    This report describes the computer model and mathematical method coded into the AECL Research computer program CRYOCOL. The purpose of CRYOCOL is to calculate the separation of hydrogen isotopes by cryogenic distillation. (Author)

  16. SUBDOSA: a computer program for calculating external doses from accidental atmospheric releases of radionuclides

    International Nuclear Information System (INIS)

    Strenge, D.L.; Watson, E.C.; Houston, J.R.

    1975-06-01

    A computer program, SUBDOSA, was developed for calculating external γ and β doses to individuals from the accidental release of radionuclides to the atmosphere. Characteristics of SUBDOSA are: doses from both γ and β radiation are calculated as a function of depth in tissue, summed and reported as skin, eye, gonadal, and total body dose; doses are calculated for releases within each of several release time intervals and nuclide inventories and atmospheric dispersion conditions are considered for each time interval; radioactive decay is considered during the release and/or transit using a chain decay scheme with branching to account for transitions to and from isomeric states; the dose from gamma radiation is calculated using a numerical integration technique to account for the finite size of the plume; and the program computes and lists the normalized air concentrations at ground level as a function of distance from the point of release. (auth)
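The chain-decay bookkeeping mentioned above can be illustrated with the Bateman solution for a two-member chain without branching (a simplification of what SUBDOSA handles; names are ours):

```python
import math

def two_member_chain(A1_0, lam1, lam2, t):
    """Activities (A1, A2) at time t for a parent -> daughter chain,
    with initial parent activity A1_0 and no initial daughter.
    lam1, lam2: decay constants of parent and daughter."""
    A1 = A1_0 * math.exp(-lam1 * t)
    # Bateman solution, expressed in activities (A = lambda * N)
    A2 = A1_0 * lam2 / (lam2 - lam1) * (math.exp(-lam1 * t) - math.exp(-lam2 * t))
    return A1, A2
```

SUBDOSA's scheme additionally handles branching to and from isomeric states, which this two-member sketch omits.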

  17. Discrepancies in sample size calculations and data analyses reported in randomised trials: comparison of publications with protocols

    DEFF Research Database (Denmark)

    Chan, A.W.; Hrobjartsson, A.; Jorgensen, K.J.

    2008-01-01

    OBJECTIVE: To evaluate how often sample size calculations and methods of statistical analysis are pre-specified or changed in randomised trials. DESIGN: Retrospective cohort study. DATA SOURCE: Protocols and journal publications of published randomised parallel group trials initially approved...... in 1994-5 by the scientific-ethics committees for Copenhagen and Frederiksberg, Denmark (n=70). MAIN OUTCOME MEASURE: Proportion of protocols and publications that did not provide key information about sample size calculations and statistical methods; proportion of trials with discrepancies between...... of handling missing data was described in 16 protocols and 49 publications. 39/49 protocols and 42/43 publications reported the statistical test used to analyse primary outcome measures. Unacknowledged discrepancies between protocols and publications were found for sample size calculations (18/34 trials...

  18. FUP1 - a unified program for calculating all fast neutron data of a fissile nucleus

    International Nuclear Information System (INIS)

    Cai Chonghai; Zuo Yixin

    1990-01-01

    FUP1 is the first edition of a unified program for calculating all the fast neutron data in ENDF/B-4 format for a fissile nucleus. The following data are calculated with the FUP1 code: the total cross section, elastic scattering cross section, nonelastic cross section, and the total and up to 40 isolated-level and continuum-state inelastic cross sections. In FUP1 the energy region of the incident neutron is restricted to 10 keV to 20 MeV. The advantages of this program are its complete functionality, its convenience for users, and its fast execution

  19. Super Phenix. Monitoring of structures subject to irradiation. Neutron dosimetry measurement and calculation program

    International Nuclear Information System (INIS)

    Cabrillat, J.C.; Arnaud, G.; Calamand, D.; Manent, G.; Tavassoli, A.A.

    1984-09-01

    For the Super Phenix reactor, the evolution with irradiation of the mechanical properties of the core diagrid steel is the object of studies and is particularly monitored. The specimens are irradiated now in PHENIX and will later be irradiated in SUPER PHENIX from the first operating cycles onwards. An important dosimetry program, coupling calculation and measurement, is carried out in parallel. This paper presents the rationale, the definition of the structure, the development and the materials used in this dosimetry program, as well as the first results of a calculation-measurement comparison [fr

  20. UNIDOSE - a computer program for the calculation of individual and collective doses from airborne radioactive pollutants

    International Nuclear Information System (INIS)

    Karlberg, O.; Schwartz, H.; Forssen, B.-H.; Marklund, J.-E.

    1979-01-01

    UNIDOSE is a program system for calculating the consequences of a radioactive release to the atmosphere. The program is applicable for computation of dispersion in a range of 0 - 50 km from the release point. The Gaussian plume model is used for calculating the external dose from activity in the atmosphere and on the ground, and the internal dose via inhalation. Radioactive decay, as well as growth and decay of daughter products, is accounted for. The influence of dry deposition and wash-out is also considered. It is possible to treat time-dependent release rates of 1 - 24 hours duration and constant release rates for up to one year. The program system also contains routines for the calculation of collective dose and health effects. The system operates in a statistical manner: many weather situations, based on measured data, can be analysed, and statistical properties, such as cumulative frequencies, can be calculated. (author)
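The Gaussian plume model that UNIDOSE's dispersion step relies on has a standard closed form for the centreline ground-level concentration; a sketch in our own notation:

```python
import math

def ground_level_concentration(Q, u, H, sigma_y, sigma_z):
    """Centreline ground-level air concentration [Bq/m^3] from a
    continuous Gaussian plume with total ground reflection.
    Q: release rate [Bq/s], u: wind speed [m/s],
    H: effective release height [m],
    sigma_y, sigma_z: dispersion parameters [m] at the downwind distance
    of interest (normally taken from a stability-class correlation)."""
    return (Q / (math.pi * sigma_y * sigma_z * u)) * math.exp(-H**2 / (2.0 * sigma_z**2))
```

In a statistical system like UNIDOSE this formula would be evaluated over many measured weather situations, with sigma_y and sigma_z varying per stability class.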

  1. TRU Waste Sampling Program: Volume I. Waste characterization

    International Nuclear Information System (INIS)

    Clements, T.L. Jr.; Kudera, D.E.

    1985-09-01

    Volume I of the TRU Waste Sampling Program report presents the waste characterization information obtained from sampling and characterizing various aged transuranic waste retrieved from storage at the Idaho National Engineering Laboratory and the Los Alamos National Laboratory. The data contained in this report include the results of gas sampling and gas generation, radiographic examinations, waste visual examination results, and waste compliance with the Waste Isolation Pilot Plant-Waste Acceptance Criteria (WIPP-WAC). A separate report, Volume II, contains data from the gas generation studies

  2. Experimental-calculation technique for Ksub(IC) determination using the samples of decreased dimensions

    International Nuclear Information System (INIS)

    Vinokurov, V.A.; Dymshits, A.V.; Pirusskij, M.V.; Ovsyannikov, B.M.; Kononov, V.V.

    1981-01-01

    A possibility to decrease the size of samples, which is necessary for the reliable determination of the fracture toughness Ksub(1c), is established. The dependences of crack-resistance characteristics on the sample dimensions are determined experimentally. The static bending tests are made using the 1251 model of the ''Instron'' installation with a specially designed device. Samples of 20KhNMF steel have been tested. It is shown that the Ksub(1c) value, determined for the samples with the largest netto cross section (50x100 mm), is considerably lower than the Ksub(1c) values determined for the samples with decreased sizes. It is shown that the developed experimental-calculational method of Ksub(1c) determination can be used in practice for samples of decreased sizes with the introduction of a corresponding correction coefficient [ru

  3. Calculating Confidence, Uncertainty, and Numbers of Samples When Using Statistical Sampling Approaches to Characterize and Clear Contaminated Areas

    Energy Technology Data Exchange (ETDEWEB)

    Piepel, Gregory F.; Matzke, Brett D.; Sego, Landon H.; Amidan, Brett G.

    2013-04-27

    This report discusses the methodology, formulas, and inputs needed to make characterization and clearance decisions for Bacillus anthracis-contaminated and uncontaminated (or decontaminated) areas using a statistical sampling approach. Specifically, the report includes the methods and formulas for calculating the • number of samples required to achieve a specified confidence in characterization and clearance decisions • confidence in making characterization and clearance decisions for a specified number of samples for two common statistically based environmental sampling approaches. In particular, the report addresses an issue raised by the Government Accountability Office by providing methods and formulas to calculate the confidence that a decision area is uncontaminated (or successfully decontaminated) if all samples collected according to a statistical sampling approach have negative results. Key to addressing this topic is the probability that an individual sample result is a false negative, which is commonly referred to as the false negative rate (FNR). The two statistical sampling approaches currently discussed in this report are 1) hotspot sampling to detect small isolated contaminated locations during the characterization phase, and 2) combined judgment and random (CJR) sampling during the clearance phase. Typically if contamination is widely distributed in a decision area, it will be detectable via judgment sampling during the characterization phase. Hotspot sampling is appropriate for characterization situations where contamination is not widely distributed and may not be detected by judgment sampling. CJR sampling is appropriate during the clearance phase when it is desired to augment judgment samples with statistical (random) samples. The hotspot and CJR statistical sampling approaches are discussed in the report for four situations: 1. qualitative data (detect and non-detect) when the FNR = 0 or when using statistical sampling methods that account
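The report's central clearance question, the confidence that an area is uncontaminated when all samples are negative, has a simple closed form for random sampling; a sketch under our own simplifying assumptions (a uniformly contaminated areal fraction p and independent samples):

```python
import math

def clearance_confidence(n, p, fnr=0.0):
    """Confidence that contamination covering at least a fraction p of the
    decision area is absent, given n all-negative random samples with a
    per-sample false negative rate fnr."""
    p_detect = p * (1.0 - fnr)          # chance one sample detects it
    return 1.0 - (1.0 - p_detect) ** n

def samples_for_confidence(conf, p, fnr=0.0):
    """Smallest n achieving the requested clearance confidence."""
    p_detect = p * (1.0 - fnr)
    return math.ceil(math.log(1.0 - conf) / math.log(1.0 - p_detect))
```

Setting `fnr > 0` shows how quickly the required number of samples grows once false negatives are admitted, which is the issue the report addresses.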

  4. Sample problem calculations related to two-phase flow transients in a PWR relief-piping network

    International Nuclear Information System (INIS)

    Shin, Y.W.; Wiedermann, A.H.

    1981-03-01

    Two sample problems related with the fast transients of water/steam flow in the relief line of a PWR pressurizer were calculated with a network-flow analysis computer code STAC (System Transient-Flow Analysis Code). The sample problems were supplied by EPRI and are designed to test computer codes or computational methods to determine whether they have the basic capability to handle the important flow features present in a typical relief line of a PWR pressurizer. It was found necessary to implement into the STAC code a number of additional boundary conditions in order to calculate the sample problems. This includes the dynamics of the fluid interface that is treated as a moving boundary. This report describes the methodologies adopted for handling the newly implemented boundary conditions and the computational results of the two sample problems. In order to demonstrate the accuracies achieved in the STAC code results, analytical solutions are also obtained and used as a basis for comparison

  5. Dosimetry and fluence calculations on french PWR vessels comparisons between experiments and calculations

    International Nuclear Information System (INIS)

    Nimal, J.C.; Bourdet, L.; Guilleret, J.C.; Hedin, F.

    1988-01-01

    Fluence and damage calculations on PWR pressure vessels and irradiation test specimens are presented for two types of reactor: the Franco-Belgian reactor (CHOOZ) and the French reactors (CPY program). Comparisons with measurements are given for activation foils and fission detectors; most of them concern irradiation test specimen locations; comparisons are made for the Chooz plant on vessel stainless steel samples and in the reactor pit

  6. The Power of Low Back Pain Trials: A Systematic Review of Power, Sample Size, and Reporting of Sample Size Calculations Over Time, in Trials Published Between 1980 and 2012.

    Science.gov (United States)

    Froud, Robert; Rajendran, Dévan; Patel, Shilpa; Bright, Philip; Bjørkli, Tom; Eldridge, Sandra; Buchbinder, Rachelle; Underwood, Martin

    2017-06-01

    A systematic review of nonspecific low back pain trials published between 1980 and 2012. To explore what proportion of trials have been powered to detect different bands of effect size; whether there is evidence that sample size in low back pain trials has been increasing; what proportion of trial reports include a sample size calculation; and whether the likelihood of reporting sample size calculations has increased. Clinical trials should have a sample size sufficient to detect a minimally important difference for a given power and type I error rate. An underpowered trial is one within which the probability of type II error is too high. Meta-analyses do not mitigate underpowered trials. Reviewers independently abstracted data on sample size at the point of analysis, whether a sample size calculation was reported, and year of publication. Descriptive analyses were used to explore the ability to detect effect sizes, and regression analyses to explore the relationship between sample size, or the reporting of sample size calculations, and time. We included 383 trials. One-third were powered to detect a standardized mean difference of less than 0.5, and 5% were powered to detect less than 0.3. The average sample size was 153 people, which increased only slightly (∼4 people/yr) from 1980 to 2000, and declined slightly (∼4.5 people/yr) from 2005 to 2011. The power of low back pain trials and the reporting of sample size calculations may need to be increased. It may be justifiable to power a trial to detect only large effects in the case of novel interventions. Level of evidence: 3.
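The power bands discussed above follow from the standard normal-approximation sample size formula for a two-group comparison of a standardized mean difference; a sketch (not the review's code):

```python
import math
from statistics import NormalDist

def n_per_arm(d, alpha=0.05, power=0.80):
    """Sample size per arm to detect standardized mean difference d
    with a two-sided test (normal approximation)."""
    z = NormalDist().inv_cdf
    z_a = z(1.0 - alpha / 2.0)   # critical value for two-sided alpha
    z_b = z(power)               # quantile for the desired power
    return math.ceil(2.0 * ((z_a + z_b) / d) ** 2)
```

At 80% power and alpha = 0.05, detecting d = 0.5 needs roughly 63 people per arm, while d = 0.3 needs about 175, which puts the review's average trial size of 153 in context.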

  7. Sample size calculations based on a difference in medians for positively skewed outcomes in health care studies

    Directory of Open Access Journals (Sweden)

    Aidan G. O’Keeffe

    2017-12-01

    Full Text Available Abstract Background In healthcare research, outcomes with skewed probability distributions are common. Sample size calculations for such outcomes are typically based on estimates on a transformed scale (e.g. log), which may sometimes be difficult to obtain. In contrast, estimates of the median and variance on the untransformed scale are generally easier to pre-specify. The aim of this paper is to describe how to calculate a sample size for a two group comparison of interest based on median and untransformed variance estimates for log-normal outcome data. Methods A log-normal distribution for outcome data is assumed and a sample size calculation approach for a two-sample t-test that compares log-transformed outcome data is demonstrated, where the change of interest is specified as a difference in median values on the untransformed scale. A simulation study is used to compare the method with a non-parametric alternative (Mann-Whitney U test in a variety of scenarios and the method is applied to a real example in neurosurgery. Results The method attained a nominal power value in simulation studies and was favourable in comparison to a Mann-Whitney U test and a two-sample t-test of untransformed outcomes. In addition, the method can be adjusted and used in some situations where the outcome distribution is not strictly log-normal. Conclusions We recommend the use of this sample size calculation approach for outcome data that are expected to be positively skewed and where a two group comparison on a log-transformed scale is planned. An advantage of this method over usual calculations based on estimates on the log-transformed scale is that it allows clinical efficacy to be specified as a difference in medians and requires a variance estimate on the untransformed scale. Such estimates are often easier to obtain and more interpretable than those for log-transformed outcomes.
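The paper's key idea, specifying the effect as a difference in medians on the untransformed scale, can be sketched as follows: for a log-normal outcome the median is exp(mu), and the untransformed variance pins down the log-scale variance via the log-normal moment relations. All names and the bisection shortcut below are ours, not the authors' derivation:

```python
import math
from statistics import NormalDist

def log_scale_variance(median, var_untransformed):
    """Solve (exp(s2) - 1) * exp(2*mu + s2) = v for s2 by bisection,
    where mu = log(median) is the log-scale mean of a log-normal."""
    mu = math.log(median)
    lo, hi = 1e-9, 10.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if (math.exp(mid) - 1.0) * math.exp(2.0 * mu + mid) < var_untransformed:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def n_per_group_from_medians(m1, m2, var_untransformed, alpha=0.05, power=0.80):
    """n per group for a two-sample t-test on log-transformed data, with
    the effect of interest specified as a change in medians m1 -> m2."""
    s2 = log_scale_variance(m1, var_untransformed)
    delta = math.log(m2) - math.log(m1)       # log-scale mean difference
    z = NormalDist().inv_cdf
    return math.ceil(2.0 * s2 * ((z(1.0 - alpha / 2.0) + z(power)) / delta) ** 2)
```

The inputs are exactly the quantities the paper argues are easiest to pre-specify: two medians and an untransformed variance.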

  8. MOST-7 program for calculation of nonstationary operation modes of the nuclear steam generating plant with WWER

    International Nuclear Information System (INIS)

    Mysenkov, A.I.

    1979-01-01

    The MOST-7 program, intended for calculating nonstationary emergency modes of a nuclear steam generating plant (NSGP) with a WWER reactor, is considered in detail. The program consists of the main MOST-7 subprogram, two main subprograms and 98 subprogram-functions. The MOST-7 program is written in FORTRAN and implemented on the BESM-6 computer. The program storage requirement on the BESM-6 amounts to 73400 words. Primary information is input into the program by means of an information input operator from punched cards and the DATA operator. Parameter lists, introduced both from punched cards and by means of the DATA operator, are tabulated. The procedure for outputting calculational results to printing and plotting devices is considered. An example is given of calculating the nonstationary process related to the loss of power in six main circulating pumps for an NSGP with the WWER-440 reactor

  9. Technical basis and evaluation criteria for an air sampling/monitoring program

    International Nuclear Information System (INIS)

    Gregory, D.C.; Bryan, W.L.; Falter, K.G.

    1993-01-01

    Air sampling and monitoring programs at DOE facilities need to be reviewed in light of revised requirements and guidance found in, for example, DOE Order 5480.6 (RadCon Manual). Accordingly, the Oak Ridge National Laboratory (ORNL) air monitoring program is being revised and placed on a sound technical basis. A draft technical basis document has been written to establish placement criteria for instruments and to guide the ''retrospective sampling or real-time monitoring'' decision. Facility evaluations are being used to document air sampling/monitoring needs, and instruments are being evaluated in light of these needs. The steps used to develop this program and the technical basis for instrument placement are described

  10. REITP3-Hazard evaluation program for heat release based on thermochemical calculation

    Energy Technology Data Exchange (ETDEWEB)

    Akutsu, Yoshiaki.; Tamura, Masamitsu. [The University of Tokyo, Tokyo (Japan). School of Engineering; Kawakatsu, Yuichi. [Oji Paper Corp., Tokyo (Japan); Wada, Yuji. [National Institute for Resources and Environment, Tsukuba (Japan); Yoshida, Tadao. [Hosei University, Tokyo (Japan). College of Engineering

    1999-06-30

    REITP3, a hazard evaluation program for heat release based on thermochemical calculation, has been developed by modifying REITP2 (Revised Estimation of Incompatibility from Thermochemical Properties). The main modifications are as follows. (1) Reactants are retrieved from the database by chemical formula. (2) As products are listed in an external file, the addition of products and changes in the order of production can be easily conducted. (3) Part of the program has been changed by considering its use on a personal computer or workstation. These modifications will promote the usefulness of the program for energy hazard evaluation. (author)

  11. TRAFIC, a computer program for calculating the release of metallic fission products from an HTGR core

    International Nuclear Information System (INIS)

    Smith, P.D.

    1978-02-01

    A special purpose computer program, TRAFIC, is presented for calculating the release of metallic fission products from an HTGR core. The program is based upon Fick's law of diffusion for radioactive species. One-dimensional transient diffusion calculations are performed for the coated fuel particles and for the structural graphite web. A quasi steady-state calculation is performed for the fuel rod matrix material. The model accounts for nonlinear adsorption behavior in the fuel rod gap and on the coolant hole boundary. The TRAFIC program is designed to operate in a core survey mode; that is, it performs many repetitive calculations for a large number of spatial locations in the core. This is necessary in order to obtain an accurate volume integrated release. For this reason the program has been designed with calculational efficiency as one of its main objectives. A highly efficient numerical method is used in the solution. The method makes use of the Duhamel superposition principle to eliminate interior spatial solutions from consideration. Linear response functions relating the concentrations and mass fluxes on the boundaries of a homogeneous region are derived. Multiple regions are numerically coupled through interface conditions. Algebraic elimination is used to reduce the equations as far as possible. The problem reduces to two nonlinear equations in two unknowns, which are solved using a Newton Raphson technique
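The one-dimensional transient diffusion solve described above can be illustrated with an explicit finite-difference step for Fick's law plus radioactive decay (a minimal stand-in for illustration; TRAFIC itself uses Duhamel response functions rather than time-stepping):

```python
def diffuse_step(C, D, lam, dx, dt):
    """One explicit Euler step of dC/dt = D * d2C/dx2 - lam * C on a 1-D
    slab with fixed (Dirichlet) boundary concentrations.
    Stability requires dt <= dx**2 / (2*D)."""
    new = C[:]
    for i in range(1, len(C) - 1):
        lap = (C[i - 1] - 2.0 * C[i] + C[i + 1]) / dx**2   # discrete Laplacian
        new[i] = C[i] + dt * (D * lap - lam * C[i])
    return new  # boundary values held fixed
```

Repeating many such solves per spatial location is what makes a core-survey code prize calculational efficiency, which motivates TRAFIC's response-function approach.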

  12. Development of Calculation Module for Intake Retention Functions based on Occupational Intakes of Radionuclides

    Energy Technology Data Exchange (ETDEWEB)

    Noh, Siwan; Kwon, Tae-Eun; Lee, Jai-Ki [Hanyang Univ., Seoul (Korea, Republic of); Lee, Jong-Il; Kim, Jang-Lyul [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2014-10-15

    In internal dosimetry, intake retention and excretion functions are essential for estimating intake activity from bioassay measurements such as whole-body counts, lung counts, and urine samples. Although the ICRP (International Commission on Radiological Protection) provides these functions in some of its publications, they generally must be recalculated, because the published values cover only a limited set of times. Thus, computer programs are generally used to calculate intake retention and excretion functions and to estimate intake activity. The OIR (Occupational Intakes of Radionuclides) series, to be published soon by the ICRP, completely replaces the existing internal dosimetry models and associated data, including the intake retention and excretion functions, so a calculation tool based on OIR is needed. In this study, we developed a calculation module for intake retention and excretion functions based on OIR, using the C++ programming language with the Intel Math Kernel Library.

  13. Development of Calculation Module for Intake Retention Functions based on Occupational Intakes of Radionuclides

    International Nuclear Information System (INIS)

    Noh, Siwan; Kwon, Tae-Eun; Lee, Jai-Ki; Lee, Jong-Il; Kim, Jang-Lyul

    2014-01-01

    In internal dosimetry, intake retention and excretion functions are essential for estimating intake activity from bioassay measurements such as whole-body counts, lung counts, and urine samples. Although the ICRP (International Commission on Radiological Protection) provides these functions in some of its publications, they generally must be recalculated, because the published values cover only a limited set of times. Thus, computer programs are generally used to calculate intake retention and excretion functions and to estimate intake activity. The OIR (Occupational Intakes of Radionuclides) series, to be published soon by the ICRP, completely replaces the existing internal dosimetry models and associated data, including the intake retention and excretion functions, so a calculation tool based on OIR is needed. In this study, we developed a calculation module for intake retention and excretion functions based on OIR, using the C++ programming language with the Intel Math Kernel Library.
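
    The core use of such retention functions can be illustrated with a minimal sketch: a retention function represented as a sum of exponentials (the coefficients and clearance rates below are hypothetical, not ICRP values), multiplied by radioactive decay, and inverted to estimate intake from a bioassay measurement:

```python
import math

# Hypothetical retention coefficients (fractions a_i and biological
# clearance rates lam_i in 1/day); real values come from the ICRP
# biokinetic models, here OIR.
COEFFS = [(0.6, 0.5), (0.3, 0.05), (0.1, 0.002)]
DECAY = math.log(2) / 8.02  # radioactive decay constant, e.g. I-131 (1/day)

def retention(t_days):
    """Fraction of a unit acute intake predicted in the bioassay
    compartment t_days later (biological retention times decay)."""
    bio = sum(a * math.exp(-lam * t_days) for a, lam in COEFFS)
    return bio * math.exp(-DECAY * t_days)

def estimate_intake(measured_bq, t_days):
    """Intake estimate: measurement divided by the retention value."""
    return measured_bq / retention(t_days)

print(round(retention(0.0), 3))  # 1.0 at t = 0 by construction
```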

  14. micrOMEGAs 2.0.7: a program to calculate the relic density of dark matter in a generic model

    Science.gov (United States)

    Bélanger, G.; Boudjema, F.; Pukhov, A.; Semenov, A.

    2007-12-01

    micrOMEGAs 2.0.7 is a code which calculates the relic density of a stable massive particle in an arbitrary model. The underlying assumption is that there is a conservation law like R-parity in supersymmetry which guarantees the stability of the lightest odd particle. The new physics model must be incorporated in the notation of CalcHEP, a package for the automatic generation of squared matrix elements. Once this is done, all annihilation and coannihilation channels are included automatically in any model. Cross-sections at v=0, relevant for indirect detection of dark matter, are also computed automatically. The package includes three sample models: the minimal supersymmetric standard model (MSSM), the MSSM with complex phases and the NMSSM. Extension to other models, including non-supersymmetric models, is described. Program summary - Title of program: micrOMEGAs 2.0.7. Catalogue identifier: ADQR_v2_1. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADQR_v2_1.html. Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 216 529. No. of bytes in distributed program, including test data, etc.: 1 848 816. Distribution format: tar.gz. Programming language used: C and Fortran. Computer: PC, Alpha, Mac, Sun. Operating system: UNIX (Linux, OSF1, SunOS, Darwin, Cygwin). RAM: 17 MB depending on the number of processes required. Classification: 1.9, 11.6. Catalogue identifier of previous version: ADQR_v2_0. Journal version of previous version: Comput. Phys. Comm. 176 (2007) 367. Does the new version supersede the previous version?: Yes. Nature of problem: Calculation of the relic density of the lightest stable particle in a generic new model of particle physics. Solution method: In numerically solving the evolution equation for the density of dark matter, relativistic formulae for the thermal average are used. All tree
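
    The relic-density calculation this record describes can be caricatured with the standard textbook freeze-out approximation rather than micrOMEGAs' full numerical integration of the Boltzmann equation; the degrees-of-freedom value, internal degrees of freedom, and cross section below are illustrative assumptions:

```python
import math

M_PL = 1.22e19   # Planck mass in GeV
GSTAR = 86.25    # effective relativistic degrees of freedom (assumed)

def relic_density(m_gev, sigma_v_gev2, g=2):
    """Approximate relic abundance Omega h^2 from the standard
    freeze-out approximation (iterating for x_f = m / T_f), a
    textbook stand-in for the full Boltzmann-equation solution."""
    # Fixed-point iteration for the freeze-out temperature ratio x_f
    x_f = 20.0
    for _ in range(20):
        arg = 0.038 * g * M_PL * m_gev * sigma_v_gev2 / math.sqrt(GSTAR * x_f)
        x_f = math.log(arg)
    # Standard s-wave result for the present-day abundance
    return 1.07e9 * x_f / (math.sqrt(GSTAR) * M_PL * sigma_v_gev2)

# A 100 GeV WIMP with <sigma v> ~ 3e-26 cm^3/s (~2.6e-9 GeV^-2)
omega_h2 = relic_density(100.0, 2.6e-9)
```

    With these inputs the approximation lands near the observed dark matter abundance of order 0.1, which is the usual sanity check for this formula.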

  15. EPCARD (European Program Package for the Calculation of Aviation Route Doses). User's manual for version 3.2

    International Nuclear Information System (INIS)

    Schraube, H.; Leuthold, G.P.; Schraube, G.; Heinrich, W.; Roesler, S.; Mares, V.

    2002-01-01

    The GSF-National Research Center has developed the computer program EPCARD (European Program Package for the Calculation of Aviation Route Doses) jointly with scientists from Siegen University. With the program it is possible to calculate the radiation dose received by individuals along any aviation route at flight altitudes between 5000 m and 25000 m, both in terms of ''ambient dose equivalent'' and ''effective dose''. Dose rates at any point in the atmosphere may be calculated for comparison with verification experiments, as may simulated instrument readings if the response characteristics of the instruments are known. The program fulfills the requirements of the European Council Directive 96/29/EURATOM and of the subsequent European national regulations. This report contains essentially all the information necessary to run EPCARDv3.2 on a standard PC. The program structure is depicted and the file structure described in detail, which permits calculation of the large number of data sets needed for the daily record keeping of airline crews and other frequently flying persons. Additionally, some information is given on the basic physical data, which are available from referenced publications. (orig.)

  16. TRU waste-sampling program

    International Nuclear Information System (INIS)

    Warren, J.L.; Zerwekh, A.

    1985-08-01

    As part of a TRU waste-sampling program, Los Alamos National Laboratory retrieved and examined 44 drums of 238Pu- and 239Pu-contaminated waste. The drums ranged in age from 8 months to 9 years. The majority of drums were tested for pressure, and gas samples withdrawn from the drums were analyzed by a mass spectrometer. Real-time radiography and visual examination were used to determine both void volumes and waste content. Drum walls were measured for deterioration, and selected drum contents were reassayed for comparison with original assays and WIPP criteria. Every drum tested was at atmospheric pressure. Mass spectrometry revealed no problem with 239Pu-contaminated waste, but three 8-month-old drums of 238Pu-contaminated waste contained a potentially hazardous gas mixture. Void volumes fell within the 81 to 97% range. Measurements of drum walls showed no significant corrosion or deterioration. All reassayed contents were within WIPP waste acceptance criteria. Five of the drums opened and examined (15%) could not be certified as packaged. Three contained free liquids, one had corrosive materials, and one had too much unstabilized particulate. Eleven drums had the wrong (or not the most appropriate) waste code. In many cases, disposal volumes had been used inefficiently. 2 refs., 23 figs., 7 tabs

  17. Guidance for establishment and implementation of a national sample management program in support of EM environmental sampling and analysis activities

    International Nuclear Information System (INIS)

    1994-01-01

    The role of the National Sample Management Program (NSMP) proposed by the Department of Energy's Office of Environmental Management (EM) is to be a resource for EM programs and for local Field Sample Management Programs (FSMPs). It will be a source of information on sample analysis and data collection within the DOE complex. Therefore the NSMP's primary role is to coordinate and function as a central repository for information collected from the FSMPs. An additional role of the NSMP is to monitor trends in data collected from the FSMPs over time and across sites and laboratories. Tracking these trends will allow identification of potential problems in the sampling and analysis process

  18. PYFLOW_2.0: a computer program for calculating flow properties and impact parameters of past dilute pyroclastic density currents based on field data

    Science.gov (United States)

    Dioguardi, Fabio; Mele, Daniela

    2018-03-01

    This paper presents PYFLOW_2.0, a hazard tool for the calculation of the impact parameters of dilute pyroclastic density currents (DPDCs). DPDCs represent the dilute turbulent type of gravity flows that occur during explosive volcanic eruptions; their hazard is the result of their mobility and the capability to laterally impact buildings and infrastructures and to transport variable amounts of volcanic ash along the path. Starting from data coming from the analysis of deposits formed by DPDCs, PYFLOW_2.0 calculates the flow properties (e.g., velocity, bulk density, thickness) and impact parameters (dynamic pressure, deposition time) at the location of the sampled outcrop. Given the inherent uncertainties related to sampling, laboratory analyses, and modeling assumptions, the program provides ranges of variations and probability density functions of the impact parameters rather than single specific values; from these functions, the user can interrogate the program to obtain the value of the computed impact parameter at any specified exceedance probability. In this paper, the sedimentological models implemented in PYFLOW_2.0 are presented, program functionalities are briefly introduced, and two application examples are discussed so as to show the capabilities of the software in quantifying the impact of the analyzed DPDCs in terms of dynamic pressure, volcanic ash concentration, and residence time in the atmosphere. The software and user's manual are made available as a downloadable electronic supplement.
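
    Interrogating a distribution for the value of an impact parameter at a specified exceedance probability amounts to taking a quantile. A minimal sketch on Monte Carlo samples (the dynamic-pressure values below are hypothetical, not PYFLOW output):

```python
def value_at_exceedance(samples, p_exceed):
    """Value of an impact parameter exceeded with probability p_exceed,
    taken from the empirical distribution of Monte Carlo samples
    (a simple stand-in for interrogating PYFLOW_2.0's fitted PDFs)."""
    s = sorted(samples)
    # Exceedance probability p corresponds to the (1 - p) quantile.
    k = min(int((1.0 - p_exceed) * len(s)), len(s) - 1)
    return s[k]

# Hypothetical dynamic-pressure samples in kPa
pressures = [1.2, 0.8, 2.5, 1.9, 3.1, 1.4, 2.2, 0.9, 1.7, 2.8]
median_like = value_at_exceedance(pressures, 0.5)  # the empirical median region
```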

  19. Analysis of monazite samples

    International Nuclear Information System (INIS)

    Kartiwa Sumadi; Yayah Rohayati

    1996-01-01

    The 'monazit' analytical program has been set up for routine analysis of rare earth elements in monazite and xenotime mineral samples. The total relative error of the analysis is very low, less than 2.50%, and the counting-statistics reproducibility and instrument stability were excellent. The precision and accuracy of the analytical program are very good, with maximum relative percentages of 5.22% and 1.61%, respectively. The mineral compositions of the 30 monazite samples were also calculated from their chemical constituents, and the results were compared with grain-counting microscopic analysis.

  20. A computer program for calculation of the fuel cycle in pressurized water reactors

    International Nuclear Information System (INIS)

    Solanilla, R.

    1976-01-01

    The purpose of the FUCEFURE program is two-fold. First, it is designed to solve the problem of nuclear fuel cycle cost in a single pressurized light water reactor calculation. The code was developed primarily for comparative and sensitivity studies. The program contains simple correlations between exposure and available depletion data, used to predict the uranium and plutonium content of the fuel as a function of its initial enrichment. Second, it has been devised to evaluate the nuclear fuel demand associated with an expanding nuclear power system. The evaluation can be carried out at any time and stage in the fuel cycle. The program can calculate the natural uranium and separative work requirements for any final and tails enrichment. It can also determine the nuclear power share of each reactor in the system once a decision has been made about the long-term nuclear power installations to be used and the types of PWR and fast breeder reactor characteristics involved. (author)
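
    The natural-uranium and separative-work requirements mentioned above follow from the standard value function V(x) = (2x - 1) ln(x / (1 - x)) and a simple mass balance; a sketch with conventional assay values (0.711% feed, 0.25% tails):

```python
import math

def value_fn(x):
    """Separation potential V(x) = (2x - 1) * ln(x / (1 - x))."""
    return (2.0 * x - 1.0) * math.log(x / (1.0 - x))

def enrichment_requirements(product_kg, xp, xf=0.00711, xt=0.0025):
    """Natural-uranium feed (kg) and separative work (kg SWU) for
    product_kg of product at enrichment xp, feed assay xf and
    tails assay xt, from the mass balance F = P (xp - xt)/(xf - xt)
    and SWU = P V(xp) + T V(xt) - F V(xf)."""
    feed = product_kg * (xp - xt) / (xf - xt)
    tails = feed - product_kg
    swu = (product_kg * value_fn(xp) + tails * value_fn(xt)
           - feed * value_fn(xf))
    return feed, swu

feed, swu = enrichment_requirements(1.0, 0.045)  # 1 kg of 4.5% product
```

    One kilogram of 4.5%-enriched product at these assays requires roughly 9 kg of natural uranium and about 7 kg SWU, which is the familiar order of magnitude for LWR fuel.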

  1. Model calculations as one means of satisfying the neutron cross-section requirements of the CTR program

    International Nuclear Information System (INIS)

    Gardner, D.G.

    1975-01-01

    A large amount of cross-section and spectral information for neutron-induced reactions will be required by the CTR design program. Undertaking to provide the required data through a purely experimental measurement program alone may not be the most efficient way of attacking the problem. It is suggested that a preliminary theoretical calculation be made of all relevant reactions on the dozen or so elements that now seem to comprise the inventory of possible construction materials, to find out which are actually important and over what energy ranges. A number of computer codes for calculating cross sections for neutron-induced reactions have been evaluated and extended. These are described, and examples are given of various types of calculations of interest to the CTR program. (U.S.)

  2. Importance sampling and histogrammic representations of reactivity functions and product distributions in Monte Carlo quasiclassical trajectory calculations

    International Nuclear Information System (INIS)

    Faist, M.B.; Muckerman, J.T.; Schubert, F.E.

    1978-01-01

    The application of importance sampling as a variance reduction technique in Monte Carlo quasiclassical trajectory calculations is discussed. Two measures are proposed which quantify the quality of the importance sampling used, and indicate whether further improvements may be obtained by some other choice of importance sampling function. A general procedure for constructing standardized histogrammic representations of differential functions which integrate to the appropriate integral value obtained from a trajectory calculation is presented. Two criteria for ''optimum'' binning of these histogrammic representations of differential functions are suggested. These are (1) that each bin makes an equal contribution to the integral value, and (2) each bin has the same relative error. Numerical examples illustrating these sampling and binning concepts are provided
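
    Importance sampling as a variance reduction technique can be demonstrated on a one-dimensional integral; the quality measure below (the ratio of estimator variances between plain and importance-sampled Monte Carlo) is a simple stand-in for the measures proposed in the paper, and the integrand and importance density are illustrative:

```python
import random
from statistics import mean, pvariance

random.seed(42)

def mc_estimate(draw, estimator, n=50000):
    """Monte Carlo mean and the variance of that mean."""
    vals = [estimator(draw()) for _ in range(n)]
    return mean(vals), pvariance(vals) / n

# Target: I = integral of f(x) = 5 x^4 over [0, 1] (exact value 1).
# Plain Monte Carlo: sample x uniformly, average f(x).
plain, var_plain = mc_estimate(random.random, lambda x: 5.0 * x**4)

# Importance sampling: draw from p(x) = 3 x^2 (inverse transform
# x = U^(1/3)) and average the weighted integrand f(x) / p(x).
imp, var_imp = mc_estimate(lambda: random.random() ** (1.0 / 3.0),
                           lambda x: 5.0 * x**4 / (3.0 * x**2))

# A simple quality measure: the variance reduction factor.
reduction = var_plain / var_imp
```

    Because p(x) roughly tracks the integrand, the weighted estimator has markedly lower variance; a reduction factor near an order of magnitude is expected here.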

  3. A model for steady-state and transient determination of subcooled boiling for calculations coupling a thermohydraulic and a neutron physics calculation program for reactor core calculation

    International Nuclear Information System (INIS)

    Mueller, R.G.

    1987-06-01

    Due to the strong influence of vapour bubbles on the nuclear chain reaction, an exact calculation of neutron physics and thermal hydraulics in light water reactors requires consideration of subcooled boiling. To this purpose, in the present study a dynamic model is derived from the time-dependent conservation equations. It contains new methods for the time-dependent determination of evaporation and condensation heat flow and for the heat transfer coefficient in subcooled boiling. Furthermore, it enables the complete two-phase flow region to be treated in a consistent manner. The calculation model was verified using measured data of experiments covering a wide range of thermodynamic boundary conditions. In all cases very good agreement was reached. The results from the coupling of the new calculation model with a neutron kinetics program proved its suitability for the steady-state and transient calculation of reactor cores. (orig.)

  4. PABLM: a computer program to calculate accumulated radiation doses from radionuclides in the environment

    Energy Technology Data Exchange (ETDEWEB)

    Napier, B.A.; Kennedy, W.E. Jr.; Soldat, J.K.

    1980-03-01

    A computer program, PABLM, was written to facilitate the calculation of internal radiation doses to man from radionuclides in food products and external radiation doses from radionuclides in the environment. This report contains details of mathematical models used and calculational procedures required to run the computer program. Radiation doses from radionuclides in the environment may be calculated from deposition on the soil or plants during an atmospheric or liquid release, or from exposure to residual radionuclides in the environment after the releases have ended. Radioactive decay is considered during the release of radionuclides, after they are deposited on the plants or ground, and during holdup of food after harvest. The radiation dose models consider several exposure pathways. Doses may be calculated for either a maximum-exposed individual or for a population group. The doses calculated are accumulated doses from continuous chronic exposure. A first-year committed dose is calculated as well as an integrated dose for a selected number of years. The equations for calculating internal radiation doses are derived from those given by the International Commission on Radiological Protection (ICRP) for body burdens and MPC's of each radionuclide. The radiation doses from external exposure to contaminated water and soil are calculated using the basic assumption that the contaminated medium is large enough to be considered an infinite volume or plane relative to the range of the emitted radiations. The equations for calculations of the radiation dose from external exposure to shoreline sediments include a correction for the finite width of the contaminated beach.
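
    The decay-during-holdup step of such a calculation reduces to an exponential decay factor applied between harvest and consumption. A minimal sketch, in which the ingestion dose factor and intake figures are hypothetical illustrations, not PABLM parameters:

```python
import math

def held_up_activity(c0_bq_per_kg, half_life_days, holdup_days):
    """Activity concentration remaining after holdup between harvest
    and consumption, C(t) = C0 * exp(-lambda * t)."""
    lam = math.log(2) / half_life_days
    return c0_bq_per_kg * math.exp(-lam * holdup_days)

def ingestion_dose(c0_bq_per_kg, half_life_days, holdup_days,
                   intake_kg, dose_factor_sv_per_bq):
    """Committed dose from eating intake_kg of the food after holdup;
    the dose factor is a per-becquerel ingestion coefficient."""
    c = held_up_activity(c0_bq_per_kg, half_life_days, holdup_days)
    return c * intake_kg * dose_factor_sv_per_bq

# Illustrative case: I-131 (8.02 d half-life) in leafy vegetables,
# 14-day holdup, 30 kg consumed, assumed dose factor 2.2e-8 Sv/Bq.
dose = ingestion_dose(100.0, 8.02, 14.0, 30.0, 2.2e-8)
```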

  5. PABLM: a computer program to calculate accumulated radiation doses from radionuclides in the environment

    International Nuclear Information System (INIS)

    Napier, B.A.; Kennedy, W.E. Jr.; Soldat, J.K.

    1980-03-01

    A computer program, PABLM, was written to facilitate the calculation of internal radiation doses to man from radionuclides in food products and external radiation doses from radionuclides in the environment. This report contains details of mathematical models used and calculational procedures required to run the computer program. Radiation doses from radionuclides in the environment may be calculated from deposition on the soil or plants during an atmospheric or liquid release, or from exposure to residual radionuclides in the environment after the releases have ended. Radioactive decay is considered during the release of radionuclides, after they are deposited on the plants or ground, and during holdup of food after harvest. The radiation dose models consider several exposure pathways. Doses may be calculated for either a maximum-exposed individual or for a population group. The doses calculated are accumulated doses from continuous chronic exposure. A first-year committed dose is calculated as well as an integrated dose for a selected number of years. The equations for calculating internal radiation doses are derived from those given by the International Commission on Radiological Protection (ICRP) for body burdens and MPC's of each radionuclide. The radiation doses from external exposure to contaminated water and soil are calculated using the basic assumption that the contaminated medium is large enough to be considered an infinite volume or plane relative to the range of the emitted radiations. The equations for calculations of the radiation dose from external exposure to shoreline sediments include a correction for the finite width of the contaminated beach

  6. Calculation of coincidence summing corrections for a specific small soil sample geometry

    Energy Technology Data Exchange (ETDEWEB)

    Helmer, R.G.; Gehrke, R.J.

    1996-10-01

    Previously, a system was developed at the INEL for measuring the γ-ray-emitting nuclides in small soil samples for the purpose of environmental monitoring. These samples were counted close to a ≈20% Ge detector and, therefore, it was necessary to take into account the coincidence summing that occurs for some nuclides. In order to improve the technical basis for the coincidence summing corrections, the authors have carried out a study of the variation in the coincidence summing probability with position within the sample volume. A Monte Carlo electron and photon transport code (CYLTRAN) was used to compute peak and total efficiencies for various photon energies from 30 to 2,000 keV at 30 points throughout the sample volume. The geometry for these calculations included the various components of the detector and source along with the shielding. The associated coincidence summing corrections were computed at these 30 positions in the sample volume and then averaged for the whole source. The influence of the soil and the detector shielding on the efficiencies was investigated.

  7. Development of a program for calculation of second dose and securities in brachytherapy high dose rate

    International Nuclear Information System (INIS)

    Esteve Sanchez, S.; Martinez Albaladejo, M.; Garcia Fuentes, J. D.; Bejar Navarro, M. J.; Capuz Suarez, B.; Moris de Pablos, R.; Colmenares Fernandez, R.

    2015-01-01

    The reliability of the program was assessed with 80 patients at the usual prescription points for each pathology. The average error at the calculation points is less than 0.3% in 95% of cases, with the largest differences found on the axes of the applicators (maximum error -0.798%). The program had previously proved effective when tested against deliberately erroneous dosimetry. Thanks to this program, the second dose calculation and part of the quality assurance process are completed in a few minutes, which is particularly valuable for HDR prostate treatments given their limited time window. A separate data sheet allows each institution to modify the parameters according to its own protocols. (Author)

  8. Programs OPTMAN and SHEMMAN Version 6 (1999) - Coupled-Channels optical model and collective nuclear structure calculation -

    Energy Technology Data Exchange (ETDEWEB)

    Chang, Jong Hwa; Lee, Jeong Yeon; Lee, Young Ouk; Sukhovitski, Efrem Sh [Korea Atomic Energy Research Institute, Taejeon (Korea)

    2000-01-01

    Programs SHEMMAN and OPTMAN (Version 6) have been developed for the determination of nuclear Hamiltonian parameters and for optical model calculations, respectively. The optical model calculations by OPTMAN, with coupling schemes built on the wave functions of a non-axial soft rotator, are self-consistent, since the parameters of the nuclear Hamiltonian are determined by adjusting the energies of collective levels to experimental values with SHEMMAN prior to the optical model calculation. The programs have been installed at the Nuclear Data Evaluation Laboratory of KAERI. This report is intended as a brief manual for these codes. 43 refs., 9 figs., 1 tab. (Author)

  9. Structural system reliability calculation using a probabilistic fault tree analysis method

    Science.gov (United States)

    Torng, T. Y.; Wu, Y.-T.; Millwater, H. R.

    1992-01-01

    The development of a new probabilistic fault tree analysis (PFTA) method for calculating structural system reliability is summarized. The proposed PFTA procedure includes: developing a fault tree to represent the complex structural system, constructing an approximation function for each bottom event, determining a dominant sampling sequence for all bottom events, and calculating the system reliability using an adaptive importance sampling method. PFTA is suitable for complicated structural problems that require computation-intensive calculations. A computer program has been developed to implement the PFTA.

  10. A FORTRAN program for an IBM PC compatible computer for calculating kinematical electron diffraction patterns

    International Nuclear Information System (INIS)

    Skjerpe, P.

    1989-01-01

    This report describes a computer program which is useful in transmission electron microscopy. The program is written in FORTRAN and calculates kinematical electron diffraction patterns in any zone axis from a given crystal structure. Quite large unit cells, containing up to 2250 atoms, can be handled by the program. The program runs on both the Hercules graphics card and the standard IBM CGA card.
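
    The kernel of any kinematical diffraction calculation is the structure factor F(hkl) = Σ_j f_j exp(2πi(hx_j + ky_j + lz_j)) over the atoms of the unit cell. A minimal sketch (with unit scattering amplitudes, not tabulated atomic form factors) reproducing the body-centred-cubic extinction rule:

```python
import cmath

def structure_factor(hkl, atoms):
    """Kinematical structure factor: sum of f_j * exp(2*pi*i * h.r_j)
    over atoms given as (scattering amplitude, fractional coords)."""
    h, k, l = hkl
    return sum(f * cmath.exp(2j * cmath.pi * (h * x + k * y + l * z))
               for f, (x, y, z) in atoms)

# Body-centred cubic lattice: atoms at (0,0,0) and (1/2,1/2,1/2).
bcc = [(1.0, (0.0, 0.0, 0.0)), (1.0, (0.5, 0.5, 0.5))]
f_100 = abs(structure_factor((1, 0, 0), bcc))  # forbidden: h+k+l odd
f_110 = abs(structure_factor((1, 1, 0), bcc))  # allowed: h+k+l even
```

    Reflections with odd h+k+l cancel exactly, which is the extinction rule a diffraction-pattern program applies when deciding which spots to draw.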

  11. KAPSIES: A program for the calculation of multi-step direct reaction cross sections

    International Nuclear Information System (INIS)

    Koning, A.J.; Akkermans, J.M.

    1994-09-01

    We present a program for the calculation of continuum cross sections, spectra, angular distributions and analyzing powers according to various quantum-mechanical theories of statistical multi-step direct nuclear reactions. (orig.)

  12. A program for calculating and plotting soft x-ray optical interaction coefficients for molecules

    International Nuclear Information System (INIS)

    Thomas, M.M.; Davis, J.C.; Jacobsen, C.J.; Perera, R.C.C.

    1989-08-01

    Comprehensive tables of the atomic scattering factor components, f1 and f2, were compiled by Henke et al. for the extended photon region 50-10000 eV. Accurate calculations of optical interaction coefficients for absorption, reflection and scattering by material systems (e.g. filters, multilayers, etc.), which have widespread application, can be based simply upon the atomic scattering factors of the elements comprising the material, except near the absorption threshold energies. These calculations, based upon the weighted sum of f1 and f2 for each atomic species present, can be very tedious if done by hand. This led us to develop a user-friendly program to perform these calculations on an IBM PC or compatible computer. By entering the chemical formula, density and thickness of up to six molecules, values of f1, f2, mass absorption, transmission efficiencies, attenuation lengths, mirror reflectivities and complex indices of refraction can be calculated and plotted as a function of energy or wavelength. This program will be available for distribution. 7 refs., 1 fig
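
    The weighted-sum calculation the record describes can be sketched for absorption alone: the compound's mass absorption coefficient is the mass-fraction-weighted sum of the elemental values, which themselves derive from f2. The coefficient values below are placeholders, not Henke-table data:

```python
import math

# Hypothetical elemental mass absorption coefficients (cm^2/g) at one
# photon energy; real values follow from the tabulated f2 values.
MU_RHO = {"Si": 65.0, "N": 24.0}  # illustrative numbers only

def compound_mu_rho(mass_fractions):
    """Additive (weighted-sum) mass absorption coefficient of a
    compound from the mass fractions of its elements."""
    return sum(w * MU_RHO[el] for el, w in mass_fractions.items())

def transmission(mass_fractions, density_g_cm3, thickness_um):
    """Filter transmission T = exp(-(mu/rho) * rho * t)."""
    mu_rho = compound_mu_rho(mass_fractions)
    return math.exp(-mu_rho * density_g_cm3 * thickness_um * 1e-4)

# Si3N4 membrane: mass fractions ~0.60 Si / 0.40 N, rho = 3.44 g/cm^3
t_si3n4 = transmission({"Si": 0.60, "N": 0.40}, 3.44, 0.5)
```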

  13. Tegen - an onedimensional program to calculate a thermoelectric generator

    International Nuclear Information System (INIS)

    Rosa, M.A.P.; Ferreira, P.A.; Castro Lobo, P.D. de.

    1990-01-01

    A computer program is presented for the solution of the one-dimensional, steady-state temperature equation in the arms of a thermoelectric generator. The discretized equations, obtained through a finite difference scheme, are solved by Gaussian elimination. Because of nonlinearities caused by the temperature dependence of the coefficients of these equations, an iterative procedure is used to obtain the temperature distribution in the arms. These distributions are used in calculating the efficiency, electric power, load voltage and other parameters relevant to the design of a thermoelectric generator. (author)
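
    The scheme described, finite differences, Gaussian elimination of the resulting tridiagonal system, and an outer iteration for the temperature-dependent coefficients, can be sketched as follows. The conductivity law is a hypothetical linear function, not a material model from the paper:

```python
def thomas(a, b, c, d):
    """Gaussian elimination for a tridiagonal system (sub-diagonal a,
    diagonal b, super-diagonal c, RHS d); a[0] and c[-1] are unused."""
    n = len(d)
    b, d = b[:], d[:]
    for i in range(1, n):
        m = a[i] / b[i - 1]
        b[i] -= m * c[i - 1]
        d[i] -= m * d[i - 1]
    x = [0.0] * n
    x[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (d[i] - c[i] * x[i + 1]) / b[i]
    return x

def solve_arm(t_hot, t_cold, n=21, k=lambda t: 1.5 + 0.002 * t, iters=20):
    """Temperature profile in a generator arm from d/dx(k(T) dT/dx) = 0
    with fixed end temperatures; the temperature dependence of the
    conductivity is lagged one outer iteration."""
    T = [t_hot + (t_cold - t_hot) * i / (n - 1) for i in range(n)]
    for _ in range(iters):
        # Conductivity evaluated at cell faces from the previous iterate
        kf = [k(0.5 * (T[i] + T[i + 1])) for i in range(n - 1)]
        a = [0.0] + [kf[i - 1] for i in range(1, n - 1)] + [0.0]
        b = [1.0] + [-(kf[i - 1] + kf[i]) for i in range(1, n - 1)] + [1.0]
        c = [0.0] + [kf[i] for i in range(1, n - 1)] + [0.0]
        d = [t_hot] + [0.0] * (n - 2) + [t_cold]
        T = thomas(a, b, c, d)
    return T

profile = solve_arm(500.0, 300.0)
```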

  14. Calculation of parameter failure probability of thermodynamic system by response surface and importance sampling method

    International Nuclear Information System (INIS)

    Shang Yanlong; Cai Qi; Chen Lisheng; Zhang Yangwei

    2012-01-01

    In this paper, the combined response surface and importance sampling method was applied to the calculation of the parameter failure probability of a thermodynamic system. A mathematical model is presented for parameter failure of the physical process in the thermodynamic system, from which the combined response-surface and importance-sampling algorithm is established; the performance degradation model of the components and the simulation process of parameter failure in the physical process are also presented. The parameter failure probability of the purification water system in a nuclear reactor was obtained by the combined method. The results show that the combined method is effective for calculating the parameter failure probability of a thermodynamic system with high dimensionality and non-linear characteristics, achieving satisfactory precision with less computing time than the direct sampling method while avoiding the drawbacks of the response surface method alone. (authors)

  15. Power and Sample Size Calculations for Logistic Regression Tests for Differential Item Functioning

    Science.gov (United States)

    Li, Zhushan

    2014-01-01

    Logistic regression is a popular method for detecting uniform and nonuniform differential item functioning (DIF) effects. Theoretical formulas for the power and sample size calculations are derived for likelihood ratio tests and Wald tests based on the asymptotic distribution of the maximum likelihood estimators for the logistic regression model.…
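
    The flavor of such formulas can be sketched with the generic normal-approximation power of a Wald test of a single coefficient. The per-observation Fisher information value below is purely illustrative, whereas the paper derives this quantity from the logistic regression DIF model itself:

```python
import math

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

Z_CRIT = 1.959964  # critical value for a two-sided test at alpha = 0.05

def wald_power(beta, info_per_obs, n):
    """Normal-approximation power of the two-sided Wald test of
    H0: beta = 0; info_per_obs is the per-observation Fisher
    information for the tested coefficient (assumed known here)."""
    ncp = abs(beta) * math.sqrt(n * info_per_obs)
    return phi(ncp - Z_CRIT) + phi(-ncp - Z_CRIT)

def wald_sample_size(beta, info_per_obs, target_power=0.8):
    """Smallest n (scanned in steps of 10) reaching the target power."""
    n = 10
    while wald_power(beta, info_per_obs, n) < target_power:
        n += 10
    return n

n_req = wald_sample_size(beta=0.4, info_per_obs=0.15)
```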

  16. Sample Size Calculation for Estimating or Testing a Nonzero Squared Multiple Correlation Coefficient

    Science.gov (United States)

    Krishnamoorthy, K.; Xia, Yanping

    2008-01-01

    The problems of hypothesis testing and interval estimation of the squared multiple correlation coefficient of a multivariate normal distribution are considered. It is shown that available one-sided tests are uniformly most powerful, and the one-sided confidence intervals are uniformly most accurate. An exact method of calculating sample size to…

  17. Absorption and enhancement corrections using XRF analysis of some chemical samples

    International Nuclear Information System (INIS)

    Falih, Arwa Gaddal

    1996-06-01

    In this work, samples containing Cr, Fe and Ni salts in varying ratios were prepared so as to represent approximately the concentrations of these elements in naturally occurring ore samples. These samples were then analyzed by an EDXRF spectrometer system, and the inter-element effects (absorption and enhancement) were evaluated by two methods: using the AXIL-QXAS software to calculate the effects, and using the emission-transmission method to determine the same effects experimentally. The results obtained were compared, and a discrepancy in the absorption results was observed. The discrepancy was attributed to the fact that absorption is calculated differently in the two methods: in the emission-transmission method the absorption factor is calculated by adding the different absorption terms according to the additive law, whereas the software calculates it from the scattered-peaks method, which does not obey this law. It was concluded that the software should be modified by incorporating the emission-transmission method for the absorption calculation. Quality assurance of the data was performed through the analysis of standard alloys obtained from the International Atomic Energy Agency (IAEA). (Author)

  18. Calculation of the average radiological detriment of two samples from a breast screening programme

    International Nuclear Information System (INIS)

    Ramos, M.; Sanchez, A.M.; Verdu, G.; Villaescusa, J.I.; Salas, M.D.; Cuevas, M.D.

    2002-01-01

    The Breast Cancer Screening Programme of the Comunidad Valenciana started in 1992. The programme is oriented to asymptomatic women between 45 and 65 years old, with two mammograms of each breast at a woman's first participation and a single one at later interventions. Between November 2000 and March 2001, a first sample of 100 woman records was extracted from all units of the programme. The data extracted for each sample were the kV-voltage, the X-ray tube load, the breast thickness and the age of the woman exposed, used directly in the dose and detriment calculation. By means of the MCNP-4B code, and according to the European Protocol for the quality control of the physical and technical aspects of mammography screening, the average total and glandular doses were calculated and later compared.

  19. 'PRIZE': A program for calculating collision probabilities in R-Z geometry

    Energy Technology Data Exchange (ETDEWEB)

    Pitcher, H.H.W. [General Reactor Physics Division, Atomic Energy Establishment, Winfrith, Dorchester, Dorset (United Kingdom)

    1964-10-15

    PRIZE is an IBM7090 program which computes collision probabilities for systems with axial symmetry and outputs them on cards in suitable format for the PIP1 program. Its method of working, data requirements, output, running time and accuracy are described. The program has been used to compute non-escape (self-collision) probabilities of finite circular cylinders, and a table is given by which non-escape probabilities of slabs, finite and infinite circular cylinders, infinite square cylinders, cubes, spheres and hemispheres may quickly be calculated to 1/2% or better. (author)
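
A non-escape (self-collision) probability of the kind tabulated by PRIZE can be illustrated with a small Monte Carlo estimate: sample a uniform isotropic source inside a finite circular cylinder and average the probability of colliding before reaching the surface. This is only a hedged sketch of the quantity, not PRIZE's collision-probability method; the cylinder dimensions, cross section and function names are illustrative choices.

```python
import math
import random

def distance_to_surface(x, y, z, u, v, w, R, H):
    """Distance from an interior point along direction (u,v,w) to the surface
    of a cylinder of radius R and height H (axis along z, base at z = 0)."""
    # Axial exit through the top (z = H) or the bottom (z = 0).
    if w > 0:
        t_axial = (H - z) / w
    elif w < 0:
        t_axial = -z / w
    else:
        t_axial = math.inf
    # Radial exit through the curved surface x^2 + y^2 = R^2.
    a = u * u + v * v
    if a > 0:
        b = x * u + y * v
        c = x * x + y * y - R * R          # c <= 0 for interior points
        t_radial = (-b + math.sqrt(b * b - a * c)) / a
    else:
        t_radial = math.inf
    return min(t_axial, t_radial)

def non_escape_probability(R, H, sigma, n=20000, seed=1):
    """Monte Carlo estimate of the self-collision probability of a uniform
    isotropic source in a finite cylinder with total cross section sigma."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n):
        # Uniform point in the cylinder (rejection sampling in the disc).
        while True:
            x = rng.uniform(-R, R)
            y = rng.uniform(-R, R)
            if x * x + y * y <= R * R:
                break
        z = rng.uniform(0.0, H)
        # Isotropic direction.
        w = rng.uniform(-1.0, 1.0)
        phi = rng.uniform(0.0, 2.0 * math.pi)
        s = math.sqrt(1.0 - w * w)
        u, v = s * math.cos(phi), s * math.sin(phi)
        t = distance_to_surface(x, y, z, u, v, w, R, H)
        acc += 1.0 - math.exp(-sigma * t)   # collides before escaping
    return acc / n
```

For an optically thin cylinder the estimate tends to zero, and for an optically thick one it approaches unity, matching the limits a tabulation like the one in the paper must reproduce.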

  20. Extending cluster lot quality assurance sampling designs for surveillance programs.

    Science.gov (United States)

    Hund, Lauren; Pagano, Marcello

    2014-07-20

    Lot quality assurance sampling (LQAS) has a long history of applications in industrial quality control. LQAS is frequently used for rapid surveillance in global health settings, with areas classified as poor or acceptable performance on the basis of the binary classification of an indicator. Historically, LQAS surveys have relied on simple random samples from the population; however, implementing two-stage cluster designs for surveillance sampling is often more cost-effective than simple random sampling. By applying survey sampling results to the binary classification procedure, we develop a simple and flexible nonparametric procedure to incorporate clustering effects into the LQAS sample design to appropriately inflate the sample size, accommodating finite numbers of clusters in the population when relevant. We use this framework to then discuss principled selection of survey design parameters in longitudinal surveillance programs. We apply this framework to design surveys to detect rises in malnutrition prevalence in nutrition surveillance programs in Kenya and South Sudan, accounting for clustering within villages. By combining historical information with data from previous surveys, we design surveys to detect spikes in the childhood malnutrition rate. Copyright © 2014 John Wiley & Sons, Ltd.
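
One standard way to "appropriately inflate the sample size" for clustering, sketched below, uses the survey-sampling design effect DEFF = 1 + (m - 1) * ICC; this generic helper is an assumption for illustration, not the authors' nonparametric procedure, and the optional finite-cluster correction is likewise a simplified stand-in.

```python
import math

def inflated_sample_size(n_srs, cluster_size, icc, fpc_clusters=None):
    """Inflate a simple-random-sample size for a two-stage cluster design
    using the standard design effect DEFF = 1 + (m - 1) * ICC, where m is
    the within-cluster sample size and ICC the intracluster correlation.
    If fpc_clusters (number of clusters in the population) is given, apply
    a crude finite-population correction to the number of clusters drawn."""
    deff = 1.0 + (cluster_size - 1) * icc
    n = n_srs * deff
    if fpc_clusters is not None:
        k = n / cluster_size                # clusters needed, uncorrected
        k = k / (1.0 + k / fpc_clusters)    # shrink for a finite cluster frame
        n = k * cluster_size
    return math.ceil(n)
```

For example, an SRS size of 192 with 10 children per village and ICC = 0.1 inflates to 365, while a cluster size of 1 leaves the SRS size unchanged.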

  1. Application of the REMIX thermal mixing calculation program for the Loviisa reactor

    International Nuclear Information System (INIS)

    Kokkonen, I.; Tuomisto, H.

    1987-08-01

    The REMIX computer program has been validated to be used in the pressurized thermal shock study of the Loviisa reactor pressure vessel. The program has been verified against the data from the thermal and fluid mixing experiments. These experiments have been carried out in Imatran voima Oy to study thermal mixing of the high-pressure safety injection water in the Loviisa VVER-440 type pressurized water reactor. The verified REMIX-versions were applied to reactor calculations in the probabilistic pressurized thermal shock study of the Loviisa Plant

  2. AFG-MONSU. A program for calculating axial heterogeneities in cylindrical pin cells

    International Nuclear Information System (INIS)

    Neltrup, H.; Kirkegaard, P.

    1978-08-01

    The AGF-MONSU program complex is designed to calculate the flux in cylindrical fuel pin cells into which heterogeneities are introduced in a regular array. The theory - integral transport theory combined with Monte Carlo by help of a superposition principle - is described in some detail. Detailed derivation of the superposition principle as well as the formulas used in the DIT (Discrete Integral Transport) method is given in the appendices along with a description of the input structure of the AFG-MONSU program complex. (author)

  3. Calculating ensemble averaged descriptions of protein rigidity without sampling.

    Science.gov (United States)

    González, Luis C; Wang, Hui; Livesay, Dennis R; Jacobs, Donald J

    2012-01-01

    Previous works have demonstrated that protein rigidity is related to thermodynamic stability, especially under conditions that favor formation of native structure. Mechanical network rigidity properties of a single conformation are efficiently calculated using the integer body-bar Pebble Game (PG) algorithm. However, thermodynamic properties require averaging over many samples from the ensemble of accessible conformations to accurately account for fluctuations in network topology. We have developed a mean field Virtual Pebble Game (VPG) that represents the ensemble of networks by a single effective network. That is, all possible number of distance constraints (or bars) that can form between a pair of rigid bodies is replaced by the average number. The resulting effective network is viewed as having weighted edges, where the weight of an edge quantifies its capacity to absorb degrees of freedom. The VPG is interpreted as a flow problem on this effective network, which eliminates the need to sample. Across a nonredundant dataset of 272 protein structures, we apply the VPG to proteins for the first time. Our results show numerically and visually that the rigidity characterizations of the VPG accurately reflect the ensemble averaged [Formula: see text] properties. This result positions the VPG as an efficient alternative to understand the mechanical role that chemical interactions play in maintaining protein stability.
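
The mean-field idea of replacing the fluctuating bar count between two rigid bodies with its average can be shown with a toy Maxwell-style count; the real VPG solves a flow problem on the weighted network, which this sketch deliberately does not attempt, and the 6-DOF body-bar convention and function name are assumptions for illustration.

```python
def mean_field_floppy_modes(n_bodies, edges):
    """Toy mean-field (Maxwell) count of floppy modes in a body-bar network.
    `edges` maps (i, j) body pairs to the AVERAGE number of bars between
    them, as in the VPG's single effective weighted network.  Each bar can
    absorb at most one degree of freedom, and no edge can absorb more than
    the 6 relative DOF between two rigid bodies (its edge "capacity")."""
    dof = 6 * n_bodies - 6          # subtract the 6 global rigid-body motions
    absorbed = sum(min(bars, 6.0) for bars in edges.values())
    return max(dof - absorbed, 0.0)
```

Two bodies joined by an average of 4.5 bars retain 1.5 floppy modes; an average of 8 bars saturates the edge capacity of 6 and leaves the pair rigid.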

  4. SAMPLE RESULTS FROM THE INTEGRATED SALT DISPOSITION PROGRAM MACROBATCH 4 TANK 21H QUALIFICATION SAMPLES

    Energy Technology Data Exchange (ETDEWEB)

    Peters, T.; Fink, S.

    2011-06-22

    Savannah River National Laboratory (SRNL) analyzed samples from Tank 21H to qualify them for use in the Integrated Salt Disposition Program (ISDP) Batch 4 processing. All sample results agree with expectations based on prior analyses where available. No issues with the projected Salt Batch 4 strategy are identified. This revision includes additional data points that were not available in the original issue of the document, such as additional plutonium results, the results of the monosodium titanate (MST) sorption test and the extraction, scrub strip (ESS) test. This report covers the revision to the Tank 21H qualification sample results for Macrobatch (Salt Batch) 4 of the Integrated Salt Disposition Program (ISDP). A previous document covers initial characterization which includes results for a number of non-radiological analytes. These results were used to perform aluminum solubility modeling to determine the hydroxide needs for Salt Batch 4 to prevent the precipitation of solids. Sodium hydroxide was then added to Tank 21 and additional samples were pulled for the analyses discussed in this report. This work was specified by Task Technical Request and by Task Technical and Quality Assurance Plan (TTQAP).

  5. FLOWNET: A Computer Program for Calculating Secondary Flow Conditions in a Network of Turbomachinery

    Science.gov (United States)

    Rose, J. R.

    1978-01-01

    The program requires the network parameters, the flow component parameters, the reservoir conditions, and the gas properties as input. It will then calculate all unknown pressures and the mass flow rate in each flow component in the network. The program can treat networks containing up to fifty flow components and twenty-five unknown network pressures. The types of flow components that can be treated are face seals, narrow slots, and pipes. The program is written in both structured FORTRAN (SFTRAN) and FORTRAN 4. The program must be run in an interactive (conversational) mode.
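
Solving for the unknown pressures in such a network amounts to writing a mass balance at every node with an unknown pressure and solving the resulting system. The sketch below assumes linearized components (flow proportional to the pressure difference through a conductance), which is a simplification of FLOWNET's seal, slot and pipe models; all names are illustrative.

```python
def solve_network(conductances, fixed, unknowns):
    """Solve the steady-state mass balances sum_j C_ij * (p_j - p_i) = 0 at
    each node with an unknown pressure.  `conductances` is {(a, b): C} for
    symmetric linearized components, `fixed` maps node -> known (reservoir)
    pressure, and `unknowns` lists the nodes to solve for."""
    n = len(unknowns)
    idx = {node: k for k, node in enumerate(unknowns)}
    A = [[0.0] * n for _ in range(n)]
    b = [0.0] * n
    for (a, c), C in conductances.items():
        for i, j in ((a, c), (c, a)):
            if i in idx:
                A[idx[i]][idx[i]] += C
                if j in idx:
                    A[idx[i]][idx[j]] -= C
                else:
                    b[idx[i]] += C * fixed[j]   # known pressure goes to RHS
    # Gaussian elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for cc in range(col, n):
                A[r][cc] -= f * A[col][cc]
            b[r] -= f * b[col]
    p = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = b[r] - sum(A[r][cc] * p[cc] for cc in range(r + 1, n))
        p[r] = s / A[r][r]
    return {node: p[idx[node]] for node in unknowns}
```

Two equal components in series between 100 and 0 pressure units give the expected midpoint pressure of 50 at the interior node.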

  6. Automated atomic absorption spectrophotometer, utilizing a programmable desk calculator

    International Nuclear Information System (INIS)

    Futrell, T.L.; Morrow, R.W.

    1977-01-01

    A commercial, double-beam atomic absorption spectrophotometer has been interfaced with a sample changer and a Hewlett-Packard 9810A calculator to yield a completely automated analysis system. The interface electronics can be easily constructed and should be adaptable to any double-beam atomic absorption instrument. The calculator is easily programmed and can be used for general laboratory purposes when not operating the instrument. The automated system has been shown to perform very satisfactorily when operated unattended to analyze a large number of samples. Performance statistics agree well with a manually operated instrument

  7. Quantification of errors in ordinal outcome scales using Shannon entropy: effect on sample size calculations.

    Science.gov (United States)

    Mandava, Pitchaiah; Krumpelman, Chase S; Shah, Jharna N; White, Donna L; Kent, Thomas A

    2013-01-01

    Clinical trial outcomes often involve an ordinal scale of subjective functional assessments, but the optimal way to quantify results is not clear. In stroke, for the most commonly used scale, the modified Rankin Score (mRS), a range of scores ("shift") is proposed as superior to dichotomization because of greater information transfer. The influence of known uncertainties in mRS assessment has not been quantified. We hypothesized that errors caused by uncertainties could be quantified by applying information theory. Using Shannon's model, we quantified errors of the "shift" compared to dichotomized outcomes using published distributions of mRS uncertainties and applied this model to clinical trials. We identified 35 randomized stroke trials that met inclusion criteria. Each trial's mRS distribution was multiplied with the noise distribution from published mRS inter-rater variability to generate an error percentage for "shift" and dichotomized cut-points. For the SAINT I neuroprotectant trial, considered positive by "shift" mRS while the larger follow-up SAINT II trial was negative, we recalculated the sample size required if classification uncertainty was taken into account. Considering the full mRS range, the error rate was 26.1%±5.31 (mean±SD). Error rates were lower for all dichotomizations tested using cut-points (e.g. mRS 1; 6.8%±2.89); greater assessment uncertainty produced a decrease in reliability. The resultant errors need to be considered since sample size may otherwise be underestimated. In principle, we have outlined an approach to error estimation for any condition in which there are uncertainties in outcome assessment. We provide the user with programs to calculate and incorporate errors into sample size estimation.
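
The core step, multiplying a trial's outcome distribution with an inter-rater noise distribution to get an error percentage for a cut-point, can be sketched directly; the 3-level scale and confusion-matrix values in the usage example are made up for illustration and are not the published mRS uncertainties.

```python
def dichotomization_error(outcome_dist, confusion, cut):
    """Probability that a subject's observed score falls on the wrong side
    of the cut-point `cut` (scores <= cut counted as the 'good' outcome).
    outcome_dist[i] = P(true score = i)
    confusion[i][j] = P(observed score = j | true score = i)"""
    err = 0.0
    for true, p_true in enumerate(outcome_dist):
        for obs, p_obs in enumerate(confusion[true]):
            if (true <= cut) != (obs <= cut):   # crossed the cut-point
                err += p_true * p_obs
    return err
```

With a toy 3-level scale, outcome distribution [0.5, 0.3, 0.2] and a mildly noisy confusion matrix, the misclassification rate at cut-point 0 works out to 8%.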

  8. Easy-to-use application programs for decay heat and delayed neutron calculations on personal computers

    Energy Technology Data Exchange (ETDEWEB)

    Oyamatsu, Kazuhiro [Nagoya Univ. (Japan)

    1998-03-01

    Application programs for personal computers are developed to calculate the decay heat power and delayed neutron activity from fission products. The main programs can be used in any computers from personal computers to main frames because their sources are written in Fortran. These programs have user friendly interfaces to be used easily not only for research activities but also for educational purposes. (author)

  9. Hanford high level waste: Sample Exchange/Evaluation (SEE) Program

    International Nuclear Information System (INIS)

    King, A.G.

    1994-08-01

    The Pacific Northwest Laboratory (PNL)/Analytical Chemistry Laboratory (ACL) and the Westinghouse Hanford Company (WHC)/Process Analytical Laboratory (PAL) provide analytical support services to various environmental restoration and waste management projects/programs at Hanford. In response to a US Department of Energy -- Richland Field Office (DOE-RL) audit, which questioned the comparability of analytical methods employed at each laboratory, the Sample Exchange/Evaluation (SEE) program was initiated. The SEE Program is a self-assessment program designed to compare analytical methods of the PAL and ACL laboratories using site-specific waste material. The SEE program is managed by a collaborative, the Quality Assurance Triad (Triad). Triad membership is made up of representatives from the WHC/PAL, PNL/ACL, and WHC Hanford Analytical Services Management (HASM) organizations. The Triad works together to design/evaluate/implement each phase of the SEE Program

  10. A systematic examination of a random sampling strategy for source apportionment calculations.

    Science.gov (United States)

    Andersson, August

    2011-12-15

    Estimating the relative contributions from multiple potential sources of a specific component in a mixed environmental matrix is a general challenge in diverse fields such as atmospheric, environmental and earth sciences. Perhaps the most common strategy for tackling such problems is by setting up a system of linear equations for the fractional influence of different sources. Even though an algebraic solution of this approach is possible for the common situation with N+1 sources and N source markers, such methodology introduces a bias, since it is implicitly assumed that the calculated fractions and the corresponding uncertainties are independent of the variability of the source distributions. Here, a random sampling (RS) strategy for accounting for such statistical bias is examined by investigating rationally designed synthetic data sets. This random sampling methodology is found to be robust and accurate with respect to reproducibility and predictability. This method is also compared to a numerical integration solution for a two-source situation where source variability also is included. A general observation from this examination is that the variability of the source profiles not only affects the calculated precision but also the mean/median source contributions. Copyright © 2011 Elsevier B.V. All rights reserved.
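
The N+1-source, N-marker setup with a random-sampling loop over source-profile variability can be sketched for the simplest case of two sources and one marker, where the mixing equations f1*s1 + (1 - f1)*s2 = m and f1 + f2 = 1 solve algebraically for f1; the Gaussian source profiles, acceptance window and function name below are illustrative assumptions, not the paper's synthetic data sets.

```python
import random
import statistics

def source_fractions_rs(mixed, src1, src2, n=10000, seed=42):
    """Random-sampling (RS) estimate of the fractional contribution of
    source 1 in a two-source, one-marker mixing model:
        f1 * s1 + (1 - f1) * s2 = mixed.
    src1 and src2 are (mean, sd) of the marker value in each source
    profile; each draw samples a source realization and solves for f1."""
    rng = random.Random(seed)
    fractions = []
    while len(fractions) < n:
        s1 = rng.gauss(*src1)
        s2 = rng.gauss(*src2)
        if abs(s1 - s2) < 1e-9:
            continue                      # degenerate draw, resample
        f1 = (mixed - s2) / (s1 - s2)
        if 0.0 <= f1 <= 1.0:              # keep physically meaningful draws
            fractions.append(f1)
    return statistics.mean(fractions), statistics.stdev(fractions)
```

The returned spread illustrates the abstract's point: the variability of the source profiles propagates into both the precision and, through the nonlinear solve, the mean of the estimated fractions.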

  11. Master schedule for CY-1984 Hanford environmental surveillance routine sampling program

    International Nuclear Information System (INIS)

    Blumer, P.J.; Price, K.R.; Eddy, P.A.; Carlile, J.M.V.

    1983-12-01

    This report provides the current schedule of data collection for the routine Hanford environmental surveillance and ground-water Monitoring Programs at the Hanford Site. The purpose is to evaluate and report the levels of radioactive and nonradioactive pollutants in the Hanford environs. The routine sampling schedule provided herein does not include samples that are planned to be collected during FY-1984 in support of special studies, special contractor support programs, or for quality control purposes

  12. 'BLOC' program for elasto-plastic calculation of fissured media

    International Nuclear Information System (INIS)

    Pouyet, P.; Picaut, J.; Costaz, J.L.; Dulac, J.

    1983-01-01

    The method described is used to test failure mechanisms and to calculate the corresponding ultimate loads. The main advantages it offers are simple modelling, the possibility of representing all the prestressing and reinforcement steels simply and correctly, and fewer degrees of freedom, hence lower cost (the program can be run on a microcomputer). However, the model is sensitive to the arrangement of the interface elements, presupposing a given failure mechanism. This normally means testing several different models with different kinematically possible failure patterns. But the ease of modelling and low costs are ideal for this type of approach. (orig./RW)

  13. KOP program for calculating cross sections of neutron and charged particle interactions with atomic nuclei using the optical model

    International Nuclear Information System (INIS)

    Grudzevich, O.D.; Zelenetskij, A.V.; Pashchenko, A.B.

    1986-01-01

    The latest version of the KOP program for calculating cross sections of neutron and charged-particle interaction with atomic nuclei within the scope of the optical model is described. The structure and organization of the program, the library of total optical-potential parameters, the program identifiers and peculiarities of its operation, the input of source data and the output of calculational results for printing are described in detail. The KOP program is written in Fortran and adapted for the EC-1033 computer

  14. Evaluation of the total exposure of soil samples in the Adaya site and the resulting risk assessment for workers using the RESRAD code program

    International Nuclear Information System (INIS)

    Mahadi, A. M.; Khadim, A. A. N.; Ibrahim, Z. H.; Ali, S. A.

    2012-12-01

    The present study aims to evaluate the total exposure of workers at the Adaya site and the associated risk assessment by using the RESRAD code program. The study included soil samples from five areas of the site, analyzed with a high-purity germanium (HPGe) system made by the CANBERRA company. The soil samples were simulated with the RESRAD code by entering the radioactive isotope concentrations and the specifications of the contaminated zone: its area, depth and cover depth. The total exposure for one sample was about 9 mSv/year, and the HEAST 2001 morbidity estimate was about 2.045 cases per 100 workers per year. There are small differences between the HEAST 2001 morbidity and FGR13 morbidity values, according to the dose conversion factor (DCF) used. The FGR13 morbidity estimate was about 2.041 cases per 100 workers per year. (Author)

  15. A program for calculating group constants on the basis of libraries of evaluated neutron data

    International Nuclear Information System (INIS)

    Sinitsa, V.V.

    1987-01-01

    The GRUKON program is designed for processing libraries of evaluated neutron data into group and fine-group (having some 300 groups) microscopic constants. In structure it is a package of applications programs with three basic components: a monitor, a command language and a library of functional modules. The first operative version of the package was restricted to obtaining mid-group non-block cross-sections from evaluated neutron data libraries in the ENDF/B format. This was then used to process other libraries. In the next two versions, cross-section table conversion modules and self-shielding factor calculation modules, respectively, were added to the functions already in the package. Currently, a fourth version of the GRUKON applications program package, for calculation of sub-group parameters, is under preparation. (author)
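
Collapsing pointwise evaluated data into group constants is, at its simplest, a flux-weighted average over each energy group; the sketch below shows that kernel only, under the assumption of a pointwise flux weighting on a shared energy grid, and is not GRUKON's module structure or its self-shielding treatment.

```python
def collapse_to_groups(energies, sigma, flux, group_edges):
    """Collapse pointwise cross sections to flux-weighted group constants:
        sigma_g = sum(sigma_i * flux_i) / sum(flux_i)
    over the energy points falling in group g.  `energies` must be
    ascending and `group_edges` are ascending group boundaries."""
    groups = []
    for lo, hi in zip(group_edges[:-1], group_edges[1:]):
        num = den = 0.0
        for e, s, phi in zip(energies, sigma, flux):
            if lo <= e < hi:
                num += s * phi
                den += phi
        groups.append(num / den if den > 0 else 0.0)
    return groups
```

With a flat flux, the group constant is simply the arithmetic mean of the cross-section points in each group, which makes the helper easy to spot-check.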

  16. Experience with a routine fecal sampling program for plutonium workers

    International Nuclear Information System (INIS)

    Bihl, D.E.; Buschbom, R.L.; Sula, M.J.

    1993-01-01

    A quarterly fecal sampling program was conducted at the U.S. Department of Energy's Hanford site for approximately 100 workers at risk for an intake of plutonium oxide and other forms of plutonium. To our surprise, we discovered that essentially all of the workers were excreting detectable activities of plutonium. Further investigation showed that the source was frequent, intermittent intakes at levels below detectability by normal workplace monitoring, indicating the extraordinary sensitivity of fecal sampling. However, the experience of this study also indicated that the increased sensitivity of routine fecal sampling relative to more common bioassay methods is offset by many problems. These include poor worker cooperation; difficulty in distinguishing low-level chronic intakes from a more significant, acute intake; difficulty in eliminating interference from ingested plutonium; and difficulty in interpreting what a single void means in terms of 24-h excretion. Recommendations for a routine fecal program include providing good communication to workers and management about the reasons and logistics of fecal sampling prior to starting, using annual (instead of quarterly) fecal sampling for class Y plutonium, collecting samples after workers have been away from plutonium exposure for at least 3 d, and giving serious consideration to improving urinalysis sensitivity rather than going to routine fecal sampling

  17. Quanty4RIXS: a program for crystal field multiplet calculations of RIXS and RIXS-MCD spectra using Quanty.

    Science.gov (United States)

    Zimmermann, Patric; Green, Robert J; Haverkort, Maurits W; de Groot, Frank M F

    2018-05-01

    Some initial instructions for the Quanty4RIXS program written in MATLAB® are provided. The program assists in the calculation of 1s2p RIXS and 1s2p RIXS-MCD spectra using Quanty. Furthermore, 1s XAS and 2p3d RIXS calculations in different symmetries can also be performed. It includes the Hartree-Fock values for the Slater integrals and spin-orbit interactions for several 3d transition-metal ions that are required to create the .lua scripts containing all necessary parameters and quantum mechanical definitions for the calculations. The program can be used free of charge and is designed to allow for further adjustments of the scripts.

  18. Experimental control of calculation model of scale factor during fracture of circular samples with cracks

    International Nuclear Information System (INIS)

    Gnyp, I.P.; Ganulich, B.K.; Pokhmurskij, V.I.

    1982-01-01

    Reliable methods for estimating the cracking resistance of low-strength plastic materials using notched samples acceptable for laboratory tests are analysed. Experimental data on the fracture of round notched samples of a number of steels are given. Good agreement between the calculated and experimental data confirms the legitimacy of the proposed scheme for estimating the scale-factor effect. The necessity of taking the strain-hardening coefficient into account when choosing a sample size for determining the stress intensity factor is pointed out

  19. A user's guide to LUGSAN II. A computer program to calculate and archive lug and sway brace loads for aircraft-carried stores

    Energy Technology Data Exchange (ETDEWEB)

    Dunn, W.N. [Sandia National Labs., Albuquerque, NM (United States). Mechanical and Thermal Environments Dept.

    1998-03-01

    LUG and Sway brace ANalysis (LUGSAN) II is an analysis and database computer program that is designed to calculate store lug and sway brace loads for aircraft captive carriage. LUGSAN II combines the rigid body dynamics code, SWAY85, with a Macintosh Hypercard database to function both as an analysis and archival system. This report describes the LUGSAN II application program, which operates on the Macintosh System (Hypercard 2.2 or later) and includes function descriptions, layout examples, and sample sessions. Although this report is primarily a user's manual, a brief overview of the LUGSAN II computer code is included with suggested resources for programmers.

  20. Calculating ensemble averaged descriptions of protein rigidity without sampling.

    Directory of Open Access Journals (Sweden)

    Luis C González

    Full Text Available Previous works have demonstrated that protein rigidity is related to thermodynamic stability, especially under conditions that favor formation of native structure. Mechanical network rigidity properties of a single conformation are efficiently calculated using the integer body-bar Pebble Game (PG) algorithm. However, thermodynamic properties require averaging over many samples from the ensemble of accessible conformations to accurately account for fluctuations in network topology. We have developed a mean field Virtual Pebble Game (VPG) that represents the ensemble of networks by a single effective network. That is, all possible number of distance constraints (or bars) that can form between a pair of rigid bodies is replaced by the average number. The resulting effective network is viewed as having weighted edges, where the weight of an edge quantifies its capacity to absorb degrees of freedom. The VPG is interpreted as a flow problem on this effective network, which eliminates the need to sample. Across a nonredundant dataset of 272 protein structures, we apply the VPG to proteins for the first time. Our results show numerically and visually that the rigidity characterizations of the VPG accurately reflect the ensemble averaged [Formula: see text] properties. This result positions the VPG as an efficient alternative to understand the mechanical role that chemical interactions play in maintaining protein stability.

  1. A new program for calculating matrix elements of one-particle operators in jj-coupling

    International Nuclear Information System (INIS)

    Pyper, N.C.; Grant, I.P.; Beatham, N.

    1978-01-01

    The aim of this paper is to calculate the matrix elements of one-particle tensor operators occurring in atomic and nuclear theory between configuration state functions representing states containing any number of open shells in jj-coupling. The program calculates the angular part of these matrix elements. The program is essentially a new version of RDMEJJ, written by J.J. Chang. The aims of this version are to eliminate inconsistencies from RDMEJJ, to modify its input requirements for consistency with MCP75, and to modify its output so that it can be stored in a disc file for access by other compatible programs. The program assumes that the configurational states are built from a common orthonormal set of basis orbitals. The number of electrons in a shell having j >= 9/2 is restricted to be not greater than 2 by the available CFP routines. The present version allows up to 40 orbitals and 50 configurational states with <= 10 open shells; these numbers can be changed by recompiling with modified COMMON/DIMENSION statements. The user should ensure that the CPC library subprograms AAGD, ACRI incorporate all current updates and have been converted to use double precision floating point arithmetic. (Auth.)

  2. TMI-2 accident evaluation program sample acquisition and examination plan. Executive summary

    International Nuclear Information System (INIS)

    Russell, M.L.; McCardell, R.K.; Broughton, J.M.

    1985-12-01

    The purpose of the TMI-2 Accident Evaluation Program Sample Acquisition and Examination (TMI-2 AEP SA and E) program is to develop and implement a test and inspection plan that completes the current-condition characterization of (a) the TMI-2 equipment that may have been damaged by the core damage events and (b) the TMI-2 core fission product inventory. The characterization program includes both sample acquisitions and examinations and in-situ measurements. Fission product characterization involves locating the fission products as well as determining their chemical form and determining material association

  3. PERL-2 and LAVR-2 programs for Monte Carlo calculation of reactivity disturbances with trajectory correlation using random numbers

    International Nuclear Information System (INIS)

    Kamaeva, O.B.; Polevoj, V.B.

    1983-01-01

    The realization on the BESM-6 computer of a technique for calculating a wide class of reactivity perturbations, by constructing trajectories in the undisturbed and disturbed systems from one sequence of random numbers, is described. The technique was implemented on the basis of the earlier programs for calculating distributed (PERL) and local (LAVR) reactivity perturbations. The efficiency of the technique and programs is demonstrated by calculating the change in the effective neutron multiplication factor when absorber is substituted for a fuel element in a BFS-40 critical assembly, and by calculating control drum characteristics
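
The variance-reduction principle behind using one random-number sequence for both systems (common random numbers, so the two estimates are correlated and their difference is sharp) can be shown on a toy estimator; this is not the PERL/LAVR reactor physics, just the correlation trick, and the path-length example is an illustrative assumption.

```python
import math
import random

def path_length_difference(sigma_a, sigma_b, n=50000, seed=7):
    """Monte Carlo estimate of the difference in mean free path between an
    unperturbed medium (cross section sigma_a) and a perturbed one
    (sigma_b).  Both path lengths are sampled from ONE shared sequence of
    random numbers, so the two estimates are strongly correlated and their
    difference has far lower variance than two independent runs would give."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        xi = rng.random()                   # one random number drives both
        total += -math.log(xi) / sigma_a - (-math.log(xi) / sigma_b)
    return total / n
```

With shared random numbers the per-sample difference factors as (1/sigma_a - 1/sigma_b) * (-ln xi), so even a small perturbation is resolved cleanly; two independent sequences would bury the same difference in statistical noise.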

  4. A program for calculating load coefficient matrices utilizing the force summation method, L218 (LOADS). Volume 1: Engineering and usage

    Science.gov (United States)

    Miller, R. D.; Anderson, L. R.

    1979-01-01

    The LOADS program L218, a digital computer program that calculates dynamic load coefficient matrices utilizing the force summation method, is described. The load equations are derived for a flight vehicle in straight and level flight and excited by gusts and/or control motions. In addition, sensor equations are calculated for use with an active control system. The load coefficient matrices are calculated for the following types of loads: translational and rotational accelerations, velocities, and displacements; panel aerodynamic forces; net panel forces; shears and moments. Program usage and a brief description of the analysis used are presented. A description of the design and structure of the program to aid those who will maintain and/or modify the program in the future is included.

  5. Procedure for obtaining neutron diffusion coefficients from neutron transport Monte Carlo calculations (AWBA Development Program)

    International Nuclear Information System (INIS)

    Gast, R.C.

    1981-08-01

    A procedure for defining diffusion coefficients from Monte Carlo calculations that results in suitable ones for use in neutron diffusion theory calculations is not readily obtained. This study provides a survey of the methods used to define diffusion coefficients from deterministic calculations and provides a discussion as to why such traditional methods cannot be used in Monte Carlo. This study further provides the empirical procedure used for defining diffusion coefficients from the RCP01 Monte Carlo program

  6. RepoSTAR. A Code package for control and evaluation of statistical calculations with the program package RepoTREND; RepoSTAR. Ein Codepaket zur Steuerung und Auswertung statistischer Rechenlaeufe mit dem Programmpaket RepoTREND

    Energy Technology Data Exchange (ETDEWEB)

    Becker, Dirk-Alexander

    2016-05-15

    The program package RepoTREND for integrated long-term safety analysis of final repositories allows, besides deterministic studies of defined problems, also statistical or probabilistic analyses. Probabilistic uncertainty and sensitivity analyses are realized in the program package RepoTREND by a specific statistical framework called RepoSTAR. The report covers the following issues: the concept, sampling and data supply of single simulations, and the evaluation of statistical calculations with the program RepoSUN.

  7. Hauser-Feshbach cross-section calculations for elastic and inelastic scattering of alpha particles-program CORA

    International Nuclear Information System (INIS)

    Hartman, A.; Siemaszko, M.; Zipper, W.

    1975-01-01

    The program CORA was prepared on the basis of Hauser and Feshbach compound reaction formalism. It allows the differential cross-section distributions for the elastic and inelastic scattering of alpha particles (via compound nucleus state) to be calculated. The transmission coefficients are calculated on the basis of a four parameter optical model. The search procedure is also included. (author)

  8. Safety analysis report for packaging (onsite) transuranic performance demonstration program sample packaging

    International Nuclear Information System (INIS)

    Mccoy, J.C.

    1997-01-01

    The Transuranic Performance Demonstration Program (TPDP) sample packaging is used to transport highway route controlled quantities of weapons grade (WG) plutonium samples from the Plutonium Finishing Plant (PFP) to the Waste Receiving and Processing (WRAP) facility and back. The purpose of these shipments is to test the nondestructive assay equipment in the WRAP facility as part of the Nondestructive Waste Assay PDP. The PDP is part of the U. S. Department of Energy (DOE) National TRU Program managed by the U. S. Department of Energy, Carlsbad Area Office, Carlsbad, New Mexico. Details of this program are found in CAO-94-1045, Performance Demonstration Program Plan for Nondestructive Assay for the TRU Waste Characterization Program (CAO 1994); INEL-96/0129, Design of Benign Matrix Drums for the Non-Destructive Assay Performance Demonstration Program for the National TRU Program (INEL 1996a); and INEL-96/0245, Design of Phase 1 Radioactive Working Reference Materials for the Nondestructive Assay Performance Demonstration Program for the National TRU Program (INEL 1996b). Other program documentation is maintained by the national TRU program and each DOE site participating in the program. This safety analysis report for packaging (SARP) provides the analyses and evaluations necessary to demonstrate that the TRU PDP sample packaging meets the onsite transportation safety requirements of WHC-CM-2-14, Hazardous Material Packaging and Shipping, for an onsite Transportation Hazard Indicator (THI) 2 packaging. This SARP, however, does not include evaluation of any operations within the PFP or WRAP facilities, including handling, maintenance, storage, or operating requirements, except as they apply directly to transportation between the gate of PFP and the gate of the WRAP facility. All other activities are subject to the requirements of the facility safety analysis reports (FSAR) of the PFP or WRAP facility and requirements of the PDP

  9. A computer program for calculation of reliable pair distribution functions of non-crystalline materials from limited diffraction data. III

    International Nuclear Information System (INIS)

    Hansen, F.Y.

    1978-01-01

    This program calculates the final pair distribution functions of non-crystalline materials on the basis of the experimental structure factor as calculated in part I and the parameters of the small-distance part of the pair distribution function as calculated in part II. In this way, truncation error may be eliminated from the final pair distribution function. The calculations with this program depend on the results of calculations with the programs described in parts I and II. The final pair distribution function is calculated by a Fourier transform of a combination of an experimental structure factor and a model structure factor. The storage requirement depends on the number of data points in the structure factor, the number of data points in the final pair distribution function and the number of peaks necessary to resolve the small-distance part of the pair distribution function. In the present set-up the storage requirement is set to 8860 words, which is estimated to be satisfactory for a large number of cases. (Auth.)
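The Fourier-transform step described above can be sketched in a few lines. The following is a minimal stdlib-only illustration (the function name and toy grid are assumptions, not part of the published program), computing the reduced pair distribution function G(r) = (2/pi) * integral of Q[S(Q)-1] sin(Qr) dQ by the trapezoidal rule:

```python
import math

def reduced_pdf(q, s_of_q, r_values):
    """G(r) = (2/pi) * integral_0^Qmax Q*(S(Q)-1)*sin(Q*r) dQ, evaluated
    with the trapezoidal rule on a tabulated structure factor."""
    result = []
    for r in r_values:
        f = [qi * (si - 1.0) * math.sin(qi * r) for qi, si in zip(q, s_of_q)]
        integral = sum(0.5 * (f[i] + f[i + 1]) * (q[i + 1] - q[i])
                       for i in range(len(q) - 1))
        result.append(2.0 / math.pi * integral)
    return result

# Structureless toy input: S(Q) = 1 everywhere must transform to G(r) = 0.
q_grid = [0.01 * i for i in range(1, 1001)]
g_of_r = reduced_pdf(q_grid, [1.0] * 1000, [1.0, 2.0, 3.0])
```

In the program described above, the model structure factor supplies the large-Q behaviour that the experiment truncates; the sketch shows only the transform itself.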

  10. Dynamical calculations for RHEED intensity oscillations

    Science.gov (United States)

    Daniluk, Andrzej

    2005-03-01

    A practical computing algorithm working in real time has been developed for calculating the reflection high-energy electron diffraction from a surface growing by molecular beam epitaxy. The calculations are based on a dynamical diffraction theory in which the electrons are taken to be diffracted by a potential that is periodic in the dimension perpendicular to the surface. The results of the calculations are presented in the form of rocking curves to illustrate how the diffracted beam intensities depend on the glancing angle of the incident beam.
    Program summary
    Title of program: RHEED
    Catalogue identifier: ADUY
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADUY
    Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
    Computer for which the program is designed and others on which it has been tested: Pentium-based PC
    Operating systems or monitors under which the program has been tested: Windows 9x, XP, NT, Linux
    Programming language used: Borland C++
    Memory required to execute with typical data: more than 1 MB
    Number of bits in a word: 64
    Number of processors used: 1
    Distribution format: tar.gz
    Number of lines in distributed program, including test data, etc.: 982
    Number of bytes in distributed program, including test data, etc.: 126 051
    Nature of physical problem: Reflection high-energy electron diffraction (RHEED) is a very useful technique for studying growth and surface analysis of thin epitaxial structures prepared by molecular beam epitaxy (MBE). Nowadays, RHEED is used in many laboratories all over the world where researchers deal with the growth of materials by MBE. The RHEED technique can reveal, almost instantaneously, changes either in the coverage of the sample surface by adsorbates or in the surface structure of a thin film. In most cases the interpretation of experimental results is based on the use of dynamical diffraction approaches. Such approaches are said to be quite useful in qualitative and

  11. Youth exposure to violence prevention programs in a national sample.

    Science.gov (United States)

    Finkelhor, David; Vanderminden, Jennifer; Turner, Heather; Shattuck, Anne; Hamby, Sherry

    2014-04-01

    This paper assesses how many children and youth have had exposure to programs aimed at preventing various kinds of violence perpetration and victimization. Based on a national sample of children 5-17, 65% had ever been exposed to a violence prevention program, 55% in the past year. Most respondents (71%) rated the programs as very or somewhat helpful. Younger children (5-9) who had been exposed to higher quality prevention programs had lower levels of peer victimization and perpetration. But the association did not apply to older youth or youth exposed to lower quality programs. Disclosure to authorities was also more common for children with higher quality program exposure who had experienced peer victimizations or conventional crime victimizations. The findings are consistent with possible benefits from violence prevention education programs. However, they also suggest that too few programs currently include efficacious components. Copyright © 2014 Elsevier Ltd. All rights reserved.

  12. EGS-Ray, a program for the visualization of Monte-Carlo calculations in the radiation physics

    International Nuclear Information System (INIS)

    Kleinschmidt, C.

    2001-01-01

    A Windows program is introduced which allows a relatively easy and interactive access to Monte Carlo techniques in clinical radiation physics. Furthermore, this serves as a visualization tool of the methodology and the results of Monte Carlo simulations. The program requires only little effort to formulate and calculate a Monte Carlo problem. The Monte Carlo module of the program is based on the well-known EGS4/PRESTA code. The didactic features of the program are presented using several examples common to the routine of the clinical radiation physicist. (orig.) [de

  13. Statistical Sampling Handbook for Student Aid Programs: A Reference for Non-Statisticians. Winter 1984.

    Science.gov (United States)

    Office of Student Financial Assistance (ED), Washington, DC.

    A manual on sampling is presented to assist audit and program reviewers, project officers, managers, and program specialists of the U.S. Office of Student Financial Assistance (OSFA). For each of the following types of samples, definitions and examples are provided, along with information on advantages and disadvantages: simple random sampling,…
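The first design in the manual's list, simple random sampling, can be illustrated with Python's standard library (the population and file names here are hypothetical, not taken from the OSFA manual):

```python
import random

def simple_random_sample(population, n, seed=None):
    """Simple random sampling without replacement: every subset of
    size n has the same probability of selection."""
    return random.Random(seed).sample(population, n)

# Hypothetical audit universe of 500 student aid files; draw 30 for review.
files = [f"file-{i:03d}" for i in range(500)]
review_sample = simple_random_sample(files, 30, seed=42)
```

Fixing the seed makes the draw reproducible for documentation in an audit workpaper; omitting it gives a fresh random draw each run.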

  14. Hanford Environmental Monitoring Program schedule for samples, analyses, and measurements for calendar year 1985

    International Nuclear Information System (INIS)

    Blumer, P.J.; Price, K.R.; Eddy, P.A.; Carlile, J.M.V.

    1984-12-01

    This report provides the CY 1985 schedule of data collection for the routine Hanford Surface Environmental Monitoring and Ground-Water Monitoring Programs at the Hanford Site. The purpose is to evaluate and report the levels of radioactive and nonradioactive pollutants in the Hanford environs, as required in DOE Order 5484.1. The routine sampling schedule provided herein does not include samples scheduled to be collected during FY 1985 in support of special studies, special contractor support programs, or for quality control purposes. In addition, the routine program outlined in this schedule is subject to modification during the year in response to changes in site operations, program requirements, or unusual sample results

  15. Implementation of a Thermodynamic Solver within a Computer Program for Calculating Fission-Product Release Fractions

    Science.gov (United States)

    Barber, Duncan Henry

    During some postulated accidents at nuclear power stations, fuel cooling may be impaired. In such cases, the fuel heats up and the subsequent increased fission-gas release from the fuel to the gap may result in fuel sheath failure. After fuel sheath failure, the barrier between the coolant and the fuel pellets is lost or impaired: gases and vapours from the fuel-to-sheath gap and other open voids in the fuel pellets can be vented, while gases and steam from the coolant can enter the broken fuel sheath and interact with the fuel pellet surfaces and the fission-product inclusions on the fuel surface (including material at the surface of the fuel matrix). The chemistry of this interaction is an important mechanism to model in order to assess fission-product releases from fuel. Starting in 1995, the computer program SOURCE 2.0 was developed by the Canadian nuclear industry to model fission-product release from fuel during such accidents. SOURCE 2.0 employed an early thermochemical model of irradiated uranium dioxide fuel developed at the Royal Military College of Canada (RMC). To overcome the limitations of computers of that time, the implementation of the RMC model employed lookup tables of pre-calculated equilibrium conditions. In the intervening years, the RMC model has been improved, the power of computers has increased significantly, and thermodynamic subroutine libraries have become available. This thesis is the result of extensive work based on these three factors. A prototype computer program (referred to as SC11) has been developed that uses a thermodynamic subroutine library to calculate thermodynamic equilibria by Gibbs energy minimization. The Gibbs energy minimization requires the system temperature (T) and pressure (P), and the inventory of chemical elements (n) in the system. In order to calculate the inventory of chemical elements in the fuel, the list of nuclides and nuclear isomers modelled in SC11 had to be expanded from the list used by SOURCE 2.0. A

  16. SpekCalc: a program to calculate photon spectra from tungsten anode x-ray tubes

    International Nuclear Information System (INIS)

    Poludniowski, G; Evans, P M; Landry, G; DeBlois, F; Verhaegen, F

    2009-01-01

    A software program, SpekCalc, is presented for the calculation of x-ray spectra from tungsten anode x-ray tubes. SpekCalc was designed primarily for use in a medical physics context, for both research and education purposes, but may also be of interest to those working with x-ray tubes in industry. Noteworthy is the particularly wide range of tube potentials (40-300 kVp) and anode angles (recommended: 6-30 deg.) that can be modelled: the program is therefore potentially of use to those working in superficial/orthovoltage radiotherapy, as well as diagnostic radiology. The utility is free to download and is based on a deterministic model of x-ray spectrum generation (Poludniowski 2007 Med. Phys. 34 2175). Filtration can be applied for seven materials (air, water, Be, Al, Cu, Sn and W). In this note SpekCalc is described and illustrative examples are shown. Predictions are compared to those of a state-of-the-art Monte Carlo code (BEAMnrc) and, where possible, to an alternative, widely-used, spectrum calculation program (IPEM78). (note)
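The filtration step such a spectrum calculator applies is, at its core, Beer-Lambert attenuation applied per energy bin. A minimal sketch (the attenuation coefficients and spectrum below are illustrative placeholders, not SpekCalc's internal data):

```python
import math

def filter_spectrum(energies_keV, fluence, mu_per_cm, thickness_cm):
    """Beer-Lambert filtration: each energy bin is attenuated by
    exp(-mu(E) * t) for a filter of thickness t."""
    return [n * math.exp(-mu_per_cm[e] * thickness_cm)
            for e, n in zip(energies_keV, fluence)]

# Illustrative placeholder attenuation coefficients for aluminium (1/cm).
mu_al = {40: 1.53, 60: 0.75, 80: 0.54}
filtered = filter_spectrum([40, 60, 80], [1.0e6, 2.0e6, 1.5e6], mu_al, 0.25)
```

Because mu(E) falls with energy for these materials, filtration preferentially removes the soft end of the spectrum, which is why added filtration "hardens" an x-ray beam.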

  17. SERKON program for compiling a multigroup library to be used in BETTY calculation

    International Nuclear Information System (INIS)

    Nguyen Phuoc Lan.

    1982-11-01

    A SERKON-type program was written to compile data sets generated by FEDGROUP-3 into a multigroup library for BETTY calculation. A multigroup library was generated from the ENDF/B-IV data file and tested against the TRX-1 and TRX-2 lattices with good results. (author)

  18. SAMPLE RESULTS FROM THE INTEGRATED SALT DISPOSITION PROGRAM MACROBATCH 5 TANK 21H QUALIFICATION MST, ESS AND PODD SAMPLES

    Energy Technology Data Exchange (ETDEWEB)

    Peters, T.; Fink, S.

    2012-04-24

    Savannah River National Laboratory (SRNL) performed experiments on qualification material for use in the Integrated Salt Disposition Program (ISDP) Batch 5 processing. This qualification material was a composite created from recent samples from Tank 21H and archived samples from Tank 49H to match the projected blend from these two tanks. Additionally, samples of the composite were used in the Actinide Removal Process (ARP) and extraction-scrub-strip (ESS) tests. ARP and ESS test results met expectations. A sample from Tank 21H was also analyzed for the Performance Objectives Demonstration Document (PODD) requirements. SRNL was able to meet all of the requirements, including the desired detection limits for all the PODD analytes. This report details the results of the Actinide Removal Process (ARP), Extraction-Scrub-Strip (ESS) and Performance Objectives Demonstration Document (PODD) samples of Macrobatch (Salt Batch) 5 of the Integrated Salt Disposition Program (ISDP).

  19. A Proposal of Estimation Methodology to Improve Calculation Efficiency of Sampling-based Method in Nuclear Data Sensitivity and Uncertainty Analysis

    International Nuclear Information System (INIS)

    Song, Myung Sub; Kim, Song Hyun; Kim, Jong Kyung; Noh, Jae Man

    2014-01-01

    The uncertainty with the sampling-based method is evaluated by repeating transport calculations with a number of cross-section data sets sampled from the covariance uncertainty data. In the transport calculation with the sampling-based method, the transport equation is not modified; therefore, all uncertainties of the responses, such as k_eff, reaction rates, flux and power distribution, can be directly obtained all at one time without code modification. However, a major drawback of the sampling-based method is that it requires an expensive computational load to obtain statistically reliable results (within a 0.95 confidence level) in the uncertainty analysis. The purpose of this study is to develop a method for improving the computational efficiency and obtaining highly reliable uncertainty results when using the sampling-based method with Monte Carlo simulation. The proposed method reduces the convergence time of the response uncertainty by using multiple sets of sampled group cross sections in a single Monte Carlo simulation. The proposed method was verified by estimating the GODIVA benchmark problem, and the results were compared with those of the conventional sampling-based method. In this study, a sampling-based method based on the central limit theorem is proposed to improve calculation efficiency by reducing the number of repetitive Monte Carlo transport calculations required to obtain reliable uncertainty analysis results. Each set of sampled group cross sections is assigned to an active cycle group in a single Monte Carlo simulation. The criticality uncertainty for the GODIVA problem is evaluated by the proposed and previous methods. The results show that the proposed sampling-based method can efficiently decrease the number of Monte Carlo simulations required to evaluate the uncertainty of k_eff. It is expected that the proposed method will improve the computational efficiency of uncertainty analysis with the sampling-based method
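The conventional sampling-based procedure described above can be illustrated with a toy model: draw cross sections from their assumed (here, normal) uncertainties, run a stand-in "transport calculation" per draw, and take the spread of the responses as the uncertainty. All numbers below are invented for illustration and do not come from any evaluated nuclear data:

```python
import random
import statistics

def sampled_response_uncertainty(n_samples, seed=0):
    """Conventional sampling-based uncertainty propagation: draw cross
    sections from their (toy, normal) uncertainties, run a stand-in
    'transport calculation' per draw, and report the spread of the
    responses as the uncertainty."""
    rng = random.Random(seed)
    responses = []
    for _ in range(n_samples):
        sigma_f = rng.gauss(1.0, 0.02)   # sampled fission cross section (toy)
        sigma_a = rng.gauss(1.2, 0.01)   # sampled absorption cross section (toy)
        k_eff = 1.2 * sigma_f / sigma_a  # stand-in for a full transport run
        responses.append(k_eff)
    return statistics.mean(responses), statistics.stdev(responses)

mean_k, unc_k = sampled_response_uncertainty(500)
```

The expensive part in practice is that each "response" is itself a full Monte Carlo transport run; the proposed method in the record attacks exactly that cost by folding the sampled cross-section sets into the active cycles of a single simulation.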

  20. Dose calculation for 40K ingestion in samples of beans using spectrometry and MCNP

    International Nuclear Information System (INIS)

    Garcez, R.W.D.; Lopes, J.M.; Silva, A.X.; Domingues, A.M.; Lima, M.A.F.

    2014-01-01

    A method based on gamma spectroscopy and on the use of voxel phantoms to calculate the dose due to ingestion of 40K contained in bean samples is presented in this work. To quantify the activity of the radionuclide, an HPGe detector was used and the data entered in the input file of the MCNP code. The highest value of equivalent dose was 7.83 μSv/y in the stomach for white beans, whose activity of 452.4 Bq/kg was the highest of the five samples analyzed. The tool proved to be appropriate for calculating the dose in organs due to ingestion of food. (author)
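As a simpler screening estimate than the MCNP voxel-phantom organ calculation described above, the committed effective dose from ingestion is often taken as activity multiplied by annual intake and a dose coefficient. A sketch (the bean consumption figure is assumed, and 6.2e-9 Sv/Bq is the commonly cited ICRP adult ingestion coefficient for 40K; neither is from the record):

```python
def annual_ingestion_dose_uSv(activity_Bq_per_kg, intake_kg_per_year,
                              dose_coeff_Sv_per_Bq):
    """Committed effective dose from ingestion:
    dose (uSv/y) = activity * annual intake * dose coefficient * 1e6."""
    return activity_Bq_per_kg * intake_kg_per_year * dose_coeff_Sv_per_Bq * 1e6

# White beans at the activity reported above; 10 kg/y consumption assumed.
dose = annual_ingestion_dose_uSv(452.4, 10.0, 6.2e-9)
```

Such a whole-body screening number is not directly comparable to the per-organ equivalent dose the voxel-phantom calculation produces, which is the point of the more detailed method.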

  1. MCNP calculation for calibration curve of X-ray fluorescence analysis

    International Nuclear Information System (INIS)

    Tan Chunming; Wu Zhifang; Guo Xiaojing; Xing Guilai; Wang Zhentao

    2011-01-01

    Due to the compositional variation of the sample, a linear relationship between element concentration and fluorescent intensity is not well maintained in most X-ray fluorescence analyses. To overcome this, we use the MCNP program to simulate the fluorescent intensity of Fe (0-100% concentration range) in binary mixtures with Cr and O, which represent typical strong-absorption and weak-absorption conditions, respectively. The theoretical calculation shows that the relationship can be described as a curve determined by a parameter p, whose value can be obtained from the absorption coefficients of the substrate elements and the element under detection. The MCNP simulation results are consistent with the theoretical calculation. Our research reveals that the MCNP program can calculate the calibration curve of X-ray fluorescence very well. (authors)

  2. Calculation of upper confidence bounds on not-sampled vegetation types using a systematic grid sample: An application to map unit definition for existing vegetation maps

    Science.gov (United States)

    Paul L. Patterson; Mark Finco

    2009-01-01

    This paper explores the information FIA data can produce regarding forest types that were not sampled and develops the equations necessary to define the upper confidence bounds on not-sampled forest types. The problem is reduced to a Bernoulli variable. This simplification allows the upper confidence bounds to be calculated based on Cochran (1977). Examples are...
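For a vegetation type observed zero times in n sample plots, the exact one-sided upper confidence bound on its proportion follows from solving (1-p)^n = alpha, in the spirit of the Cochran-based derivation the paper cites (the function and variable names are assumptions, not from the paper):

```python
def upper_bound_unobserved(n_plots, alpha=0.05):
    """Exact one-sided upper confidence bound for a proportion when the
    class was observed zero times in n Bernoulli trials:
    solve (1 - p)^n = alpha for p."""
    return 1.0 - alpha ** (1.0 / n_plots)

p_upper = upper_bound_unobserved(300)   # e.g. 300 plots, no occurrences
```

For alpha = 0.05 and large n this is close to the familiar "rule of three" value 3/n, which gives a quick sanity check on the exact bound.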

  3. ALBEMO, a program for the calculation of the radiation transport in void volumes with reflecting walls

    International Nuclear Information System (INIS)

    Mueller, K.; Vossebrecker, H.

    The Monte Carlo program ALBEMO calculates the distribution of neutrons and gamma rays in void volumes bounded by reflecting walls in x, y, z coordinates. The program is based on the albedo method. The effect of significant simplifying assumptions is investigated. Comparisons with experiments show satisfactory agreement

  4. The Influence of Using TI-84 Calculators with Programs on Algebra I High Stakes Examinations

    Science.gov (United States)

    Spencer, Misty

    2013-01-01

    The purpose of this study was to determine if there was a significant difference in scores on the Mississippi Algebra I SATP2 when one group was allowed to use programs and the other group was not allowed to use programs on TI-84 calculators. An additional purpose of the study was also to determine if there was a significant difference in the…

  5. Computer program for calculation of complex chemical equilibrium compositions and applications. Supplement 1: Transport properties

    Science.gov (United States)

    Gordon, S.; Mcbride, B.; Zeleznik, F. J.

    1984-01-01

    An addition to the computer program of NASA SP-273 is given that permits transport property calculations for the gaseous phase. Approximate mixture formulas are used to obtain viscosity and frozen thermal conductivity. Reaction thermal conductivity is obtained by the same method as in NASA TN D-7056. Transport properties for 154 gaseous species were selected for use with the program.
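The "approximate mixture formulas" for viscosity mentioned above are commonly of the Wilke type. A sketch of that mixing rule (the gas data below are illustrative values, not the program's 154-species tables):

```python
import math

def wilke_viscosity(x, mu, M):
    """Wilke's approximate mixing rule for gas-mixture viscosity:
    mu_mix = sum_i x_i*mu_i / sum_j x_j*phi_ij, where phi_ij depends on
    the component viscosities mu and molar masses M."""
    mix = 0.0
    for i in range(len(x)):
        denom = 0.0
        for j in range(len(x)):
            phi = ((1.0 + math.sqrt(mu[i] / mu[j]) * (M[j] / M[i]) ** 0.25) ** 2
                   / math.sqrt(8.0 * (1.0 + M[i] / M[j])))
            denom += x[j] * phi
        mix += x[i] * mu[i] / denom
    return mix

# Illustrative N2/O2 (air-like) mixture near room temperature; viscosities in Pa*s.
mu_mix = wilke_viscosity([0.79, 0.21], [17.9e-6, 20.7e-6], [28.0, 32.0])
```

For a pure component the rule reduces exactly to that component's viscosity (phi_ii = 1), which is a useful consistency check.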

  6. A program for monitor unit calculation for high energy photon beams in isocentric condition based on measured data

    International Nuclear Information System (INIS)

    Gesheva-Atanasova, N.

    2008-01-01

    The aims of this study are: 1) to propose a procedure and a program for monitor unit calculation for radiation therapy with high energy photon beams, based on data measured by the author; 2) to compare these data with published data; and 3) to evaluate the precision of the monitor unit calculation program. From this study it can be concluded that we reproduced the published data with good agreement, except for the TPR values at depths up to 5 cm. The measured relative weight of the upper and lower jaws (parameter A) was dramatically different from the published data, but perfectly described the collimator exchange effect for our treatment machine. No difference was found between the head scatter ratios measured in a mini phantom and those measured with a proper brass buildup cap. Our monitor unit calculation program was found to be reliable, and it can be applied to check patients' plans for irradiation with high energy photon beams and for some fast calculations. Because of the identity in construction, design and characteristics of Siemens accelerators, and the agreement with the published data for the same beam qualities, we hope that most of our experimental data and this program can be used, after verification, in other hospitals
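A typical isocentric monitor unit formula of the kind such a program implements divides the prescribed dose by the product of the calibration dose rate and the relevant scatter and depth-dose factors. A hedged sketch (the beam data are hypothetical placeholders, not the author's measured values):

```python
def monitor_units(dose_cGy, cal_dose_rate_cGy_per_MU, Sc, Sp, TPR):
    """Isocentric monitor unit calculation:
    MU = D / (calibration dose rate * Sc * Sp * TPR), where
    Sc is the collimator (head) scatter factor, Sp the phantom scatter
    factor, and TPR the tissue-phantom ratio at the treatment depth
    and field size."""
    return dose_cGy / (cal_dose_rate_cGy_per_MU * Sc * Sp * TPR)

# Hypothetical beam data for a 10x10 cm field at 10 cm depth.
mu = monitor_units(dose_cGy=200.0, cal_dose_rate_cGy_per_MU=1.0,
                   Sc=0.997, Sp=1.0, TPR=0.779)
```

The collimator exchange effect discussed in the record enters through Sc, which in general depends on both jaw settings rather than on the equivalent square alone.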

  7. User's manual of BISHOP. A Bi-Phase, Sodium-Hydrogen-Oxygen system, chemical equilibrium calculation program

    International Nuclear Information System (INIS)

    Okano, Yasushi; Yamaguchi, Akira

    2001-07-01

    In an event of sodium leakage in a liquid metal fast breeder reactor, liquid sodium flows out of the piping, and droplet combustion might occur under certain environmental conditions. The combustion heat and reaction products should be evaluated in the sodium fire analysis codes for investigating the influence of a sodium leakage and fire incident. In order to analyze the reaction heat and products, a multi-phase chemical equilibrium calculation program for the sodium, oxygen and hydrogen system has been developed. The developed numerical program is named BISHOP, which denotes 'Bi-Phase, Sodium-Hydrogen-Oxygen, Chemical Equilibrium Calculation Program'. The Gibbs free energy minimization method is used because of the following advantages: chemical species are easily added and changed, and a variety of thermodynamic states, such as isothermal and isentropic changes, can be dealt with in addition to constant temperature and pressure processes. In applying the free energy minimization method to the multi-phase sodium reaction system, three new numerical calculation techniques were developed: a theoretical simplification of the phase description in the equation system, an extension of the Gibbs free energy minimization method to a multi-phase system, and an efficient search for the minimum value. The reaction heat and products at the equilibrium state can be evaluated from the initial conditions, such as temperature, pressure and reactants, using BISHOP. This report describes the thermochemical basis of chemical equilibrium calculations, the system of equations, simplification models, and the procedure to prepare input data and the usage of BISHOP. (author)
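The Gibbs energy minimization idea behind such a program can be shown on a toy single-reaction system: for an ideal A &lt;-&gt; B mixture at constant T and P, the equilibrium extent xi minimizes G(xi) = xi*dG0 + RT*[xi ln xi + (1-xi) ln(1-xi)]. A stdlib-only grid-search sketch (BISHOP's actual multi-phase algorithm is far more elaborate; this only illustrates the minimization principle):

```python
import math

R = 8.314  # gas constant, J/(mol K)

def equilibrium_extent(dG0, T, steps=100_000):
    """Find the reaction extent xi minimizing the molar Gibbs energy of an
    ideal A <-> B mixture, G(xi) = xi*dG0 + R*T*(xi*ln xi + (1-xi)*ln(1-xi)),
    by brute-force grid search (a stdlib-only stand-in for a real minimizer)."""
    best_xi, best_g = 0.5, float("inf")
    for k in range(1, steps):
        xi = k / steps
        g = xi * dG0 + R * T * (xi * math.log(xi) + (1 - xi) * math.log(1 - xi))
        if g < best_g:
            best_xi, best_g = xi, g
    return best_xi

xi = equilibrium_extent(dG0=-5000.0, T=800.0)
```

Setting dG/dxi = 0 gives xi/(1-xi) = exp(-dG0/RT), i.e. the equilibrium constant, so the minimizer can be checked against the analytic answer.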

  8. Use of CITATION code for flux calculation in neutron activation analysis with voluminous sample using an Am-Be source

    International Nuclear Information System (INIS)

    Khelifi, R.; Idiri, Z.; Bode, P.

    2002-01-01

    The CITATION code, based on neutron diffusion theory, was used for flux calculations inside voluminous samples in prompt gamma activation analysis with an isotopic neutron source (Am-Be). The code uses specific parameters related to the source energy spectrum and the irradiation system materials (shielding, reflector). The flux distribution (thermal and fast) was calculated in three-dimensional geometry for the system: air, polyethylene and a cuboidal water sample (50x50x50 cm). The thermal flux was calculated at a series of points inside the sample. The results agreed reasonably well with observed values. The maximum thermal flux was observed at a distance of 3.2 cm, while CITATION gave 3.7 cm. Beyond a depth of 7.2 cm, the thermal-to-fast flux ratio increases by up to a factor of two, which allows the position of the detection system to be optimised for in-situ PGAA

  9. Environmental sampling program for a solar evaporation pond for liquid radioactive wastes

    International Nuclear Information System (INIS)

    Romero, R.; Gunderson, T.C.; Talley, A.D.

    1980-04-01

    Los Alamos Scientific Laboratory (LASL) is evaluating solar evaporation as a method for disposal of liquid radioactive wastes. This report describes a sampling program designed to monitor possible escape of radioactivity to the environment from a solar evaporation pond prototype constructed at LASL. Background radioactivity levels at the pond site were determined from soil and vegetation analyses before construction. When the pond is operative, the sampling program will qualitatively and quantitatively detect the transport of radioactivity to the soil, air, and vegetation in the vicinity. Possible correlation of meteorological data with sampling results is being investigated and measures to control export of radioactivity by biological vectors are being assessed

  10. Sample Size Calculation: Inaccurate A Priori Assumptions for Nuisance Parameters Can Greatly Affect the Power of a Randomized Controlled Trial.

    Directory of Open Access Journals (Sweden)

    Elsa Tavernier

    Full Text Available We aimed to examine the extent to which inaccurate assumptions for nuisance parameters used to calculate sample size can affect the power of a randomized controlled trial (RCT). In a simulation study, we separately considered an RCT with continuous, dichotomous or time-to-event outcomes, with associated nuisance parameters of standard deviation, success rate in the control group and survival rate in the control group at some time point, respectively. For each type of outcome, we calculated a required sample size N for a hypothesized treatment effect, an assumed nuisance parameter and a nominal power of 80%. We then assumed a nuisance parameter associated with a relative error at the design stage. For each type of outcome, we randomly drew 10,000 relative errors of the associated nuisance parameter (from empirical distributions derived from a previously published review). Then, retro-fitting the sample size formula, we derived, for the pre-calculated sample size N, the real power of the RCT, taking into account the relative error for the nuisance parameter. In total, 23%, 0% and 18% of RCTs with continuous, binary and time-to-event outcomes, respectively, were underpowered (i.e., the real power fell below the nominal 80%); some trials were conversely overpowered (real power above 90%). Even with proper calculation of sample size, a substantial number of trials are underpowered or overpowered because of imprecise knowledge of nuisance parameters. Such findings raise questions about how sample size for RCTs should be determined.
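The retro-fitting exercise for the continuous-outcome case can be reproduced in a few lines: compute n from the assumed standard deviation, then evaluate the power actually achieved if the true standard deviation differs. A sketch using only the standard library (the numbers are illustrative, not the study's simulation inputs):

```python
from math import ceil, sqrt
from statistics import NormalDist

_z = NormalDist().inv_cdf  # standard normal quantile function

def sample_size_continuous(delta, sigma, alpha=0.05, power=0.80):
    """Per-group n for a two-arm comparison of means:
    n = 2 * (z_{1-alpha/2} + z_{power})^2 * sigma^2 / delta^2."""
    return ceil(2 * (_z(1 - alpha / 2) + _z(power)) ** 2 * sigma ** 2 / delta ** 2)

def real_power(n, delta, true_sigma, alpha=0.05):
    """Power actually achieved with n per group when the true SD differs
    from the value assumed at the design stage."""
    shift = delta / (true_sigma * sqrt(2.0 / n))
    return NormalDist().cdf(shift - _z(1 - alpha / 2))

n = sample_size_continuous(delta=5.0, sigma=10.0)     # planned assuming SD = 10
achieved = real_power(n, delta=5.0, true_sigma=12.0)  # true SD turns out to be 12
```

Underestimating the standard deviation by 20% at the design stage drops the real power well below the nominal 80%, which is exactly the phenomenon the simulation study quantifies at scale.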

  11. Calculation of self-shielding factors for cross-sections in the unresolved resonance region using the GRUCON applied program package

    International Nuclear Information System (INIS)

    Sinitsa, V.V.

    1984-11-01

    The author gives a scheme for the calculation of the self-shielding factors in the unresolved resonance region using the GRUCON applied program package. This package is especially created to be used in the conversion of evaluated neutron cross-section data, as available in existing data libraries, into multigroup microscopic constants. A detailed description of the formulae and algorithms used in the programs is given. Some typical examples of calculation are considered and the results are compared with those of other authors. The calculation accuracy is better than 2%

  12. PROLIB: code to create production library of nuclear data for design calculations

    International Nuclear Information System (INIS)

    Wittkopf, W.A.; Tilford, J.M.; Furtney, M.

    1977-02-01

    The PROLIB program creates, updates, and edits the production library used in the B and W nuclear design system. The production library contains the material cross section data required to perform the thermal and epithermal spectrum calculations in the NULIF program. PROLIB collapses cross section data from the master libraries, produced by the ETOGM and THOR programs, to the desired production library group structures. The physics models that are used, the calculations that are performed in PROLIB, the input, and the output are described. Information that is required to use PROLIB along with a sample problem that illustrates the input and output formats and that provides a benchmark problem are given

  13. Test calculations of physical parameters of the TRX,BETTIS and MIT critical assemblies according to the TRIFON program

    International Nuclear Information System (INIS)

    Kochurov, B.P.

    1980-01-01

    Results of calculations of physical parameters characterizing the TRX, MIT and BETTIS critical assemblies, obtained with the program TRIFON, are presented. The TRIFON program calculates the space-energy neutron distribution in the multigroup approximation in a multizone cylindrical cell. Comparisons of the TRX, BETTIS and MIT critical assembly parameters with experimental data and with calculational results from the Monte Carlo method are presented as well. Deviations of the parameters are within 1.5-2 times the experimental errors. Data on the interference of uranium-238 levels in the resonant neutron absorption in the cell are given [ru

  14. MPBEC, a Matlab Program for Biomolecular Electrostatic Calculations.

    Science.gov (United States)

    Vergara-Perez, Sandra; Marucho, Marcelo

    2016-01-01

    One of the most used and efficient approaches to compute electrostatic properties of biological systems is to numerically solve the Poisson-Boltzmann (PB) equation. There are several software packages available that solve the PB equation for molecules in aqueous electrolyte solutions. Most of these software packages are useful for scientists with specialized training and expertise in computational biophysics. However, the user is usually required to manually make several important choices, depending on the complexity of the biological system, to successfully obtain the numerical solution of the PB equation. This may become an obstacle for researchers, experimentalists, and even students with no special training in computational methodologies. Aiming to overcome this limitation, in this article we present MPBEC, a free, cross-platform, open-source software that provides non-experts in the field an easy and efficient way to perform biomolecular electrostatic calculations on single-processor computers. MPBEC is a Matlab script based on the Adaptive Poisson-Boltzmann Solver, one of the most popular approaches used to solve the PB equation. MPBEC does not require any user programming, text editing or extensive statistical skills, and comes with detailed user-guide documentation. As a unique feature, MPBEC includes a useful graphical user interface (GUI) application which helps and guides users to configure and set up the optimal parameters and approximations to successfully perform the required biomolecular electrostatic calculations. The GUI also incorporates visualization tools to facilitate users' pre- and post-analysis of structural and electrical properties of biomolecules.

  16. Data acquisition and processing for flame spectrophotometry using a programmable desk calculator

    International Nuclear Information System (INIS)

    Hurteau, M.T.; Ashley, R.W.

    1976-02-01

    A programmable calculator has been used to provide automatic data acquisition and processing for flame spectrophotometric measurements. When coupled with an automatic wavelength selector, complete automation of sample analysis is provided for one or more elements in solution. The program takes into account deviation of analytical curves from linearity. Increased sensitivity and precision over manual calculations are obtained. (author)

  17. Temperature programmed retention indices : calculation from isothermal data Part 2: Results with nonpolar columns

    NARCIS (Netherlands)

    Curvers, J.M.P.M.; Rijks, J.A.; Cramers, C.A.M.G.; Knauss, K.; Larson, P.

    1985-01-01

    The procedure for calculating linear temperature-programmed indices as described in Part 1 has been evaluated using five different nonpolar columns, with OV-1 as the stationary phase, for forty-three different solutes covering five different classes of components, including n-alkanes and

  18. Sample size calculation to externally validate scoring systems based on logistic regression models.

    Directory of Open Access Journals (Sweden)

    Antonio Palazón-Bru

    Full Text Available A sample size containing at least 100 events and 100 non-events has been suggested for validating a predictive model, regardless of the model being validated, even though certain factors (discrimination, parameterization and incidence) can influence the calibration of the predictive model. Scoring systems based on binary logistic regression models are a specific type of predictive model. The aim of this study was to develop an algorithm to determine the sample size for validating a scoring system based on a binary logistic regression model and to apply it to a case study. The algorithm was based on bootstrap samples in which the area under the ROC curve, the observed event probabilities through smooth curves, and a measure of the lack of calibration (the estimated calibration index) were calculated. To illustrate its use for interested researchers, the algorithm was applied to a scoring system, based on a binary logistic regression model, for determining mortality in intensive care units. In the case study provided, the algorithm obtained a sample size with 69 events, which is lower than the value suggested in the literature. An algorithm is provided for finding the appropriate sample size to validate scoring systems based on binary logistic regression models. It could be applied to determine the sample size in other similar cases.
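The bootstrap machinery at the heart of such an algorithm can be sketched in a few lines. The following Python sketch (a hypothetical illustration, not the authors' code) computes the AUC via its Mann-Whitney formulation and a percentile bootstrap confidence interval for it, one ingredient of this kind of sample-size search:

```python
import random

def auc(probs, labels):
    # Mann-Whitney U formulation of the area under the ROC curve:
    # the fraction of (event, non-event) pairs ranked correctly
    pos = [p for p, y in zip(probs, labels) if y == 1]
    neg = [p for p, y in zip(probs, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def bootstrap_auc_ci(probs, labels, n_boot=500, seed=1):
    # Percentile bootstrap 95% CI for the AUC
    rng = random.Random(seed)
    stats = []
    for _ in range(n_boot):
        sample = [rng.randrange(len(probs)) for _ in range(len(probs))]
        bp = [probs[i] for i in sample]
        bl = [labels[i] for i in sample]
        if 0 < sum(bl) < len(bl):          # need both classes present
            stats.append(auc(bp, bl))
    stats.sort()
    return stats[int(0.025 * len(stats))], stats[int(0.975 * len(stats))]
```

A full sample-size algorithm of the kind described would repeat this over increasing event counts and add a calibration measure; the sketch shows only the resampling core.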

  19. Development and application of the PCRELAP5 - Data Calculation Program for RELAP 5 Code

    International Nuclear Information System (INIS)

    Silvestre, Larissa J.B.; Sabundjian, Gaianê

    2017-01-01

    Nuclear accidents in the world led to the establishment of rigorous criteria and requirements for nuclear power plant operations by the international regulatory bodies. By using specific computer programs, simulations of various accidents and transients likely to occur at any nuclear power plant are required for certifying and licensing a nuclear power plant. Some sophisticated computational tools have been used such as the Reactor Excursion and Leak Analysis Program (RELAP5), which is the most widely used code for the thermo-hydraulic analysis of accidents and transients in nuclear reactors in Brazil and worldwide. A major difficulty in the simulation by using RELAP5 code is the amount of information required for the simulation of thermal-hydraulic accidents or transients. Thus, for those calculations performance and preparation of RELAP5 input data, a friendly mathematical preprocessor was designed. The Visual Basic for Application (VBA) for Microsoft Excel demonstrated to be an effective tool to perform a number of tasks in the development of the program. In order to meet the needs of RELAP5 users, the RELAP5 Calculation Program (Programa de Cálculo do RELAP5 – PCRELAP5) was designed. The components of the code were codified; all entry cards including the optional cards of each one have been programmed. An English version for PCRELAP5 was provided. Furthermore, a friendly design was developed in order to minimize the time of preparation of input data and errors committed by users. The final version of this preprocessor was successfully applied for Safety Injection System (SIS) of Angra-2. (author)

  20. Development and application of the PCRELAP5 - Data Calculation Program for RELAP 5 Code

    Energy Technology Data Exchange (ETDEWEB)

    Silvestre, Larissa J.B.; Sabundjian, Gaianê, E-mail: larissajbs@usp.br, E-mail: gdjian@ipen.br [Instituto de Pesquisas Energéticas e Nucleares (IPEN/CNEN-SP), São Paulo, SP (Brazil)

    2017-07-01

    Nuclear accidents in the world led to the establishment of rigorous criteria and requirements for nuclear power plant operations by the international regulatory bodies. By using specific computer programs, simulations of various accidents and transients likely to occur at any nuclear power plant are required for certifying and licensing a nuclear power plant. Some sophisticated computational tools have been used such as the Reactor Excursion and Leak Analysis Program (RELAP5), which is the most widely used code for the thermo-hydraulic analysis of accidents and transients in nuclear reactors in Brazil and worldwide. A major difficulty in the simulation by using RELAP5 code is the amount of information required for the simulation of thermal-hydraulic accidents or transients. Thus, for those calculations performance and preparation of RELAP5 input data, a friendly mathematical preprocessor was designed. The Visual Basic for Application (VBA) for Microsoft Excel demonstrated to be an effective tool to perform a number of tasks in the development of the program. In order to meet the needs of RELAP5 users, the RELAP5 Calculation Program (Programa de Cálculo do RELAP5 – PCRELAP5) was designed. The components of the code were codified; all entry cards including the optional cards of each one have been programmed. An English version for PCRELAP5 was provided. Furthermore, a friendly design was developed in order to minimize the time of preparation of input data and errors committed by users. The final version of this preprocessor was successfully applied for Safety Injection System (SIS) of Angra-2. (author)

  1. GenLocDip: A Generalized Program to Calculate and Visualize Local Electric Dipole Moments.

    Science.gov (United States)

    Groß, Lynn; Herrmann, Carmen

    2016-09-30

    Local dipole moments (i.e., dipole moments of atomic or molecular subsystems) are essential for understanding various phenomena in nanoscience, such as solvent effects on the conductance of single molecules in break junctions or the interaction between the tip and the adsorbate in atomic force microscopy. We introduce GenLocDip, a program for calculating and visualizing local dipole moments of molecular subsystems. GenLocDip currently uses the Atoms-In-Molecules (AIM) partitioning scheme and is interfaced to various AIM programs. This enables postprocessing of a variety of electronic structure output formats including cube and wavefunction files, and, in general, output from any other code capable of writing the electron density on a three-dimensional grid. It uses a modified version of Bader's and Laidig's approach for achieving origin-independence of local dipoles by referring to internal reference points which can (but do not need to be) bond critical points (BCPs). Furthermore, the code allows the export of critical points and local dipole moments into a POVray readable input format. It is particularly designed for fragments of large systems, for which no BCPs have been calculated for computational efficiency reasons, because large interfragment distances prevent their identification, or because a local partitioning scheme different from AIM was used. The program requires only minimal user input and is written in the Fortran90 programming language. To demonstrate the capabilities of the program, examples are given for covalently and non-covalently bound systems, in particular molecular adsorbates. © 2016 Wiley Periodicals, Inc. © 2016 Wiley Periodicals, Inc.

  2. RAFT: a computer program for fault tree risk calculations

    International Nuclear Information System (INIS)

    Seybold, G.D.

    1977-11-01

    A description and user instructions are presented for RAFT, a FORTRAN computer code for calculation of a risk measure for fault tree cut sets. RAFT calculates release quantities and a risk measure based on the product of probability and release quantity for cut sets of fault trees modeling the accidental release of radioactive material from a nuclear fuel cycle facility. Cut sets and their probabilities are supplied as input to RAFT from an external fault tree analysis code. Using the total inventory available of radioactive material, along with release fractions for each event in a cut set, the release terms are calculated for each cut set. Each release term is multiplied by the cut set probability to yield the cut set risk measure. RAFT orders the dominant cut sets on the risk measure. The total risk measure of processed cut sets and their fractional contributions are supplied as output. Input options are available to eliminate redundant cut sets, apply threshold values on cut set probability and risk, and control the total number of cut sets output. Hash addressing is used to remove redundant cut sets from the analysis. Computer hardware and software restrictions are given along with a sample problem and cross-reference table of the code. Except for the use of file management utilities, RAFT is written exclusively in FORTRAN language and is operational on a Control Data, CYBER 74-18--series computer system. 4 figures
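The cut-set risk measure RAFT computes is easy to reproduce in miniature. A minimal Python sketch (an illustration of the calculation, not RAFT itself; the data layout and names are assumptions) that removes redundant cut sets, forms release terms from a total inventory and per-event release fractions, and ranks cut sets by the product of probability and release:

```python
def cut_set_risks(inventory, cut_sets):
    """cut_sets: list of (probability, {event: release_fraction}) pairs.
    Returns cut sets ranked by risk measure = probability * release."""
    seen = set()
    ranked = []
    for prob, events in cut_sets:
        key = frozenset(events)       # drop redundant cut sets, analogous
        if key in seen:               # to RAFT's hash addressing
            continue
        seen.add(key)
        release = inventory
        for frac in events.values():  # each event attenuates the inventory
            release *= frac
        ranked.append((prob * release, release, prob, sorted(events)))
    ranked.sort(reverse=True)         # dominant cut sets first
    return ranked
```

For example, with an inventory of 1000 units, a cut set of probability 0.01 and release fraction 0.5 yields a release term of 500 and a risk measure of 5.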

  3. First-trimester risk calculation for trisomy 13, 18, and 21: comparison of the screening efficiency between 2 locally developed programs and commercial software

    DEFF Research Database (Denmark)

    Sørensen, Steen; Momsen, Günther; Sundberg, Karin

    2011-01-01

    -A) in maternal plasma from unaffected pregnancies. Means and SDs of these parameters in unaffected and affected pregnancies are used in the risk calculation program. Unfortunately, our commercial program for risk calculation (Astraia) did not allow use of local medians. We developed 2 alternative risk calculation programs to assess whether the screening efficacies for T13, T18, and T21 could be improved by using our locally estimated medians.

  4. Helical tomotherapy shielding calculation for an existing LINAC treatment room: sample calculation and cautions

    International Nuclear Information System (INIS)

    Wu Chuan; Guo Fanqing; Purdy, James A

    2006-01-01

    This paper reports a step-by-step shielding calculation recipe for a helical tomotherapy unit (TomoTherapy Inc., Madison, WI, USA) recently installed in an existing Varian 600C treatment room. Both primary and secondary radiations (leakage and scatter) are explicitly considered, and a typical patient load is assumed. The use factor is calculated from an analytical formula derived from the tomotherapy rotational beam delivery geometry. Leakage and scatter are included in the calculation based on corresponding measurement data documented by TomoTherapy Inc. Our results show that, except for a small area near the therapists' console, most of the existing Varian 600C shielding is sufficient for the new tomotherapy unit. This work cautions other institutions facing a similar situation, in which an HT unit is considered for an existing LINAC treatment room, that more secondary shielding may be needed at some locations because of the significantly increased secondary shielding requirements of HT. (note)

  5. An adaptive Monte Carlo method under emission point as sampling station for deep penetration calculation

    International Nuclear Information System (INIS)

    Wang, Ruihong; Yang, Shulin; Pei, Lucheng

    2011-01-01

    The deep penetration problem has been one of the difficult problems in shielding calculations with the Monte Carlo method for several decades. In this paper, an adaptive technique that uses the emission point as a sampling station is presented. Its main advantage is choosing the most suitable number of samples from the emission-point station so as to minimize the total cost of the random walk. Further, a related importance sampling method is derived. The main principle is to define an importance function of the response with respect to the particle state and to make the number of emission particles sampled proportional to that importance function. The numerical results show that the adaptive method with the emission point as a station can, to some degree, overcome the tendency to underestimate the result, and the related importance sampling method gives satisfactory results as well. (author)
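The core importance-sampling idea, biasing the sampling density so that rare deep-penetration histories occur often and then reweighting each history by the density ratio, can be illustrated on a toy attenuation problem. A minimal sketch, assuming a single exponential free path with unit attenuation coefficient (a textbook analogue, not the paper's method):

```python
import math
import random

def penetration_probability(depth, n=20000, bias=0.2, seed=7):
    """Estimate P(path length > depth) for a unit-rate exponential free
    path (true value exp(-depth)) by importance sampling: draw from a
    stretched exponential with rate `bias` < 1 so deep-penetration
    histories are sampled far more often, and weight each scoring
    history by the density ratio f/g."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.expovariate(bias)        # biased sampling density g
        if x > depth:                    # history penetrates the shield
            weight = math.exp(-x) / (bias * math.exp(-bias * x))
            total += weight
    return total / n
```

For depth 10 the analogue answer is exp(-10), about 4.5e-5; unbiased sampling would need millions of histories to score at all, while the biased estimator converges with a few thousand.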

  6. Self-consistent RPA calculations with Skyrme-type interactions: The skyrme_rpa program

    Science.gov (United States)

    Colò, Gianluca; Cao, Ligang; Van Giai, Nguyen; Capelli, Luigi

    2013-01-01

    Random Phase Approximation (RPA) calculations are nowadays an indispensable tool in nuclear physics studies. We present here a complete version implemented with Skyrme-type interactions, under the spherical symmetry assumption, that can be used in cases where the effects of pairing correlations and of deformation can be ignored. The full self-consistency between the Hartree-Fock mean field and the RPA excitations is enforced, and it is numerically controlled by comparison with energy-weighted sum rules. The main limitations are that charge-exchange excitations and transitions involving spin operators are not included in this version. Program summary: Program title: skyrme_rpa (v 1.00). Catalogue identifier: AENF_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AENF_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 5531. No. of bytes in distributed program, including test data, etc.: 39435. Distribution format: tar.gz. Programming language: FORTRAN-90/95; easily downgradable to FORTRAN-77. Computer: PC with Intel Celeron, Intel Pentium, AMD Athlon and Intel Core Duo processors. Operating system: Linux, Windows. RAM: from 4 MBytes to 150 MBytes, depending on the size of the nucleus and of the model space for RPA. Word size: the code is written with a prevalent use of double precision or REAL(8) variables; this assures 15 significant digits. Classification: 17.24. Nature of problem: systematic observations of excitation properties in finite nuclear systems can lead to improved knowledge of the nuclear matter equation of state as well as a better understanding of the effective interaction in the medium. This is the case of the nuclear giant resonances and low-lying collective excitations, which can be described as small amplitude collective motions in the framework of

  7. Efigie: a computer program for calculating end-isotope accumulation by neutron irradiation and radioactive decay

    International Nuclear Information System (INIS)

    Ropero, M.

    1978-01-01

    Efigie is a program written in Fortran V which calculates the concentration of radionuclides produced by neutron irradiation of a target made of either a single isotope or several isotopes. The program includes optimization criteria that can be applied when the goal is the production of a single nuclide. The effect of a cooling time before chemical processing of the target is also accounted for. (author) [es]

  8. A computer program (COSTUM) to calculate confidence intervals for in situ stress measurements. V. 1

    International Nuclear Information System (INIS)

    Dzik, E.J.; Walker, J.R.; Martin, C.D.

    1989-03-01

    The state of in situ stress is one of the parameters required both for the design and analysis of underground excavations and for the evaluation of numerical models used to simulate underground conditions. To account for the variability and uncertainty of in situ stress measurements, it is desirable to apply confidence limits to measured stresses. Several measurements of the state of stress along a borehole are often made to estimate the average state of stress at a point. Since stress is a tensor, calculating the mean stress and confidence limits using scalar techniques is both inappropriate and incorrect. A computer program has been written to calculate and present the mean principal stresses and the confidence limits for the magnitudes and directions of the mean principal stresses. This report describes the computer program, COSTUM.
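The tensor-versus-scalar point can be made concrete. A short sketch (assuming NumPy; an illustration of the principle, not the COSTUM code) that averages stress measurements component-wise and then diagonalizes the mean tensor to recover mean principal stresses:

```python
import numpy as np

def mean_principal_stresses(tensors):
    """Average several stress measurements tensor-component-wise (the
    statistically meaningful way), then diagonalize the mean tensor to
    obtain mean principal stress magnitudes and directions. Averaging
    sorted principal values individually (the scalar shortcut) is wrong
    whenever the measured principal directions differ."""
    mean = np.mean([np.asarray(t, float) for t in tensors], axis=0)
    vals, vecs = np.linalg.eigh(mean)      # mean tensor is symmetric
    order = np.argsort(vals)[::-1]         # sigma1 >= sigma2 >= sigma3
    return vals[order], vecs[:, order]
```

For instance, averaging diag(3, 2, 1) and diag(1, 2, 3) tensor-wise gives an isotropic mean with principal stresses (2, 2, 2), whereas scalar averaging of the sorted principal values would wrongly report (3, 2, 1).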

  9. Computer Programs for Uncertainty Analysis of Solubility Calculations: Windows Version and Other Updates of the SENVAR and UNCCON. Program Description and Handling Instructions

    International Nuclear Information System (INIS)

    Ekberg, Christian; Oedegaard Jensen, Arvid

    2004-04-01

    Uncertainty and sensitivity analysis is becoming more and more important for testing the reliability of computer predictions. Solubility estimations play important roles for, e.g., underground repositories for nuclear waste and other hazardous materials, as well as for simple dissolution problems in general or industrial chemistry applications. The calculated solubility of a solid phase depends on several input data, e.g. the stability constants for the complexes formed in the solution, the enthalpies of reaction for the formation of these complexes, and the content of other elements in the water used for the dissolution. These input data are determined with more or less accuracy, and thus the results of the calculations are uncertain. For the purpose of investigating the effects of these uncertainties, several computer programs were developed in the 1990s, e.g. SENVAR, MINVAR and UNCCON. Of these, SENVAR and UNCCON now exist as Windows programs based on a newer speciation code. In this report we explain how the codes work and give some test cases as handling instructions. The results are naturally similar to the previous ones, but the advantages are easier handling and more stable solubility calculations. With these improvements the programs presented here will be more publicly accessible.

  10. Quantification of errors in ordinal outcome scales using shannon entropy: effect on sample size calculations.

    Directory of Open Access Journals (Sweden)

    Pitchaiah Mandava

    provide the user with programs to calculate and incorporate errors into sample size estimation.

  11. DITTY - a computer program for calculating population dose integrated over ten thousand years

    International Nuclear Information System (INIS)

    Napier, B.A.; Peloquin, R.A.; Strenge, D.L.

    1986-03-01

    The computer program DITTY (Dose Integrated Over Ten Thousand Years) was developed to determine the collective dose from long term nuclear waste disposal sites resulting from the ground-water pathways. DITTY estimates the time integral of collective dose over a ten-thousand-year period for time-variant radionuclide releases to surface waters, wells, or the atmosphere. This document includes the following information on DITTY: a description of the mathematical models, program designs, data file requirements, input preparation, output interpretations, sample problems, and program-generated diagnostic messages
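The time integral DITTY evaluates can be illustrated for a single decaying release. A deliberately simplified Python sketch (a hypothetical stand-in for DITTY's pathway models; the constant `dose_factor` replaces the full pathway dosimetry):

```python
import math

def integrated_collective_dose(release_rate, half_life_y,
                               dose_factor, years=10000, step=10.0):
    """Trapezoidal time integral of collective dose for a radionuclide
    released at `release_rate` (Bq/y), decaying with half-life
    `half_life_y` (y); `dose_factor` (person-Sv per Bq) is a crude
    stand-in for the ground-water pathway dose conversion."""
    lam = math.log(2.0) / half_life_y      # decay constant (1/y)
    t, total = 0.0, 0.0
    prev = release_rate * dose_factor      # dose rate at t = 0
    while t < years:
        t += step
        cur = release_rate * math.exp(-lam * t) * dose_factor
        total += 0.5 * (prev + cur) * step # trapezoid over one step
        prev = cur
    return total
```

With a constant source and pure decay the integral has the closed form R·D·(1 − e^(−λT))/λ, which the trapezoidal sum reproduces closely for small steps; DITTY's value lies in handling time-variant releases and multiple pathways where no closed form exists.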

  12. Calculation of absolute protein-ligand binding free energy using distributed replica sampling.

    Science.gov (United States)

    Rodinger, Tomas; Howell, P Lynne; Pomès, Régis

    2008-10-21

    Distributed replica sampling [T. Rodinger et al., J. Chem. Theory Comput. 2, 725 (2006)] is a simple and general scheme for Boltzmann sampling of conformational space by computer simulation in which multiple replicas of the system undergo a random walk in reaction coordinate or temperature space. Individual replicas are linked through a generalized Hamiltonian containing an extra potential energy term or bias which depends on the distribution of all replicas, thus enforcing the desired sampling distribution along the coordinate or parameter of interest regardless of free energy barriers. In contrast to replica exchange methods, efficient implementation of the algorithm does not require synchronicity of the individual simulations. The algorithm is inherently suited for large-scale simulations using shared or heterogeneous computing platforms such as a distributed network. In this work, we build on our original algorithm by introducing Boltzmann-weighted jumping, which allows moves of a larger magnitude and thus enhances sampling efficiency along the reaction coordinate. The approach is demonstrated using a realistic and biologically relevant application; we calculate the standard binding free energy of benzene to the L99A mutant of T4 lysozyme. Distributed replica sampling is used in conjunction with thermodynamic integration to compute the potential of mean force for extracting the ligand from protein and solvent along a nonphysical spatial coordinate. Dynamic treatment of the reaction coordinate leads to faster statistical convergence of the potential of mean force than a conventional static coordinate, which suffers from slow transitions on a rugged potential energy surface.

  13. HTC Experimental Program: Validation and Calculational Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Fernex, F.; Ivanova, T.; Bernard, F.; Letang, E. [Inst Radioprotect and Surete Nucl, F-92262 Fontenay Aux Roses (France); Fouillaud, P. [CEA Valduc, Serv Rech Neutron and Critcite, 21 - Is-sur-Tille (France); Thro, J. F. [AREVA NC, F-78000 Versailles (France)

    2009-05-15

    In the 1980's a series of the Haut Taux de Combustion (HTC) critical experiments with fuel pins in a water-moderated lattice was conducted at the Apparatus B experimental facility in Valduc (Commissariat a I'Energie Atomique, France) with the support of the Institut de Radioprotection et de Surete Nucleaire and AREVA NC. Four series of experiments were designed to assess profit associated with actinide-only burnup credit in the criticality safety evaluation for fuel handling, pool storage, and spent-fuel cask conditions. The HTC rods, specifically fabricated for the experiments, simulated typical pressurized water reactor uranium oxide spent fuel that had an initial enrichment of 4. 5 wt% {sup 235}U and was burned to 37.5 GWd/tonne U. The configurations have been modeled with the CRISTAL criticality package and SCALE 5.1 code system. Sensitivity/uncertainty analysis has been employed to evaluate the HTC experiments and to study their applicability for validation of burnup credit calculations. This paper presents the experimental program, the principal results of the experiment evaluation, and modeling. The HTC data applicability to burnup credit validation is demonstrated with an example of spent-fuel storage models. (authors)

  14. DETEF a Monte Carlo system for the calculation of gamma spectrometers efficiency

    International Nuclear Information System (INIS)

    Cornejo, N.; Mann, G.

    1996-01-01

    The Monte Carlo program DETEF calculates the efficiency of cylindrical NaI, CsI, Ge or Si detectors for photon energies up to 2 MeV and for several sample geometries. The sources can be point, plane, cylindrical or rectangular. The energy spectrum is displayed on the screen simultaneously with the statistical simulation. The calculated and experimentally estimated efficiencies agree well within the standard-deviation intervals.
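The geometric part of such an efficiency calculation is easy to demonstrate. A minimal sketch (an illustration of the sampling idea only, not DETEF itself) that estimates the solid-angle fraction subtended by a cylindrical detector face for an on-axis point source:

```python
import math
import random

def geometric_efficiency(radius, distance, n=200000, seed=3):
    """Monte Carlo estimate of the geometric efficiency (solid-angle
    fraction) of a circular detector face of given radius, seen by an
    on-axis point source at `distance`; a full code like DETEF would
    follow each photon into the crystal and score interactions."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        cos_t = rng.uniform(-1.0, 1.0)         # isotropic emission
        if cos_t <= 0.0:
            continue                           # emitted away from detector
        sin_t = math.sqrt(1.0 - cos_t * cos_t)
        if distance * sin_t / cos_t <= radius: # ray lands on the face
            hits += 1
    return hits / n
```

The estimate can be checked against the closed form 0.5·(1 − d/√(d² + R²)) for a disk viewed on-axis.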

  15. Transport calculation of neutron flux distribution in reflector of PW reactor

    International Nuclear Information System (INIS)

    Remec, I.

    1982-01-01

    A two-dimensional transport calculation of the neutron flux and spectrum in the equatorial plane of a PW reactor, using the computer program DOT 3, is presented. The results show significant differences between the neutron fields to which test samples and the reactor vessel are exposed. (author)

  16. A user-friendly robotic sample preparation program for fully automated biological sample pipetting and dilution to benefit the regulated bioanalysis.

    Science.gov (United States)

    Jiang, Hao; Ouyang, Zheng; Zeng, Jianing; Yuan, Long; Zheng, Naiyu; Jemal, Mohammed; Arnold, Mark E

    2012-06-01

    Biological sample dilution is a rate-limiting step in bioanalytical sample preparation when the concentrations of samples are beyond standard curve ranges, especially when multiple dilution factors are needed in an analytical run. We have developed and validated a Microsoft Excel-based robotic sample preparation program (RSPP) that automatically transforms Watson worklist sample information (identification, sequence and dilution factor) to comma-separated value (CSV) files. The Freedom EVO liquid handler software imports and transforms the CSV files to executable worklists (.gwl files), allowing the robot to perform sample dilutions at variable dilution factors. The dynamic dilution range is 1- to 1000-fold and divided into three dilution steps: 1- to 10-, 11- to 100-, and 101- to 1000-fold. The whole process, including pipetting samples, diluting samples, and adding internal standard(s), is accomplished within 1 h for two racks of samples (96 samples/rack). This platform also supports online sample extraction (liquid-liquid extraction, solid-phase extraction, protein precipitation, etc.) using 96 multichannel arms. This fully automated and validated sample dilution and preparation process has been applied to several drug development programs. The results demonstrate that application of the RSPP for fully automated sample processing is efficient and rugged. The RSPP not only saved more than 50% of the time in sample pipetting and dilution but also reduced human errors. The generated bioanalytical data are accurate and precise; therefore, this application can be used in regulated bioanalysis.
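The three-range dilution logic lends itself to a small worked example. A Python sketch (the split rule is a hypothetical illustration, not the validated worklist logic described above) that decomposes an overall dilution factor into serial steps of at most 10-fold:

```python
def dilution_plan(factor):
    """Split an overall dilution factor (1-1000) into serial steps of at
    most 10-fold each, mirroring the three ranges the RSPP handles
    (1- to 10-, 11- to 100-, and 101- to 1000-fold)."""
    if not 1 <= factor <= 1000:
        raise ValueError("dilution factor outside validated 1-1000 range")
    steps = []
    remaining = float(factor)
    while remaining > 10.0:
        steps.append(10.0)               # full 10-fold serial dilution
        remaining /= 10.0
    if remaining > 1.0:
        steps.append(round(remaining, 3))  # final partial dilution
    return steps
```

For example, a 250-fold overall dilution decomposes into two 10-fold steps followed by a 2.5-fold step; a liquid-handler worklist generator of this kind would then emit one pipetting line per step.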

  17. Finding needles in a haystack: a methodology for identifying and sampling community-based youth smoking cessation programs.

    Science.gov (United States)

    Emery, Sherry; Lee, Jungwha; Curry, Susan J; Johnson, Tim; Sporer, Amy K; Mermelstein, Robin; Flay, Brian; Warnecke, Richard

    2010-02-01

    Surveys of community-based programs are difficult to conduct when there is virtually no information about the number or locations of the programs of interest. This article describes the methodology used by the Helping Young Smokers Quit (HYSQ) initiative to identify and profile community-based youth smoking cessation programs in the absence of a defined sample frame. We developed a two-stage sampling design, with counties as the first-stage probability sampling units. The second stage used snowball sampling to saturation, to identify individuals who administered youth smoking cessation programs across three economic sectors in each county. Multivariate analyses modeled the relationship between program screening, eligibility, and response rates and economic sector and stratification criteria. Cumulative logit models analyzed the relationship between the number of contacts in a county and the number of programs screened, eligible, or profiled in a county. The snowball process yielded 9,983 unique and traceable contacts. Urban and high-income counties yielded significantly more screened program administrators; urban counties produced significantly more eligible programs, but there was no significant association between the county characteristics and program response rate. There is a positive relationship between the number of informants initially located and the number of programs screened, eligible, and profiled in a county. Our strategy to identify youth tobacco cessation programs could be used to create a sample frame for other nonprofit organizations that are difficult to identify due to a lack of existing directories, lists, or other traditional sample frames.

  18. Fast neutron fluence calculations as support for a BWR pressure vessel and internals surveillance program

    International Nuclear Information System (INIS)

    Lucatero, Marco A.; Palacios-Hernandez, Javier C.; Ortiz-Villafuerte, Javier; Xolocostli-Munguia, J. Vicente; Gomez-Torres, Armando M.

    2010-01-01

    Materials surveillance programs are required to detect and prevent degradation of safety-related structures and components of a nuclear power reactor. In this work, following the directions in Regulatory Guide 1.190, a calculational methodology is implemented as additional support for a reactor pressure vessel and internals surveillance program for a BWR. The choice of the neutronic methods employed was based on the premise of being able to perform all the expected future survey calculations in relatively short times without compromising accuracy. First, a geometrical model of a typical BWR was developed, from the core to the primary containment, including the jet pumps and all other structures. The methodology uses the Synthesis Method to compute the three-dimensional neutron flux distribution. The code CORE-MASTER-PRESTO is used as the three-dimensional core simulator; SCALE is used to generate the fine-group flux spectra of the components of the model and also to generate a 47-energy-group job cross-section library, collapsed from the 199-fine-group master library VITAMIN-B6; ORIGEN2 was used to compute the isotopic densities of uranium and plutonium; and, finally, DORT was used to calculate the two-dimensional and one-dimensional neutron flux distributions required to compute the synthesized three-dimensional neutron flux. The calculation of fast neutron fluence was then performed using the effective full-power time periods through six operational fuel cycles of two BWR units, and up to the 13th cycle for Unit 1. The results showed a maximum relative difference of less than 7% between the calculated-by-synthesis fast neutron fluxes and fluences and those measured by Fe, Cu and Ni dosimeters. The dosimeters were originally located adjacent to the pressure vessel wall, as part of the surveillance program. Results from the computations of peak fast fluence on the pressure vessel wall and specific weld locations on the core shroud are

  19. Calculation programs as a didactic generator of the discipline “Fundamentals of rocket and space techniques”

    Directory of Open Access Journals (Sweden)

    Konstantin P. Baslyk

    2018-01-01

    Full Text Available A new method of teaching the subject “Fundamentals of rocket and space techniques” is suggested in the paper. The method uses specialized calculation programs as a didactic tool for forming the educational material, not only for practical training but for the theoretical course as well. A brief review of the educational literature on rocket and space techniques published over the past decades is given. Organizational and methodological problems associated with teaching the discipline are indicated: defining the volume and content of the educational material, the need to establish interdisciplinary connections, and the search for tasks that have numerical initial data and solutions. An overview of pedagogical technologies and of the requirements for modern didactic tools is made, and on the basis of this analysis the principles for developing a new didactic tool are formulated. An educational technology that builds the educational process on an ahead-of-time basis is used. Knowledge is represented in a collapsed form, and procedures for modeling and for analysis of the knowledge are combined. Visualization of knowledge is achieved through numerical illustrations. Inductive synthesis and deductive analysis are used as psychological and pedagogical methods, together with the formation of problem situations. Three specialized calculation programs are used in implementing this didactic tool: TERRA (B. Trusov – the calculation of chemical and phase equilibrium of multicomponent systems; RK1 (N. Generalov – the calculation of flight characteristics and geometrical parameters of a single-stage ballistic missile with a liquid rocket engine; and TRIJ1 (N. Generalov – the calculation of the trajectory for placing the payload of a single-stage ballistic missile. The study of the subject begins with learning the program interfaces and performing tests. After that, the content of programs TERRA, RK1 and TRIJ1

  20. MCNPX calculations of dose rate distribution inside samples treated in the research gamma irradiating facility at CTEx

    Energy Technology Data Exchange (ETDEWEB)

    Rusin, Tiago; Rebello, Wilson F.; Vellozo, Sergio O.; Gomes, Renato G., E-mail: tiagorusin@ime.eb.b, E-mail: rebello@ime.eb.b, E-mail: vellozo@cbpf.b, E-mail: renatoguedes@ime.eb.b [Instituto Militar de Engenharia (IME), Rio de Janeiro, RJ (Brazil). Dept. de Engenharia Nuclear; Vital, Helio C., E-mail: vital@ctex.eb.b [Centro Tecnologico do Exercito (CTEx), Rio de Janeiro, RJ (Brazil); Silva, Ademir X., E-mail: ademir@con.ufrj.b [Universidade Federal do Rio de Janeiro (PEN/COPPE/UFRJ), RJ (Brazil). Coordenacao dos Programas de Pos-Graduacao de Engenharia. Programa de Engenharia Nuclear

    2011-07-01

    A cavity-type cesium-137 research irradiating facility at CTEx has been modeled by using the Monte Carlo code MCNPX. The irradiator has been used daily in experiments to optimize the use of ionizing radiation for conservation of many kinds of food and to improve materials properties. In order to correlate the effects of the treatment, average doses have been calculated for each irradiated sample, accounting for the measured dose rate distribution in the irradiating chambers. However, that approach is only approximate, being subject to significant systematic errors due to the heterogeneous internal structure of most samples, which can lead to large anisotropy in attenuation and Compton scattering properties across the media. Thus this work is aimed at further investigating such uncertainties by calculating the dose rate distribution inside the items treated, such that a more accurate and representative estimate of the total absorbed dose can be determined for later use in the effects-versus-dose correlation curves. Samples of different simplified geometries and densities (spheres, cylinders, and parallelepipeds) have been modeled to evaluate internal dose rate distributions within the volume of the samples and the overall effect on the average dose. (author)
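
    A toy version of this kind of internal-dose calculation (not the authors' MCNPX model) samples points uniformly inside a homogeneous sphere under a parallel beam and applies exponential attenuation only, ignoring scatter build-up; the attenuation coefficient is an assumed, water-like value for Cs-137 gammas:

```python
import math
import random

def average_dose_sphere(radius_cm, mu_cm, n=50_000, seed=1):
    """Monte Carlo mean of the relative dose rate over a homogeneous
    sphere irradiated by a parallel beam travelling in the -z direction.
    Only exponential attenuation is modelled (no scatter build-up)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        # rejection-sample a point uniformly inside the sphere
        while True:
            x = rng.uniform(-radius_cm, radius_cm)
            y = rng.uniform(-radius_cm, radius_cm)
            z = rng.uniform(-radius_cm, radius_cm)
            if x * x + y * y + z * z <= radius_cm * radius_cm:
                break
        z_entry = math.sqrt(radius_cm ** 2 - x * x - y * y)
        depth = z_entry - z  # path travelled inside the sample before this point
        total += math.exp(-mu_cm * depth)
    return total / n

# assumed water-like attenuation coefficient for Cs-137 gammas, 5 cm sample
avg = average_dose_sphere(5.0, 0.085)
```

    The ratio of this volume average to the surface dose rate is the kind of correction factor the full transport calculation refines.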

  1. MCNPX calculations of dose rate distribution inside samples treated in the research gamma irradiating facility at CTEx

    International Nuclear Information System (INIS)

    Rusin, Tiago; Rebello, Wilson F.; Vellozo, Sergio O.; Gomes, Renato G.; Silva, Ademir X.

    2011-01-01

    A cavity-type cesium-137 research irradiating facility at CTEx has been modeled by using the Monte Carlo code MCNPX. The irradiator has been used daily in experiments to optimize the use of ionizing radiation for conservation of many kinds of food and to improve materials properties. In order to correlate the effects of the treatment, average doses have been calculated for each irradiated sample, accounting for the measured dose rate distribution in the irradiating chambers. However, that approach is only approximate, being subject to significant systematic errors due to the heterogeneous internal structure of most samples, which can lead to large anisotropy in attenuation and Compton scattering properties across the media. Thus this work is aimed at further investigating such uncertainties by calculating the dose rate distribution inside the items treated, such that a more accurate and representative estimate of the total absorbed dose can be determined for later use in the effects-versus-dose correlation curves. Samples of different simplified geometries and densities (spheres, cylinders, and parallelepipeds) have been modeled to evaluate internal dose rate distributions within the volume of the samples and the overall effect on the average dose. (author)

  2. Brine Sampling and Evaluation Program

    International Nuclear Information System (INIS)

    Deal, D.E.; Case, J.B.; Deshler, R.M.; Drez, P.E.; Myers, J.; Tyburski, J.R.

    1987-12-01

    The Brine Sampling and Evaluation Program (BSEP) Phase II Report is an interim report which updates the data released in the BSEP Phase I Report. Direct measurements and observations of the brine that seeps into the WIPP repository excavations were continued through the period between August 1986 and July 1987. Those data are included in Appendix A, which extends the observation period for some locations to approximately 900 days. Brine observations at 87 locations are presented in this report. Although WIPP underground workings are considered "dry," small amounts of brine are present. Part of that brine migrates into the repository in response to pressure gradients at essentially isothermal conditions. The data presented in this report are a continuation of moisture content studies of the WIPP facility horizon that were initiated in 1982, as soon as underground drifts began to be excavated. Brine seepages are manifested by salt efflorescences, moist areas, and fluid accumulations in drillholes. 35 refs., 6 figs., 11 tabs

  3. Assessment model validity document. NAMMU: A program for calculating groundwater flow and transport through porous media

    International Nuclear Information System (INIS)

    Cliffe, K.A.; Morris, S.T.; Porter, J.D.

    1998-05-01

    NAMMU is a computer program for modelling groundwater flow and transport through porous media. This document provides an overview of the use of the program for geosphere modelling in performance assessment calculations and gives a detailed description of the program itself. The aim of the document is to give an indication of the grounds for having confidence in NAMMU as a performance assessment tool. In order to achieve this, the following topics are discussed. The basic premises of the assessment approach and the purpose and nature of the calculations that can be undertaken using NAMMU are outlined. The concepts of the validation of models and the considerations that can lead to increased confidence in models are described. The physical processes that can be modelled using NAMMU and the mathematical models and numerical techniques that are used to represent them are discussed in some detail. Finally, the grounds that would lead one to have confidence that NAMMU is fit for purpose are summarised.
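
    NAMMU itself is a general finite-element code; as a minimal sketch of the underlying physics only, steady one-dimensional Darcy flow through layers in series reduces to a thickness-weighted harmonic mean of the conductivities (the layer values below are illustrative, not from the report):

```python
def steady_flux(k_layers_m_s, lengths_m, h_in_m, h_out_m):
    """Steady 1-D Darcy flux (m/s) through layers in series:
    q = K_eff * (h_in - h_out) / L, where K_eff is the thickness-weighted
    harmonic mean of the layer hydraulic conductivities."""
    total_l = sum(lengths_m)
    k_eff = total_l / sum(l / k for l, k in zip(lengths_m, k_layers_m_s))
    return k_eff * (h_in_m - h_out_m) / total_l

# hypothetical column: 1 m of sand (1e-4 m/s) over 0.5 m of clay (1e-8 m/s);
# the low-conductivity layer controls the flux almost entirely
q = steady_flux([1e-4, 1e-8], [1.0, 0.5], h_in_m=10.0, h_out_m=2.0)
```

    The dominance of the least-conductive layer is one reason geosphere barriers are credited in performance assessments.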

  4. User's guide to SERICPAC: A computer program for calculating electric-utility avoided costs rates

    Energy Technology Data Exchange (ETDEWEB)

    Wirtshafter, R.; Abrash, M.; Koved, M.; Feldman, S.

    1982-05-01

    SERICPAC is a computer program developed to calculate average avoided-cost rates for decentralized power producers and cogenerators that sell electricity to electric utilities. SERICPAC works in tandem with SERICOST, a program that calculates avoided costs, and determines the appropriate rates for the buying and selling of electricity between electric utilities and qualifying facilities (QF), as stipulated under Section 210 of PURPA. SERICPAC contains simulation models for eight technologies, including wind, hydro, biogas, and cogeneration. The simulations are converted into a diversified utility production, which can be either gross production or net production; the latter accounts for internal electricity usage by the QF. The program allows adjustments to the production to be made for scheduled and forced outages. The final output of the model is a technology-specific average annual rate. The report contains a description of the technologies and the simulations as well as a complete user's guide to SERICPAC.

  5. Dose calculation algorithm for the Department of Energy Laboratory Accreditation Program

    International Nuclear Information System (INIS)

    Moscovitch, M.; Tawil, R.A.; Thompson, D.; Rhea, T.A.

    1991-01-01

    The dose calculation algorithm for a symmetric four-element LiF:Mg,Ti based thermoluminescent dosimeter is presented. The algorithm is based on the parameterization of the response of the dosimeter when exposed to both pure and mixed fields of various types and compositions. The experimental results were then used to develop the algorithm as a series of empirical response functions. Experiments to determine the response of the dosimeter and to test the dose calculation algorithm were performed according to the standard established by the Department of Energy Laboratory Accreditation Program (DOELAP). The test radiation fields include: 137 Cs gamma rays, 90 Sr/ 90 Y and 204 Tl beta particles, low energy photons of 20-120 keV and moderated 252 Cf neutron fields. The accuracy of the system has been demonstrated in an official DOELAP blind test conducted at Sandia National Laboratory. The test results were well within DOELAP tolerance limits. The results of this test are presented and discussed

  6. EML Surface Air Sampling Program, 1990--1993 data

    International Nuclear Information System (INIS)

    Larsen, R.J.; Sanderson, C.G.; Kada, J.

    1995-11-01

    Measurements of the concentrations of specific atmospheric radionuclides in air filter samples collected for the Environmental Measurements Laboratory's Surface Air Sampling Program (SASP) during 1990--1993, with the exception of April 1993, indicate that anthropogenic radionuclides, in both hemispheres, were at or below the lower limits of detection for the sampling and analytical techniques that were used to collect and measure them. The occasional detection of 137 Cs in some air filter samples may have resulted from resuspension of previously deposited debris. Following the April 6, 1993 accident and release of radionuclides into the atmosphere at a reprocessing plant in the Tomsk-7 military nuclear complex located 16 km north of the Siberian city of Tomsk, Russia, weekly air filter samples from Barrow, Alaska; Thule, Greenland and Moosonee, Canada were selected for special analyses. The naturally occurring radioisotopes that the authors measure, 7 Be and 210 Pb, continue to be detected in most air filter samples. Variations in the annual mean concentrations of 7 Be at many of the sites appear to result primarily from changes in the atmospheric production rate of this cosmogenic radionuclide. Short-term variations in the concentrations of 7 Be and 210 Pb continued to be observed at many sites at which weekly air filter samples were analyzed. The monthly gross gamma-ray activity and the monthly mean surface air concentrations of 7 Be, 95 Zr, 137 Cs, 144 Ce, and 210 Pb measured at sampling sites in SASP during 1990--1993 are presented. The weekly mean surface air concentrations of 7 Be, 95 Zr, 137 Cs, 144 Ce, and 210 Pb for samples collected during 1990--1993 are given for 17 sites

  7. 78 FR 23896 - Notice of Funds Availability: Inviting Applications for the Quality Samples Program

    Science.gov (United States)

    2013-04-23

    ... proposals for the 2014 Quality Samples Program (QSP). The intended effect of this notice is to solicit... Strategy (UES) application Internet Web site. The UES allows applicants to submit a single consolidated and... of the FAS marketing programs, financial assistance programs, and market access programs. The...

  8. Sample collection: an overview of the Hydrogeochemical and Stream Sediment Reconnaissance Program

    International Nuclear Information System (INIS)

    Bolivar, S.L.

    1979-01-01

    A Hydrogeochemical and Stream Sediment Reconnaissance (HSSR) for uranium is currently being conducted throughout the conterminous United States and Alaska. The HSSR is part of the National Uranium Resource Evaluation sponsored by the US Department of Energy. This ambitious geochemical reconnaissance program is conducted by four national laboratories: Los Alamos Scientific Laboratory, Lawrence Livermore Laboratory, Oak Ridge Gaseous Diffusion Plant, and Savannah River Laboratory. The program is based on an extensive review of world literature, reconnaissance work done in other countries, and pilot studies conducted by each laboratory. Sample-collection methods and sample density are determined to optimize the probability of detecting potential uranium mineralization. To achieve this aim, each laboratory has developed independent standardized field collection procedures that are designed for its section of the country. Field parameters such as pH, conductivity, climate, geography, and geology are recorded at each site. Most samples are collected at densities of one sample site per 10 to 23 km 2 . The HSSR program has helped to improve existing hydrogeochemical reconnaissance exploration techniques. In addition to providing industry with data that may help to identify potential uranium districts and to extend known uranium provinces, the HSSR also provides multi-element analytical data, which can be used in water quality, soil, sediment, environmental, and base-metal exploration studies

  9. Middlesex Sampling Plant environmental report for calendar year 1992, 239 Mountain Avenue, Middlesex, New Jersey. Formerly Utilized Sites Remedial Action Program (FUSRAP)

    Energy Technology Data Exchange (ETDEWEB)

    1993-05-01

    This report describes the environmental surveillance program at the Middlesex Sampling Plant (MSP) and provides the results for 1992. The site, in the Borough of Middlesex, New Jersey, is a fenced area and includes four buildings and two storage piles that contain 50,800 m{sup 3} of radioactive and mixed hazardous waste. More than 70 percent of the MSP site is paved with asphalt. The MSP facility was established in 1943 by the Manhattan Engineer District (MED) to sample, store, and/or ship uranium, thorium, and beryllium ores. In 1955 the Atomic Energy Commission (AEC), successor to MED, terminated the operation and later used the site for storage and limited sampling of thorium residues. In 1967 AEC activities ceased, onsite structures were decontaminated, and the site was certified for unrestricted use under criteria applicable at that time. In 1980 the US Department of Energy (DOE) initiated a multiphase remedial action project to clean up several vicinity properties onto which contamination from the plant had migrated. Material from these properties was consolidated into the storage piles onsite. Environmental surveillance of MSP began in 1980 when Congress added the site to DOE's Formerly Utilized Sites Remedial Action Program. The environmental surveillance program at MSP includes sampling networks for radon and thoron in air; external gamma radiation exposure; and radium-226, radium-228, thorium-230, thorium-232, and total uranium in surface water, sediment, and groundwater. Additionally, chemical analyses are performed to detect metals and organic compounds in surface water and groundwater and metals in sediments. This program assists in fulfilling the DOE policy of measuring and monitoring effluents from DOE activities and calculating hypothetical doses.

  10. Application of the opportunities of tool system 'CUDA' for graphic processors programming in scientific and technical calculation tasks

    International Nuclear Information System (INIS)

    Dudnik, V.A.; Kudryavtsev, V.I.; Sereda, T.M.; Us, S.A.; Shestakov, M.V.

    2009-01-01

    The capabilities of the CUDA technology (Compute Unified Device Architecture, NVIDIA's unified hardware and software solution for parallel calculations on the GPU) are described. The main differences between the 'C' language for the GPU and 'usual' 'C' are highlighted. Examples are given of using CUDA to accelerate application development and to implement algorithms for scientific and technical calculations carried out by means of the graphics processors (GPGPU) of eighth-generation GeForce accelerators. Recommendations on optimizing programs that use the GPU are presented.

  11. Calculation and evaluation methodology of the flawed pipe and the compute program development

    International Nuclear Information System (INIS)

    Liu Chang; Qian Hao; Yao Weida; Liang Xingyun

    2013-01-01

    Background: In a pressurized pipe, a crack will grow gradually under alternating load even when the load is below the fatigue strength limit. Purpose: Both the calculation and the evaluation methodology for a flawed pipe detected during in-service inspection are elaborated here, based on Elastic-Plastic Fracture Mechanics (EPFM) criteria. Methods: The computation considers the interaction of flaw depth and length, and a computer program was developed in Visual C++. Results: The fluctuating loads of the Reactor Coolant System transients, the initial flaw shape, and the initial flaw orientation are all accounted for. Conclusions: The calculation and evaluation methodology presented here is an important basis for deciding whether the pipe may remain in service. (authors)
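
    The abstract does not give the paper's EPFM equations; a common simplified model for the kind of subcritical crack growth it describes is the Paris law, integrated numerically below for a hypothetical surface flaw with coefficients typical of ferritic steel (assumed values, not the paper's):

```python
import math

def cycles_to_grow(a0_m, af_m, c_paris, m_exp, dsigma_mpa, geom=1.12):
    """Load cycles to grow a crack from a0 to af by midpoint integration of
    the Paris law da/dN = C * (dK)^m, with dK = Y * dSigma * sqrt(pi * a),
    dK in MPa*sqrt(m) and C in m/cycle per (MPa*sqrt(m))^m."""
    n_steps = 10_000
    da = (af_m - a0_m) / n_steps
    cycles = 0.0
    a = a0_m
    for _ in range(n_steps):
        dk = geom * dsigma_mpa * math.sqrt(math.pi * (a + da / 2.0))
        cycles += da / (c_paris * dk ** m_exp)
        a += da
    return cycles

# assumed constants: C = 1e-11, m = 3, stress range 100 MPa,
# crack growing from 1 mm to 10 mm depth
n_cycles = cycles_to_grow(1e-3, 1e-2, 1e-11, 3.0, 100.0)
```

    A full flaw evaluation would track depth and length separately and apply the EPFM acceptance criteria at each step.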

  12. d'plus: A program to calculate accuracy and bias measures from detection and discrimination data.

    Science.gov (United States)

    Macmillan, N A; Creelman, C D

    1997-01-01

    The program d'plus calculates accuracy (sensitivity) and response-bias parameters using Signal Detection Theory, Choice Theory, and 'nonparametric' models. It is appropriate for data from one-interval, two- and three-interval forced-choice, same-different, ABX, and oddity experimental paradigms.
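
    For the one-interval (yes/no) paradigm only, the calculation can be sketched as follows; the log-linear 0.5 correction for perfect hit or false-alarm rates is one common convention and is not necessarily the one d'plus applies:

```python
from statistics import NormalDist

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Sensitivity d' and criterion c for a yes/no detection task.
    Adding 0.5 to every cell (log-linear correction) keeps the rates
    away from exactly 0 or 1, where the z-transform would be infinite."""
    h = (hits + 0.5) / (hits + misses + 1.0)
    f = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf
    d_prime = z(h) - z(f)
    criterion = -0.5 * (z(h) + z(f))
    return d_prime, criterion

d, c = sdt_measures(45, 5, 10, 40)  # 50 signal trials, 50 noise trials
```

    The forced-choice, same-different, ABX and oddity paradigms each need their own mapping from proportion correct to d', which is the bookkeeping the program automates.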

  13. FIST - a suite of X-ray powder crystallography programs for use with a HP-65 calculator

    International Nuclear Information System (INIS)

    Ferguson, I.F.; Turek, M.

    1977-12-01

    Programs for X-ray powder crystallography are defined for use with a Hewlett-Packard HP-65 programmable pocket calculator. These include the prediction of all Bragg reflections for defined P-, F- and I-cubic, tetragonal, hexagonal and orthorhombic cells; the calculation of the position of a specific Bragg reflection from defined unit cells with all symmetries except triclinic; the interconversion of theta, 2theta, sin 2 theta and d, as well as the calculation of the Nelson-Riley function; the computation of crystal densities; the interconversion of rhombohedral and hexagonal unit cells; lsub(c) determinations for graphite; the calculation of a and c for boron carbide; and Miller index transformations between various unit cells. (author)
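
    The first of these tasks, predicting the Bragg reflections of a cubic cell, can be sketched for the primitive case with Cu K-alpha radiation (adding the F- and I-centring extinction rules would filter the list further):

```python
import math

def cubic_two_theta(a_angstrom, wavelength_angstrom=1.5406, max_hkl=3):
    """2-theta positions (degrees) of reflections of a primitive cubic cell,
    keyed by s = h^2 + k^2 + l^2, using Bragg's law with 1/d^2 = s / a^2."""
    peaks = {}
    for h in range(max_hkl + 1):
        for k in range(max_hkl + 1):
            for l in range(max_hkl + 1):
                s = h * h + k * k + l * l
                if s == 0 or s in peaks:
                    continue
                sin_theta = wavelength_angstrom * math.sqrt(s) / (2.0 * a_angstrom)
                if sin_theta <= 1.0:
                    peaks[s] = 2.0 * math.degrees(math.asin(sin_theta))
    return dict(sorted(peaks.items()))

# aluminium-like lattice parameter; the s = 3 (111) line lands near 38.5 degrees
peaks = cubic_two_theta(4.0493)
```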

  14. Microdosimetry calculations for monoenergetic electrons using Geant4-DNA combined with a weighted track sampling algorithm.

    Science.gov (United States)

    Famulari, Gabriel; Pater, Piotr; Enger, Shirin A

    2017-07-07

    The aim of this study was to calculate microdosimetric distributions for low energy electrons simulated using the Monte Carlo track structure code Geant4-DNA. Tracks for monoenergetic electrons with kinetic energies ranging from 100 eV to 1 MeV were simulated in an infinite spherical water phantom using the Geant4-DNA extension included in Geant4 toolkit version 10.2 (patch 02). The microdosimetric distributions were obtained through random sampling of transfer points and overlaying scoring volumes within the associated volume of the tracks. Relative frequency distributions of energy deposition f(>E)/f(>0) and dose mean lineal energy (y_D) values were calculated in nanometer-sized spherical and cylindrical targets. The effects of scoring volume and scoring techniques were examined. The results were compared with published data generated using MOCA8B and KURBUC. Geant4-DNA produces a lower frequency of higher energy deposits than MOCA8B. The y_D values calculated with Geant4-DNA are smaller than those calculated using MOCA8B and KURBUC. The differences are mainly due to the lower ionization and excitation cross sections of Geant4-DNA for low energy electrons. To a lesser extent, discrepancies can also be attributed to the implementation in this study of a new and fast scoring technique that differs from that used in previous studies. For the same mean chord length, the y_D values calculated in cylindrical volumes are larger than those calculated in spherical volumes. The discrepancies due to cross sections and scoring geometries increase with decreasing scoring site dimensions. A new set of y_D values has been presented for monoenergetic electrons using a fast track sampling algorithm and the most recent physics models implemented in Geant4-DNA. This dataset can be combined with primary electron spectra to predict the radiation quality of photon and electron beams.
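
    The frequency-mean and dose-mean definitions involved here can be illustrated with a toy calculation on invented single-event energy deposits (not Geant4-DNA output); the mean chord length of a spherical site is 4V/S, i.e. two thirds of its diameter:

```python
def lineal_energy_stats(deposits_kev, diameter_um):
    """Frequency-mean and dose-mean lineal energy (keV/um) from a list of
    single-event energy deposits in a spherical site; the mean chord length
    of a sphere is 4V/S = (2/3) * diameter."""
    mean_chord = (2.0 / 3.0) * diameter_um
    y = [e / mean_chord for e in deposits_kev]
    y_f = sum(y) / len(y)                  # frequency mean
    y_d = sum(v * v for v in y) / sum(y)   # dose mean: each event weighted by y
    return y_f, y_d

# invented deposits (keV) scored in a 1 um diameter site
yf, yd = lineal_energy_stats([0.5, 1.0, 2.0, 4.0], 1.0)
```

    Because the dose mean weights large events quadratically, it is the more sensitive of the two to the high-energy-deposit tail the study compares between codes.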

  15. EMSH program for calculating electron and photon transport through a matter at energies of 10 keV-1TeV

    International Nuclear Information System (INIS)

    Tayurskij, V.A.

    1989-01-01

    The EMSH program (Electro-Magnetic Shower) for the calculation of 1 keV-1 TeV electron and photon transport through a substance is described. The program is written in FORTRAN for the ES computers. Electron and positron bremsstrahlung, e - e - and e + e - scattering, positron annihilation, production of e + e - pairs by photons, photon Compton scattering, the photoelectric effect, and photon Rayleigh scattering are taken into account in the calculations. The cross sections of these processes are specified to within 3-10% accuracy. Energy losses for atom ionization and excitation and bremsstrahlung, as well as multiple scattering, are taken into account for charged particles. The program is used to calculate electron conversion into positrons, to estimate accelerator radiation background, and to simulate electromagnetic showers in electromagnetic calorimeters. 37 refs.; 15 figs

  16. BOKASUN: A fast and precise numerical program to calculate the Master Integrals of the two-loop sunrise diagrams

    Science.gov (United States)

    Caffo, Michele; Czyż, Henryk; Gunia, Michał; Remiddi, Ettore

    2009-03-01

    We present the program BOKASUN for fast and precise evaluation of the Master Integrals of the two-loop self-mass sunrise diagram for arbitrary values of the internal masses and the external four-momentum. We use a combination of two methods: a Bernoulli accelerated series expansion and a Runge-Kutta numerical solution of a system of linear differential equations. Program summaryProgram title: BOKASUN Catalogue identifier: AECG_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AECG_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 9404 No. of bytes in distributed program, including test data, etc.: 104 123 Distribution format: tar.gz Programming language: FORTRAN77 Computer: Any computer with a Fortran compiler accepting FORTRAN77 standard. Tested on various PC's with LINUX Operating system: LINUX RAM: 120 kbytes Classification: 4.4 Nature of problem: Any integral arising in the evaluation of the two-loop sunrise Feynman diagram can be expressed in terms of a given set of Master Integrals, which should be calculated numerically. The program provides a fast and precise evaluation method of the Master Integrals for arbitrary (but not vanishing) masses and arbitrary value of the external momentum. Solution method: The integrals depend on three internal masses and the external momentum squared p. The method is a combination of an accelerated expansion in 1/p in its (pretty large!) region of fast convergence and of a Runge-Kutta numerical solution of a system of linear differential equations. Running time: To obtain 4 Master Integrals on PC with 2 GHz processor it takes 3 μs for series expansion with pre-calculated coefficients, 80 μs for series expansion without pre-calculated coefficients, from a few seconds up to a few minutes for Runge-Kutta method (depending
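
    The Runge-Kutta half of the method, evolving a system of linear differential equations across the kinematic region, can be sketched with a classical fixed-step RK4 integrator; the constant-coefficient oscillator below is only a correctness check, not one of the sunrise master-integral systems:

```python
import math

def rk4_linear(a_mat, y0, t_end, n_steps):
    """Classical 4th-order Runge-Kutta for a constant-coefficient
    linear system y' = A y, starting from y(0) = y0."""
    def mat_vec(a, v):
        return [sum(a[i][j] * v[j] for j in range(len(v))) for i in range(len(a))]
    h = t_end / n_steps
    y = list(y0)
    for _ in range(n_steps):
        k1 = mat_vec(a_mat, y)
        k2 = mat_vec(a_mat, [yi + 0.5 * h * ki for yi, ki in zip(y, k1)])
        k3 = mat_vec(a_mat, [yi + 0.5 * h * ki for yi, ki in zip(y, k2)])
        k4 = mat_vec(a_mat, [yi + h * ki for yi, ki in zip(y, k3)])
        y = [yi + h / 6.0 * (p + 2 * q + 2 * r + s)
             for yi, p, q, r, s in zip(y, k1, k2, k3, k4)]
    return y

# check: y1' = y2, y2' = -y1 from (1, 0) has solution (cos t, -sin t),
# so y(pi/2) should be (0, -1)
y = rk4_linear([[0.0, 1.0], [-1.0, 0.0]], [1.0, 0.0], math.pi / 2.0, 200)
```

    In the real problem A depends on the kinematic variable and the step starts from series-expansion values, but the stepping logic is the same.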

  17. MO-D-213-07: RadShield: Semi- Automated Calculation of Air Kerma Rate and Barrier Thickness

    International Nuclear Information System (INIS)

    DeLorenzo, M; Wu, D; Rutel, I; Yang, K

    2015-01-01

    Purpose: To develop the first Java-based semi-automated calculation program intended to aid professional radiation shielding design. Air-kerma rate and barrier thickness calculations are performed by implementing the NCRP Report 147 formalism in a Graphical User Interface (GUI). The ultimate aim of this newly created software package is to reduce errors and improve radiographic and fluoroscopic room designs over manual approaches. Methods: Floor plans are first imported as images into the RadShield software program. These plans serve as templates for drawing barriers, occupied regions and x-ray tube locations. We have implemented sub-GUIs that allow occupancy factors, design goals, number of patients, primary beam directions, source-to-patient distances and workload distributions to be specified for regions and equipment. Once the user enters the above parameters, the program automatically calculates the air-kerma rate at sampled points beyond all barriers. For each sample point, a corresponding minimum barrier thickness is calculated to meet the design goal. RadShield allows control over preshielding, sample point location and material types. Results: A functional GUI package was developed and tested. Examination of sample walls and source distributions yields a maximum percent difference of less than 0.1% between hand-calculated air-kerma rates and RadShield. Conclusion: The initial results demonstrated that RadShield calculates air-kerma rates and required barrier thicknesses with reliable accuracy and can be used to make radiation shielding design more efficient and accurate. This newly developed approach differs from conventional calculation methods in that it finds air-kerma rates and thickness requirements for many points outside the barriers, stores the information and selects the largest value needed to comply with NCRP Report 147 design goals. Floor plans, parameters, designs and reports can be saved and accessed later for modification and recalculation.
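
    The two core steps of an NCRP Report 147 style calculation can be sketched as a required-transmission goal followed by inversion of the Archer broad-beam attenuation model; the fit coefficients below are placeholders for illustration, not the published Report 147 values for any material or spectrum:

```python
import math

def required_transmission(p_mgy_wk, d_m, k1_mgy_m2_wk, occupancy_t=1.0):
    """Transmission goal B = P * d^2 / (T * K), where K is the weekly
    unshielded air kerma normalised to 1 m (inverse-square scaling)."""
    return p_mgy_wk * d_m ** 2 / (occupancy_t * k1_mgy_m2_wk)

def barrier_thickness(b, alpha, beta, gamma):
    """Thickness x satisfying the Archer broad-beam model
    B(x) = [(1 + beta/alpha) * exp(alpha*gamma*x) - beta/alpha]^(-1/gamma),
    solved in closed form for x."""
    r = beta / alpha
    return math.log((b ** -gamma + r) / (1.0 + r)) / (alpha * gamma)

# placeholder numbers: 0.02 mGy/wk design goal, 3 m distance,
# 6.9 mGy*m^2/wk workload; alpha/beta/gamma in mm^-1, illustrative only
b = required_transmission(0.02, 3.0, 6.9)
x_mm = barrier_thickness(b, alpha=2.346, beta=15.9, gamma=0.547)
```

    RadShield's contribution is repeating this at many sampled points per barrier and keeping the largest thickness, rather than evaluating a single hand-picked point.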

  18. MO-D-213-07: RadShield: Semi- Automated Calculation of Air Kerma Rate and Barrier Thickness

    Energy Technology Data Exchange (ETDEWEB)

    DeLorenzo, M [Oklahoma University Health Sciences Center, Oklahoma City, OK (United States); Wu, D [University of Oklahoma Health Sciences Center, Oklahoma City, Ok (United States); Rutel, I [University of Oklahoma Health Science Center, Oklahoma City, OK (United States); Yang, K [Massachusetts General Hospital, Boston, MA (United States)

    2015-06-15

    Purpose: To develop the first Java-based semi-automated calculation program intended to aid professional radiation shielding design. Air-kerma rate and barrier thickness calculations are performed by implementing the NCRP Report 147 formalism in a Graphical User Interface (GUI). The ultimate aim of this newly created software package is to reduce errors and improve radiographic and fluoroscopic room designs over manual approaches. Methods: Floor plans are first imported as images into the RadShield software program. These plans serve as templates for drawing barriers, occupied regions and x-ray tube locations. We have implemented sub-GUIs that allow occupancy factors, design goals, number of patients, primary beam directions, source-to-patient distances and workload distributions to be specified for regions and equipment. Once the user enters the above parameters, the program automatically calculates the air-kerma rate at sampled points beyond all barriers. For each sample point, a corresponding minimum barrier thickness is calculated to meet the design goal. RadShield allows control over preshielding, sample point location and material types. Results: A functional GUI package was developed and tested. Examination of sample walls and source distributions yields a maximum percent difference of less than 0.1% between hand-calculated air-kerma rates and RadShield. Conclusion: The initial results demonstrated that RadShield calculates air-kerma rates and required barrier thicknesses with reliable accuracy and can be used to make radiation shielding design more efficient and accurate. This newly developed approach differs from conventional calculation methods in that it finds air-kerma rates and thickness requirements for many points outside the barriers, stores the information and selects the largest value needed to comply with NCRP Report 147 design goals. Floor plans, parameters, designs and reports can be saved and accessed later for modification and recalculation.

  19. BLOW.MOD2: program for a vessel depressurization calculation with the contribution of structures

    International Nuclear Information System (INIS)

    Doval, A.

    1990-01-01

    The BLOW.MOD2 program, developed to calculate the depressurization of pressure vessels while taking into account the heat contribution of the structures, is presented. The results are compared against those obtained from other, more complex numerical models, and the agreement is extremely satisfactory. BLOW.MOD2 is a software product of the 'Systems Sub-Branch', INVAP S.E. (Author) [es

  20. Computer program TMOC for calculating of pressure transients in fluid filled piping networks

    International Nuclear Information System (INIS)

    Siikonen, T.

    1978-01-01

    The propagation of a pressure wave in fluid-filled tubes is significantly affected by the pipe wall motion and vice versa. A computer code TMOC (Transients by the Method of Characteristics) is being developed for the analysis of coupled fluid and pipe wall transients. Because of the structural feedback, the pressure can be calculated more accurately than with the programs commonly used. (author)
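
    The fluid-structure coupling described above enters even the simplest lumped estimates: wall elasticity lowers the pressure-wave speed (Korteweg formula), which in turn sets the Joukowsky surge for sudden valve closure. A sketch with assumed water-in-steel-pipe values:

```python
import math

def korteweg_wave_speed(bulk_k_pa, rho, bore_d_m, e_wall_pa, wall_t_m):
    """Pressure-wave speed in a fluid-filled elastic pipe; pipe-wall
    compliance lowers the speed below the free-fluid value sqrt(K/rho)."""
    return math.sqrt((bulk_k_pa / rho) /
                     (1.0 + bulk_k_pa * bore_d_m / (e_wall_pa * wall_t_m)))

def joukowsky_surge(rho, wave_speed, dv):
    """Pressure rise for instantaneous valve closure: dp = rho * a * dv."""
    return rho * wave_speed * dv

# assumed values: water (K = 2.2 GPa) in a steel pipe (E = 200 GPa),
# 0.5 m bore, 10 mm wall, flow velocity change of 2 m/s
a = korteweg_wave_speed(2.2e9, 1000.0, 0.5, 2.0e11, 0.01)
dp = joukowsky_surge(1000.0, a, 2.0)
```

    A method-of-characteristics code such as TMOC resolves how this surge propagates and reflects through a network, including the wall feedback that this lumped estimate only folds into the wave speed.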

  1. EML Surface Air Sampling Program, 1990--1993 data

    Energy Technology Data Exchange (ETDEWEB)

    Larsen, R.J.; Sanderson, C.G.; Kada, J.

    1995-11-01

    Measurements of the concentrations of specific atmospheric radionuclides in air filter samples collected for the Environmental Measurements Laboratory`s Surface Air Sampling Program (SASP) during 1990--1993, with the exception of April 1993, indicate that anthropogenic radionuclides, in both hemispheres, were at or below the lower limits of detection for the sampling and analytical techniques that were used to collect and measure them. The occasional detection of {sup 137}Cs in some air filter samples may have resulted from resuspension of previously deposited debris. Following the April 6, 1993 accident and release of radionuclides into the atmosphere at a reprocessing plant in the Tomsk-7 military nuclear complex located 16 km north of the Siberian city of Tomsk, Russia, weekly air filter samples from Barrow, Alaska; Thule, Greenland and Moosonee, Canada were selected for special analyses. The naturally occurring radioisotopes that the authors measure, {sup 7}Be and {sup 210}Pb, continue to be detected in most air filter samples. Variations in the annual mean concentrations of {sup 7}Be at many of the sites appear to result primarily from changes in the atmospheric production rate of this cosmogenic radionuclide. Short-term variations in the concentrations of {sup 7}Be and {sup 210}Pb continued to be observed at many sites at which weekly air filter samples were analyzed. The monthly gross gamma-ray activity and the monthly mean surface air concentrations of {sup 7}Be, {sup 95}Zr, {sup 137}Cs, {sup 144}Ce, and {sup 210}Pb measured at sampling sites in SASP during 1990--1993 are presented. The weekly mean surface air concentrations of {sup 7}Be, {sup 95}Zr, {sup 137}Cs, {sup 144}Ce, and {sup 210}Pb for samples collected during 1990--1993 are given for 17 sites.

  2. Method Evaluations for Adsorption Free Energy Calculations at the Solid/Water Interface through Metadynamics, Umbrella Sampling, and Jarzynski's Equality.

    Science.gov (United States)

    Wei, Qichao; Zhao, Weilong; Yang, Yang; Cui, Beiliang; Xu, Zhijun; Yang, Xiaoning

    2018-03-19

    Considerable interest in characterizing protein/peptide-surface interactions has prompted extensive computational studies on calculations of adsorption free energy. However, in many cases, each individual study has focused on the application of free energy calculations to a specific system; therefore, it is difficult to combine the results into a general picture for choosing an appropriate strategy for the system of interest. Herein, three well-established computational algorithms are systematically compared and evaluated to compute the adsorption free energy of small molecules on two representative surfaces. The results clearly demonstrate that the characteristics of studied interfacial systems have crucial effects on the accuracy and efficiency of the adsorption free energy calculations. For the hydrophobic surface, steered molecular dynamics exhibits the highest efficiency, which appears to be a favorable method of choice for enhanced sampling simulations. However, for the charged surface, only the umbrella sampling method has the ability to accurately explore the adsorption free energy surface. The affinity of the water layer to the surface significantly affects the performance of free energy calculation methods, especially at the region close to the surface. Therefore, a general principle of how to discriminate between methodological and sampling issues based on the interfacial characteristics of the system under investigation is proposed. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
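
    Jarzynski's equality, one of the approaches compared (it underlies the steered-MD route), gives dF = -kT ln<exp(-W/kT)> over repeated nonequilibrium work measurements. A sketch on synthetic Gaussian work data, for which the analytic answer is mean(W) - var(W)/(2kT):

```python
import math
import random

def jarzynski_free_energy(work, kt):
    """dF = -kT * ln( mean(exp(-W/kT)) ), evaluated in a shifted
    (log-sum-exp) form so that large work values cannot overflow."""
    w_min = min(work)
    s = sum(math.exp(-(w - w_min) / kt) for w in work)
    return w_min - kt * math.log(s / len(work))

# synthetic check: Gaussian work with mean 5 kT and variance 2 kT^2
# has the analytic result dF = 5 - 2/2 = 4 kT
rng = random.Random(0)
kt = 1.0
work = [rng.gauss(5.0, math.sqrt(2.0)) for _ in range(100_000)]
df = jarzynski_free_energy(work, kt)
```

    The exponential average is dominated by rare low-work trajectories, which is one reason the paper finds the method's reliability depends so strongly on the interface being sampled.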

  3. Calculation of the secondary gamma radiation by the Monte Carlo method at displaced sampling from distributed sources

    International Nuclear Information System (INIS)

    Petrov, Eh.E.; Fadeev, I.A.

    1979-01-01

    A possibility to use displaced sampling from a bulk gamma source in calculating the secondary gamma fields by the Monte Carlo method is discussed. The algorithm proposed is based on the concept of conjugate functions alongside the dispersion minimization technique. For the sake of simplicity a plane source is considered. The algorithm has been put into practice on the M-220 computer. The differential gamma current and flux spectra in 21 cm-thick lead have been calculated. The source of secondary gamma quanta was assumed to be distributed, constant and isotropic, emitting 4 MeV gamma quanta at a rate of 10^9 quanta/(cm^3 s). The calculations have demonstrated that the last 7 cm of lead are responsible for the whole gamma spectral pattern. The spectra practically coincide with the ones calculated by the ROZ computer code. Thus the algorithm proposed can be effectively used in calculations of secondary gamma radiation transport, reducing the computation time by a factor of 2-4.
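    The variance-reduction idea, sampling source sites from a displaced (biased) density and restoring an unbiased mean with weights, can be sketched in a 1-D slab analogue. The slab, bias density and attenuation kernel below are our own assumptions for illustration, not the authors' algorithm:

```python
import math, random

# 1-D analogue: estimate I = integral_0^L exp(-(L-x)) dx, the uncollided
# contribution of a uniform slab source to the far face, once by analog
# (uniform) sampling and once with sampling displaced toward x = L.
L = 7.0
EXACT = 1.0 - math.exp(-L)

def analog(n, rng):
    # x ~ U(0, L); unbiased estimator f(x)/p(x) = L * exp(-(L-x)).
    return sum(L * math.exp(-(L - rng.uniform(0.0, L))) for _ in range(n)) / n

def displaced(n, rng):
    # Biased density p(x) = 2x/L^2 favours emission sites near the far
    # face, where contributions dominate; the weight f/p keeps the mean
    # unbiased.
    total = 0.0
    for _ in range(n):
        x = L * math.sqrt(1.0 - rng.random())   # inverse-CDF sample of p
        total += math.exp(-(L - x)) * L * L / (2.0 * x)
    return total / n
```

    Both estimators converge to the same integral; the displaced one concentrates histories in the last few centimetres that, as the abstract notes, dominate the emerging spectrum.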

  4. INDRA: a program system for calculating the neutronics and photonics characteristics of a fusion reactor blanket

    International Nuclear Information System (INIS)

    Perry, R.T.; Gorenflo, H.; Daenner, W.

    1976-01-01

    INDRA is a program system for calculating the neutronics and photonics characteristics of fusion reactor blankets. It incorporates a total of 19 different codes and 5 large data libraries. Ten of the codes are available from the code distribution organizations. Some of them, however, have been slightly modified in order to permit a convenient transfer of information from one program module to the next. The remaining nine programs have been prepared by the authors to complete the system with respect to flexibility and to facilitate the handling of the results. (orig./WBU) [de

  5. Emergency Doses (ED) - Revision 3: A calculator code for environmental dose computations

    International Nuclear Information System (INIS)

    Rittmann, P.D.

    1990-12-01

    The calculator program ED (Emergency Doses) was developed from several HP-41CV calculator programs documented in the report Seven Health Physics Calculator Programs for the HP-41CV, RHO-HS-ST-5P (Rittmann 1984). The program was developed to enable estimates of offsite impacts more rapidly and reliably than was possible with the software available for emergency response at that time. The ED - Revision 3, documented in this report, revises the inhalation dose model to match that of ICRP 30, and adds the simple estimates for air concentration downwind from a chemical release. In addition, the method for calculating the Pasquill dispersion parameters was revised to match the GENII code within the limitations of a hand-held calculator (e.g., plume rise and building wake effects are not included). The summary report generator for printed output, which had been present in the code from the original version, was eliminated in Revision 3 to make room for the dispersion model, the chemical release portion, and the methods of looping back to an input menu until there is no further change. This program runs on the Hewlett-Packard programmable calculators known as the HP-41CV and the HP-41CX. The documentation for ED - Revision 3 includes a guide for users, sample problems, detailed verification tests and results, model descriptions, code description (with program listing), and independent peer review. This software is intended to be used by individuals with some training in the use of air transport models. There are some user inputs that require intelligent application of the model to the actual conditions of the accident. The results calculated using ED - Revision 3 are only correct to the extent allowed by the mathematical models. 9 refs., 36 tabs
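    The downwind air concentration estimate described above is, in essence, a Gaussian plume evaluation. A minimal sketch of the ground-level centerline formula, with made-up power-law dispersion coefficients standing in for the Pasquill parameters (the prefactors and exponents here are illustrative assumptions, not the fits used by ED or GENII):

```python
import math

def plume_concentration(Q, u, sigma_y, sigma_z, H=0.0):
    """Ground-level centerline concentration from a continuous point
    release with total ground reflection:
    chi = Q / (pi * u * sigma_y * sigma_z) * exp(-H^2 / (2 sigma_z^2)).
    Q: release rate, u: wind speed (m/s), H: effective release height (m)."""
    return Q / (math.pi * u * sigma_y * sigma_z) * math.exp(-H**2 / (2.0 * sigma_z**2))

def sigma(x, a, b):
    # Illustrative power-law growth of a dispersion parameter with
    # downwind distance x; a and b are stand-ins for stability-class fits.
    return a * x**b

x = 1000.0  # downwind distance (m)
chi = plume_concentration(Q=1.0, u=5.0,
                          sigma_y=sigma(x, 0.08, 0.90),
                          sigma_z=sigma(x, 0.06, 0.85))
```

    Real Pasquill-Gifford coefficients depend on stability class; plume rise and building wake effects, which ED omits, would further modify the effective height H.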

  6. Neutronic calculations for JET. Performed with the FURNACE2 program. (Final report JET contract JEO/9004)

    International Nuclear Information System (INIS)

    Verschuur, K.A.

    1996-10-01

    Neutron-transport calculations with the FURNACE(2) program system, in support of the Neutron Diagnostic Group at JET, have been performed since 1980, i.e. since the construction phase of JET. FURNACE(2) is a ray-tracing/multiple-reflection transport program system for toroidal geometries that originally was developed for blanket neutronics studies and was then improved and extended for application to the neutron diagnostics at JET. (orig./WL)

  7. Development of graph self-generating program of radiation sampling for geophysical prospecting with AutoLISP

    International Nuclear Information System (INIS)

    Zhou Hongsheng

    2009-01-01

    A program that automatically generates radiation-sampling graphs for geophysical prospecting was developed with AutoLISP. The program, written entirely by the author, both generates and annotates the sampling graphs. It has greatly increased drawing efficiency and avoids the graph errors caused by manual drawing. (authors)

  8. Calculation of Absorbed Glandular Dose using a FORTRAN Program Based on Monte Carlo X-ray Spectra in Mammography

    Directory of Open Access Journals (Sweden)

    Ali Asghar Mowlavi

    2011-03-01

    Introduction: Average glandular dose calculation in mammography with a Mo-Rh target-filter combination, and dose calculation for different situations, is accurate and fast. Material and Methods: In this research, first of all, the x-ray spectra of a Mo target bombarded by a 28 keV electron beam, with and without a Rh filter, were calculated using the MCNP code. Then, we used the Sobol-Wu parameters to write a FORTRAN code to calculate the average glandular dose. Results: Average glandular dose variation was calculated against the voltage of the mammographic x-ray tube for d = 5 cm, HVL = 0.35 mm Al, and different values of g. Also, the results for average glandular absorbed dose variation per unit roentgen against the glandular fraction of breast tissue for kV = 28 and HVL = 0.400 mm Al and different values of d are presented. Finally, average glandular dose against d for g = 60% and three values of kV (23, 27, 35 kV) with corresponding HVLs has been calculated. Discussion and Conclusion: The absorbed dose computational program is accurate, complete, fast and user friendly. This program can be used for optimization of the exposure dose in mammography. The results of this research are also in good agreement with the computational results of others.

  9. Absolute binding free energy calculations of CBClip host–guest systems in the SAMPL5 blind challenge

    Science.gov (United States)

    Tofoleanu, Florentina; Pickard, Frank C.; König, Gerhard; Huang, Jing; Damjanović, Ana; Baek, Minkyung; Seok, Chaok; Brooks, Bernard R.

    2016-01-01

    Herein, we report the absolute binding free energy calculations of CBClip complexes in the SAMPL5 blind challenge. Initial conformations of CBClip complexes were obtained using docking and molecular dynamics simulations. Free energy calculations were performed using thermodynamic integration (TI) with soft-core potentials and Bennett's acceptance ratio (BAR) method based on a serial insertion scheme. We compared the results obtained with TI simulations with soft-core potentials and Hamiltonian replica exchange simulations with the serial insertion method combined with the BAR method. The results show that the difference between the two methods can be mainly attributed to the van der Waals free energies, suggesting that either the simulations used for TI or the simulations used for BAR, or both, are not fully converged, and the two sets of simulations may have sampled different phase-space regions. The penalty scores of the force field parameters of the 10 guest molecules provided by the CHARMM Generalized Force Field can be an indicator of the accuracy of binding free energy calculations. Among our submissions, the combination of docking and TI performed best, which yielded a root mean square deviation of 2.94 kcal/mol and an average unsigned error of 3.41 kcal/mol for the ten guest molecules. These values were best overall among all participants. However, our submissions had little correlation with experiments. PMID:27677749
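    The TI leg of such a calculation reduces, once the simulations are done, to integrating the ensemble average of dU/dλ over the coupling parameter. A trapezoidal sketch with synthetic averages (toy numbers, not the CBClip data):

```python
# Thermodynamic integration after the fact: Delta F = integral over
# lambda in [0, 1] of <dU/dlambda>, approximated with the trapezoidal
# rule over the lambda-window schedule.
def ti_free_energy(lambdas, dudl_means):
    dF = 0.0
    for i in range(len(lambdas) - 1):
        dF += 0.5 * (dudl_means[i] + dudl_means[i + 1]) * (lambdas[i + 1] - lambdas[i])
    return dF

lams = [i / 10.0 for i in range(11)]
dudl = [2.0 * lam for lam in lams]   # toy linear profile, exact answer 1.0
dF = ti_free_energy(lams, dudl)
```

    BAR instead combines forward and reverse work distributions between adjacent λ windows; disagreement between the two estimators, as reported above, usually signals unconverged sampling rather than a fault in either formula.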

  10. SUBGR: A Program to Generate Subgroup Data for the Subgroup Resonance Self-Shielding Calculation

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Kang Seog [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2016-06-06

    The Subgroup Data Generation (SUBGR) program generates subgroup data, including levels and weights, from the resonance self-shielded cross section table as a function of background cross section. Depending on the nuclide and the energy range, these subgroup data can be generated by (a) the narrow resonance approximation, (b) pointwise flux calculations for homogeneous media, and (c) pointwise flux calculations for heterogeneous lattice cells. The latter two options are performed by the AMPX module IRFFACTOR. These subgroup data are to be used in the Consortium for Advanced Simulation of Light Water Reactors (CASL) neutronic simulator MPACT, for which the primary resonance self-shielding method is the subgroup method.

  11. ESTE AI (Annual Impacts) - the program for calculation of radiation doses caused by effluents in routine releases to the atmosphere and to the hydrosphere

    International Nuclear Information System (INIS)

    Carny, P.; Suchon, D.; Smejkalova, E.; Fabova, V.

    2009-01-01

    ESTE AI is a program for calculation of radiation doses caused by effluents in routine releases to the atmosphere and to the hydrosphere. Doses to the members of critical groups of inhabitants in the vicinity of the NPP are calculated and, as a result, the critical group is determined. The program can also calculate collective doses: collective doses to the inhabitants living in the vicinity of the NPP, and doses to the whole population of Slovakia from the effluents of the specific plant. In this calculation, global nuclides are included and assumed as well. The program also enables calculation and documentation of the beyond-border radiological impacts of effluents caused by routine operation of the NPP. ESTE AI was approved by the Public Health Authority of the Slovak Republic and is used as a legal instrument by Slovenske elektrarne a.s., NPP Bohunice. (authors)

  12. ESTE AI (Annual Impacts) - the program for calculation of radiation doses caused by effluents in routine releases to the atmosphere and to the hydrosphere

    International Nuclear Information System (INIS)

    Carny, P.; Suchon, D.; Smejkalova, E.; Fabova, V.

    2008-01-01

    ESTE AI is a program for calculation of radiation doses caused by effluents in routine releases to the atmosphere and to the hydrosphere. Doses to the members of critical groups of inhabitants in the vicinity of the NPP are calculated and, as a result, the critical group is determined. The program can also calculate collective doses: collective doses to the inhabitants living in the vicinity of the NPP, and doses to the whole population of Slovakia from the effluents of the specific plant. In this calculation, global nuclides are included and assumed as well. The program also enables calculation and documentation of the beyond-border radiological impacts of effluents caused by routine operation of the NPP. ESTE AI was approved by the Public Health Authority of the Slovak Republic and is used as a legal instrument by Slovenske elektrarne a.s., NPP Bohunice. (authors)

  13. The integrated performance evaluation program quality assurance guidance in support of EM environmental sampling and analysis activities

    International Nuclear Information System (INIS)

    1994-05-01

    EM's (DOE's Environmental Restoration and Waste Management) Integrated Performance Evaluation Program (IPEP) has the purpose of integrating information from existing performance evaluation (PE) programs with expanded QA activities to develop information about the quality of radiological, mixed waste, and hazardous environmental sample analyses provided by all laboratories supporting EM programs. The guidance addresses the goals of identifying specific PE sample programs and contacts, identifying specific requirements for participation in DOE's internal and external (regulatory) programs, identifying key issues relating to application and interpretation of PE materials for EM headquarters and field office managers, and providing technical guidance covering PE materials for site-specific activities. PE materials and samples are necessary for the quality assurance/quality control programs covering environmental data collection.

  14. NASA Lunar Sample Education Disk Program - Space Rocks for Classrooms, Museums, Science Centers and Libraries

    Science.gov (United States)

    Allen, J. S.

    2009-12-01

    NASA is eager for students and the public to experience lunar Apollo rocks and regolith soils first hand. Lunar samples embedded in plastic are available for educators to use in their classrooms, museums, science centers, and public libraries for education activities and display. The sample education disks are valuable tools for engaging students in the exploration of the Solar System. Scientific research conducted on the Apollo rocks has revealed the early history of our Earth-Moon system. The rocks help educators make the connections to this ancient history of our planet as well as connections to the basic lunar surface processes - impact and volcanism. With these samples educators in museums, science centers, libraries, and classrooms can help students and the public understand the key questions pursued by missions to the Moon. The Office of the Curator at Johnson Space Center is in the process of reorganizing and renewing the Lunar and Meteorite Sample Education Disk Program to increase reach, security and accountability. The new program expands the reach of these exciting extraterrestrial rocks through increased access to training and educator borrowing. One of the expanded opportunities is that trained certified educators from science centers, museums, and libraries may now borrow the extraterrestrial rock samples. Previously the loan program was only open to classroom educators, so the expansion will increase public access to the samples and allow educators to make the critical connections of the rocks to the exciting exploration missions taking place in our solar system. Each Lunar Disk contains three lunar rocks and three regolith soils embedded in Lucite. The anorthosite sample is a part of the magma ocean formed on the surface of Moon in the early melting period, the basalt is part of the extensive lunar mare lava flows, and the breccia sample is an important example of the violent impact history of the Moon.
The disks also include two regolith soils and

  15. Scinfi, a program to calculate the standardization curve in liquid scintillation counting

    International Nuclear Information System (INIS)

    Grau Carles, A.; Grau Malonda, A.

    1984-01-01

    A code, Scinfi, was developed, written in Basic, to compute the efficiency-quench standardization curve for any radionuclide. The program requires the standardization curve for 3H and the polynomial relations between counting efficiency and figure of merit for both 3H and the problem nuclide (e.g. 14C). The program is applied to the computation of the efficiency-quench standardization curve for 14C. Five different liquid scintillation spectrometers and two scintillator solutions have been checked. The computation results are compared with the experimental values obtained with a set of 14C standardized samples. (author)
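    The efficiency-tracing idea can be sketched as follows: invert the measured 3H efficiency through its efficiency-versus-figure-of-merit polynomial, then evaluate the second nuclide's polynomial at the same figure of merit. The polynomial coefficients below are toy values, not Scinfi's fitted curves:

```python
# Toy efficiency-vs-figure-of-merit polynomials (highest power first),
# monotone increasing on the working interval [0, 20].
EFF_H3 = [-0.001, 0.05, 0.0]
EFF_C14 = [-0.002, 0.09, 0.0]

def polyval(coeffs, x):
    # Horner evaluation, highest power first.
    y = 0.0
    for c in coeffs:
        y = y * x + c
    return y

def invert_monotone(f, target, lo, hi, tol=1e-9):
    # Bisection for f(m) = target on an interval where f increases.
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def c14_efficiency(h3_efficiency):
    # Recover the figure of merit from the 3H curve, then trace it
    # through the 14C curve.
    m = invert_monotone(lambda x: polyval(EFF_H3, x), h3_efficiency, 0.0, 20.0)
    return polyval(EFF_C14, m)
```

    With these toy coefficients a 3H efficiency of 0.40 corresponds to a figure of merit of 10 and hence a 14C efficiency of 0.70.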

  16. SCINFI, a program to calculate the standardization curve in liquid scintillation counting

    International Nuclear Information System (INIS)

    Grau Carles, A.; Grau Malonda, A.

    1984-01-01

    A code, SCINFI, was developed, written in BASIC, to compute the efficiency-quench standardization curve for any radionuclide. The program requires the standardization curve for 3H and the polynomial relations between counting efficiency and figure of merit for both 3H and the problem nuclide (e.g. 14C). The program is applied to the computation of the efficiency-quench standardization curve for 14C. Five different liquid scintillation spectrometers and two scintillator solutions have been checked. The computation results are compared with the experimental values obtained with a set of 14C standardized samples. (Author)

  17. A Monte Carlo program to calculate the exposure rate from airborne radioactive gases inside a nuclear reactor containment building.

    Science.gov (United States)

    Sherbini, S; Tamasanis, D; Sykes, J; Porter, S W

    1986-12-01

    A program was developed to calculate the exposure rate resulting from airborne gases inside a reactor containment building. The calculations were performed at the location of a wall-mounted area radiation monitor. The program uses Monte Carlo techniques and accounts for both the direct and scattered components of the radiation field at the detector. The scattered component was found to contribute about 30% of the total exposure rate at 50 keV and dropped to about 7% at 2000 keV. The results of the calculations were normalized to unit activity per unit volume of air in the containment. This allows the exposure rate readings of the area monitor to be used to estimate the airborne activity in containment in the early phases of an accident. Such estimates, coupled with containment leak rates, provide a method to obtain a release rate for use in offsite dose projection calculations.

  18. CDFMC: a program that calculates the fixed neutron source distribution for a BWR using Monte Carlo

    International Nuclear Information System (INIS)

    Gomez T, A.M.; Xolocostli M, J.V.; Palacios H, J.C.

    2006-01-01

    The three-dimensional neutron flux calculation using the synthesis method requires the determination of the neutron flux in two two-dimensional configurations as well as in a one-dimensional one. Most of the standard guides for the calculation of neutron flux or fluences in the vessel of a nuclear reactor place special emphasis on the appropriate calculation of the fixed neutron source that should be provided to the transport code used, in order to find sufficiently accurate flux values. The reactor core assembly configuration is based on X-Y geometry, but the problem considered is solved in R-θ geometry, so an appropriate mapping is necessary to find the source term associated with the R-θ intervals starting from a source distribution in rectangular coordinates. To develop the CDFMC computer program (Source Distribution Calculation using Monte Carlo), it was necessary to develop a mapping method independent of those found in the literature. The mesh-overlapping method used here is based on a technique of random point generation, commonly known as the Monte Carlo technique. Although the randomness of this technique implies statistical errors in the calculations, it is well known that increasing the number of randomly generated points used to measure an area, or some other quantity of interest, increases the precision of the method. In the particular case of the CDFMC computer program, the developed technique behaves well in general when a considerably large number of points is used (at least one hundred thousand), which keeps the calculation errors on the order of 1%. (Author)
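    The mesh-overlapping step reduces to estimating, for each R-θ interval, what fraction of a rectangular cell it covers. A sketch of that fraction estimated by scoring random points (the geometry and point counts are our own illustration of the idea, not CDFMC's implementation):

```python
import math, random

# Monte Carlo mesh overlap: fraction of a rectangular (x, y) cell lying
# inside an annular sector r0 < r < r1, t0 < theta < t1.
def overlap_fraction(cell, sector, n, rng):
    (x0, x1, y0, y1), (r0, r1, t0, t1) = cell, sector
    hits = 0
    for _ in range(n):
        x = rng.uniform(x0, x1)
        y = rng.uniform(y0, y1)
        if r0 <= math.hypot(x, y) <= r1 and t0 <= math.atan2(y, x) <= t1:
            hits += 1
    return hits / n

# Known-answer check: the unit square overlaps the quarter disk r < 1
# over pi/4 of its area.
frac = overlap_fraction((0.0, 1.0, 0.0, 1.0),
                        (0.0, 1.0, 0.0, math.pi / 2.0),
                        100000, random.Random(7))
```

    Consistent with the abstract, the statistical error shrinks as the number of scored points grows, and 10^5 points is comfortably within the quoted 1% regime for this geometry.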

  19. Electric field gradient calculation at atomic site of In implanted ZnO samples

    International Nuclear Information System (INIS)

    Abreu, Y.; Cruz, C. M.; Leyva, A.; Pinnera; Van Espen, P.; Perez, C.

    2011-01-01

    The electric field gradient (EFG) calculated for 111In→111Cd implanted ZnO samples is reported. The study was made for ideal hexagonal ZnO structures and super-cells considering the In implantation environment at the cation site, using the 'WIEN2k' code within the GGA(+U) approximation. The obtained EFG values are in good agreement with the experimental reports for ideal ZnO and 111In→111Cd implanted structures, measured by perturbed angular correlation (PAC) and Moessbauer spectroscopy. The attribution of substitutional incorporation of 111In at the ZnO cation site after annealing was confirmed. (Author)

  20. ACRO - a computer program for calculating organ doses from acute or chronic inhalation and ingestion of radionuclides

    International Nuclear Information System (INIS)

    Hirayama, Akio; Kishimoto, Yoichiro; Shinohara, Kunihiko.

    1978-01-01

    The computer program ACRO has been developed to calculate organ doses from acute or chronic inhalation and ingestion of radionuclides. The ICRP Task Group Lung Model (TGLM) was used for inhalation model, and a simple one-compartment model for ingestion. This program is written in FORTRAN IV, and can be executed with storage requirements of about 260 K bytes. (auth.)
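    The one-compartment ingestion model mentioned above can be sketched with a single effective removal constant combining radioactive decay and biological elimination. The half-lives below are placeholder values, not ICRP metabolic data:

```python
import math

# One-compartment sketch: retained activity decays with an effective
# constant lambda_eff = lambda_radioactive + lambda_biological, so the
# time-integrated (committed) activity over T days is
#   A0 / lambda_eff * (1 - exp(-lambda_eff * T)).
def committed_activity(A0, rad_half_life, bio_half_life, T):
    lam = math.log(2.0) / rad_half_life + math.log(2.0) / bio_half_life
    return A0 / lam * (1.0 - math.exp(-lam * T))
```

    An organ dose would then follow by multiplying the committed activity by an energy-deposition factor per decay, which is where model data such as the ICRP Task Group Lung Model enter for the inhalation pathway.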

  1. Preliminary Calculation of the Indicators of Sustainable Development for National Radioactive Waste Management Programs

    International Nuclear Information System (INIS)

    Cheong, Jae Hak; Park, Won Jae

    2003-01-01

    As a follow-up to Agenda 21's policy statement for safe management of radioactive waste adopted at the Rio Conference held in 1992, the UN invited the IAEA to develop and implement indicators of sustainable development for the management of radioactive waste. The IAEA finalized the indicators in 2002, and is planning to calculate the member states' values of the indicators in connection with operation of its Net-Enabled Waste Management Database system. In this paper, the basis for introducing the indicators into radioactive waste management is analyzed, and the calculation methodology and standard assessment procedure are briefly described. In addition, a series of innate limitations in the calculation and comparison of the indicators is analyzed. According to the proposed standard procedure, the indicators for a few major countries including Korea were calculated and compared, using each country's radioactive waste management framework and its practices. In addition, a series of measures for increasing the values of the indicators was derived so as to enhance the sustainability of the domestic radioactive waste management program.

  2. Estimation of Finite Population Mean in Multivariate Stratified Sampling under Cost Function Using Goal Programming

    Directory of Open Access Journals (Sweden)

    Atta Ullah

    2014-01-01

    In practical utilization of the stratified random sampling scheme, the investigator faces the problem of selecting a sample that maximizes the precision of a finite population mean under a cost constraint. The allocation of sample sizes becomes complicated when more than one characteristic is observed on each selected unit in a sample. In many real-life situations, a linear cost function of the sample size n_h is not a good approximation to the actual cost of a sample survey when the traveling cost between selected units in a stratum is significant. In this paper, the sample allocation problem in multivariate stratified random sampling with the proposed cost function is formulated as an integer nonlinear multiobjective mathematical programming problem. A solution procedure is proposed using an extended lexicographic goal programming approach. A numerical example is presented to illustrate the computational details and to compare the efficiency of the proposed compromise allocation.
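    As a baseline for the compromise allocation, the classical optimal allocation under a linear cost function (the case the paper argues is often unrealistic) has a closed form, n_h proportional to N_h·S_h/√c_h. A sketch for one characteristic, with invented stratum sizes, standard deviations and unit costs:

```python
import math

# Closed-form optimal allocation for one characteristic under a linear
# cost budget sum(c_h * n_h) = C: n_h proportional to N_h*S_h/sqrt(c_h),
# scaled so the whole budget is spent.
def optimal_allocation(N, S, c, C):
    num = [Nh * Sh / math.sqrt(ch) for Nh, Sh, ch in zip(N, S, c)]
    denom = sum(Nh * Sh * math.sqrt(ch) for Nh, Sh, ch in zip(N, S, c))
    return [C * w / denom for w in num]

n = optimal_allocation(N=[500, 300, 200], S=[10.0, 20.0, 40.0],
                       c=[4.0, 9.0, 16.0], C=1000.0)
```

    The paper's contribution replaces the linear cost with one that includes travel cost and balances several characteristics at once, which is why it resorts to goal programming rather than this closed form.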

  3. Sample size calculation in metabolic phenotyping studies.

    Science.gov (United States)

    Billoir, Elise; Navratil, Vincent; Blaise, Benjamin J

    2015-09-01

    The number of samples needed to identify significant effects is a key question in biomedical studies, with consequences on experimental designs, costs and potential discoveries. In metabolic phenotyping studies, sample size determination remains a complex step. This is due particularly to the multiple hypothesis-testing framework and the top-down hypothesis-free approach, with no a priori known metabolic target. Until now, there was no standard procedure available to address this purpose. In this review, we discuss sample size estimation procedures for metabolic phenotyping studies. We release an automated implementation of the Data-driven Sample size Determination (DSD) algorithm for MATLAB and GNU Octave. Original research concerning DSD was published elsewhere. DSD allows the determination of an optimized sample size in metabolic phenotyping studies. The procedure uses analytical data only from a small pilot cohort to generate an expanded data set. The statistical recoupling of variables procedure is used to identify metabolic variables, and their intensity distributions are estimated by kernel smoothing or log-normal density fitting. Statistically significant metabolic variations are evaluated using the Benjamini-Yekutieli correction and processed for data sets of various sizes. Optimal sample size determination is achieved in a context of biomarker discovery (at least one statistically significant variation) or metabolic exploration (a maximum of statistically significant variations). DSD toolbox is encoded in MATLAB R2008A (Mathworks, Natick, MA) for kernel and log-normal estimates, and in GNU Octave for log-normal estimates (kernel density estimates are not robust enough in GNU Octave). It is available at http://www.prabi.fr/redmine/projects/dsd/repository, with a tutorial at http://www.prabi.fr/redmine/projects/dsd/wiki. © The Author 2015. Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  4. Hardware architecture for projective model calculation and false match refining using random sample consensus algorithm

    Science.gov (United States)

    Azimi, Ehsan; Behrad, Alireza; Ghaznavi-Ghoushchi, Mohammad Bagher; Shanbehzadeh, Jamshid

    2016-11-01

    The projective model is an important mapping function for the calculation of global transformation between two images. However, its hardware implementation is challenging because of a large number of coefficients with different required precisions for fixed point representation. A VLSI hardware architecture is proposed for the calculation of a global projective model between input and reference images and refining false matches using random sample consensus (RANSAC) algorithm. To make the hardware implementation feasible, it is proved that the calculation of the projective model can be divided into four submodels comprising two translations, an affine model and a simpler projective mapping. This approach makes the hardware implementation feasible and considerably reduces the required number of bits for fixed point representation of model coefficients and intermediate variables. The proposed hardware architecture for the calculation of a global projective model using the RANSAC algorithm was implemented using Verilog hardware description language and the functionality of the design was validated through several experiments. The proposed architecture was synthesized by using an application-specific integrated circuit digital design flow utilizing 180-nm CMOS technology as well as a Virtex-6 field programmable gate array. Experimental results confirm the efficiency of the proposed hardware architecture in comparison with software implementation.
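    The RANSAC loop that the architecture implements in hardware has a simple software shape: fit a candidate model to a minimal random sample, count inliers within a residual threshold, and keep the largest consensus set. A line-fitting sketch (a stand-in for the projective estimation, which needs four point correspondences per minimal sample instead of two):

```python
import random

def ransac_line(points, iters, threshold, rng):
    best_model, best_inliers = None, []
    for _ in range(iters):
        # 1. Fit a candidate to a minimal random sample of two points.
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        # 2. Inliers are points within the residual threshold.
        inliers = [(x, y) for x, y in points if abs(y - (a * x + b)) < threshold]
        # 3. Keep the model with the largest consensus set.
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (a, b), inliers
    return best_model, best_inliers

# Toy data: 20 points near y = 2x + 1 plus 8 gross outliers.
rng = random.Random(3)
pts = [(float(x), 2.0 * x + 1.0 + rng.gauss(0.0, 0.05)) for x in range(20)]
pts += [(rng.uniform(0.0, 19.0), rng.uniform(-10.0, 50.0)) for _ in range(8)]
model, inliers = ransac_line(pts, iters=200, threshold=0.3, rng=rng)
```

    The hardware version pipelines exactly these stages; splitting the projective model into translations, an affine part and a simpler projective part mainly tames the fixed-point precision of step 1.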

  5. ASYMPT - a program to calculate asymptotics of hyperspherical potential curves and adiabatic potentials

    International Nuclear Information System (INIS)

    Abrashkevich, A.G.; Puzynin, I.V.; Vinitskij, S.I.

    1997-01-01

    A FORTRAN 77 program is presented which calculates asymptotics of potential curves and adiabatic potentials with an accuracy of O(ρ^-2) in the framework of the hyperspherical adiabatic (HSA) approach. It is shown that matrix elements of the equivalent operator corresponding to the perturbation ρ^-2 have a simple form in the basis of the Coulomb parabolic functions in the body-fixed frame and can be easily computed for high values of total orbital momentum and threshold number. The second-order corrections to the adiabatic curves are obtained as the solutions of the corresponding secular equation. The asymptotic potentials obtained can be used for the calculation of the energy levels and radial wave functions of two-electron systems in the adiabatic and coupled-channel approximations of the HSA approach

  6. Analytical Chemistry Division's sample transaction system

    International Nuclear Information System (INIS)

    Stanton, J.S.; Tilson, P.A.

    1980-10-01

    The Analytical Chemistry Division uses the DECsystem-10 computer for a wide range of tasks: sample management, timekeeping, quality assurance, and data calculation. This document describes the features and operating characteristics of many of the computer programs used by the Division. The descriptions are divided into chapters which cover all of the information about one aspect of the Analytical Chemistry Division's computer processing

  7. Development of Bi-phase sodium-oxygen-hydrogen chemical equilibrium calculation program (BISHOP) using Gibbs free energy minimization method

    International Nuclear Information System (INIS)

    Okano, Yasushi

    1999-08-01

    In order to analyze the reaction heat and compounds due to sodium combustion, a multiphase chemical equilibrium calculation program for chemical reactions among sodium, oxygen and hydrogen was developed in this study. The program is named BISHOP, which denotes 'Bi-Phase Sodium-Oxygen-Hydrogen Chemical Equilibrium Calculation Program'. The Gibbs free energy minimization method is used because of its special merits: chemical species can easily be added and changed, and the method can treat many thermochemical reaction systems in general, not only those at constant temperature and pressure. Three new methods are developed for solving the multiphase sodium reaction system in this study: one constructs the equation system by simplifying the phases, another extends the Gibbs free energy minimization method to multiphase systems, and the last establishes an effective search method for the minimum value. Chemical compounds formed by the combustion of sodium in air are calculated using BISHOP. The calculated temperature and moisture conditions under which sodium oxide and hydroxide are formed qualitatively agree with the experiments. Decomposition of sodium hydride is calculated by the program. The estimated relationship between the decomposition temperature and pressure closely agrees with the well-known experimental equation of Roy and Rodgers. It is concluded that BISHOP can be used to evaluate the combustion and decomposition behaviors of sodium and its compounds. The hydrogen formation condition of the dump-tank room during the sodium leak event of an FBR is quantitatively evaluated by BISHOP. It can be concluded that keeping the temperature of the dump-tank room low is an effective way to suppress the formation of hydrogen.
If the lower flammability limit of 4.1 mol% is chosen as the hydrogen concentration criterion, the formation reaction of sodium hydride from sodium and hydrogen is facilitated below a room temperature of 800 K, and concentration of hydrogen
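    The Gibbs-minimization core can be illustrated on the smallest possible case, a single ideal-mixture reaction A ⇌ B, where minimizing G directly reproduces the law of mass action. The chemical potentials below are arbitrary assumptions; BISHOP's multiphase Na-O-H system adds element-balance constraints and phase bookkeeping on top of this idea:

```python
import math

# Toy Gibbs minimization (not BISHOP's multiphase solver): equilibrium
# extent x of A <-> B in an ideal mixture, found by minimizing
# G(x) = (1-x)*mu_A + x*mu_B + RT*(x ln x + (1-x) ln(1-x)).
RT = 8.314 * 298.15            # J/mol at 298.15 K
MU_A, MU_B = 0.0, -2000.0      # assumed standard chemical potentials

def gibbs(x):
    return ((1.0 - x) * MU_A + x * MU_B
            + RT * (x * math.log(x) + (1.0 - x) * math.log(1.0 - x)))

def golden_min(f, lo, hi, tol=1e-10):
    # Golden-section search for the minimum of a unimodal function.
    invphi = (math.sqrt(5.0) - 1.0) / 2.0
    a, b = lo, hi
    while b - a > tol:
        c = b - invphi * (b - a)
        d = a + invphi * (b - a)
        if f(c) < f(d):
            b = d
        else:
            a = c
    return 0.5 * (a + b)

x_eq = golden_min(gibbs, 1e-9, 1.0 - 1e-9)
K = math.exp((MU_A - MU_B) / RT)   # analytic check: x/(1-x) = K
```

    The minimizer lands where dG/dx = 0, i.e. x/(1-x) = exp(-(mu_B - mu_A)/RT), which is the equilibrium constant, confirming that direct minimization and the mass-action formulation agree.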

  8. Brine Sampling and Evaluation Program, 1991 report

    Energy Technology Data Exchange (ETDEWEB)

    Deal, D.E.; Abitz, R.J.; Myers, J.; Martin, M.L.; Milligan, D.J.; Sobocinski, R.W.; Lipponer, P.P.J. [International Technology Corp., Albuquerque, NM (United States); Belski, D.S. [Westinghouse Electric Corp., Carlsbad, NM (United States). Waste Isolation Div.

    1993-09-01

    The data presented in this report are the result of Brine Sampling and Evaluation Program (BSEP) activities at the Waste Isolation Pilot Plant (WIPP) during 1991. These BSEP activities document and investigate the origins, hydraulic characteristics, extent, and composition of brine occurrences in the Permian Salado Formation and seepage of that brine into the excavations at the WIPP. When excavations began at the WIPP in 1982, small brine seepages (weeps) were observed on the walls. Brine studies began as part of the Site Validation Program and were formalized as a program in its own right in 1985. During nine years of observations (1982--1991), evidence has mounted that the amount of brine seeping into the WIPP excavations is limited, local, and only a small fraction of that required to produce hydrogen gas by corroding the metal in the waste drums and waste inventory. The data through 1990 are discussed in detail and summarized by Deal and others (1991). The data presented in this report describe progress made during calendar year 1991 and focus on four major areas: (1) quantification of the amount of brine seeping across vertical surfaces in the WIPP excavations (brine "weeps"); (2) monitoring of brine inflow, e.g., measuring brines recovered from holes drilled downward from the underground drifts (downholes), upward from the underground drifts (upholes), and from subhorizontal holes; (3) further characterization of brine geochemistry; and (4) preliminary quantification of the amount of brine that might be released by squeezing the underconsolidated clays present in the Salado Formation.

  9. Software documentation and user's manual for fish-impingement sampling design and estimation method computer programs

    International Nuclear Information System (INIS)

    Murarka, I.P.; Bodeau, D.J.

    1977-11-01

    This report contains a description of three computer programs that implement the theory of sampling designs and the methods for estimating fish-impingement at the cooling-water intakes of nuclear power plants as described in companion report ANL/ES-60. Complete FORTRAN listings of these programs, named SAMPLE, ESTIMA, and SIZECO, are given and augmented with examples of how they are used

  10. Factors affecting volume calculation with single photon emission tomography (SPECT) method

    International Nuclear Information System (INIS)

    Liu, T.H.; Lee, K.H.; Chen, D.C.P.; Ballard, S.; Siegel, M.E.

    1985-01-01

    Several factors may influence the calculation of absolute volumes (VL) from SPECT images. The effect of these factors must be established to optimize the technique. The authors investigated the effects of the following on VL calculations: percentage of background (BG) subtraction, reconstruction filters, sample activity, angular sampling and edge detection methods. Transaxial images of a liver-trunk phantom filled with Tc-99m from 1 to 3 μCi/cc were obtained in a 64x64 matrix with a Siemens Rota Camera and MDS computer. Different reconstruction filters, including Hanning 20, 32, 64 and Butterworth 20, 32, were used. Angular samplings were performed in 3 and 6 degree increments. ROIs were drawn manually and with an automatic edge detection program around the image after BG subtraction. VLs were calculated by multiplying the number of pixels within the ROI by the slice thickness and the x- and y-calibrations of each pixel. A slice thickness of one or two pixels was applied in the calculation. An inverse correlation was found between the calculated VL and the % of BG subtraction (r=0.99 for 1, 2, 3 μCi/cc activity). Based on the authors' linear regression analysis, the correct liver VL was measured with about 53% BG subtraction. The reconstruction filters, slice thickness and angular sampling had only minor effects on the calculated phantom volumes. Detection of the ROI automatically by the computer was not as accurate as the manual method. The authors conclude that the % of BG subtraction appears to be the most important factor affecting the VL calculation. With good quality control and appropriate reconstruction factors, correct VL calculations can be achieved with SPECT
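The volume calculation the abstract describes (pixels within the ROI times slice thickness times the per-pixel calibrations) reduces to a simple product. The sketch below uses made-up pixel counts and calibrations purely for illustration; it assumes background subtraction has already been applied before the ROIs were drawn, as in the study.

```python
def roi_volume_ml(roi_pixel_counts, slice_thickness_cm, pixel_x_cm, pixel_y_cm):
    """Volume (cm^3 = mL) from SPECT ROIs: summed ROI pixels across slices
    multiplied by the voxel dimensions."""
    return sum(roi_pixel_counts) * slice_thickness_cm * pixel_x_cm * pixel_y_cm

# Hypothetical example: three transaxial slices with 100, 120 and 80 ROI pixels,
# 0.6 cm slice thickness and 0.6 cm x 0.6 cm pixel calibration.
volume = roi_volume_ml([100, 120, 80], 0.6, 0.6, 0.6)
```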

  11. A Sample Calculation of Tritium Production and Distribution at VHTR by using TRITGO Code

    International Nuclear Information System (INIS)

    Park, Ik Kyu; Kim, D. H.; Lee, W. J.

    2007-03-01

    The TRITGO code was developed for estimating the tritium production and distribution of high temperature gas cooled reactors (HTGR), especially GTMHR350 by General Atomics. In this study, the tritium production and distribution of NHDD was analyzed using the TRITGO code. The TRITGO code was improved with a simple method to calculate the tritium amount in the IS loop. The improved TRITGO input for the sample calculation was prepared on the basis of GTMHR600, because the NHDD has been designed with reference to GTMHR600. The GTMHR350 input related to the tritium distribution was used directly. The calculated tritium activity in the hydrogen produced in the IS loop is 0.56 Bq/g-H2. This is a very satisfying result, considering that the tritium activity limit of the Japanese regulatory guide is 5.6 Bq/g-H2. The basic system to analyze the tritium production and distribution using TRITGO was successfully constructed. However, some uncertainties remain in the tritium distribution models and the suggested method for the IS loop, and the current input was not for NHDD but for GTMHR600. A qualitative analysis of the distribution model and the IS loop model, and a quantitative analysis of the input, should be done in the future

  12. A Sample Calculation of Tritium Production and Distribution at VHTR by using TRITGO Code

    Energy Technology Data Exchange (ETDEWEB)

    Park, Ik Kyu; Kim, D. H.; Lee, W. J

    2007-03-15

    The TRITGO code was developed for estimating the tritium production and distribution of high temperature gas cooled reactors (HTGR), especially GTMHR350 by General Atomics. In this study, the tritium production and distribution of NHDD was analyzed using the TRITGO code. The TRITGO code was improved with a simple method to calculate the tritium amount in the IS loop. The improved TRITGO input for the sample calculation was prepared on the basis of GTMHR600, because the NHDD has been designed with reference to GTMHR600. The GTMHR350 input related to the tritium distribution was used directly. The calculated tritium activity in the hydrogen produced in the IS loop is 0.56 Bq/g-H2. This is a very satisfying result, considering that the tritium activity limit of the Japanese regulatory guide is 5.6 Bq/g-H2. The basic system to analyze the tritium production and distribution using TRITGO was successfully constructed. However, some uncertainties remain in the tritium distribution models and the suggested method for the IS loop, and the current input was not for NHDD but for GTMHR600. A qualitative analysis of the distribution model and the IS loop model, and a quantitative analysis of the input, should be done in the future.

  13. Daylight calculations in practice

    DEFF Research Database (Denmark)

    Iversen, Anne; Roy, Nicolas; Hvass, Mette

    The aim of the project was to obtain a better understanding of what daylight calculations show and to gain knowledge of how the different daylight simulation programs perform compared with each other. Experience has shown that results for the same room obtained from two daylight simulation programs can differ. This can be due to restrictions in the program itself and/or to the skills of the persons setting up the models. This is crucial, as daylight calculations are used to document that the demands and recommendations on daylight levels outlined by building authorities are met. Furthermore, the aim was to provide knowledge of how to build up the 3D models that were...

  14. Burn-Up Calculation of the Fuel Element in RSG-GAS Reactor using Program Package BATAN-FUEL

    International Nuclear Information System (INIS)

    Mochamad Imron; Ariyawan Sunardi

    2012-01-01

    Calculation of the burn-up distribution of 2.96 g U/cc silicide fuel elements at the 78th reactor cycle using the computer program BATAN-FUEL has been done. The calculation uses inputs such as generated power, operation time and a 5/1 core assumption model. Using this calculation model, the burn-up of all fuel elements in the reactor core can be calculated. From the calculation it is obtained that the minimum burn-up of 6.82% is for RI-50 at position A-9, while the maximum burn-up of 57.57% is for RI-467 at position B-7. Based on the safety criteria specified in the Safety Analysis Report (SAR) of the RSG-GAS reactor, the maximum fuel burn-up allowed is 59.59%. It can then be concluded that the fuel element placement pattern in the reactor core is proper and optimal. (author)

  15. AqSo_NaCl: Computer program to calculate p-T-V-x properties in the H2O-NaCl fluid system applied to fluid inclusion research and pore fluid calculation

    Science.gov (United States)

    Bakker, Ronald J.

    2018-06-01

    The program AqSo_NaCl has been developed to calculate pressure - molar volume - temperature - composition (p-V-T-x) properties, enthalpy, and heat capacity of the binary H2O-NaCl system. The algorithms are designed in BASIC within the Xojo programming environment, and can be operated as a stand-alone project on Macintosh-, Windows-, and Unix-based operating systems. A series of ten self-instructive interfaces (modules) has been developed to calculate fluid inclusion properties and pore fluid properties. The modules may be used to calculate properties of pure NaCl, the halite-liquidus, the halite-vapourus, dew-point and bubble-point curves (liquid-vapour), the critical point, and SLV (solid-liquid-vapour) curves at temperatures above 0.1 °C (with halite) and below 0.1 °C (with ice or hydrohalite). Isochores of homogeneous fluids and unmixed fluids in a closed system can be calculated and exported to a .txt file. Isochores calculated for fluid inclusions can be corrected according to the volumetric properties of quartz. Microthermometric data, i.e. dissolution temperatures and homogenization temperatures, can be used to calculate bulk fluid properties of fluid inclusions. Alternatively, in the absence of a total homogenization temperature, the volume fraction of the liquid phase in fluid inclusions can be used to obtain bulk properties.

  16. Calculating Cumulative Binomial-Distribution Probabilities

    Science.gov (United States)

    Scheuer, Ernest M.; Bowerman, Paul N.

    1989-01-01

    Cumulative-binomial computer program, CUMBIN, one of a set of three programs, calculates cumulative binomial probability distributions for arbitrary inputs. CUMBIN, NEWTONP (NPO-17556), and CROSSER (NPO-17557), used independently of one another. Reliabilities and availabilities of k-out-of-n systems analyzed. Used by statisticians and users of statistical procedures, test planners, designers, and numerical analysts. Used for calculations of reliability and availability. Program written in C.
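A cumulative binomial probability of the kind CUMBIN computes, and its use for k-out-of-n system reliability, can be sketched in a few lines. This is an illustrative re-implementation in Python, not the original C program.

```python
from math import comb

def binom_cdf(k, n, p):
    """Cumulative binomial probability P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p ** i * (1.0 - p) ** (n - i) for i in range(k + 1))

def k_out_of_n_reliability(k, n, p):
    """Reliability of a k-out-of-n system: P(at least k of n components work),
    each component working independently with probability p."""
    return 1.0 - binom_cdf(k - 1, n, p)
```

For example, a 2-out-of-3 system with component reliability 0.9 survives with probability 1 - P(X <= 1) = 0.972.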

  17. XRAY applied program package for calculation of electron-photon fields in the energy range of 1-1000 keV

    International Nuclear Information System (INIS)

    Lappa, A.V.; Khadyeva, Z.M.; Burmistrov, D.S.; Vasil'ev, O.N.

    1990-01-01

    The applied program package XRAY is intended for calculating the linear and fluctuation characteristics of photon and electron radiation fields in heterogeneous media within the 1-1000 keV energy range. The XRAY program package consists of modules written in FORTRAN-IV and data files. 9 refs

  18. Vectorization and parallelization of Monte-Carlo programs for calculation of radiation transport

    International Nuclear Information System (INIS)

    Seidel, R.

    1995-01-01

    The versatile MCNP-3B Monte-Carlo code, written in FORTRAN77 for simulation of the radiation transport of neutral particles, has been subjected to vectorization and parallelization of essential parts, without touching its versatility. The vectorization is not dependent on a specific computer. Several sample tasks have been selected in order to test the vectorized MCNP-3B code against the scalar MCNP-3B code. The samples are representative examples of the 3-D calculations to be performed for simulation of radiation transport in neutron and reactor physics: (1) 4π neutron detector; (2) high-energy calorimeter; (3) PROTEUS benchmark (conversion rates and neutron multiplication factors for the HCLWR (High Conversion Light Water Reactor)). (orig./HP) [de

  19. VMD-SS: A graphical user interface plug-in to calculate the protein secondary structure in VMD program.

    Science.gov (United States)

    Yahyavi, Masoumeh; Falsafi-Zadeh, Sajad; Karimi, Zahra; Kalatarian, Giti; Galehdari, Hamid

    2014-01-01

    The investigation of the types of secondary structure (SS) of a protein is important. The evolution of secondary structures during molecular dynamics simulations is a useful parameter for analyzing protein structures. Therefore, it is of interest to describe VMD-SS (a software program) for the identification of secondary structure elements and their trajectories during simulation, for known structures available at the Protein Data Bank (PDB). The program helps to calculate (1) percentage SS, (2) SS occurrence for each residue, (3) percentage SS during simulation, and (4) percentage of residues in all SS types during simulation. The VMD-SS plug-in was designed using Tcl scripting and STRIDE to calculate secondary structure features. The database is available for free at http://science.scu.ac.ir/HomePage.aspx?TabID=13755.

  20. Study on the Application of the Combination of TMD Simulation and Umbrella Sampling in PMF Calculation for Molecular Conformational Transitions

    Directory of Open Access Journals (Sweden)

    Qing Wang

    2016-05-01

    Full Text Available Free energy calculations of the potential of mean force (PMF), based on the combination of targeted molecular dynamics (TMD) simulations and umbrella sampling as a function of physical coordinates, have been applied to explore the detailed pathways and the corresponding free energy profiles of the conformational transition processes of the butane molecule and the 35-residue villin headpiece subdomain (HP35). Accurate PMF profiles describing the dihedral rotation of butane under both the dihedral-rotation coordinate and the root mean square deviation (RMSD) coordinate were obtained from different umbrella samplings based on the same TMD simulations. The initial structures for the umbrella samplings can be conveniently selected from the TMD trajectories. In the application of this computational method to the unfolding process of the HP35 protein, the PMF calculation along the radius of gyration (Rg) coordinate shows a gradual increase in free energy of about 1 kcal/mol, with energy fluctuations. The conformational transition during the unfolding of HP35 shows that the spherical structure extends and the middle α-helix unfolds first, followed by the unfolding of the other α-helices. The computational method for PMF calculations based on the combination of TMD simulations and umbrella sampling provides a valuable strategy for investigating detailed conformational transition pathways of other allosteric processes.
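The unbiasing step that underlies such umbrella-sampling PMF calculations can be illustrated deterministically: for a Boltzmann distribution under a harmonic bias w(q), F(q) = -kT ln p_b(q) - w(q) recovers the underlying potential up to an additive constant. The toy potential and bias below are illustrative assumptions; a single window is evaluated exactly on a grid rather than sampled, whereas a real calculation histograms MD samples and combines many windows (e.g. with WHAM).

```python
import math

kT = 1.0

def U(q):
    """Toy 'true' potential whose PMF we want to recover (illustrative)."""
    return 2.0 * q * q

def bias(q, q0, k=10.0):
    """Harmonic umbrella bias centred at q0."""
    return 0.5 * k * (q - q0) ** 2

q0 = 0.5
qs = [i * 0.01 for i in range(-200, 201)]

# Exact biased Boltzmann distribution on the grid (no sampling, for illustration).
w = [math.exp(-(U(q) + bias(q, q0)) / kT) for q in qs]
Z = sum(w)
p_b = [v / Z for v in w]

# Unbias: F(q) = -kT ln p_b(q) - w_bias(q) equals U(q) up to an additive constant.
F = [-kT * math.log(p) - bias(q, q0) for q, p in zip(qs, p_b)]
shift = F[200] - U(qs[200])   # index 200 is q = 0
```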

  1. [Development and effectiveness of a drug dosage calculation training program using cognitive loading theory based on smartphone application].

    Science.gov (United States)

    Kim, Myoung Soo; Park, Jung Ha; Park, Kyung Yeon

    2012-10-01

    This study was done to develop and evaluate a drug dosage calculation training program using cognitive loading theory, based on a smartphone application. Calculation ability, dosage-calculation-related self-efficacy and anxiety were measured. A nonequivalent control group design was used. A smartphone application and a handout for self-study were developed and administered to the experimental group, and only the handout was provided for the control group. The intervention period was 4 weeks. Data were analyzed using descriptive analysis, χ²-test, t-test, and ANCOVA with SPSS 18.0. The experimental group showed higher 'self-efficacy for drug dosage calculation' than the control group (t=3.82). The results indicate that the drug dosage calculation training program using a smartphone application is effective in improving dosage-calculation-related self-efficacy and calculation ability. Further study should be done to develop additional interventions for reducing anxiety.

  2. EBRPOCO - a program to calculate detailed contributions of power reactivity components of EBR-II

    International Nuclear Information System (INIS)

    Meneghetti, D.; Kucera, D.A.

    1981-01-01

    The EBRPOCO program has been developed to facilitate the calculation of the power coefficients of reactivity of EBR-II loadings. The program enables contributions of various components of the power coefficient to be delineated axially for every subassembly. The program computes the reactivity contributions to the power coefficient resulting from: density reduction of sodium coolant due to temperature; displacement of sodium coolant by thermal expansion of cladding, structural rods, subassembly cans, and lower and upper axial reflectors; density reduction of these steel components due to temperature; displacement of bond-sodium (if present) in gaps by differential thermal expansion of fuel and cladding; density reduction of bond-sodium (if present) in gaps due to temperature; and free axial expansion of fuel if unrestricted by cladding, or restricted axial expansion of fuel determined by axial expansion of cladding. Isotopic spatial contributions to the Doppler component may also be obtained. (orig.) [de

  3. Brine Sampling and Evaluation Program, 1990 report

    Energy Technology Data Exchange (ETDEWEB)

    Deal, D.E.; Abitz, R.J.; Myers, J.; Case, J.B.; Martin, M.L.; Roggenthen, W.M. [International Technology Corp., Albuquerque, NM (United States); Belski, D.S. [Westinghouse Electric Corp., Carlsbad, NM (United States). Waste Isolation Div.

    1991-08-01

    The data presented in this report are the result of Brine Sampling and Evaluation Program (BSEP) activities at the Waste Isolation Pilot Plant (WIPP) during 1990. When excavations began in 1982, small brine seepages (weeps) were observed on the walls. These brine occurrences were initially described as part of the Site Validation Program. Brine studies were formalized in 1985. The BSEP activities document and investigate the origins, hydraulic characteristics, extent, and composition of brine occurrences in the Permian Salado Formation and seepage of that brine into the excavations at the WIPP. The brine chemistry is important because it assists in understanding the origin of the brine and because it may affect possible chemical reactions in the buried waste after sealing the repository. The volume of brine and the hydrologic system that drives the brine seepage also need to be understood to assess the long-term performance of the repository. After more than eight years of observations (1982--1990), no credible evidence exists to indicate that enough naturally occurring brine will seep into the WIPP excavations to be of practical concern. The detailed observations and analyses summarized herein and in previous BSEP reports confirm the evidence apparent during casual visits to the underground workings -- that the excavations are remarkably dry.

  4. Brine Sampling and Evaluation Program, 1990 report

    International Nuclear Information System (INIS)

    Deal, D.E.; Abitz, R.J.; Myers, J.; Case, J.B.; Martin, M.L.; Roggenthen, W.M.; Belski, D.S.

    1991-08-01

    The data presented in this report are the result of Brine Sampling and Evaluation Program (BSEP) activities at the Waste Isolation Pilot Plant (WIPP) during 1990. When excavations began in 1982, small brine seepages (weeps) were observed on the walls. These brine occurrences were initially described as part of the Site Validation Program. Brine studies were formalized in 1985. The BSEP activities document and investigate the origins, hydraulic characteristics, extent, and composition of brine occurrences in the Permian Salado Formation and seepage of that brine into the excavations at the WIPP. The brine chemistry is important because it assists in understanding the origin of the brine and because it may affect possible chemical reactions in the buried waste after sealing the repository. The volume of brine and the hydrologic system that drives the brine seepage also need to be understood to assess the long-term performance of the repository. After more than eight years of observations (1982--1990), no credible evidence exists to indicate that enough naturally occurring brine will seep into the WIPP excavations to be of practical concern. The detailed observations and analyses summarized herein and in previous BSEP reports confirm the evidence apparent during casual visits to the underground workings -- that the excavations are remarkably dry

  5. Empirical equations of the solvent extraction of the energetic inputs, uranium and plutonium, calculated by using the program Microsoft Excel

    International Nuclear Information System (INIS)

    Bento, Dercio Lopes

    2006-01-01

    PUREX is one of the purification processes for irradiated nuclear fuel. In its flowsheet, uranium and plutonium are extracted in several stages from the aqueous phase obtained by dissolving the fuel element, using an organic solvent; in a subsequent stripping step, U and Pu are transferred back to the aqueous phase. It is therefore fundamental to know the distribution coefficient (dS), at temperature (tc), of the substances between the two immiscible phases, in order to better calculate a suitable flowsheet. A mathematical model was elaborated, based on experimental data, for the calculation of dS, and applied to a reference band of substance concentrations in the aqueous phase (xS) and the organic phase (yS). Using the program Excel, the empirical equations were fitted by the least-squares (root-mean-square) method. The relative deviations between the calculated values and the experimental ones serve as the quality standard.
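The least-squares fitting of empirical equations mentioned in the abstract can be sketched with the ordinary normal equations. The data below are synthetic and illustrative, not the actual PUREX distribution-coefficient correlation.

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a + b*x via the normal equations."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    return a, b

# Hypothetical concentration -> distribution-coefficient pairs (xS -> dS):
a, b = fit_line([0.0, 1.0, 2.0, 3.0], [2.0, 5.0, 8.0, 11.0])
```

The same normal-equations arithmetic is what a spreadsheet trendline performs; higher-order empirical forms would extend the design matrix accordingly.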

  6. Program TOTELA calculating basic cross sections in intermediate energy region by using systematics

    International Nuclear Information System (INIS)

    Fukahori, Tokio; Niita, Koji

    2000-01-01

    Program TOTELA can calculate neutron- and proton-induced total, elastic scattering and reaction cross sections, and angular distributions of elastic scattering, in the intermediate energy region from 20 MeV to 3 GeV. TOTELA adopts systematics modified from those of Pearlstein to better reproduce the experimental data and the LA150 evaluation. The calculated results compared with experimental data and the LA150 evaluation are shown in figures; the TOTELA results reproduce those data reasonably well. TOTELA was developed to fill the lack of experimental data for the above quantities in the intermediate energy region and for use in producing the JENDL High Energy File. In cases where there are no experimental data for the above quantities, optical model parameters can be fitted using TOTELA results. From this point of view, it is also useful to compare optical model calculations using RIPL with TOTELA results, in order to verify the parameter quality. The input data of TOTELA are only the atomic and mass numbers of the incident particle and target nuclide, and input/output file names. The output of a TOTELA calculation is in the ENDF-6 format used in intermediate energy nuclear data files. It is easy for users to modify the main routine. Details are written in each subroutine and the main routine

  7. Forward flux sampling calculation of homogeneous nucleation rates from aqueous NaCl solutions.

    Science.gov (United States)

    Jiang, Hao; Haji-Akbari, Amir; Debenedetti, Pablo G; Panagiotopoulos, Athanassios Z

    2018-01-28

    We used molecular dynamics simulations and the path sampling technique known as forward flux sampling to study homogeneous nucleation of NaCl crystals from supersaturated aqueous solutions at 298 K and 1 bar. Nucleation rates were obtained for a range of salt concentrations for the Joung-Cheatham NaCl force field combined with the Extended Simple Point Charge (SPC/E) water model. The calculated nucleation rates are significantly lower than the available experimental measurements. The estimates for the nucleation rates in this work do not rely on classical nucleation theory, but the pathways observed in the simulations suggest that the nucleation process is better described by classical nucleation theory than by an alternative interpretation based on Ostwald's step rule, in contrast to some prior simulations of related models. In addition to the size of the NaCl nucleus, we find that the crystallinity of a nascent cluster plays an important role in the nucleation process. Nuclei with high crystallinity were found to have higher growth probability and longer lifetimes, possibly because they are less exposed to hydration water.
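The forward flux sampling rate estimate (the flux through the first interface times the product of interface-to-interface crossing probabilities) can be illustrated on a toy 1D double-well system. Everything below — the potential, the Brownian dynamics, the parameters and the interface placement — is an illustrative assumption, not the NaCl/SPC/E setup of the paper.

```python
import math, random

random.seed(42)
beta, dt, gamma = 3.0, 1e-3, 1.0   # inverse temperature, time step, friction

def force(x):
    # V(x) = (x**2 - 1)**2: a double well with minima at x = -1 (basin A) and x = +1 (basin B)
    return -4.0 * x * (x * x - 1.0)

def step(x):
    """One overdamped-Langevin (Brownian dynamics) step."""
    noise = math.sqrt(2.0 * dt / (beta * gamma)) * random.gauss(0.0, 1.0)
    return x + force(x) * dt / gamma + noise

lambdas = [-0.6, -0.2, 0.2, 0.6]   # interfaces between basin A and basin B
x_A = -0.9                          # boundary defining basin A

# Stage 1: flux through the first interface, from a long trajectory around A.
x, t, crossings, configs = -1.0, 0.0, 0, []
armed = True
for _ in range(200_000):
    x = step(x)
    t += dt
    if armed and x > lambdas[0]:
        crossings += 1
        configs.append(x)
        armed = False        # count each excursion once
    elif x < x_A:
        armed = True         # re-arm after returning to A
flux0 = crossings / t

# Stage 2: conditional crossing probabilities P(lambda_{i+1} | lambda_i)
# by shooting trial trajectories from stored crossing configurations.
probs = []
for i in range(len(lambdas) - 1):
    if not configs:
        break
    successes, new_configs, trials = 0, [], 200
    for _ in range(trials):
        x = random.choice(configs)
        while x_A < x < lambdas[i + 1]:   # run until next interface or back to A
            x = step(x)
        if x >= lambdas[i + 1]:
            successes += 1
            new_configs.append(x)
    probs.append(successes / trials)
    configs = new_configs

# FFS rate estimate: initial flux times the product of crossing probabilities.
rate = flux0
for p in probs:
    rate *= p
```

Note that, as in the paper, no classical-nucleation-theory assumption enters: the rate follows directly from the flux and the staged crossing statistics.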

  8. DIFMIG - A computer program for calculation of diffusive migration through multi-barrier systems

    International Nuclear Information System (INIS)

    Bo, P.; Carlsen, L.

    1981-11-01

    The FORTRAN IV program DIFMIG calculates one-dimensionally (i.e., in a column) the diffusive migration of single substances through arbitrary multibarrier systems. Time-dependent changes in concentration other than dispersion/diffusion (e.g., slow dissolution of a compound from a repository, radioactive decay, and/or build-up of daughter products), and possible time-dependent variations in the effective dispersion, are taken into account. The diffusion equation is solved by an implicit finite-difference method, the resulting tridiagonal matrix equation being solved by standard methods. (author)
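The implicit finite-difference scheme with a tridiagonal solve that the abstract describes can be sketched as a minimal backward-Euler diffusion step using the Thomas algorithm. The grid, boundary conditions and parameters below are illustrative, and none of DIFMIG's source-term physics (dissolution, decay, daughter build-up) is included.

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system; a = sub-, b = main, c = super-diagonal, d = rhs."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def implicit_diffusion_step(u, D, dx, dt):
    """One backward-Euler (implicit) step of du/dt = D d2u/dx2, with u = 0 held
    at both ends (Dirichlet boundaries)."""
    n = len(u)
    r = D * dt / dx ** 2
    a = [-r] * n
    b = [1.0 + 2.0 * r] * n
    c = [-r] * n
    a[0] = c[0] = a[-1] = c[-1] = 0.0   # boundary rows reduce to u_new = 0
    b[0] = b[-1] = 1.0
    d = u[:]
    d[0] = d[-1] = 0.0
    return thomas(a, b, c, d)

# A unit spike diffusing on an 11-point grid
u = [0.0] * 11
u[5] = 1.0
u_next = implicit_diffusion_step(u, D=1.0, dx=0.1, dt=0.001)
```

The implicit step is unconditionally stable, which is why codes of this kind can take large time steps through low-diffusivity barriers.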

  9. MACK-IV, a new version of MACK: a program to calculate nuclear response functions from data in ENDF/B format

    International Nuclear Information System (INIS)

    Abdou, M.A.; Gohar, Y.; Wright, R.Q.

    1978-07-01

    MACK-IV calculates nuclear response functions important to the neutronics analysis of nuclear and fusion systems. A central part of the code deals with the calculation of the nuclear response function for nuclear heating, more commonly known as the kerma factor. Pointwise and multigroup neutron kerma factors, individual reaction cross sections, and helium, hydrogen, and tritium production response functions are calculated from any basic nuclear data library in ENDF/B format. The program processes all reactions in the energy range of 0 to 20 MeV for fissionable and nonfissionable materials. The program also calculates the gamma production cross sections and the gamma production energy matrix. A built-in computational capability permits the code to calculate cross sections in the resolved and unresolved resonance regions from resonance parameters in ENDF/B, with an option for Doppler broadening. All energy-pointwise and multigroup data calculated by the code can be punched, printed and/or written to tape files. Multigroup response functions (e.g., kerma factors, reaction cross sections, gas production, atomic displacements, etc.) can be output in the MACK-ACTIVITY-Table format suitable for direct use with current neutron (and photon) transport codes

  10. Program GROUPIE (version 79-1): calculation of Bondarenko self-shielded neutron cross sections and multiband parameters from data in the ENDF/B format

    International Nuclear Information System (INIS)

    Cullen, D.E.

    1980-01-01

    Program GROUPIE reads evaluated data in the ENDF/B format and uses these data to calculate Bondarenko self-shielded cross sections and multiband parameters. To give as much generality as possible, the program allows the user to specify arbitrary energy groups and an arbitrary energy-dependent neutron spectrum (weighting function). To guarantee the accuracy of the results, all integrals are performed analytically; in no case is iteration or any approximate form of integration used. The output from this program includes both listings and multiband parameters suitable for use either in a normal multigroup transport calculation or in a multiband transport calculation. A listing of the source deck is available on request
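The Bondarenko self-shielded group average has the form sigma_eff(sigma0) = <sigma_x * w / (sigma_t + sigma0)> / <w / (sigma_t + sigma0)>, where sigma0 is the background (dilution) cross section. The sketch below evaluates it with trapezoid quadrature on a toy pointwise grid; GROUPIE itself integrates analytically, as the abstract notes, and the grid and cross sections here are illustrative.

```python
def sigma_eff(E, sig_x, sig_t, w, sigma0):
    """Bondarenko self-shielded group average of sig_x:
       <sig_x * w / (sig_t + sigma0)> / <w / (sig_t + sigma0)>,
    evaluated with trapezoid quadrature over one energy group."""
    num = den = 0.0
    for i in range(len(E) - 1):
        dE = E[i + 1] - E[i]
        f0 = w[i] / (sig_t[i] + sigma0)
        f1 = w[i + 1] / (sig_t[i + 1] + sigma0)
        num += 0.5 * dE * (sig_x[i] * f0 + sig_x[i + 1] * f1)
        den += 0.5 * dE * (f0 + f1)
    return num / den

# Toy resonance: the total cross section peaks at 1000 b mid-group
E = [0.0, 1.0, 2.0]
sig_t = [1.0, 1000.0, 1.0]
sig_x = sig_t[:]          # treat the reaction as the total, purely for illustration
w = [1.0, 1.0, 1.0]       # flat weighting spectrum
dilute = sigma_eff(E, sig_x, sig_t, w, 1e10)   # infinite dilution
shielded = sigma_eff(E, sig_x, sig_t, w, 1.0)  # strong self-shielding
```

At infinite dilution the average reduces to the plain spectrum-weighted cross section; at low sigma0 the flux depression at the resonance suppresses the average, which is the self-shielding effect tabulated against sigma0.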

  11. Operational air sampling report, July--December 1991

    International Nuclear Information System (INIS)

    Lyons, C.L.

    1992-11-01

    Air sampling is one of the more useful ways of assessing the effectiveness of operational radiation safety programs at the Nevada Test Site (NTS). Air sampling programs document NTS airborne radionuclide concentrations in various work locations and environments. These concentrations generally remain well below the Derived Air Concentration (DAC) values prescribed by the Department of Energy (DOE 5480.11, Attachment 1) or the Derived Concentration Guide (DCG) values prescribed by the Department of Energy (DOE 5400.5, Chapter III). The Defense Nuclear Agency (DNA) tunnel complexes, Area 12 Test Support Compound and the Area 6 Decontamination Pad and Laundry air sampling programs are summarized in this report. Evaluations are based on Analytical Services Department (ASD) Counting Laboratory analyses and Health Protection Department (HPD)/Radiological Field Operations Section (RFOS) radiation protection technicians' (RPT) or health physicists' calculations for air samples collected July 1 through December 31, 1991. Of the NTS operational air sampling programs in the tunnel complexes, the initial mining and the event reentry and recovery operations represent the only real airborne radioactive inhalation potentials to personnel. Monthly filter and scintillation cell samples were taken and counted in RDA-200 Radon Detectors to document working levels of radon/thoron daughters and picocurie-per-liter (pCi/L) concentrations of radon gas. Weekly Drierite samples for tritium analysis were taken in the active tunnel complexes to document any changes in normal background levels or in reentry drifts as they are advanced toward ground zero (GZ) areas. Underground water sources are considered primary transporters of tritium from old event areas

  12. Cliff´s Delta Calculator: A non-parametric effect size program for two groups of observations

    Directory of Open Access Journals (Sweden)

    Guillermo Macbeth

    2011-05-01

    Full Text Available The Cliff's delta statistic is an effect size measure that quantifies the amount of difference between two non-parametric variables beyond p-value interpretation. This measure can be understood as a useful complementary analysis to the corresponding hypothesis testing. During the last two decades the use of effect size measures has been strongly encouraged by methodologists and leading institutions of the behavioral sciences. The aim of this contribution is to introduce the Cliff's Delta Calculator software, which performs such analysis and offers some interpretation tips. Differences and similarities with the parametric case are analysed and illustrated. The implementation of this free program is fully described and compared with other calculators. Alternative algorithmic approaches are mathematically analysed and a basic linear algebra proof of their equivalence is formally presented. Two worked examples from cognitive psychology are commented on. A visual interpretation of Cliff's delta is suggested. Availability, installation and applications of the program are presented and discussed.
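Cliff's delta is simple to compute directly from its definition, delta = (#{x > y} - #{x < y}) / (m * n) over all cross pairs of the two groups. The sketch below is a minimal direct implementation, not the published calculator.

```python
def cliffs_delta(xs, ys):
    """Cliff's delta: (#{(x, y): x > y} - #{(x, y): x < y}) / (m * n)
    over all cross pairs of the two groups; ranges from -1 to +1."""
    gt = sum(1 for x in xs for y in ys if x > y)
    lt = sum(1 for x in xs for y in ys if x < y)
    return (gt - lt) / (len(xs) * len(ys))
```

Identical groups give delta = 0; complete separation gives +1 or -1, which is the "beyond p-values" information the statistic conveys.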

  13. Model for incorporating fuel swelling and clad shrinkage effects in diffusion theory calculations (LWBR Development Program)

    International Nuclear Information System (INIS)

    Schick, W.C. Jr.; Milani, S.; Duncombe, E.

    1980-03-01

    A model has been devised for incorporating into the thermal feedback procedure of the PDQ few-group diffusion theory computer program the explicit calculation of depletion- and temperature-dependent fuel-rod shrinkage and swelling at each mesh point. The model determines the effect on reactivity of the change in hydrogen concentration caused by the variation in coolant channel area as the rods contract and expand. The calculation of fuel temperature, and hence of Doppler-broadened cross sections, is improved by correcting the heat transfer coefficient of the fuel-clad gap for the effects of clad creep, fuel densification and swelling, and release of fission-product gases into the gap. An approximate calculation of clad stress is also included in the model

  14. The Euler’s Graphical User Interface Spreadsheet Calculator for Solving Ordinary Differential Equations by Visual Basic for Application Programming

    Science.gov (United States)

    Gaik Tay, Kim; Cheong, Tau Han; Foong Lee, Ming; Kek, Sie Long; Abdul-Kahar, Rosmila

    2017-08-01

    In the previous work on the Euler's spreadsheet calculator for solving an ordinary differential equation, Visual Basic for Applications (VBA) programming was used; however, no graphical user interface was developed to capture user input. This weakness may confuse users, since input and output are displayed in the same worksheet. Besides, the existing Euler's spreadsheet calculator is not interactive, as no prompt message appears if a parameter is entered incorrectly. On top of that, there are no user instructions to guide users in entering the derivative function. Hence, in this paper, we address these limitations by developing a user-friendly, interactive graphical user interface. This improvement aims to capture users' input with instructions and interactive error messages implemented in VBA. This Euler's graphical user interface spreadsheet calculator does not act as a black box, as users can click on any cell in the worksheet to see the formula used to implement the numerical scheme. In this way, it can enhance self-learning and life-long learning in implementing the numerical scheme in a spreadsheet and later in any programming language.
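
The numerical scheme behind such a calculator is the forward Euler method, y_{n+1} = y_n + h * f(x_n, y_n). A short sketch of the same scheme outside the spreadsheet (an illustrative example, not the authors' VBA code):

```python
def euler(f, x0, y0, h, n):
    """Advance y' = f(x, y) from (x0, y0) using n forward Euler steps of size h."""
    xs, ys = [x0], [y0]
    for _ in range(n):
        y0 = y0 + h * f(x0, y0)   # Euler update
        x0 = x0 + h
        xs.append(x0)
        ys.append(y0)
    return xs, ys

# y' = y, y(0) = 1: Euler approximates e^x from below.
xs, ys = euler(lambda x, y: y, 0.0, 1.0, 0.1, 10)
print(ys[-1])  # ~2.5937, versus the exact e ~ 2.7183
```

Halving the step size h roughly halves the error, which is the first-order behavior a spreadsheet version would also exhibit.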

  15. BENCHMARKING ORTEC ISOTOPIC MEASUREMENTS AND CALCULATIONS

    Energy Technology Data Exchange (ETDEWEB)

    Dewberry, R; Raymond Sigg, R; Vito Casella, V; Nitin Bhatt, N

    2008-09-29

    This report represents a description of compiled benchmark tests conducted to probe and to demonstrate the extensive utility of the Ortec ISOTOPIC {gamma}-ray analysis computer program. The ISOTOPIC program performs analyses of {gamma}-ray spectra applied to specific acquisition configurations in order to apply finite-geometry correction factors and sample-matrix-container photon absorption correction factors. The analysis program provides an extensive set of preset acquisition configurations to which the user can add relevant parameters in order to build the geometry and absorption correction factors that the program determines from calculus and from nuclear {gamma}-ray absorption and scatter data. The Analytical Development Section field nuclear measurement group of the Savannah River National Laboratory uses the Ortec ISOTOPIC analysis program extensively for analyses of solid waste and process holdup applied to passive {gamma}-ray acquisitions. Frequently the results of these {gamma}-ray acquisitions and analyses are used to determine compliance with facility criticality safety guidelines. Another use of the results is to designate 55-gallon drum solid waste as qualified TRU waste or as low-level waste. Other examples of the application of the ISOTOPIC analysis technique to passive {gamma}-ray acquisitions include analyses of standard waste box items and unique solid waste configurations. In many passive {gamma}-ray acquisition circumstances the container and sample have sufficient density that the calculated energy-dependent transmission correction factors have intrinsic uncertainties in the range 15%-100%. This is frequently the case when assaying 55-gallon drums of solid waste with masses of up to 400 kg and when assaying solid waste in extensive unique containers. Often an accurate assay of the transuranic content of these containers is not required, but rather a good defensible designation as >100 nCi/g (TRU waste) or <100 nCi/g (low-level solid waste) is required.
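
The sample-matrix absorption corrections discussed above stem from Beer-Lambert attenuation. The sketch below shows how a transmission correction factor arises for a single photon path; the attenuation coefficient, density, and path length are illustrative values, not ISOTOPIC's internal model:

```python
from math import exp

def transmission(mu_over_rho, rho, thickness_cm):
    """Fraction of photons transmitted through a slab (Beer-Lambert law):
    T = exp(-(mu/rho) * rho * x)."""
    return exp(-mu_over_rho * rho * thickness_cm)

# Illustrative values: mass attenuation coefficient 0.08 cm^2/g,
# matrix density 1.5 g/cm^3, 10 cm path through a drum.
t = transmission(0.08, 1.5, 10.0)
print(t)            # ~0.30: roughly 70% of photons are absorbed or scattered
correction = 1 / t  # the corresponding transmission correction factor
```

Because the coefficient is energy dependent, a dense matrix drives this factor (and its uncertainty) up sharply at low photon energies, which is why the report cites 15%-100% intrinsic uncertainties for dense drums.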

  16. Comparison of RESRAD with hand calculations

    International Nuclear Information System (INIS)

    Rittmann, P.D.

    1995-09-01

    This report is a continuation of an earlier comparison done with two other computer programs, GENII and PATHRAE. The dose calculations by the two programs were compared with each other and with hand calculations. These hand calculations have now been compared with RESRAD Version 5.41 to examine the use of standard models and parameters in this computer program. The hand calculations disclosed a significant computational error in RESRAD. The Pu-241 ingestion doses are five orders of magnitude too small. In addition, the external doses from some nuclides differ greatly from expected values. Both of these deficiencies have been corrected in later versions of RESRAD

  17. Racing Sampling Based Microimmune Optimization Approach Solving Constrained Expected Value Programming

    Directory of Open Access Journals (Sweden)

    Kai Yang

    2016-01-01

    Full Text Available This work investigates a bioinspired microimmune optimization algorithm to solve a general kind of single-objective nonlinear constrained expected value programming without any prior distribution. In the study of the algorithm, two lower-bound sample estimates of random variables are theoretically developed to estimate the empirical values of individuals. Two adaptive racing sampling schemes are designed to identify competitive individuals in a given population, by which high-quality individuals can obtain a large sampling size. An immune evolutionary mechanism, along with a local search approach, is constructed to evolve the current population. Comparative experiments have shown that the proposed algorithm can effectively solve higher-dimensional benchmark problems and has potential for further applications.

  18. Development of heat transfer calculation program for finned-tube heat exchanger of multi-burner boiler

    International Nuclear Information System (INIS)

    Jang, Sae Byul; Kim, Jong Jin; Ahn, Joon

    2009-01-01

    We developed heat exchanger modules for a multi-burner boiler. Each heat exchanger module is a kind of Heat Recovery Steam Generator (HRSG). This heat recovery system has 8 heat exchanger modules. The 1st module consists of 27 bare tubes, due to the high-temperature exhaust gas, and the others consist of 27 finned tubes. The maximum steam pressure of each module is 1 MPa and the tested steam pressure is 0.7 MPa. In order to test these heat exchanger modules, we built a 0.5 t/h flue tube boiler (LNG, 40 Nm³/h). We tested the heat exchanger modules while changing the position of each module. We measured the inlet and outlet temperature of each module and calculated the heat exchange rate. Based on the test results, we developed a heat transfer calculation program to predict the flue gas exit temperature. Calculation results show that the difference between measured and calculated flue gas exit temperatures is less than 20 °C when the flue gas inlet temperature is 620 °C.
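
The per-module heat exchange rate mentioned above follows from an energy balance on the gas stream, Q = m_dot * cp * (T_in - T_out). A sketch with made-up numbers (the mass flow and specific heat below are illustrative, not values from the paper):

```python
def heat_exchange_rate(m_dot, cp, t_in, t_out):
    """Heat given up by the flue gas across one module: Q = m_dot * cp * (T_in - T_out).
    m_dot in kg/s, cp in J/(kg K), temperatures in degrees C (only the difference matters)."""
    return m_dot * cp * (t_in - t_out)

# Illustrative: 0.05 kg/s of flue gas with cp ~ 1100 J/(kg K),
# cooled from 620 C to 500 C across the first module.
q = heat_exchange_rate(0.05, 1.1e3, 620.0, 500.0)
print(q)  # ~6600 W
```

Summing such balances module by module, and comparing against the measured outlet temperatures, is the kind of check the calculation program performs.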

  19. CUMBIN - CUMULATIVE BINOMIAL PROGRAMS

    Science.gov (United States)

    Bowerman, P. N.

    1994-01-01

    The cumulative binomial program, CUMBIN, is one of a set of three programs which calculate cumulative binomial probability distributions for arbitrary inputs. The three programs, CUMBIN, NEWTONP (NPO-17556), and CROSSER (NPO-17557), can be used independently of one another. CUMBIN can be used by statisticians and users of statistical procedures, test planners, designers, and numerical analysts. The program has been used for reliability/availability calculations. CUMBIN calculates the probability that a system of n components has at least k operating if the probability that any one is operating is p and the components are independent. Equivalently, this is the reliability of a k-out-of-n system having independent components with common reliability p. CUMBIN can evaluate the incomplete beta distribution for two positive integer arguments. CUMBIN can also evaluate the cumulative F distribution and the negative binomial distribution, and can determine the sample size in a test design. CUMBIN is designed to work well with all integer values 0 < k <= n. To run the program, the user simply runs the executable version and inputs the information requested by the program. The program is not designed to weed out incorrect inputs, so the user must take care to make sure the inputs are correct. Once all input has been entered, the program calculates and lists the result. The CUMBIN program is written in C. It was developed on an IBM AT with a numeric co-processor using Microsoft C 5.0. Because the source code is written using standard C structures and functions, it should compile correctly with most C compilers. The program format is interactive. It has been implemented under DOS 3.2 and has a memory requirement of 26K. CUMBIN was developed in 1988.
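
The core quantity CUMBIN computes, the reliability of a k-out-of-n system, is a cumulative binomial tail sum. A minimal sketch of the same calculation (not the original C program):

```python
from math import comb

def k_out_of_n_reliability(k, n, p):
    """P(at least k of n independent components operate), each with reliability p:
    the upper tail of a Binomial(n, p) distribution."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# A 2-out-of-3 system with component reliability 0.9:
print(k_out_of_n_reliability(2, 3, 0.9))  # ~0.972
```

For large n a direct sum like this loses accuracy; CUMBIN's value lies in evaluating the equivalent incomplete beta form robustly for arbitrary inputs.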

  20. Mutanalyst, an online tool for assessing the mutational spectrum of epPCR libraries with poor sampling

    DEFF Research Database (Denmark)

    Ferla, Matteo

    2016-01-01

    …the mutational spectrum (the rate of mutations of a specific nucleobase to another) is calculated, enabling the user to make more informed predictions on library diversity and coverage. However, the calculations of the mutational spectrum are severely affected by the limited sample sizes. Results: Here an online program, called Mutanalyst… of mutations per sequence; it does so by fitting to a Poisson distribution, which is more robust than calculating the average in light of the small sampling size. Conclusion: As a result of the added measures to take the small sample size into account, the user can better assess whether the library is satisfactory…
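
One robust way to estimate a Poisson mean from few sequenced clones is to use the fraction of sequences carrying zero mutations rather than the raw average; the record does not state whether Mutanalyst uses exactly this estimator, so the sketch below is illustrative only:

```python
from math import log

def poisson_lambda_from_zero_fraction(counts):
    """Estimate the Poisson mean mutation load from the fraction of sequences
    with zero mutations: P(0) = exp(-lam)  =>  lam = -ln(P0)."""
    n_zero = sum(1 for c in counts if c == 0)
    p0 = n_zero / len(counts)
    return -log(p0)

counts = [0, 1, 0, 2, 0, 0, 1, 0]   # mutations found in 8 sequenced clones (made up)
print(poisson_lambda_from_zero_fraction(counts))  # ~0.47
print(sum(counts) / len(counts))                  # plain average: 0.5
```

With only a handful of clones, a single heavily mutated outlier distorts the plain average, while the zero-fraction fit only asks how many clones escaped mutation entirely.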

  1. LALAGE - a computer program to calculate the TM01 modes of cylindrically symmetrical multicell resonant structures

    International Nuclear Information System (INIS)

    Fernandes, P.

    1982-01-01

    An improvement has been made to the LALA program to compute resonant frequencies and fields for all the modes of the lowest TM01 band-pass of multicell structures. The results are compared with those calculated by another popular rf cavity code and with experimentally measured quantities. (author)

  2. A new calculation method adapted to the experimental conditions for determining samples γ-activities induced by 14 MeV neutrons

    International Nuclear Information System (INIS)

    Rzama, A.; Erramli, H.; Misdaq, M.A.

    1994-01-01

    Induced gamma-activities of different disk-shaped irradiated samples and standards with 14 MeV neutrons have been determined by using a Monte Carlo calculation method adapted to the experimental conditions. The self-absorption of the multienergetic emitted gamma rays has been taken into account in the final sample activities. The influence of the different activation parameters has been studied. Na, K, Cl and P contents in biological (red beet) samples have been determined. ((orig.))

  3. Binomial Distribution Sample Confidence Intervals Estimation 1. Sampling and Medical Key Parameters Calculation

    Directory of Open Access Journals (Sweden)

    Tudor DRUGAN

    2003-08-01

    Full Text Available The aim of the paper is to present the usefulness of the binomial distribution in studying contingency tables and the problems of approximating the binomial distribution to normality (the limits, advantages, and disadvantages). Classifying the medical key parameters reported in the medical literature, and expressing them in contingency table units based on their mathematical expressions, reduces the discussion of confidence intervals from 34 parameters to 9 mathematical expressions. The problem of obtaining different information starting from the computed confidence interval for a specified method (information such as confidence interval boundaries, percentages of the experimental errors, the standard deviation of the experimental errors, and the deviation relative to the significance level) was solved through the implementation of original algorithms in the PHP programming language. The cases of expressions containing two binomial variables were treated separately. An original method of computing the confidence interval for two-variable expressions was proposed and implemented. The graphical representation of expressions of two binomial variables, for which the variation domain of one variable depends on the other, was a real problem, because most software uses interpolation in graphical representation and the surface maps were quadratic instead of triangular. Based on an original algorithm, a module was implemented in PHP to represent the triangular surface plots graphically. All the implementations described above were used in computing the confidence intervals and estimating their performance for binomial distribution sample sizes and variables.
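
As a worked instance of a binomial confidence interval of the kind discussed above, the Wilson score interval is a standard choice; the paper's own algorithms are implemented in PHP and may differ, so this Python sketch is illustrative:

```python
from math import sqrt

def wilson_interval(successes, n, z=1.96):
    """Wilson score confidence interval for a binomial proportion
    (z = 1.96 gives an approximate 95% interval)."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

lo, hi = wilson_interval(8, 10)
print(round(lo, 3), round(hi, 3))  # 0.49 0.943
```

Unlike the naive Wald interval, the Wilson interval stays inside [0, 1] and behaves sensibly for the small sample sizes and extreme proportions common in contingency tables.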

  4. Design a computational program to calculate the composition variations of nuclear materials in the reactor operations

    International Nuclear Information System (INIS)

    Mohmmadnia, Meysam; Pazirandeh, Ali; Sedighi, Mostafa; Bahabadi, Mohammad Hassan Jalili; Tayefi, Shima

    2013-01-01

    Highlights: ► The atomic densities of light and heavy materials are calculated. ► The solution is obtained using the Runge–Kutta–Fehlberg method. ► The material depletion is calculated for constant flux and constant power conditions. - Abstract: The present work investigates an appropriate way to calculate the variations of nuclide composition in the reactor core during operation. Specific software has been designed for this purpose using C#. The mathematical approach is based on the solution of the Bateman differential equations using a Runge–Kutta–Fehlberg method. Material depletion at constant flux and at constant power can be calculated with this software. The inputs include reactor power, time step, initial and final times, order of the Taylor series used to calculate the time-dependent flux, time unit, core material composition at the initial condition (consisting of light and heavy radioactive materials), acceptable error criterion, decay constant library, cross section database and calculation type (constant flux or constant power). The atomic densities of light and heavy fission products during reactor operation are obtained with high accuracy as the program outputs. The results from this method, compared with the analytical solution, show good agreement
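
The Bateman equations for a short decay chain can be integrated numerically as described above. The sketch below uses classical RK4 rather than Runge–Kutta–Fehlberg (no adaptive step-size control) and made-up decay constants, and checks the result against the analytic solution:

```python
import math

# Two-member decay chain A -> B (B's daughter not tracked):
#   dN_A/dt = -lam_A N_A,   dN_B/dt = lam_A N_A - lam_B N_B
LAM_A, LAM_B = 0.1, 0.05          # illustrative decay constants (1/s)

def rhs(n):
    na, nb = n
    return (-LAM_A * na, LAM_A * na - LAM_B * nb)

def rk4_step(n, h):
    """One classical fourth-order Runge-Kutta step."""
    k1 = rhs(n)
    k2 = rhs(tuple(x + 0.5 * h * k for x, k in zip(n, k1)))
    k3 = rhs(tuple(x + 0.5 * h * k for x, k in zip(n, k2)))
    k4 = rhs(tuple(x + h * k for x, k in zip(n, k3)))
    return tuple(x + h / 6.0 * (a + 2 * b + 2 * c + d)
                 for x, a, b, c, d in zip(n, k1, k2, k3, k4))

n = (1.0e6, 0.0)                  # initial atom numbers of A and B
for _ in range(1000):             # integrate to t = 10 s with h = 0.01 s
    n = rk4_step(n, 0.01)
print(n[0], 1.0e6 * math.exp(-LAM_A * 10))   # numerical vs analytic N_A, both ~3.679e5
```

A Fehlberg scheme adds an embedded lower-order estimate per step so the step size can be adapted to the stated error criterion, which matters when decay constants in the chain differ by many orders of magnitude.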

  5. A calculation program for harvesting and transportation costs of energy wood; Energiapuun korjuun ja kuljetuksen kustannuslaskentaohjelmisto

    Energy Technology Data Exchange (ETDEWEB)

    Kuitto, P J

    1997-12-31

    VTT Energy is compiling a large and versatile calculation program for harvesting and transportation costs of energy wood. The work has been designed and will be carried out in cooperation with Metsaeteho and Finntech Ltd. The program has been realised in Windows surroundings using SQLWindows graphical database application development system, using the SQLBase relational database management system. The objective of the research is to intensify and create new possibilities for comparison of the utilization costs and the profitability of integrated energy wood production chains with each other inside the chains

  6. A calculation program for harvesting and transportation costs of energy wood; Energiapuun korjuun ja kuljetuksen kustannuslaskentaohjelmisto

    Energy Technology Data Exchange (ETDEWEB)

    Kuitto, P.J.

    1996-12-31

    VTT Energy is compiling a large and versatile calculation program for harvesting and transportation costs of energy wood. The work has been designed and will be carried out in cooperation with Metsaeteho and Finntech Ltd. The program has been realised in Windows surroundings using SQLWindows graphical database application development system, using the SQLBase relational database management system. The objective of the research is to intensify and create new possibilities for comparison of the utilization costs and the profitability of integrated energy wood production chains with each other inside the chains

  7. Calculating Student Grades.

    Science.gov (United States)

    Allswang, John M.

    1986-01-01

    This article provides two short microcomputer gradebook programs. The programs, written in BASIC for the IBM-PC and Apple II, provide statistical information about class performance and calculate grades either on a normal distribution or based on teacher-defined break points. (JDH)
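
The break-point grading scheme mentioned above is easy to sketch in modern terms (the original programs are in BASIC; this Python equivalent with hypothetical cutoffs is illustrative):

```python
def letter_grade(score, breaks=((90, "A"), (80, "B"), (70, "C"), (60, "D"))):
    """Assign a letter grade from teacher-defined break points,
    checked from the highest cutoff down."""
    for cutoff, grade in breaks:
        if score >= cutoff:
            return grade
    return "F"

print([letter_grade(s) for s in (95, 84, 61, 40)])  # ['A', 'B', 'D', 'F']
```

The normal-distribution alternative would instead convert each score to a z-score against the class mean and standard deviation before applying cutoffs.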

  8. A novel quantitative approach for eliminating sample-to-sample variation using a hue saturation value analysis program.

    Science.gov (United States)

    Yabusaki, Katsumi; Faits, Tyler; McMullen, Eri; Figueiredo, Jose Luiz; Aikawa, Masanori; Aikawa, Elena

    2014-01-01

    As computing technology and image analysis techniques have advanced, the practice of histology has grown from a purely qualitative method to one that is highly quantified. Current image analysis software is imprecise and prone to wide variation due to common artifacts and histological limitations. In order to minimize the impact of these artifacts, a more robust method for quantitative image analysis is required. Here we present novel image analysis software, based on the hue-saturation-value color space, to be applied to a wide variety of histological stains and tissue types. By using hue, saturation, and value variables instead of the more common red, green, and blue variables, our software offers some distinct advantages over other commercially available programs. We tested the program by analyzing several common histological stains, performed on tissue sections that ranged from 4 µm to 10 µm in thickness, using both a red-green-blue color space and a hue-saturation-value color space. We demonstrated that our new software is a simple method for quantitative analysis of histological sections, which is highly robust to variations in section thickness, sectioning artifacts, and stain quality, eliminating sample-to-sample variation.
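
The RGB-to-HSV conversion at the heart of such an approach is available in the Python standard library; a sketch of classifying a single pixel by hue (illustrative only, not the authors' software):

```python
import colorsys

def hsv_pixel(r, g, b):
    """Convert an 8-bit RGB pixel to HSV: hue in degrees, saturation and value in [0, 1]."""
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    return h * 360, s, v

# A strongly stained "red" pixel: hue stays near 0 degrees even as the
# stain intensity (value) varies, which is the robustness HSV buys.
print(hsv_pixel(200, 30, 30))
```

Thresholding on hue alone (e.g., counting pixels whose hue falls in a stain-specific band) is what makes the measurement insensitive to section thickness and stain strength, which mostly shift saturation and value.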

  9. Temporally stratified sampling programs for estimation of fish impingement

    International Nuclear Information System (INIS)

    Kumar, K.D.; Griffith, J.S.

    1977-01-01

    Impingement monitoring programs often expend valuable and limited resources and fail to provide a dependable estimate of either total annual impingement or those biological and physicochemical factors affecting impingement. In situations where initial monitoring has identified ''problem'' fish species and the periodicity of their impingement, intensive sampling during periods of high impingement will maximize information obtained. We use data gathered at two nuclear generating facilities in the southeastern United States to discuss techniques of designing such temporally stratified monitoring programs and their benefits and drawbacks. Of the possible temporal patterns in environmental factors within a calendar year, differences among seasons are most influential in the impingement of freshwater fishes in the Southeast. Data on the threadfin shad (Dorosoma petenense) and the role of seasonal temperature changes are utilized as an example to demonstrate ways of most efficiently and accurately estimating impingement of the species

  10. Electricity decision-making: New techniques for calculating statewide economic impacts from new power supply and demand-side management programs

    Science.gov (United States)

    Tegen, Suzanne Isabel Helmholz

    This dissertation introduces new techniques for calculating and comparing statewide economic impacts from new coal, natural gas and wind power plants, as well as from demand-side management programs. The impetus for this work was two-fold. First, reviews of current literature and projects revealed that there was no standard way to estimate statewide economic impacts from new supply- and demand-side electricity options. Second, decision-makers who were interviewed stated that they were overwhelmed with data in general, but also lacked enough specific information about economic development impacts to their states from electricity, to make informed choices. This dissertation includes chapters on electricity decision-making and on economic impacts from supply and demand. The supply chapter compares different electricity options in three states which vary in natural resource content: Arizona, Colorado and Michigan. To account for differing capacity factors, resources are compared on a per-megawatt-hour basis. The calculations of economic impacts from new supply include: materials and labor for construction, operations, maintenance, fuel extraction, fuel transport, as well as property tax, financing and landowner revenues. The demand-side chapter compares residential, commercial and industrial programs in Iowa. Impact calculations include: incremental labor and materials for program planning, installation and operations, as well as sales taxes and electricity saved. Results from supply-side calculations in the three states analyzed indicate that adding new wind power can have a greater impact to a state's economy than adding new gas or coal power due to resource location, taxes and infrastructure. Additionally, demand-side management programs have a higher relative percentage of in-state dollar flow than supply-side solutions, though demand-side programs typically involve fewer MWh and dollars than supply-side generation. 
Methods for this dissertation include researching

  11. Program POD; A computer code to calculate nuclear elastic scattering cross sections with the optical model and neutron inelastic scattering cross sections by the distorted-wave born approximation

    International Nuclear Information System (INIS)

    Ichihara, Akira; Kunieda, Satoshi; Chiba, Satoshi; Iwamoto, Osamu; Shibata, Keiichi; Nakagawa, Tsuneo; Fukahori, Tokio; Katakura, Jun-ichi

    2005-07-01

    The computer code POD was developed to calculate angle-differential cross sections and analyzing powers for shape-elastic scattering in collisions of neutrons or light ions with a target nucleus. The cross sections are computed with the optical model. Angle-differential cross sections for neutron inelastic scattering can also be calculated with the distorted-wave Born approximation. The optical model potential parameters are the most essential inputs for those model computations. In this program, the cross sections and analyzing powers are obtained by using existing local or global parameters. The parameters can also be input by users. In this report, the theoretical formulas, the computational methods, and the input parameters are explained. Sample inputs and outputs are also presented. (author)

  12. Communication: importance sampling including path correlation in semiclassical initial value representation calculations for time correlation functions.

    Science.gov (United States)

    Pan, Feng; Tao, Guohua

    2013-03-07

    Full semiclassical (SC) initial value representation (IVR) for time correlation functions involves a double phase space average over a set of two phase points, each of which evolves along a classical path. Conventionally, the two initial phase points are sampled independently for all degrees of freedom (DOF) in the Monte Carlo procedure. Here, we present an efficient importance sampling scheme by including the path correlation between the two initial phase points for the bath DOF, which greatly improves the performance of the SC-IVR calculations for large molecular systems. Satisfactory convergence in the study of quantum coherence in vibrational relaxation has been achieved for a benchmark system-bath model with up to 21 DOF.

  13. Program support of the automated system of planned calculations of the Oil and Gas Extracting Administration

    Energy Technology Data Exchange (ETDEWEB)

    Ashkinuze, V G; Reznikovskiy, P T

    1978-01-01

    An examination is made of the program support of the Automated System of Planned Calculations (ASPC) of the Oil and Gas Extracting Administration (OGEA). Specific requirements for the ASPC of the OGEA are indicated, along with features of its program realization. In developing the program support of the system, a parametric programming approach was used. A formal model of the ASPC OGEA, formulated in set-theoretic language, is described in detail. Tree-structured sets are examined; they represent the production and administrative hierarchy of the planning objects in the oil region. The top of the tree corresponds to the OGEA as a whole. In the simplest realization, the tree has two levels of hierarchy: association and field. A procedure for possible use of the system by planning workers is described in general terms. A plan is presented for the program support of the ASPC OGEA; given its specific nature, a large part of the programs realizing the system are written in ASSEMBLER.

  14. Calculational model used in the analysis of nuclear performance of the Light Water Breeder Reactor (LWBR) (LWBR Development Program)

    Energy Technology Data Exchange (ETDEWEB)

    Freeman, L.B. (ed.)

    1978-08-01

    The calculational model used in the analysis of LWBR nuclear performance is described. The model was used to analyze the as-built core and predict core nuclear performance prior to core operation. The qualification of the nuclear model using experiments and calculational standards is described. Features of the model include: an automated system of processing manufacturing data; an extensively analyzed nuclear data library; an accurate resonance integral calculation; space-energy corrections to infinite medium cross sections; an explicit three-dimensional diffusion-depletion calculation; a transport calculation for high energy neutrons; explicit accounting for fuel and moderator temperature feedback, clad diameter shrinkage, and fuel pellet growth; and an extensive testing program against experiments and a highly developed analytical standard.

  15. Structure-based sampling and self-correcting machine learning for accurate calculations of potential energy surfaces and vibrational levels

    Science.gov (United States)

    Dral, Pavlo O.; Owens, Alec; Yurchenko, Sergei N.; Thiel, Walter

    2017-06-01

    We present an efficient approach for generating highly accurate molecular potential energy surfaces (PESs) using self-correcting, kernel ridge regression (KRR) based machine learning (ML). We introduce structure-based sampling to automatically assign nuclear configurations from a pre-defined grid to the training and prediction sets, respectively. Accurate high-level ab initio energies are required only for the points in the training set, while the energies for the remaining points are provided by the ML model with negligible computational cost. The proposed sampling procedure is shown to be superior to random sampling and also eliminates the need for training several ML models. Self-correcting machine learning has been implemented such that each additional layer corrects errors from the previous layer. The performance of our approach is demonstrated in a case study on a published high-level ab initio PES of methyl chloride with 44 819 points. The ML model is trained on sets of different sizes and then used to predict the energies for tens of thousands of nuclear configurations within seconds. The resulting datasets are utilized in variational calculations of the vibrational energy levels of CH3Cl. By using both structure-based sampling and self-correction, the size of the training set can be kept small (e.g., 10% of the points) without any significant loss of accuracy. In ab initio rovibrational spectroscopy, it is thus possible to reduce the number of computationally costly electronic structure calculations through structure-based sampling and self-correcting KRR-based machine learning by up to 90%.
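
Kernel ridge regression of the kind described trains on a small set of expensive ab initio points and predicts the rest cheaply. A toy sketch on a one-dimensional "PES" (the Gaussian kernel width, regularization strength, and quadratic toy surface are all made up for illustration):

```python
import numpy as np

def krr_fit(X, y, sigma=1.0, lam=1e-8):
    """Solve (K + lam*I) alpha = y for a Gaussian (RBF) kernel."""
    K = np.exp(-np.sum((X[:, None] - X[None, :])**2, axis=-1) / (2 * sigma**2))
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def krr_predict(X_train, alpha, X_new, sigma=1.0):
    """Predict at new configurations as a kernel-weighted sum over training points."""
    K = np.exp(-np.sum((X_new[:, None] - X_train[None, :])**2, axis=-1) / (2 * sigma**2))
    return K @ alpha

X = np.linspace(-1, 1, 20).reshape(-1, 1)   # toy 1-D "nuclear configurations"
y = X.ravel()**2                            # toy "PES": E(x) = x^2
alpha = krr_fit(X, y)
print(krr_predict(X, alpha, np.array([[0.5]])))  # ~[0.25]
```

The structure-based sampling idea in the paper amounts to choosing which rows go into X so that the training set covers configuration space evenly, rather than drawing them at random.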

  16. 7 CFR 760.209 - Livestock payment calculations.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 7 2010-01-01 2010-01-01 false Livestock payment calculations. 760.209 Section 760..., DEPARTMENT OF AGRICULTURE SPECIAL PROGRAMS INDEMNITY PAYMENT PROGRAMS Emergency Assistance for Livestock, Honeybees, and Farm-Raised Fish Program § 760.209 Livestock payment calculations. (a) Payments for an...

  17. TRUMP3-JR: a finite difference computer program for nonlinear heat conduction problems

    International Nuclear Information System (INIS)

    Ikushima, Takeshi

    1984-02-01

    Computer program TRUMP3-JR is a revised version of TRUMP3, a finite difference computer program used for the solution of multi-dimensional nonlinear heat conduction problems. Pre- and post-processing for input data generation and graphical representation of the calculation results of TRUMP3 are available in TRUMP3-JR. The calculation equations, program descriptions and user's instructions are presented. A sample problem is described to demonstrate the use of the program. (author)
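
TRUMP3-JR handles multi-dimensional nonlinear problems; as a much-reduced illustration of the finite difference idea, the sketch below solves the simplest explicit 1-D linear case with illustrative numbers:

```python
# Explicit 1-D finite-difference heat conduction, dT/dt = alpha * d2T/dx2,
# with fixed-temperature ends. A minimal sketch of the kind of calculation
# a heat conduction code performs; values are illustrative only.
def step(T, alpha, dx, dt):
    """One explicit time step; stable only for r = alpha*dt/dx^2 <= 1/2."""
    r = alpha * dt / dx**2
    assert r <= 0.5, "explicit scheme unstable for r > 1/2"
    return [T[0]] + [T[i] + r * (T[i - 1] - 2 * T[i] + T[i + 1])
                     for i in range(1, len(T) - 1)] + [T[-1]]

T = [100.0] + [0.0] * 9 + [100.0]   # hot ends, cold interior (11 nodes, dx = 1 cm)
for _ in range(2000):
    T = step(T, alpha=1e-5, dx=0.01, dt=1.0)
print(T[5])   # mid-point temperature, ~100 once near steady state
```

Nonlinear codes like TRUMP3 differ mainly in letting the conductivity (and hence alpha) depend on temperature and in using implicit time stepping to escape the stability limit noted in the assert.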

  18. Third version of a program for calculating the static interaction potential between an electron and a diatomic molecule

    International Nuclear Information System (INIS)

    Raseev, G.

    1980-01-01

    This program calculates the one-centre expansion of a two-centre wave function of a diatomic molecule and also the multipole expansion of its static interaction with a point charge. It is an extension to some classes of open-shell targets of the previous versions and it provides both the wave function and the potential in a form suitable for use in an electron-molecule scattering program. (orig./HSI)

  19. A computer program incorporating Pitzer's equations for calculation of geochemical reactions in brines

    Science.gov (United States)

    Plummer, Niel; Parkhurst, D.L.; Fleming, G.W.; Dunkle, S.A.

    1988-01-01

    The program named PHRQPITZ is a computer code capable of making geochemical calculations in brines and other electrolyte solutions to high concentrations using the Pitzer virial-coefficient approach for activity-coefficient corrections. Reaction-modeling capabilities include calculation of (1) aqueous speciation and mineral-saturation index, (2) mineral solubility, (3) mixing and titration of aqueous solutions, (4) irreversible reactions and mineral-water mass transfer, and (5) reaction path. The computed results for each aqueous solution include the osmotic coefficient, water activity, mineral saturation indices, mean activity coefficients, total activity coefficients, and scale-dependent values of pH, individual-ion activities, and individual-ion activity coefficients. A data base of Pitzer interaction parameters is provided at 25 C for the system: Na-K-Mg-Ca-H-Cl-SO4-OH-HCO3-CO3-CO2-H2O, and extended to include largely untested literature data for Fe(II), Mn(II), Sr, Ba, Li, and Br with provision for calculations at temperatures other than 25 C. An extensive literature review of published Pitzer interaction parameters for many inorganic salts is given. Also described is an interactive input code for PHRQPITZ called PITZINPT. (USGS)

  20. Computer program for the sensitivity calculation of a CR-39 detector in a diffusion chamber for radon measurements

    Energy Technology Data Exchange (ETDEWEB)

    Nikezic, D., E-mail: nikezic@kg.ac.rs; Stajic, J. M. [Faculty of Science, University of Kragujevac, R. Domanovica 12, Kragujevac 34000 (Serbia); Yu, K. N. [Department of Physics and Materials Science, City University of Hong Kong, 83 Tat Chee Avenue (Hong Kong)

    2014-02-15

    Computer software for calculating the radon sensitivity of a CR-39 detector enclosed in a diffusion chamber is described in this work. The software consists of two programs, both written in the standard Fortran 90 programming language. The physical background and a numerical example are given. The presented software is intended for researchers in the radon measurement community. Previously published computer programs TRACK-TEST.F90 and TRACK-VISION.F90 [D. Nikezic and K. N. Yu, Comput. Phys. Commun. 174, 160 (2006); D. Nikezic and K. N. Yu, Comput. Phys. Commun. 178, 591 (2008)] are used here as subroutines to calculate the track parameters and to determine whether a track is visible or not, based on the incident angle, impact energy, etching conditions, gray level, and visibility criterion. The results obtained by the software, using five different V functions, were compared with experimental data found in the literature. Two of the functions reproduced the experimental data very well, while the other three gave lower sensitivities than the experiment.

  1. A general framework for implementing NLO calculations in shower Monte Carlo programs. The POWHEG BOX

    Energy Technology Data Exchange (ETDEWEB)

    Alioli, Simone [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany); Nason, Paolo [INFN, Milano-Bicocca (Italy); Oleari, Carlo [INFN, Milano-Bicocca (Italy); Milano-Bicocca Univ. (Italy); Re, Emanuele [Durham Univ. (United Kingdom). Inst. for Particle Physics Phenomenology

    2010-02-15

    In this work we illustrate the POWHEG BOX, a general computer code framework for implementing NLO calculations in shower Monte Carlo programs according to the POWHEG method. The aim of this work is to provide an illustration of the needed theoretical ingredients, a view of how the code is organized, and a description of what a user should provide in order to use it. (orig.)

  2. A general framework for implementing NLO calculations in shower Monte Carlo programs. The POWHEG BOX

    International Nuclear Information System (INIS)

    Alioli, Simone; Nason, Paolo; Oleari, Carlo; Re, Emanuele

    2010-02-01

    In this work we illustrate the POWHEG BOX, a general computer code framework for implementing NLO calculations in shower Monte Carlo programs according to the POWHEG method. The aim of this work is to provide an illustration of the needed theoretical ingredients, a view of how the code is organized, and a description of what a user should provide in order to use it. (orig.)

  3. Spreadsheet eases heat balance, payback calculations

    International Nuclear Information System (INIS)

    Conner, K.P.

    1992-01-01

    This paper reports that a generalized Lotus-type spreadsheet program has been developed to perform heat balance and simple payback calculations for various turbine-generator (TG) inlet steam pressures. It can be used for potential plant expansions or new cogeneration installations. The program performs the basic heat balance calculations associated with turbine-generators, feedwater heating, process steam requirements and desuperheating. The printout shows the basic data and formulation used in the calculations. The turbine efficiency data used are applicable to automatic-extraction turbine-generators in the 30-80 MW range. Simple payback calculations are for chemical recovery boilers and power boilers used in the pulp and paper industry. However, the program will also accommodate boilers common to other industries.
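
    The simple payback figure such a spreadsheet computes reduces to dividing installed capital cost by annual savings. A one-function sketch with hypothetical numbers (not from the paper):

```python
def simple_payback_years(capital_cost, annual_savings):
    """Simple (undiscounted) payback period: installed cost / yearly savings."""
    if annual_savings <= 0:
        raise ValueError("annual savings must be positive")
    return capital_cost / annual_savings

# Hypothetical project: $12M installed cost, $3M/yr net steam/power savings
payback = simple_payback_years(12_000_000, 3_000_000)  # 4.0 years
```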

  4. The Brine Sampling and Evaluation Program (BSEP) at WIPP

    International Nuclear Information System (INIS)

    Deal, D.E.; Roggenthen, W.M.

    1989-01-01

    The Permian salt beds of the WIPP facility are virtually dry. The amount of water present in the rocks exposed in the excavations that is free to migrate under pressure gradients was estimated by heating salt samples to 95 degrees C and measuring weight loss. Clear halite contains about 0.22 weight percent water, and the more argillaceous units average about 0.75 percent. Measurements made since 1984 as part of the Brine Sampling and Evaluation Program (BSEP) indicate that small amounts of this brine can migrate into the excavations and accumulate in the underground environment. Monitoring of brine seepage into drillholes since they were drilled shows that seepage decreases with time and that many holes have dried up entirely. Weeping of brine from the walls of the repository excavations also decreases after two or more years. Chemical analyses of the brines show that they are sodium-chloride saturated and magnesium-rich.

  5. A program for performing exact quantum dynamics calculations using cylindrical polar coordinates: A nanotube application

    Science.gov (United States)

    Skouteris, Dimitris; Gervasi, Osvaldo; Laganà, Antonio

    2009-03-01

    A program that uses the time-dependent wavepacket method to study the motion of structureless particles in a force field of quasi-cylindrical symmetry is presented here. The program utilises cylindrical polar coordinates to express the wavepacket, which is subsequently propagated using a Chebyshev expansion of the Schrödinger propagator. Time-dependent exit flux as well as energy-dependent S matrix elements can be obtained for all states of the particle (describing its angular momentum component along the nanotube axis and the excitation of the radial degree of freedom in the cylinder). The program has been used to study the motion of an H atom across a carbon nanotube. Program summary: Program title: CYLWAVE. Catalogue identifier: AECL_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AECL_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 3673. No. of bytes in distributed program, including test data, etc.: 35237. Distribution format: tar.gz. Programming language: Fortran 77. Computer: RISC workstations. Operating system: UNIX. RAM: 120 MBytes. Classification: 16.7, 16.10. External routines: SUNSOFT performance library (not essential), TFFT2D.F (Temperton Fast Fourier Transform), BESSJ.F (from Numerical Recipes, for the calculation of Bessel functions) (included in the distribution file). Nature of problem: Time evolution of the state of a structureless particle in a quasicylindrical potential. Solution method: Time-dependent wavepacket propagation. Running time: 50000 secs. The test run supplied with the distribution takes about 10 minutes to complete.
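
    The Chebyshev expansion of the propagator that CYLWAVE relies on can be illustrated on a toy Hamiltonian: exp(-iHt)ψ is expanded in Chebyshev polynomials T_n of the spectrum-scaled Hamiltonian with Bessel-function coefficients J_n. A minimal NumPy sketch of the idea (not the CYLWAVE code itself, which propagates a wavepacket on a large cylindrical grid):

```python
import numpy as np

def bessel_j(n, x, m=4000):
    # J_n(x) from its integral representation (keeps the sketch scipy-free);
    # the trapezoid rule is spectrally accurate here by symmetry of the integrand
    theta = np.linspace(0.0, np.pi, m)
    f = np.cos(n * theta - x * np.sin(theta))
    h = np.pi / (m - 1)
    return h * (f.sum() - 0.5 * (f[0] + f[-1])) / np.pi

def chebyshev_propagate(H, psi0, t, nterms=40):
    """Approximate exp(-iHt) @ psi0 via the expansion
    exp(-i*a*t*x) = J_0(at) + 2 * sum_n (-i)^n J_n(at) T_n(x),
    after scaling the spectrum of H into [-1, 1]."""
    evals = np.linalg.eigvalsh(H)            # a real code would only bound these
    a = (evals[-1] - evals[0]) / 2.0 + 1e-12  # half-width of the spectrum
    b = (evals[-1] + evals[0]) / 2.0          # centre of the spectrum
    Hn = (H - b * np.eye(len(H))) / a         # scaled Hamiltonian
    phi_prev = psi0.astype(complex)           # T_0 |psi>
    phi = Hn @ phi_prev                       # T_1 |psi>
    psi = bessel_j(0, a * t) * phi_prev + 2 * (-1j) * bessel_j(1, a * t) * phi
    ph = -1j
    for n in range(2, nterms):
        phi_prev, phi = phi, 2 * Hn @ phi - phi_prev   # Chebyshev recurrence
        ph *= -1j                                      # (-i)^n
        psi = psi + 2 * ph * bessel_j(n, a * t) * phi
    return np.exp(-1j * b * t) * psi

# Toy 3x3 Hamiltonian, checked against exact diagonalization
H = np.array([[1.0, 0.3, 0.0], [0.3, 2.0, 0.3], [0.0, 0.3, 3.0]])
psi0 = np.array([1.0, 0.0, 0.0])
t = 2.0
w, V = np.linalg.eigh(H)
exact = V @ (np.exp(-1j * w * t) * (V.T @ psi0))
err = np.linalg.norm(chebyshev_propagate(H, psi0, t) - exact)
```

    Because the Bessel coefficients J_n(at) decay super-exponentially once n exceeds a·t, the expansion converges to machine precision with a modest number of terms.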

  6. NRSC, Neutron Resonance Spectrum Calculation System

    International Nuclear Information System (INIS)

    Leszczynski, Francisco

    2004-01-01

    1 - Description of program or function: The NRSC system is a package of four programs for calculating detailed neutron spectra and related quantities, for homogeneous mixtures of isotopes and cylindrical reactor pin cells, in the energy resonance region, using ENDF/B evaluated nuclear data pre-processed with NJOY or Cullen's codes up to the Doppler Broadening and unresolved resonance level. 2 - Methods: NRSC consists of four programs: GEXSCO, RMET21, ALAMBDA and WLUTIL. GEXSCO prepares the nuclear data from ENDF/B evaluated nuclear data pre-processed with NJOY or Cullen's codes up to the Doppler Broadening or unresolved resonance level for RMET21 input. RMET21 calculates spectra and related quantities for homogeneous mixtures of isotopes and cylindrical reactor pin cells, in the energy resonance region, using slowing-down algorithms and, in the case of pin cells, the collision probability method. ALAMBDA obtains lambda factors (Goldstein-Cohen intermediate resonance factors in the formalism of WIMSD code) of different isotopes for including on WIMSD-type multigroup libraries for WIMSD or other cell-codes, from output of RMET21 program. WLUTIL is an auxiliary program for extracting tabulated parameters related with RMET21 program calculations from WIMSD libraries for comparisons, and for producing new WIMSD libraries with parameters calculated with RMET21 and ALAMBDA programs. 3 - Restrictions on the complexity of the problem: GEXSCO program has fixed array dimensions that are suitable for processing all reasonable outputs from nuclear data pre-processing programs. RMET21 program uses variable dimension method from a fixed general array. ALAMBDA and WLUTIL programs have fixed arrays that are adapted to standard WIMSD libraries. All programs can be easily modified to adapt to special requirements

  7. CASKETSS-HEAT: a finite difference computer program for nonlinear heat conduction problems

    International Nuclear Information System (INIS)

    Ikushima, Takeshi

    1988-12-01

    A heat conduction program CASKETSS-HEAT has been developed. CASKETSS-HEAT is a finite difference computer program used for the solution of multi-dimensional nonlinear heat conduction problems. The main features of CASKETSS-HEAT are as follows. (1) One-, two- and three-dimensional geometries for heat conduction calculation are available. (2) Convection and radiation heat transfer at boundaries can be specified. (3) Phase change and chemical change can be treated. (4) Finned surface heat transfer can be treated easily. (5) Data memory allocation in the program is variable according to problem size. (6) The program is compatible with the stress analysis programs SAP4 and SAP5. (7) Pre- and post-processing for input data generation and graphic representation of calculation results are available. In the paper, a brief illustration of the calculation method, the input data and a sample calculation are presented. (author)
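
    The explicit finite-difference update underlying such a code can be sketched in one dimension. The grid, material properties and boundary values below are hypothetical; a production code like CASKETSS-HEAT additionally handles nonlinearity, multiple dimensions, and convection/radiation boundaries:

```python
import numpy as np

def step_heat_1d(T, alpha, dx, dt):
    """One explicit finite-difference step of 1-D heat conduction,
    dT/dt = alpha * d2T/dx2, with fixed-temperature (Dirichlet) ends."""
    r = alpha * dt / dx ** 2
    assert r <= 0.5, "explicit scheme unstable: need alpha*dt/dx^2 <= 1/2"
    Tn = T.copy()
    Tn[1:-1] = T[1:-1] + r * (T[2:] - 2.0 * T[1:-1] + T[:-2])
    return Tn

# Hypothetical 10 cm slab on 11 nodes: interior at 0 C, both ends held at 100 C
T = np.zeros(11)
T[0] = T[-1] = 100.0
for _ in range(500):
    T = step_heat_1d(T, alpha=1.0e-5, dx=0.01, dt=4.0)   # r = 0.4, stable
# after ~2000 s the profile has relaxed essentially to the uniform steady state
```

    The stability limit r ≤ 1/2 is what motivates implicit schemes in real casks-and-shields codes, where fine meshes would otherwise force tiny time steps.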

  8. Lot quality assurance sampling (LQAS) for monitoring a leprosy elimination program.

    Science.gov (United States)

    Gupte, M D; Narasimhamurthy, B

    1999-06-01

    In a statistical sense, prevalences of leprosy in different geographical areas can be called very low or rare. Conventional survey methods to monitor leprosy control programs therefore need large sample sizes, are expensive, and are time-consuming. Further, with the lowering of prevalence to near the desired target level, 1 case per 10,000 population at national or subnational levels, the program administrator's concern will shift to smaller areas, e.g., districts, for assessment and, if needed, for necessary interventions. In this paper, Lot Quality Assurance Sampling (LQAS), a quality control tool in industry, is proposed to identify districts/regions having a prevalence of leprosy at or above a certain target level, e.g., 1 in 10,000. This technique can also be considered for identifying districts/regions at or below the target level of 1 per 10,000, i.e., areas where the elimination level is attained. For simulating various situations and strategies, a hypothetical computerized population of 10 million persons was created. This population mimics the actual population in terms of the empirical information on rural/urban distributions and the distribution of households by size for the state of Tamil Nadu, India. Various levels of leprosy prevalence were created using this population. The distribution of the number of cases in the population was expected to follow the Poisson process, and this was also confirmed by examination. Sample sizes and corresponding critical values were computed using the Poisson approximation. Initially, villages/towns are selected from the population, and from each selected village/town households are selected using systematic sampling. Households instead of individuals are used as sampling units. This sampling procedure was simulated 1000 times in the computer from the base population. The results in four different prevalence situations meet the required limits of 5% Type I error and 90% power. It is concluded that
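
    The design problem in the abstract (choose a sample size and a critical number of cases so that a district at the low prevalence is rarely flagged, while a district at the high prevalence is flagged with high power, under a Poisson case count) can be sketched as a direct search. The prevalence pair, grid step and search bounds below are illustrative, not the paper's values:

```python
from math import exp

def poisson_cdf(k, lam):
    """P(X <= k) for X ~ Poisson(lam), accumulated stably term by term."""
    p = exp(-lam)
    c = p
    for i in range(1, k + 1):
        p *= lam / i
        c += p
    return c

def lqas_design(p_low, p_high, alpha=0.05, power=0.90,
                n_max=2_000_000, n_step=10_000):
    """Smallest grid sample size n and critical value d such that declaring
    'prevalence >= p_high' when d or more cases are seen keeps the false-alarm
    rate at p_low below alpha and the detection power at p_high above the
    target, treating the case count as Poisson (rare-event approximation)."""
    for n in range(n_step, n_max + 1, n_step):
        for d in range(1, 400):
            false_alarm = 1.0 - poisson_cdf(d - 1, n * p_low)
            detect = 1.0 - poisson_cdf(d - 1, n * p_high)
            if false_alarm <= alpha and detect >= power:
                return n, d
    return None

# Illustrative pair: distinguish 1/10,000 from 2/10,000
design = lqas_design(1e-4, 2e-4)
```

    Because both prevalences are rare, only the expected counts n·p matter, which is why LQAS sample sizes for leprosy run into the hundreds of thousands of persons even though only a handful of cases decide the lot.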

  9. Lunar and Meteorite Sample Education Disk Program - Space Rocks for Classrooms, Museums, Science Centers, and Libraries

    Science.gov (United States)

    Allen, Jaclyn; Luckey, M.; McInturff, B.; Huynh, P.; Tobola, K.; Loftin, L.

    2010-01-01

    NASA is eager for students and the public to experience lunar Apollo samples and meteorites first hand. Lunar rocks and soil, embedded in Lucite disks, are available for educators to use in their classrooms, museums, science centers, and public libraries for education activities and display. The sample education disks are valuable tools for engaging students in the exploration of the Solar System. Scientific research conducted on the Apollo rocks reveals the early history of our Earth-Moon system, and meteorites reveal much of the history of the early solar system. The rocks help educators make the connections to this ancient history of our planet and solar system and the basic processes of accretion, differentiation, impact, and volcanism. With these samples, educators in museums, science centers, libraries, and classrooms can help students and the public understand the key questions pursued by many NASA planetary missions. The Office of the Curator at Johnson Space Center is in the process of reorganizing and renewing the Lunar and Meteorite Sample Education Disk Program to increase reach, security, and accountability. The new program expands the reach of these exciting extraterrestrial rocks through increased access to training and educator borrowing. One of the expanded opportunities is that trained certified educators from science centers, museums, and libraries may now borrow the extraterrestrial rock samples. Previously the loan program was only open to classroom educators, so the expansion will increase public access to the samples and allow educators to make the critical connections to the exciting exploration missions taking place in our solar system. Each Lunar Disk contains three lunar rocks and three regolith soils embedded in Lucite. The anorthosite sample is a part of the magma ocean formed on the surface of the Moon in the early melting period, the basalt is part of the extensive lunar mare lava flows, and the breccia sample is an important example of the

  10. Comparison of CFD-calculations of centrifugal compressor stages by NUMECA Fine Turbo and ANSYS CFX programs

    Science.gov (United States)

    Galerkin, Y. B.; Voinov, I. B.; Drozdov, A. A.

    2017-08-01

    Computational Fluid Dynamics (CFD) methods are widely used for centrifugal compressor design and flow analysis. The calculation results depend on the chosen software, turbulence models and solver settings. Two of the most widely applicable programs are NUMECA Fine Turbo and ANSYS CFX. The objects of the study were two different stages. CFD calculations were made for a single blade channel and for full 360-degree flow paths. Stage 1, with a 3D impeller and vaneless diffuser, was tested experimentally. Its flow coefficient is 0.08 and its loading factor is 0.74. For stage 1, calculations were performed with different grid qualities, different numbers of cells and different turbulence models. The best results were demonstrated by the Spalart-Allmaras model and a mesh with 1.854 million cells. Stage 2, with a return channel, vaneless diffuser and 3D impeller, with flow coefficient 0.15 and loading factor 0.5, was designed by the known Universal Modeling Method. Its performances were calculated by the well-identified Math model. Stage 2 performances by CFD calculations are shifted to higher flow rates in comparison with the design performances. The same result was obtained for stage 1 in comparison with the measured performances. The calculated loading factor is higher in both cases for a single blade channel. The loading factor performance calculated for the full flow path (“360 degrees”) by ANSYS CFX is in satisfactory agreement with the stage 2 design performance. Maximum efficiency is predicted accurately by the ANSYS CFX “360 degrees” calculation. The “sector” calculation is less accurate. Further research is needed to solve the problem of performance mismatch.

  11. Casio Graphical Calculator Project.

    Science.gov (United States)

    Stott, Nick

    2001-01-01

    Shares experiences of a project aimed at developing and refining programs written on a Casio FX9750G graphing calculator. Describes in detail some programs used to develop mental strategies and problem solving skills. (MM)

  12. Calculation of gamma-rays and fast neutrons fluxes with the program Mercure-4

    International Nuclear Information System (INIS)

    Baur, A.; Dupont, C.; Totth, B.

    1978-01-01

    The program MERCURE-4 evaluates gamma-ray or fast-neutron attenuation through laminated or bulky three-dimensional shields. The method used is the line-of-sight point attenuation kernel, the scattered rays being taken into account by means of build-up factors for γ rays and removal cross sections for fast neutrons. The integration of the point kernel over the range of sources distributed in space and energy is performed by the Monte-Carlo method, with an automatic adjustment of the importance functions. Since becoming operational, the program MERCURE-4 has been used intensively for many various problems, for example: - the calculation of gamma heating in reactor cores, control rods and shielding screens, as well as in experimental devices and irradiation loops; - the evaluation of fast neutron fluxes and corresponding damage in structural materials of reactors (vessel steels...); - the estimation of gamma dose rates on nuclear instrumentation in the reactors, around the reactor circuits and around spent fuel shipping casks.
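
    For a single point source the line-of-sight point-kernel attenuation reduces to a one-line formula: uncollided flux falls off as exp(-μr)/(4πr²), multiplied by a build-up factor B(μr) that accounts for scattered radiation. A minimal sketch with hypothetical values (MERCURE-4 itself integrates this kernel over distributed sources by Monte Carlo):

```python
import math

def point_kernel_flux(source, mu, r, buildup=1.0):
    """Photon (or removal-theory fast-neutron) flux at distance r from an
    isotropic point source behind a shield of attenuation coefficient mu.
    buildup is the build-up factor B(mu*r); 1.0 means uncollided flux only."""
    return source * buildup * math.exp(-mu * r) / (4.0 * math.pi * r ** 2)

# Hypothetical case: 1e6 photons/s through 10 cm of shield with mu = 0.5 /cm,
# using an illustrative linear build-up model B = 1 + mu*r (real codes tabulate B)
mu, r = 0.5, 10.0
phi = point_kernel_flux(1.0e6, mu, r, buildup=1.0 + mu * r)
```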

  13. Gamma-spectrometry of extended sources for analysing environmental samples

    International Nuclear Information System (INIS)

    Jarosievitz, B.

    1996-01-01

    Measurements of environmental activity concentrations by gamma spectrometers require the determination of the full-energy-peak efficiency as a function of photon energy over the detector range. This can be done by experiment or by calculation. For simple cases, experiments are straightforward, but if the decay scheme is complex, cascade effects modify the detection efficiency. Also, the actual detection efficiency depends on the detection geometry. All these effects are treated as corrections or modifications of the simple cases, and they are especially relevant when applied to large volumes of environmental samples. In this thesis, calculations are made with the GEANT MC program for realistic experimental situations that have been performed, and these calculations are validated. The calculated and experimental results have been compared; when the agreement proves satisfactory, the calculations can be relied on even in cases where no direct experimental observation is possible. The general problems of gamma spectroscopy and the correction problems are discussed. The two main tools, the experimental setup and the simulation program, are described. A careful checking of the simulation results and the consequences are presented. (R.P.)
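
    The geometric part of such an efficiency calculation is easy to sketch by Monte Carlo: sample emission points uniformly in an extended (here cylindrical) source, sample isotropic directions, and count rays whose straight path crosses the detector face. All dimensions below are hypothetical, and a real code such as GEANT additionally transports each photon through the materials and scores full-energy depositions:

```python
import math
import random

def geometric_efficiency(src_radius, src_height, det_radius, det_dist,
                         n=100_000, seed=1):
    """Monte Carlo estimate of the geometric efficiency: the fraction of
    isotropically emitted photons from a cylindrical volume source whose
    straight-line path crosses a coaxial disk detector face located a
    distance det_dist below the source. Attenuation and cascade effects
    are deliberately ignored in this sketch."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        # uniform emission point inside the source cylinder
        r = src_radius * math.sqrt(rng.random())      # area-weighted radius
        phi = 2.0 * math.pi * rng.random()
        x, y = r * math.cos(phi), r * math.sin(phi)
        z = src_height * rng.random()                 # source occupies z in [0, h]
        # isotropic emission direction
        cos_t = 2.0 * rng.random() - 1.0
        sin_t = math.sqrt(1.0 - cos_t * cos_t)
        psi = 2.0 * math.pi * rng.random()
        ux, uy, uz = sin_t * math.cos(psi), sin_t * math.sin(psi), cos_t
        if uz >= 0.0:
            continue                                   # heading away from detector
        s = -(z + det_dist) / uz                       # distance to plane z = -det_dist
        px, py = x + s * ux, y + s * uy
        if px * px + py * py <= det_radius ** 2:
            hits += 1
    return hits / n
```

    For a large detector placed very close to the source the estimate approaches the 0.5 solid-angle limit, a convenient sanity check on the sampling.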

  14. 76 FR 41186 - Salmonella Verification Sampling Program: Response to Comments on New Agency Policies and...

    Science.gov (United States)

    2011-07-13

    ... Service [Docket No. FSIS-2008-0008] Salmonella Verification Sampling Program: Response to Comments on New Agency Policies and Clarification of Timeline for the Salmonella Initiative Program (SIP) AGENCY: Food... Federal Register notice (73 FR 4767- 4774), which described upcoming policy changes in the FSIS Salmonella...

  15. Methods for collecting algal samples as part of the National Water-Quality Assessment Program

    Science.gov (United States)

    Porter, Stephen D.; Cuffney, Thomas F.; Gurtz, Martin E.; Meador, Michael R.

    1993-01-01

    Benthic algae (periphyton) and phytoplankton communities are characterized in the U.S. Geological Survey's National Water-Quality Assessment Program as part of an integrated physical, chemical, and biological assessment of the Nation's water quality. This multidisciplinary approach provides multiple lines of evidence for evaluating water-quality status and trends, and for refining an understanding of the factors that affect water-quality conditions locally, regionally, and nationally. Water quality can be characterized by evaluating the results of qualitative and quantitative measurements of the algal community. Qualitative periphyton samples are collected to develop a list of taxa present in the sampling reach. Quantitative periphyton samples are collected to measure algal community structure within selected habitats. These samples of benthic algal communities are collected from natural substrates, using the sampling methods that are most appropriate for the habitat conditions. Phytoplankton samples may be collected in large nonwadeable streams and rivers to meet specific program objectives. Estimates of algal biomass (chlorophyll content and ash-free dry mass) are also optional measures that may be useful for interpreting water-quality conditions. A nationally consistent approach provides guidance on site, reach, and habitat selection, as well as information on methods and equipment for qualitative and quantitative sampling. Appropriate quality-assurance and quality-control guidelines are used to maximize the ability to analyze data locally, regionally, and nationally.

  16. Rio Blanco, Colorado, Long-Term Hydrologic Monitoring Program Sampling and Analysis Results for 2009

    International Nuclear Information System (INIS)

    2009-01-01

    The U.S. Department of Energy (DOE) Office of Legacy Management conducted annual sampling at the Rio Blanco, Colorado, Site, for the Long-Term Hydrologic Monitoring Program (LTHMP) on May 13 and 14, 2009. Samples were analyzed by the U.S. Environmental Protection Agency (EPA) Radiation & Indoor Environments National Laboratory in Las Vegas, Nevada. Samples were analyzed for gamma-emitting radionuclides by high-resolution gamma spectroscopy and for tritium using the conventional and enriched methods.

  17. QM/MM hybrid calculation of biological macromolecules using a new interface program connecting QM and MM engines

    Energy Technology Data Exchange (ETDEWEB)

    Hagiwara, Yohsuke; Tateno, Masaru [Graduate School of Pure and Applied Sciences, University of Tsukuba, Tennodai 1-1-1, Tsukuba Science City, Ibaraki 305-8571 (Japan); Ohta, Takehiro [Center for Computational Sciences, University of Tsukuba, Tennodai 1-1-1, Tsukuba Science City, Ibaraki 305-8577 (Japan)], E-mail: tateno@ccs.tsukuba.ac.jp

    2009-02-11

    An interface program connecting a quantum mechanics (QM) calculation engine, GAMESS, and a molecular mechanics (MM) calculation engine, AMBER, has been developed for QM/MM hybrid calculations. A protein-DNA complex is used as a test system to investigate the following two types of QM/MM schemes. In a 'subtractive' scheme, electrostatic interactions between QM/MM regions are truncated in QM calculations; in an 'additive' scheme, long-range electrostatic interactions within a cut-off distance from QM regions are introduced into one-electron integration terms of a QM Hamiltonian. In these calculations, 338 atoms are assigned as QM atoms using Hartree-Fock (HF)/density functional theory (DFT) hybrid all-electron calculations. By comparing the results of the additive and subtractive schemes, it is found that electronic structures are perturbed significantly by the introduction of MM partial charges surrounding QM regions, suggesting that biological processes occurring in functional sites are modulated by the surrounding structures. This also indicates that the effects of long-range electrostatic interactions involved in the QM Hamiltonian are crucial for accurate descriptions of electronic structures of biological macromolecules.
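
    The two coupling schemes compared in the abstract differ in where the QM-MM electrostatics enter. The energy combination of the 'subtractive' scheme can be written in a few lines; the numbers below are placeholders standing in for GAMESS and AMBER single-point energies, not real outputs:

```python
def qmmm_subtractive_energy(e_qm_region, e_mm_full, e_mm_region):
    """Subtractive QM/MM coupling: E = E_MM(full) - E_MM(QM region) + E_QM(QM region).
    The MM description of the QM region is replaced by the QM one, and the QM-MM
    electrostatics stay at the MM level (no MM point charges appear in the QM
    Hamiltonian). In an 'additive' scheme, by contrast, MM charges within a
    cut-off distance enter the one-electron terms of the QM Hamiltonian."""
    return e_mm_full - e_mm_region + e_qm_region

# Placeholder energies (arbitrary units), not actual GAMESS/AMBER values
total = qmmm_subtractive_energy(e_qm_region=-10.0, e_mm_full=-3.0, e_mm_region=-1.0)
```

    The abstract's observation that the additive scheme perturbs the electronic structure significantly follows directly from this difference: only in the additive form can the surrounding MM charges polarize the QM wave function.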

  18. Results from the Interim Salt Disposition Program Macrobatch 11 Tank 21H Acceptance Samples

    Energy Technology Data Exchange (ETDEWEB)

    Peters, T. B. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Bannochie, C. J. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2017-11-13

    Savannah River National Laboratory (SRNL) analyzed samples from Tank 21H in support of verification of Macrobatch (Salt Batch) 11 for the Interim Salt Disposition Program (ISDP) for processing. This document reports characterization data on the samples of Tank 21H and fulfills the requirements of Deliverable 3 of the Technical Task Request (TTR).

  19. Use of the 'DRAGON' program for the calculation of reactivity devices; Utilizacion del programa DRAGON para el calculo de mecanismos de reactividad

    Energy Technology Data Exchange (ETDEWEB)

    Mollerach, Ricardo; Fink, Jose [Nucleoelectrica Argentina SA (NASA), Buenos Aires (Argentina)

    2003-07-01

    DRAGON is a computer program developed at the Ecole Polytechnique of the University of Montreal and adopted by AECL for the transport calculations associated with reactivity devices. This report presents aspects of the implementation of the DRAGON program at NASA. Some cases of interest were evaluated, with comparisons against results of known programs such as WIMS D5 and against experiments. a) Embalse (CANDU 6) cell without burnup and leakage. Calculations of macroscopic cross sections with WIMS and DRAGON show very good agreement, with smaller differences in the thermal constants. b) Embalse fresh cell with different leakage options. c) Embalse cell with leakage and burnup. A comparison of k-infinity and k-effective with WIMS and DRAGON as a function of burnup shows that the differences ((D-W)/D) are roughly constant at -0.17% for fresh fuel up to about 2500 MWd/tU, and then decrease to -0.06% at 8500 MWd/tU. Experiments made in 1977 in the ZED-2 critical facility, reported in [3], were used as a benchmark for the cell and supercell DRAGON calculations. Calculated fluxes were compared with experimental values, and the agreement is very good. d) ZED-2 cell calculation. The measured buckling was used as the geometric buckling, so this case can be considered an experimental verification. The reactivity calculated with DRAGON is about 2 mk, which can be considered satisfactory; the WIMS k-effective value is about one mk higher. e) Supercell calculations for ZED-2 vertical and horizontal tube and rod adjusters using 2D and 3D models were done, with comparisons between measured and calculated fluxes in the vicinity of the adjuster rods. Incremental cross sections for these adjusters were calculated using different options. f) ZED-2 reactor calculations with PUMA show good agreement with the critical heights measured in experiments. The report also describes particular features of the code and recommendations regarding its use that may be useful for new users. (author)

  20. Source term and activation calculations for the new TR-FLEX cyclotron for medical applications at HZDR

    Energy Technology Data Exchange (ETDEWEB)

    Konheiser, Joerg [Helmholtz-Zentrum Dresden-Rossendorf e.V., Dresden (Germany). Reactor Safety; Ferrari, A. [Helmholtz-Zentrum Dresden-Rossendorf e.V., Dresden (Germany). Inst. of Radiation Physics; Magin, A. [Karlsruher Institut fuer Technologie (KIT), Karlsruhe (Germany); Naumann, B. [Helmholtz-Zentrum Dresden-Rossendorf e.V., Dresden (Germany). Dept. of Radiation Protection and Safety; Mueller, S.E.

    2017-06-01

    The neutron source terms for a proton beam hitting an {sup 18}O-enriched water target were calculated with the radiation transport programs MCNP6 and FLUKA and were compared to source terms for exclusive {sup 18}O(p,n){sup 18}F production. To validate the radiation fields obtained in the simulations, an experimental program has been started using activation samples originally used in reactor dosimetry.

  1. A simple approach to power and sample size calculations in logistic regression and Cox regression models.

    Science.gov (United States)

    Vaeth, Michael; Skovlund, Eva

    2004-06-15

    For a given regression problem it is possible to identify a suitably defined equivalent two-sample problem such that the power or sample size obtained for the two-sample problem also applies to the regression problem. For a standard linear regression model the equivalent two-sample problem is easily identified, but for generalized linear models and for Cox regression models the situation is more complicated. An approximately equivalent two-sample problem may, however, also be identified here. In particular, we show that for logistic regression and Cox regression models the equivalent two-sample problem is obtained by selecting two equally sized samples for which the parameters differ by a value equal to the slope times twice the standard deviation of the independent variable and further requiring that the overall expected number of events is unchanged. In a simulation study we examine the validity of this approach to power calculations in logistic regression and Cox regression models. Several different covariate distributions are considered for selected values of the overall response probability and a range of alternatives. For the Cox regression model we consider both constant and non-constant hazard rates. The results show that in general the approach is remarkably accurate even in relatively small samples. Some discrepancies are, however, found in small samples with few events and a highly skewed covariate distribution. Comparison with results based on alternative methods for logistic regression models with a single continuous covariate indicates that the proposed method is at least as good as its competitors. The method is easy to implement and therefore provides a simple way to extend the range of problems that can be covered by the usual formulas for power and sample size determination. Copyright 2004 John Wiley & Sons, Ltd.
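
    The recipe in the abstract can be turned into a short calculation: replace the regression problem by two equal groups whose log-odds differ by the slope times twice the covariate's standard deviation, then apply the standard two-proportion sample size formula. The sketch below is an illustrative reading of that recipe (centering the two groups on the overall log-odds is an approximation to the paper's requirement that the overall expected number of events be unchanged), not the authors' exact formulas:

```python
from math import exp, log
from statistics import NormalDist

def sample_size_logistic(beta, sd_x, p_overall, alpha=0.05, power=0.90):
    """Approximate total sample size for detecting slope beta of a continuous
    covariate (standard deviation sd_x) in logistic regression, via an
    equivalent two-sample problem: two equal groups whose log-odds differ by
    beta * 2 * sd_x, combined with the usual two-proportion formula."""
    z_a = NormalDist().inv_cdf(1.0 - alpha / 2.0)
    z_b = NormalDist().inv_cdf(power)
    delta = beta * 2.0 * sd_x                      # log-odds difference
    lo = log(p_overall / (1.0 - p_overall))        # overall log-odds
    p1 = 1.0 / (1.0 + exp(-(lo - delta / 2.0)))    # group 1 event probability
    p2 = 1.0 / (1.0 + exp(-(lo + delta / 2.0)))    # group 2 event probability
    n_per_group = ((z_a + z_b) ** 2
                   * (p1 * (1 - p1) + p2 * (1 - p2)) / (p1 - p2) ** 2)
    return 2.0 * n_per_group

# Hypothetical study: slope 0.5 per SD of the covariate, 30% overall event rate
n_total = sample_size_logistic(beta=0.5, sd_x=1.0, p_overall=0.3)
```

    As the abstract notes, the appeal of this reduction is that the familiar two-sample formulas then cover a much wider range of regression designs.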

  2. Environmental and emergency response capabilities of Los Alamos Scientific Laboratory's radiological air sampling program

    International Nuclear Information System (INIS)

    Gunderson, T.C.

    1980-05-01

    Environmental and emergency response radiological air sampling capabilities of the Environmental Surveillance Group at Los Alamos Scientific Laboratory are described. The air sampling program provides a supplementary check on the adequacy of containment and effluent controls, determines compliance with applicable protection guides and standards, and assesses potential environmental impacts on site environs. It also allows evaluation of potential individual and total population doses from airborne radionuclides that may be inhaled or serve as a source of external radiation. The environmental program is sufficient in scope to detect fluctuations and long-term trends in atmospheric levels of radioactivity originating onsite. The emergency response capabilities are designed to respond to both onsite unplanned releases and atmospheric nuclear tests

  3. Lunar and Meteorite Sample Education Disk Program — Space Rocks for Classrooms, Museums, Science Centers, and Libraries

    Science.gov (United States)

    Allen, J.; Luckey, M.; McInturff, B.; Huynh, P.; Tobola, K.; Loftin, L.

    2010-03-01

    NASA’s Lunar and Meteorite Sample Education Disk Program has Lucite disks containing Apollo lunar samples and meteorite samples that are available for trained educators to borrow for use in classrooms, museums, science center, and libraries.

  4. Self-Sampling for Human Papillomavirus Testing: Increased Cervical Cancer Screening Participation and Incorporation in International Screening Programs

    Science.gov (United States)

    Gupta, Sarah; Palmer, Christina; Bik, Elisabeth M.; Cardenas, Juan P.; Nuñez, Harold; Kraal, Laurens; Bird, Sara W.; Bowers, Jennie; Smith, Alison; Walton, Nathaniel A.; Goddard, Audrey D.; Almonacid, Daniel E.; Zneimer, Susan; Richman, Jessica; Apte, Zachary S.

    2018-01-01

    In most industrialized countries, screening programs for cervical cancer have shifted from cytology (Pap smear or ThinPrep) alone on clinician-obtained samples to the addition of screening for human papillomavirus (HPV), its main causative agent. For HPV testing, self-sampling instead of clinician-sampling has proven to be equally accurate, in particular for assays that use nucleic acid amplification techniques. In addition, HPV testing of self-collected samples in combination with a follow-up Pap smear in case of a positive result is more effective in detecting precancerous lesions than a Pap smear alone. Self-sampling for HPV testing has already been adopted by some countries, while others have started trials to evaluate its incorporation into national cervical cancer screening programs. Self-sampling may result in more individuals willing to participate in cervical cancer screening, because it removes many of the barriers that prevent women, especially those in low socioeconomic and minority populations, from participating in regular screening programs. Several studies have shown that the majority of women who have been underscreened but who tested HPV-positive in a self-obtained sample will visit a clinic for follow-up diagnosis and management. In addition, a self-collected sample can also be used for vaginal microbiome analysis, which can provide additional information about HPV infection persistence as well as vaginal health in general. PMID:29686981

  5. Automated one-loop calculations with GOSAM

    International Nuclear Information System (INIS)

    Cullen, Gavin; Greiner, Nicolas; Heinrich, Gudrun; Reiter, Thomas; Luisoni, Gionata

    2011-11-01

    We present the program package GoSam which is designed for the automated calculation of one-loop amplitudes for multi-particle processes in renormalisable quantum field theories. The amplitudes, which are generated in terms of Feynman diagrams, can be reduced using either D-dimensional integrand-level decomposition or tensor reduction. GoSam can be used to calculate one-loop QCD and/or electroweak corrections to Standard Model processes and offers the flexibility to link model files for theories Beyond the Standard Model. A standard interface to programs calculating real radiation is also implemented. We demonstrate the flexibility of the program by presenting examples of processes with up to six external legs attached to the loop. (orig.)

  6. Automated one-loop calculations with GOSAM

    Energy Technology Data Exchange (ETDEWEB)

    Cullen, Gavin [Edinburgh Univ. (United Kingdom). School of Physics and Astronomy; Deutsches Elektronen-Synchrotron, Zeuthen [DESY; Germany; Greiner, Nicolas [Illinois Univ., Urbana-Champaign, IL (United States). Dept. of Physics; Max-Planck-Institut fuer Physik, Muenchen (Germany); Heinrich, Gudrun; Reiter, Thomas [Max-Planck-Institut fuer Physik, Muenchen (Germany); Luisoni, Gionata [Durham Univ. (United Kingdom). Inst. for Particle Physics Phenomenology; Mastrolia, Pierpaolo [Max-Planck-Institut fuer Physik, Muenchen (Germany); Padua Univ. (Italy). Dipt. di Fisica; Ossola, Giovanni [New York City Univ., NY (United States). New York City College of Technology; New York City Univ., NY (United States). The Graduate School and University Center; Tramontano, Francesco [European Organization for Nuclear Research (CERN), Geneva (Switzerland)

    2011-11-15

    We present the program package GoSam which is designed for the automated calculation of one-loop amplitudes for multi-particle processes in renormalisable quantum field theories. The amplitudes, which are generated in terms of Feynman diagrams, can be reduced using either D-dimensional integrand-level decomposition or tensor reduction. GoSam can be used to calculate one-loop QCD and/or electroweak corrections to Standard Model processes and offers the flexibility to link model files for theories Beyond the Standard Model. A standard interface to programs calculating real radiation is also implemented. We demonstrate the flexibility of the program by presenting examples of processes with up to six external legs attached to the loop. (orig.)

  7. Kendall-Theil Robust Line (KTRLine--version 1.0)-A Visual Basic Program for Calculating and Graphing Robust Nonparametric Estimates of Linear-Regression Coefficients Between Two Continuous Variables

    Science.gov (United States)

    Granato, Gregory E.

    2006-01-01

    The Kendall-Theil Robust Line software (KTRLine-version 1.0) is a Visual Basic program that may be used with the Microsoft Windows operating system to calculate parameters for robust, nonparametric estimates of linear-regression coefficients between two continuous variables. The KTRLine software was developed by the U.S. Geological Survey, in cooperation with the Federal Highway Administration, for use in stochastic data modeling with local, regional, and national hydrologic data sets to develop planning-level estimates of potential effects of highway runoff on the quality of receiving waters. The Kendall-Theil robust line was selected because this robust nonparametric method is resistant to the effects of outliers and nonnormality in residuals that commonly characterize hydrologic data sets. The slope of the line is calculated as the median of all possible pairwise slopes between points. The intercept is calculated so that the line will run through the median of input data. A single-line model or a multisegment model may be specified. The program was developed to provide regression equations with an error component for stochastic data generation because nonparametric multisegment regression tools are not available with the software that is commonly used to develop regression models. The Kendall-Theil robust line is a median line and, therefore, may underestimate total mass, volume, or loads unless the error component or a bias correction factor is incorporated into the estimate. Regression statistics such as the median error, the median absolute deviation, the prediction error sum of squares, the root mean square error, the confidence interval for the slope, and the bias correction factor for median estimates are calculated by use of nonparametric methods. These statistics, however, may be used to formulate estimates of mass, volume, or total loads. 
The program is used to read a two- or three-column tab-delimited input file with variable names in the first row and
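The slope and intercept rules quoted above have a compact sketch (illustrative only; KTRLine itself adds multisegment models and the nonparametric error statistics described in the abstract):

```python
from itertools import combinations
from statistics import median

def kendall_theil_line(x, y):
    """Kendall-Theil robust line: the slope is the median of all
    pairwise slopes between points, and the intercept is chosen so the
    line passes through (median(x), median(y))."""
    slopes = [(y2 - y1) / (x2 - x1)
              for (x1, y1), (x2, y2) in combinations(zip(x, y), 2)
              if x2 != x1]
    slope = median(slopes)
    intercept = median(y) - slope * median(x)
    return slope, intercept

# A noiseless line is recovered exactly:
slope, intercept = kendall_theil_line([1, 2, 3, 4, 5], [3, 5, 7, 9, 11])
print(slope, intercept)  # 2.0 1.0
```

Because the slope is a median of pairwise slopes, a single outlying point barely moves the fit, which is the resistance property the abstract cites for hydrologic data.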

  8. Sampling plan design and analysis for a low level radioactive waste disposal program

    International Nuclear Information System (INIS)

    Hassig, N.L.; Wanless, J.W.

    1989-01-01

Low-level wastes that are candidates for BRC (below regulatory concern) disposal must be subjected to an extensive monitoring program to ensure the wastes meet (potential) bulk property and contamination concentration BRC criteria for disposal. This paper addresses the statistical implications of using various methods to verify BRC criteria. While surface and volumetric monitoring each have their advantages and disadvantages, a dual, sequential monitoring process is the preferred choice from a statistical reliability perspective. With dual monitoring, measurements on the contamination are verifiable, and sufficient to allow for a complete characterization of the wastes. As these characterizations become more reliable and stable, something less than 100% sampling may be possible for release of wastes for BRC disposal. This paper provides a survey of the issues involved in the selection of a monitoring and sampling program for the disposal of BRC wastes

  9. Robust automated mass spectra interpretation and chemical formula calculation using mixed integer linear programming.

    Science.gov (United States)

    Baran, Richard; Northen, Trent R

    2013-10-15

Untargeted metabolite profiling using liquid chromatography and mass spectrometry coupled via electrospray ionization is a powerful tool for the discovery of novel natural products, metabolic capabilities, and biomarkers. However, the elucidation of the identities of uncharacterized metabolites from spectral features remains challenging. A critical step in the metabolite identification workflow is the assignment of redundant spectral features (adducts, fragments, multimers) and calculation of the underlying chemical formula. Inspection of the data by experts using computational tools solving partial problems (e.g., chemical formula calculation for individual ions) can be performed to disambiguate alternative solutions and provide reliable results. However, manual curation is tedious and not readily scalable or standardized. Here we describe an automated procedure for robust mass spectra interpretation and chemical formula calculation using mixed integer linear programming optimization (RAMSI). Chemical rules among related ions are expressed as linear constraints, and both the spectra interpretation and the chemical formula calculation are performed in a single optimization step. This approach is unbiased in that it does not require predefined sets of neutral losses, and positive and negative polarity spectra can be combined in a single optimization. The procedure was evaluated with 30 experimental mass spectra and was found to effectively identify the protonated or deprotonated molecule ([M + H](+) or [M - H](-)) while being robust to the presence of background ions. RAMSI provides a much-needed standardized tool for interpreting ions for subsequent identification in untargeted metabolomics workflows.

  10. A code for the calculation of self-absorption fractions of photons

    International Nuclear Information System (INIS)

    Jaegers, P.; Landsberger, S.

    1988-01-01

Neutron activation analysis (NAA) is now a well-established technique used by many researchers and commercial companies. It is often wrongly assumed that these NAA methods are matrix independent over a wide variety of samples. Accuracy at the level of a few percent is often difficult to achieve, since components such as timing, pulse pile-up, high dead-time corrections, sample positioning, and chemical separations may severely compromise the results. One area that has received little attention is the calculation of the effect of self-absorption of gamma-rays (including low-energy ones) in samples, particularly those with major components of high-Z values. The analysis of trace components in lead samples is an obvious example, but other high-Z matrices, such as various permutations and combinations of zinc, tin, lead, copper, silver, antimony, etc.; ore concentrates; and meteorites, are also affected. The authors have, however, developed a simple but effective, user-friendly, personal-computer-compatible code that can calculate the amount of energy signal lost due to the presence of any amount of one or more high-Z components. The program is based on Dixon's 1951 paper on the calculation of self-absorption corrections for linear, cylindrical, and spherical sources. To determine the self-absorption fraction of a photon in a source, the FORTRAN computer code SELFABS was written
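As a simple illustration of the kind of correction such a code performs, consider the linear (slab) source case, where the self-absorption factor is the average of exp(-mu*x) over the source depth; Dixon's cylindrical and spherical cases involve more elaborate geometry factors. This sketch is not the SELFABS code itself:

```python
import math

def self_absorption_factor(mu, t):
    """Fraction of photons escaping a uniformly active slab source of
    thickness t with linear attenuation coefficient mu: the average of
    exp(-mu*x) over 0 <= x <= t, i.e. (1 - exp(-mu*t)) / (mu*t)."""
    if mu * t == 0:
        return 1.0
    return (1.0 - math.exp(-mu * t)) / (mu * t)

# Thin, low-Z sample: nearly no self-absorption.
print(self_absorption_factor(0.01, 0.1))  # ~0.9995
# Thick, high-Z sample (e.g. a low-energy gamma in lead): large correction.
print(self_absorption_factor(2.0, 1.0))   # ~0.432
```

The measured count rate divided by this factor recovers the rate that would be seen without self-absorption, which is the correction the abstract describes for high-Z matrices.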

  11. Programs and subroutines for calculating cadmium body burdens based on a one-compartment model

    International Nuclear Information System (INIS)

    Robinson, C.V.; Novak, K.M.

    1980-08-01

    A pair of FORTRAN programs for calculating the body burden of cadmium as a function of age is presented, together with a discussion of the assumptions which serve to specify the underlying, one-compartment model. Account is taken of the contributions to the body burden from food, from ambient air, from smoking, and from occupational inhalation. The output is a set of values for ages from birth to 90 years which is either longitudinal (for a given year of birth) or cross-sectional (for a given calendar year), depending on the choice of input parameters
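The underlying one-compartment bookkeeping can be sketched as follows; the absorption fraction and biological half-life used here are illustrative assumptions, not the parameter values of the report:

```python
import math

def body_burden(yearly_intake_ug, absorption=0.05, half_life_years=20.0):
    """One-compartment model: each year's absorbed intake is added to
    the body burden, which decays exponentially with a single
    biological half-life. Parameter values are illustrative only."""
    lam = math.log(2) / half_life_years
    burden = 0.0
    trajectory = []
    for intake in yearly_intake_ug:
        burden = burden * math.exp(-lam) + absorption * intake
        trajectory.append(burden)
    return trajectory

# Constant dietary intake of ~30 ug/day from birth to age 90
trajectory = body_burden([365 * 30] * 90)
print(trajectory[0], trajectory[-1])
```

With a constant intake the burden rises toward the steady-state value a*I / (1 - exp(-lambda)), which is the longitudinal behaviour the report's output tables describe; a cross-sectional run would instead vary the intake history by calendar year.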

  12. Probabilistic Requirements (Partial) Verification Methods Best Practices Improvement. Variables Acceptance Sampling Calculators: Derivations and Verification of Plans. Volume 1

    Science.gov (United States)

    Johnson, Kenneth L.; White, K, Preston, Jr.

    2012-01-01

    The NASA Engineering and Safety Center was requested to improve on the Best Practices document produced for the NESC assessment, Verification of Probabilistic Requirements for the Constellation Program, by giving a recommended procedure for using acceptance sampling by variables techniques. This recommended procedure would be used as an alternative to the potentially resource-intensive acceptance sampling by attributes method given in the document. This document contains the outcome of the assessment.

  13. Mathematical programmes for calculator type PTK 1072 to calculate the radioactive contamination in foods

    International Nuclear Information System (INIS)

    Varga, E.; Visi, Gy.

    1982-01-01

Mathematical programmes are given for the Hungarian-made PTK 1072 calculator to ease the lengthy calculations used in laboratory examinations for the control of radioactive materials in food. The basic considerations in constructing a programme, the method, the mathematical formulae, and the variants of calculation and programme control are shown by examples. By adapting these basic considerations, the writing of programmes for other calculator types can also be facilitated. (author)
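The record does not reproduce the programmes themselves, but the kind of routine calculation they automate can be sketched as follows (an illustrative specific-activity formula with hypothetical inputs, not the original calculator code):

```python
def activity_concentration(gross_counts, live_time_s, bkg_rate_cps,
                           efficiency, gamma_yield, mass_kg):
    """Specific activity (Bq/kg) from a gamma count: subtract the
    background rate from the gross count rate, then divide by the
    detection efficiency, the gamma emission probability and the
    sample mass. Illustrative formula only."""
    net_rate = gross_counts / live_time_s - bkg_rate_cps
    return net_rate / (efficiency * gamma_yield * mass_kg)

# Hypothetical measurement: 6000 counts in 1 h, 0.5 cps background,
# 5% efficiency, 85% emission probability, 1 kg sample.
print(activity_concentration(6000, 3600, 0.5, 0.05, 0.85, 1.0))
```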

  14. User's manual for the National Water-Quality Assessment Program Invertebrate Data Analysis System (IDAS) software, version 5

    Science.gov (United States)

    Cuffney, Thomas F.; Brightbill, Robin A.

    2011-01-01

    The Invertebrate Data Analysis System (IDAS) software was developed to provide an accurate, consistent, and efficient mechanism for analyzing invertebrate data collected as part of the U.S. Geological Survey National Water-Quality Assessment (NAWQA) Program. The IDAS software is a stand-alone program for personal computers that run Microsoft Windows(Registered). It allows users to read data downloaded from the NAWQA Program Biological Transactional Database (Bio-TDB) or to import data from other sources either as Microsoft Excel(Registered) or Microsoft Access(Registered) files. The program consists of five modules: Edit Data, Data Preparation, Calculate Community Metrics, Calculate Diversities and Similarities, and Data Export. The Edit Data module allows the user to subset data on the basis of taxonomy or sample type, extract a random subsample of data, combine or delete data, summarize distributions, resolve ambiguous taxa (see glossary) and conditional/provisional taxa, import non-NAWQA data, and maintain and create files of invertebrate attributes that are used in the calculation of invertebrate metrics. The Data Preparation module allows the user to select the type(s) of sample(s) to process, calculate densities, delete taxa on the basis of laboratory processing notes, delete pupae or terrestrial adults, combine lifestages or keep them separate, select a lowest taxonomic level for analysis, delete rare taxa on the basis of the number of sites where a taxon occurs and (or) the abundance of a taxon in a sample, and resolve taxonomic ambiguities by one of four methods. The Calculate Community Metrics module allows the user to calculate 184 community metrics, including metrics based on organism tolerances, functional feeding groups, and behavior. The Calculate Diversities and Similarities module allows the user to calculate nine diversity and eight similarity indices. 
The Data Export module allows the user to export data to other software packages (CANOCO, Primer

  15. Decomposition and (importance) sampling techniques for multi-stage stochastic linear programs

    Energy Technology Data Exchange (ETDEWEB)

    Infanger, G.

    1993-11-01

The difficulty of solving large-scale multi-stage stochastic linear programs arises from the sheer number of scenarios associated with numerous stochastic parameters. The number of scenarios grows exponentially with the number of stages, and problems easily get out of hand even for very moderate numbers of stochastic parameters per stage. Our method combines dual (Benders) decomposition with Monte Carlo sampling techniques. We employ importance sampling to efficiently obtain accurate estimates of both expected future costs and the gradients and right-hand sides of cuts. The method enables us to solve practical large-scale problems with many stages and numerous stochastic parameters per stage. We discuss the theory of sharing and adjusting cuts between different scenarios in a stage. We derive probabilistic lower and upper bounds, using importance path sampling for the upper-bound estimation. Initial numerical results are promising.
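The variance-reduction idea behind importance sampling can be shown on a toy discrete scenario set: sampling scenarios with probability proportional to probability times cost, and reweighting by the likelihood ratio, recovers the expected cost with (in this idealized case) zero variance. This illustrates the principle only, not the paper's decomposition algorithm:

```python
import random

# Discrete scenarios: probabilities p_i and costs c_i
p = [0.05, 0.15, 0.30, 0.50]
c = [100.0, 40.0, 10.0, 1.0]
true_cost = sum(pi * ci for pi, ci in zip(p, c))

# Importance distribution proportional to p_i*c_i concentrates the
# sampling effort on the rare, expensive scenarios (ideal for
# positive costs).
q_raw = [pi * ci for pi, ci in zip(p, c)]
total = sum(q_raw)
q = [qi / total for qi in q_raw]

random.seed(42)
n = 2000
est = 0.0
for _ in range(n):
    i = random.choices(range(len(p)), weights=q)[0]
    est += (p[i] / q[i]) * c[i] / n  # likelihood ratio corrects the change of measure

print(true_cost, est)
```

In practice the ideal distribution is unknown (it requires the costs one is trying to estimate), so approximations built from cost model information are used, which is exactly where the accuracy gains for cut estimation come from.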

  16. PCXMC. A PC-based Monte Carlo program for calculating patient doses in medical x-ray examinations

    International Nuclear Information System (INIS)

    Tapiovaara, M.; Lakkisto, M.; Servomaa, A.

    1997-02-01

The report describes PCXMC, a Monte Carlo program for calculating patients' organ doses and the effective dose in medical x-ray examinations. The organs considered are: the active bone marrow, adrenals, brain, breasts, colon (upper and lower large intestine), gall bladder, heart, kidneys, liver, lungs, muscle, oesophagus, ovaries, pancreas, skeleton, skin, small intestine, spleen, stomach, testes, thymus, thyroid, urinary bladder, and uterus. (42 refs.)

  17. Efficient computer program EPAS-J1 for calculating stress intensity factors of three-dimensional surface cracks

    International Nuclear Information System (INIS)

    Miyazaki, Noriyuki; Watanabe, Takayuki; Yagawa, Genki.

    1982-03-01

A finite element computer program, EPAS-J1, was developed to calculate the stress intensity factors of three-dimensional cracks. In the program, the stress intensity factor is determined by the virtual crack extension method together with distorted elements allocated along the crack front. The program also includes connection elements, based on the Lagrange multiplier concept, to connect different kinds of elements such as solid and shell elements, or shell and beam elements. For a structure containing three-dimensional surface cracks, solid elements are employed only in the neighborhood of a surface crack, while the remainder of the structure is modeled by shell or beam elements, because the crack singularity is very local. Computer storage and computational time can be greatly reduced by this modeling technique, because the three-dimensional solid elements are required only around the crack front. Several numerical analyses were performed with the EPAS-J1 program. First, the accuracies of the connection element and the virtual crack extension method were confirmed using simple structures. Compared with other techniques for connecting different kinds of elements, such as the tying method or the method using anisotropic plate elements, the present connection element is found to provide better results. The virtual crack extension method is also found to provide accurate stress intensity factors. Furthermore, results are presented for stress intensity factor analyses of cylinders with longitudinal or circumferential surface cracks, using combinations of the various kinds of elements together with the connection elements. (author)

  18. Description and application of the EAP computer program for calculating life-cycle energy use and greenhouse gas emissions of household consumption items

    NARCIS (Netherlands)

    Benders, R.M.J.; Wilting, H.C.; Kramer, K.J.; Moll, H.C.

    2001-01-01

    Focusing on reduction in energy use and greenhouse gas emissions, a life-cycle-based analysis tool has been developed. The energy analysis program (EAP) is a computer program for determining energy use and greenhouse gas emissions related to household consumption items, using a hybrid calculation

  19. SHIELD 1.0: development of a shielding calculator program in diagnostic radiology; SHIELD 1.0: desenvolvimento de um programa de calculo de blindagem em radiodiagnostico

    Energy Technology Data Exchange (ETDEWEB)

    Santos, Romulo R.; Real, Jessica V.; Luz, Renata M. da [Hospital Sao Lucas (PUCRS), Porto Alegre, RS (Brazil); Friedrich, Barbara Q.; Silva, Ana Maria Marques da, E-mail: ana.marques@pucrs.br [Pontificia Universidade Catolica do Rio Grande do Sul (PUCRS), Porto Alegre, RS (Brazil)

    2013-08-15

In the shielding calculation of radiological facilities, several parameters are required, such as occupancy, use factor, number of patients, source-barrier distance, area type (controlled or uncontrolled), radiation type (primary or secondary) and the material used in the barrier. Optimizing a shielding design requires reviewing several options for the physical layout of the facility and, mainly, achieving the best cost-benefit relationship for the shielding material. To facilitate this kind of design, a program to calculate shielding in diagnostic radiology was implemented, based on the data and limits established by National Council on Radiation Protection and Measurements (NCRP) Report 147 and SVS-MS 453/98. The program was developed in the C# language and presents a graphical interface for user data input, with reporting capabilities. The module initially implemented, called SHIELD 1.0, calculates barriers for conventional X-ray rooms. The program was validated by comparison with the example shielding calculations presented in NCRP Report 147.

  20. Input data for ALMOD - Calculation memory (ALMOD3W2 version)

    International Nuclear Information System (INIS)

    Madeira, A.A.; Dominguez, L.M.F.

    1987-01-01

This work presents the data set, the calculations and the references for the up-to-date ALMOD version for the simulation of plant transients at nominal conditions or under accident-analysis conditions (conservative hypotheses). Some modifications implemented in the ALMOD source program as a consequence of this work, and a sample case for an Uncontrolled Rod Cluster Control Assembly Bank Withdrawal at Power transient, are given in Appendixes A and B, respectively. (author)

  1. Influence of sampling frequency and load calculation methods on quantification of annual river nutrient and suspended solids loads.

    Science.gov (United States)

    Elwan, Ahmed; Singh, Ranvir; Patterson, Maree; Roygard, Jon; Horne, Dave; Clothier, Brent; Jones, Geoffrey

    2018-01-11

Better management of water quality in streams, rivers and lakes requires precise and accurate estimates of different contaminant loads. We assessed four sampling frequencies (2 days, weekly, fortnightly and monthly) and five load calculation methods (global mean (GM), rating curve (RC), ratio estimator (RE), flow-stratified (FS) and flow-weighted (FW)) to quantify loads of nitrate-nitrogen (NO₃⁻-N), soluble inorganic nitrogen (SIN), total nitrogen (TN), dissolved reactive phosphorus (DRP), total phosphorus (TP) and total suspended solids (TSS) in the Manawatu River, New Zealand. The estimated annual river loads were compared to reference 'true' loads, calculated using daily measurements of flow and water quality from May 2010 to April 2011, to quantify bias (i.e. accuracy) and root mean square error, RMSE (i.e. accuracy and precision). The GM method resulted in relatively higher RMSE values and a consistent negative bias (i.e. underestimation) in estimates of annual river loads across all sampling frequencies. The RC method resulted in the lowest RMSE for TN, TP and TSS at the monthly sampling frequency, yet it highly overestimated the loads for parameters that showed a dilution effect, such as NO₃⁻-N and SIN. The FW and RE methods gave similar results, and there was no essential improvement in using RE over FW. In general, FW and RE performed better than FS in terms of bias, but FS performed slightly better than FW and RE in terms of RMSE for most of the water quality parameters (DRP, TP, TN and TSS) at a monthly sampling frequency. We found no significant decrease in RMSE values for estimates of NO₃⁻-N, SIN, TN and DRP loads when the sampling frequency was increased from monthly to fortnightly. The bias and RMSE values in estimates of TP and TSS loads (estimated by FW, RE and FS), however, showed a significant decrease with weekly or 2-day sampling.
This suggests potential for a higher sampling frequency during flow peaks for more precise
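The contrast between the GM and FW estimators described above (GM's negative bias for flow-driven parameters such as TSS) can be illustrated with a toy synthetic year; this sketches the estimator definitions only, not the study's data:

```python
import math

def flow_weighted_load(c_s, q_s, annual_flows):
    """Flow-weighted (FW) estimator: the flow-weighted mean
    concentration of the sampled days, scaled by total annual flow."""
    fwmc = sum(c * q for c, q in zip(c_s, q_s)) / sum(q_s)
    return fwmc * sum(annual_flows)

def global_mean_load(c_s, annual_flows):
    """Global mean (GM) estimator: arithmetic mean concentration times
    total annual flow; underestimates when concentration rises with flow."""
    return (sum(c_s) / len(c_s)) * sum(annual_flows)

# Synthetic year in which concentration rises with flow (TSS-like)
daily_flow = [20.0 + 15.0 * math.sin(2 * math.pi * d / 365) for d in range(365)]
daily_conc = [0.2 * q for q in daily_flow]
true_load = sum(c * q for c, q in zip(daily_conc, daily_flow))

sample_days = range(0, 365, 30)          # roughly monthly sampling
c_s = [daily_conc[d] for d in sample_days]
q_s = [daily_flow[d] for d in sample_days]
print(true_load, flow_weighted_load(c_s, q_s, daily_flow),
      global_mean_load(c_s, daily_flow))
```

Because high-flow days carry disproportionately high loads, weighting sampled concentrations by flow tracks the true load far better than the plain mean, which is the bias pattern the study reports for GM.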

  2. Computer automation for protection factor calculations of buildings

    International Nuclear Information System (INIS)

    Farafat, M.A.Z.; Madian, A.H.

    2011-01-01

The protection factors of buildings differ according to their constructional and architectural specifications. In the UK and the USA, manual methods were developed to calculate the protection factor of any building that may shield the people in it from gamma rays and fall-out. The manual calculation method is very complex and difficult to use, so the researchers have simplified it into a proposed form that is easier to understand and apply. The researchers have also designed a computer program, in Visual Basic, to calculate the different protection factors of buildings. The program aims to eliminate the time lost in the calculation process: the protection factors for any spaces in a building are computed from its entered specification data. The program determines the protection factor in a very short time, saving effort and time in comparison with the manual calculation.

  3. Rapid Gamma Screening of Shipments of Analytical Samples to Meet DOT Regulations

    International Nuclear Information System (INIS)

    Wojtaszek, P.A.; Remington, D.L.; Ideker-Mulligan, V.

    2006-01-01

    The accelerated closure program at Rocky Flats required the capacity to ship up to 1000 analytical samples per week to off-site commercial laboratories, and to conduct such shipment within 24 hours of sample collection. During a period of near peak activity in the closure project, a regulatory change significantly increased the level of radionuclide data required for shipment of each package. In order to meet these dual challenges, a centralized and streamlined sample management program was developed which channeled analytical samples through a single, high-throughput radiological screening facility. This trailerized facility utilized high purity germanium (HPGe) gamma spectrometers to conduct screening measurements of entire packages of samples at once, greatly increasing throughput compared to previous methods. The In Situ Object Counting System (ISOCS) was employed to calibrate the HPGe systems to accommodate the widely varied sample matrices and packing configurations encountered. Optimum modeling and configuration parameters were determined. Accuracy of the measurements of grouped sample jars was confirmed with blind samples in multiple configurations. Levels of radionuclides not observable by gamma spectroscopy were calculated utilizing a spreadsheet program that can accommodate isotopic ratios for large numbers of different waste streams based upon acceptable knowledge. This program integrated all radionuclide data and output all information required for shipment, including the shipping class of the package. (authors)

  4. Development and verification of an excel program for calculation of monitor units for tangential breast irradiation with external photon beams

    International Nuclear Information System (INIS)

    Woldemariyam, M.G.

    2015-07-01

The accuracy of the MU calculations performed with the Prowess Panther TPS (for Co-60) and Oncentra (for 6 MV and 15 MV x-rays) for tangential breast irradiation was evaluated with measurements made in an anthropomorphic phantom using calibrated Gafchromic EBT2 films. An Excel programme was developed which takes into account the external body surface irregularity of an intact breast or chest wall (and hence the absence of full-scatter conditions) using Clarkson's sector summation technique. A single surface contour of the patient, obtained in a transverse plane containing the MU calculation point, is required for effective implementation of the programme. The outputs of the Excel programme were validated against the respective outputs of the 3D treatment planning systems. The variations between the measured point doses and their calculated counterparts from the TPSs were within the range of -4.74% to 4.52% (mean of -1.33% and SD of 2.69) for the Prowess Panther TPS and -4.42% to 3.14% (mean of -1.47% and SD of 3.95) for the Oncentra TPS. The observed deviations may be attributed to limitations of the dose calculation algorithms within the TPSs, set-up inaccuracies of the phantom during irradiation, and the inherent uncertainties associated with radiochromic film dosimetry. The percentage deviations between the MUs calculated with the two TPSs and the Excel program were within the range of -3.45% to 3.82% (mean of 0.83% and SD of 2.25). These deviations are within the 4% action level recommended by TG-114, indicating that the Excel program can be confidently employed to calculate MUs for 2D-planned tangential breast irradiations or to independently verify MUs calculated with other methods. (au)

  5. CAL3JHH: a Java program to calculate the vicinal coupling constants (3J H,H) of organic molecules.

    Science.gov (United States)

    Aguirre-Valderrama, Alonso; Dobado, José A

    2008-12-01

Here, we present a free web-accessible application, developed in the Java programming language, for the calculation of vicinal coupling constants (3J(H,H)) of organic molecules with the H-Csp3-Csp3-H fragment. This Java applet is oriented to assist chemists in structural and conformational analyses, allowing the user to calculate averaged 3J(H,H) values among conformers according to their Boltzmann populations. The CAL3JHH program uses the Haasnoot-Leeuw-Altona equation and, by reading the molecule geometry from a Protein Data Bank (PDB) file or from multiple PDB files, automatically detects all the coupled hydrogens, evaluating the data needed for this equation. Moreover, a "Graphical viewer" menu allows the display of the results on the 3D molecule structure, as well as the plotting of the Newman projection for the couplings.
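The conformer-averaging step can be sketched with a generic Karplus-type relation (CAL3JHH itself uses the fuller Haasnoot-Leeuw-Altona equation with substituent electronegativity corrections; the coefficients below are generic illustrative values):

```python
import math

def karplus_3j(phi_deg, A=7.76, B=-1.10, C=1.40):
    """Simple Karplus relation 3J = A*cos^2(phi) + B*cos(phi) + C for
    the H-C-C-H dihedral phi (generic coefficients, not the HLA set)."""
    phi = math.radians(phi_deg)
    return A * math.cos(phi) ** 2 + B * math.cos(phi) + C

def boltzmann_average_3j(dihedrals_deg, energies_kcal, T=298.15):
    """Average 3J over conformers weighted by Boltzmann populations."""
    RT = 0.0019872 * T  # kcal/mol
    e0 = min(energies_kcal)
    weights = [math.exp(-(e - e0) / RT) for e in energies_kcal]
    Z = sum(weights)
    return sum(w * karplus_3j(phi)
               for w, phi in zip(weights, dihedrals_deg)) / Z

# anti (180 deg) and two gauche (+/-60 deg) rotamers, gauche 0.5 kcal higher
print(boltzmann_average_3j([180.0, 60.0, -60.0], [0.0, 0.5, 0.5]))
```

The observed coupling is the population-weighted mean because rotamer interconversion is fast on the NMR timescale, which is why the program reports averaged 3J(H,H) values rather than per-conformer ones.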

  6. Development of a depletion program for the calculation of the 3D-burn-up dependent power distributions in light water reactors

    International Nuclear Information System (INIS)

    Bennewitz, F.; Mueller, A.; Wagner, M.R.

    1977-11-01

Based on the nodal collision probability method a multi-dimensional reactor burn-up program MEDIUM has been developed, which is written for 2 neutron energy groups. It is characterized by high computing speed, considerable generality and flexibility, a number of useful program options and good accuracy. The three-dimensional flux calculation model is described, the formulation and method of solution of the nuclear depletion equations and further details of the program structure. The results of a number of comparisons with experimental data and with independent computer programs are presented. (orig.) [de]

  7. Monte Carlo Calculation of Thermal Neutron Inelastic Scattering Cross Section Uncertainties by Sampling Perturbed Phonon Spectra

    Science.gov (United States)

    Holmes, Jesse Curtis

    established that depends on uncertainties in the physics models and methodology employed to produce the DOS. Through Monte Carlo sampling of perturbations from the reference phonon spectrum, an S(alpha, beta) covariance matrix may be generated. In this work, density functional theory and lattice dynamics in the harmonic approximation are used to calculate the phonon DOS for hexagonal crystalline graphite. This form of graphite is used as an example material for the purpose of demonstrating procedures for analyzing, calculating and processing thermal neutron inelastic scattering uncertainty information. Several sources of uncertainty in thermal neutron inelastic scattering calculations are examined, including sources which cannot be directly characterized through a description of the phonon DOS uncertainty, and their impacts are evaluated. Covariances for hexagonal crystalline graphite S(alpha, beta) data are quantified by coupling the standard methodology of LEAPR with a Monte Carlo sampling process. The mechanics of efficiently representing and processing this covariance information is also examined. Finally, with appropriate sensitivity information, it is shown that an S(alpha, beta) covariance matrix can be propagated to generate covariance data for integrated cross sections, secondary energy distributions, and coupled energy-angle distributions. This approach enables a complete description of thermal neutron inelastic scattering cross section uncertainties which may be employed to improve the simulation of nuclear systems.
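
The core of the sampling procedure — drawing perturbed samples around a reference spectrum and forming a sample covariance matrix from them — can be sketched as follows. The three-bin "spectrum" and the 5% relative perturbation are illustrative stand-ins, not the actual graphite phonon DOS data.

```python
import random

def sample_covariance(samples):
    """Unbiased sample covariance matrix from a list of equal-length vectors."""
    n = len(samples)
    dim = len(samples[0])
    means = [sum(s[k] for s in samples) / n for k in range(dim)]
    cov = [[0.0] * dim for _ in range(dim)]
    for s in samples:
        d = [s[k] - means[k] for k in range(dim)]
        for i in range(dim):
            for j in range(dim):
                cov[i][j] += d[i] * d[j] / (n - 1)
    return cov

# Perturb a reference "phonon spectrum" (3 bins, hypothetical) and collect samples
random.seed(1)
reference = [1.0, 2.0, 1.5]
samples = [[x * (1.0 + random.gauss(0.0, 0.05)) for x in reference]
           for _ in range(2000)]
cov = sample_covariance(samples)
```

In the actual methodology each perturbed spectrum would be processed through LEAPR to an S(alpha, beta) table before the covariance is accumulated; the statistics step itself is the same.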

  8. Middlesex Sampling Plant environmental report for calendar year 1989, Middlesex, New Jersey

    International Nuclear Information System (INIS)

    1990-05-01

    The environmental monitoring program, which began in 1980, was continued in 1989 at the former Middlesex Sampling Plant (MSP) site, located in the Borough of Middlesex, New Jersey. The MSP site is part of the Formerly Utilized Sites Remedial Action Program (FUSRAP), a Department of Energy (DOE) program to decontaminate or otherwise control sites where residual radioactive materials remain either from the early years of the nation's atomic energy program or from commercial operations causing conditions that Congress has authorized DOE to remedy. The monitoring program at MSP measures radon concentrations in air; external gamma radiation levels; and uranium and radium concentrations in surface water, groundwater, and sediment. Additionally, several nonradiological parameters are measured in groundwater samples. To verify that the site is in compliance with the DOE radiation protection standard (100 mrem/yr) and to assess its potential effect on public health, the radiation dose was calculated for a hypothetical maximally exposed individual. This report presents the findings of the environmental monitoring program conducted in the area of the Middlesex Sampling Plant (MSP) site during calendar year 1989. 17 refs., 16 figs., 16 tabs

  9. Middlesex Sampling Plant environmental report for calendar year 1989, Middlesex, New Jersey

    Energy Technology Data Exchange (ETDEWEB)

    1990-05-01

    The environmental monitoring program, which began in 1980, was continued in 1989 at the former Middlesex Sampling Plant (MSP) site, located in the Borough of Middlesex, New Jersey. The MSP site is part of the Formerly Utilized Sites Remedial Action Program (FUSRAP), a Department of Energy (DOE) program to decontaminate or otherwise control sites where residual radioactive materials remain either from the early years of the nation's atomic energy program or from commercial operations causing conditions that Congress has authorized DOE to remedy. The monitoring program at MSP measures radon concentrations in air; external gamma radiation levels; and uranium and radium concentrations in surface water, groundwater, and sediment. Additionally, several nonradiological parameters are measured in groundwater samples. To verify that the site is in compliance with the DOE radiation protection standard (100 mrem/yr) and to assess its potential effect on public health, the radiation dose was calculated for a hypothetical maximally exposed individual. This report presents the findings of the environmental monitoring program conducted in the area of the Middlesex Sampling Plant (MSP) site during calendar year 1989. 17 refs., 16 figs., 16 tabs.

  10. Using the sampling method to propagate uncertainties of physical parameters in systems with fissile material

    International Nuclear Information System (INIS)

    Campolina, Daniel de Almeida Magalhães

    2015-01-01

There is an uncertainty for all the components that comprise the model of a nuclear system. Assessing the impact of uncertainties in the simulation of fissionable material systems is essential for realistic calculations, which have been replacing conservative model calculations as computational power increases. The propagation of uncertainty in a Monte Carlo simulation by sampling the input parameters is recent because of the huge computational effort required. By analyzing the uncertainty propagated to the effective neutron multiplication factor (k_eff), the effects of sample size, computational uncertainty, and the efficiency of a random number generator in representing the distributions that characterize the physical uncertainty of a light water reactor were investigated. A script entitled GBsample was implemented to automate the random sampling method, providing the automated process and robust statistical tools the method requires, with support for multiprocessing. The program was based on a black-box model, and the MCNPX code was used with parallel processing for the particle transport calculations. The uncertainties considered were taken from a benchmark experiment in which the effect on k_eff due to physical uncertainties was evaluated through a conservative method. It was found that the efficiency of the random sampling method can be improved by selecting distributions obtained from a random number generator so as to better represent the uncertainty figures. Once convergence of the method was achieved, the best number of components to be sampled was determined in order to reduce the variance of the propagated uncertainty without increasing computational time. It was also observed that if the sampling method is used to calculate the effect on k_eff due to physical uncertainties reported by
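
The black-box sampling scheme can be illustrated with a toy model: sample the uncertain inputs from their distributions, evaluate the model once per draw, and collect statistics on k_eff. The linear toy_keff function below is a hypothetical stand-in for an MCNPX transport calculation, and the input distributions are illustrative.

```python
import random
import statistics

def toy_keff(enrichment, density):
    """Hypothetical stand-in for a transport calculation (NOT MCNPX)."""
    return 0.9 + 0.04 * enrichment + 0.02 * density

random.seed(42)
n_samples = 1000
keffs = []
for _ in range(n_samples):
    e = random.gauss(2.0, 0.05)   # enrichment in wt%, hypothetical uncertainty
    rho = random.gauss(1.0, 0.01) # density in g/cm3, hypothetical uncertainty
    keffs.append(toy_keff(e, rho))

k_mean = statistics.mean(keffs)
k_std = statistics.stdev(keffs)  # propagated physical uncertainty on k_eff
```

With a real transport code each evaluation also carries its own statistical (computational) uncertainty, which must be kept small relative to k_std for the propagated figure to be meaningful.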

  11. Hyperfine electric parameters calculation in Si samples irradiated with 57Mn

    International Nuclear Information System (INIS)

    Abreu, Y.; Cruz, C. M.; Pinnera, I.; Leyva, A.; Van Espen, P.; Perez, C.

    2011-01-01

The radiation damage created in crystalline silicon by 57Mn→57Fe ion implantation was characterized by Moessbauer spectroscopy, showing three main lines assigned to substitutional, interstitial, and damage-configuration sites of the implanted ions. The hyperfine electric parameters, quadrupole splitting and isomer shift, were calculated for various implantation environments. The calculations used the full-potential linearized augmented plane-wave plus local orbitals ((L)APW+lo) method as embodied in the WIEN2k code. Good agreement was found between the experimental and calculated values for some implantation configurations, suggesting that the implantation environments could be similar to the ones proposed by the authors. (Author)

  12. Shielding Calculations on Waste Packages - The Limits and Possibilities of different Calculation Methods by the example of homogeneous and inhomogeneous Waste Packages

    Science.gov (United States)

    Adams, Mike; Smalian, Silva

    2017-09-01

For nuclear waste packages, the expected dose rates and nuclide inventory are calculated in advance. Depending on the packaging of the nuclear waste, deterministic programs like MicroShield® provide a range of results for each type of packaging. Stochastic programs like the Monte-Carlo N-Particle Transport Code System (MCNP®), on the other hand, provide reliable results for complex geometries; however, this type of program requires a fully trained operator, and calculations are time consuming. The problem is thus to choose an appropriate program for a specific geometry. We therefore compared the results of deterministic programs like MicroShield® with those of stochastic programs like MCNP®. These comparisons enable us to make a statement about the applicability of the various programs for chosen container types. We found that for thin-walled geometries, deterministic programs like MicroShield® are well suited to calculating the dose rate. For cylindrical containers with inner shielding, however, deterministic programs reach their limits. Furthermore, we investigate the effect of an inhomogeneous material and activity distribution on the results. The calculations are still ongoing; results will be presented in the final abstract.
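
For the thin-walled cases where a deterministic code suffices, the underlying estimate is essentially a point-kernel calculation: the uncollided flux attenuated exponentially through the shield, optionally corrected by a buildup factor. A minimal sketch of that kernel (illustrative only, not MicroShield®'s actual implementation):

```python
import math

def flux_point_source(s_gamma, mu, thickness, distance, buildup=1.0):
    """Point-kernel estimate of photon flux behind a slab shield:
    phi = S * B * exp(-mu * t) / (4 * pi * r^2),
    with source strength S (photons/s), attenuation coefficient mu (1/cm),
    shield thickness t (cm), distance r (cm), and buildup factor B."""
    return s_gamma * buildup * math.exp(-mu * thickness) / (4.0 * math.pi * distance ** 2)

# Illustrative numbers: 1e6 photons/s, mu = 0.5/cm, 2 cm shield, 1 m distance
shielded = flux_point_source(1e6, 0.5, 2.0, 100.0)
bare = flux_point_source(1e6, 0.5, 0.0, 100.0)
```

Deterministic codes sum such kernels over a discretized source volume; it is exactly the geometries where scattering paths dominate (e.g., inner shielding in a cylinder) that break this approximation and call for MCNP®.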

  13. CO2 flowrate calculator

    International Nuclear Information System (INIS)

    Carossi, Jean-Claude

    1969-02-01

A CO2 flowrate calculator has been designed for measuring and recording the gas flow in the loops of the Pegase reactor. The analog calculator applies, at every moment, Bernoulli's formula to the values that characterize the carbon dioxide flow through a nozzle. The calculator electronics are described (including a sampling calculator and a two-variable function generator), with its amplifiers, triggers, interpolator, multiplier, etc. Calculator operation and setting are also presented.
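
The principle can be sketched with the incompressible form of Bernoulli's equation applied to a nozzle: the volumetric flow follows from the measured pressure drop and gas density. The discharge coefficient and input values below are illustrative, and a real CO2 measurement (as in the analog device) also needs density and compressibility corrections.

```python
import math

def nozzle_volume_flow(dp, rho, area, cd=0.98):
    """Incompressible Bernoulli estimate of volumetric flow through a nozzle:
    Q = Cd * A * sqrt(2 * dp / rho),
    with dp in Pa, rho in kg/m3, area in m2; returns m3/s."""
    return cd * area * math.sqrt(2.0 * dp / rho)

# Illustrative values: 1 kPa differential pressure, rho = 2 kg/m3, 10 cm2 throat
q = nozzle_volume_flow(dp=1000.0, rho=2.0, area=1e-3)
```

The square-root dependence on the pressure drop is what the analog function generator had to realize continuously.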

  14. Brine Sampling and Evaluation Program: 1988 report

    International Nuclear Information System (INIS)

    Deal, D.E.; Abitz, R.J.; Case, J.B.; Crawley, M.E.; Deshler, R.M.; Drez, P.E.; Givens, C.A.; King, R.B.; Myers, J.; Pietz, J.M.; Roggenthen, W.M.; Tyburski, J.R.; Belski, D.S.; Niou, S.; Wallace, M.G.

    1989-12-01

    The data presented in this report are the result of Brine Sampling and Evaluation Program (BSEP) activities at the Waste Isolation Pilot Plant (WIPP) during 1988. These activities, which are a continuation and update of studies that began in 1982 as part of the Site Validation Program, were formalized as the BSEP in 1985 to document and investigate the origins, hydraulic characteristics, extent, and composition of brine occurrences in the Permian Salado Formation, and seepage of that brine into the excavations at the WIPP. Previous BSEP reports (Deal and Case, 1987; Deal and others, 1987) described the results of ongoing activities that monitor brine inflow into boreholes in the facility, moisture content of the Salado Formation, brine geochemistry, and brine weeps and crusts. The information provided in this report updates past work and describes progress made during the calendar year 1988. During 1988, BSEP activities focused on four major areas to describe and quantify brine activity: (1) monitoring of brine inflow parameters, e.g., measuring brines recovered from holes drilled upward from the underground drifts (upholes), downward from the underground drifts (downholes), and near-horizontal holes; (2) characterizing the brine, e.g., the geochemistry of the brine and the presence of bacteria and their possible interactions with experiments and operations; (3) characterizing formation properties associated with the occurrence of brine; e.g., determining the water content of various geologic units, examining these units in boreholes using a video camera system, and measuring their resistivity (conductivity); and (4) modeling to examine the interaction of salt deformation near the workings and brine seepage through the deforming salt. 77 refs., 48 figs., 32 tabs

  15. DIGA/NSL new calculational model in slab geometry

    International Nuclear Information System (INIS)

    Makai, M.; Gado, J.; Kereszturi, A.

    1987-04-01

A new calculational model is presented, based on a modified finite-difference algorithm in which the coefficients are determined by means of so-called gamma matrices. The DIGA program determines the gamma matrices, and the NSL program realizes the modified finite-difference model. Both programs assume slab cell geometry; DIGA assumes 2 energy groups and 3 diffusive regions. The DIGA/NSL programs serve to study the new calculational model. (author)
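
For context, a conventional (unmodified) finite-difference diffusion solve in slab geometry — the baseline that the gamma-matrix coefficients refine — can be sketched for one energy group and one region with zero-flux boundaries; all parameters below are illustrative.

```python
def slab_diffusion_flux(d, sigma_a, source, width, n=50):
    """One-group finite-difference diffusion in slab geometry with zero-flux
    boundaries: -D * phi'' + sigma_a * phi = source, solved on n interior
    nodes with the Thomas (tridiagonal) algorithm."""
    h = width / (n + 1)
    sub = [-d / h**2] * n              # sub-diagonal
    diag = [2.0 * d / h**2 + sigma_a] * n
    sup = [-d / h**2] * n              # super-diagonal
    rhs = [source] * n
    # forward elimination
    for i in range(1, n):
        m = sub[i] / diag[i - 1]
        diag[i] -= m * sup[i - 1]
        rhs[i] -= m * rhs[i - 1]
    # back substitution
    phi = [0.0] * n
    phi[-1] = rhs[-1] / diag[-1]
    for i in range(n - 2, -1, -1):
        phi[i] = (rhs[i] - sup[i] * phi[i + 1]) / diag[i]
    return phi

# Wide slab: interior flux should approach source / sigma_a away from the edges
phi = slab_diffusion_flux(d=1.0, sigma_a=0.5, source=1.0, width=20.0, n=99)
```

A modified scheme such as DIGA/NSL replaces the simple 2D/h² couplings with coefficients derived from the gamma matrices so that coarse meshes reproduce fine-mesh (or transport) results.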

  16. Effect of sample matrix composition on INAA sample weights, measurement precisions, limits of detection, and optimum conditions

    International Nuclear Information System (INIS)

    Guinn, V.P.; Nakazawa, L.; Leslie, J.

    1984-01-01

The instrumental neutron activation analysis (INAA) Advance Prediction Computer Program (APCP) is extremely useful in guiding one to optimum subsequent experimental analyses of samples of all matrix types. By taking into account the contributions to the cumulative Compton-continuum levels from all significant induced gamma-emitting radionuclides, it provides good INAA advance estimates of detectable photopeaks, measurement precisions, concentration lower limits of detection (LODs), and optimum irradiation/decay/counting conditions, as well as of the very important maximum allowable sample size for each set of conditions calculated. The usefulness and importance of the four output parameters cited in the title are discussed using the INAA APCP outputs for NBS SRM-1632 Coal as the example.
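
Concentration LODs of this kind are commonly derived from the continuum background under a photopeak. A minimal sketch using Currie's well-known detection-limit formula — an assumption here, since the APCP's exact formulation may differ — with an illustrative counting sensitivity:

```python
import math

def currie_detection_limit(background_counts):
    """Currie's a-priori detection limit in counts: L_D = 2.71 + 4.65 * sqrt(B),
    where B is the continuum background under the photopeak."""
    return 2.71 + 4.65 * math.sqrt(background_counts)

def concentration_lod(background_counts, counts_per_ug):
    """Convert the count-domain detection limit to a concentration LOD (ug)
    using a calibration sensitivity in net counts per microgram."""
    return currie_detection_limit(background_counts) / counts_per_ug

# Illustrative: 10,000 background counts under the peak, 100 net counts/ug
lod_ug = concentration_lod(10000.0, 100.0)
```

This is why the Compton-continuum prediction drives the APCP's LOD estimates: a higher continuum raises B, and the LOD grows with its square root.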

  17. Development of an inventory/archive program for the retention, management, and disposition of tank characterization samples at the 222-S laboratory

    International Nuclear Information System (INIS)

    Seidel, C.M.

    1998-01-01

The Hanford Tank Waste Remediation System (TWRS) Characterization Program is responsible for coordinating the sampling and analysis of the 177 large underground storage tanks at the Hanford site. The 222-S laboratory has been the primary laboratory for chemical analysis of this highly radioactive material and has been accumulating these samples for many years. As part of the Fiscal Year 1998 laboratory work scope, the 222-S laboratory has performed a formal physical inventory of all tank characterization samples currently in storage. In addition, an updated inventory/archive program has been designed. This program defines sample storage, retention, consolidation, maintenance, and disposition activities that will ensure that sample integrity is preserved to the greatest practical extent. The new program also provides for the continued availability of waste material in a form useful for future bench-scale studies. Finally, when samples have exceeded their useful lifetime, the program provides for their disposition from the laboratory in a controlled, safe, and environmentally compliant manner. The 222-S laboratory maintains custody over samples of tank waste material shipped to the laboratory for chemical analysis. The storage of these samples currently requires an entire hotcell fully dedicated to sample archive storage and is rapidly encroaching on additional hotcell space. As additional samples are received, they are beginning to limit the utility of the 222-S laboratory hotcells for other activities such as sample extrusion and subsampling. The 222-S laboratory tracks the number of sample containers and the mass of each sample through an internal database, which has recently been verified and updated via a physical inventory.

  18. FastChem: A computer program for efficient complex chemical equilibrium calculations in the neutral/ionized gas phase with applications to stellar and planetary atmospheres

    Science.gov (United States)

    Stock, Joachim W.; Kitzmann, Daniel; Patzer, A. Beate C.; Sedlmayr, Erwin

    2018-06-01

For the calculation of complex neutral/ionized gas-phase chemical equilibria, we present a semi-analytical, versatile, and efficient computer program called FastChem. The applied method is based on the solution of a system of coupled nonlinear (and linear) algebraic equations in many variables, namely the law of mass action and the element conservation equations including charge balance. Specifically, the system of equations is decomposed into a set of coupled nonlinear equations in one variable each, which are solved analytically whenever feasible to reduce computation time. Notably, the electron density is determined using the method of Nelder and Mead at low temperatures. The program is written in object-oriented C++, which makes it easy to couple the code with other programs, although a stand-alone version is provided. FastChem can be used in parallel or sequentially and is available under the GNU General Public License version 3 at https://github.com/exoclime/FastChem together with several sample applications. The code has been successfully validated against previous studies, and its convergence behavior has been tested even for extreme physical parameter ranges down to 100 K and up to 1000 bar. FastChem converges stably and robustly in even the most demanding chemical situations, which have sometimes posed extreme challenges for previous algorithms.
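
The decomposition into one-variable equations can be illustrated with a single dissociation equilibrium A2 ⇌ 2A: the law of mass action (n_A2 = K * n_A²) and element conservation (n_A + 2 n_A2 = n_total) combine into one scalar equation in the monomer density, solvable by Newton's method. The formation constant and total abundance below are hypothetical, and this toy omits the charge balance and multi-element coupling FastChem actually handles.

```python
def equilibrium_monomer_density(k_form, n_total, tol=1e-12):
    """Solve n + 2 * K * n^2 = n_total for the monomer number density n
    via Newton's method -- a one-variable decomposition in the spirit of
    FastChem's approach (illustrative toy, not its actual algorithm)."""
    n = n_total  # initial guess: everything atomic
    for _ in range(100):
        f = n + 2.0 * k_form * n * n - n_total
        fp = 1.0 + 4.0 * k_form * n
        step = f / fp
        n -= step
        if abs(step) < tol * max(n, 1.0):
            break
    return n

# Hypothetical formation constant and total element abundance
n_mono = equilibrium_monomer_density(k_form=10.0, n_total=1.0)
```

For K = 10 and n_total = 1 the quadratic has the exact positive root n = 0.2, which the iteration recovers; element conservation then gives the molecular density n_A2 = K n² = 0.4.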

  19. Estimates of Radionuclide Loading to Cochiti Lake from Los Alamos Canyon Using Manual and Automated Sampling

    Energy Technology Data Exchange (ETDEWEB)

    McLean, Christopher T. [Univ. of New Mexico, Albuquerque, NM (United States)

    2000-07-01

Los Alamos National Laboratory has a long-standing program of sampling storm water runoff inside the Laboratory boundaries. In 1995, the Laboratory started collecting the samples using automated storm water sampling stations; prior to this time the samples were collected manually. The Laboratory has also been periodically collecting sediment samples from Cochiti Lake. This paper presents the data for Pu-238 and Pu-239 bound to the sediments for Los Alamos Canyon storm water runoff and compares the sampling types by mass loading and as a percentage of the sediment deposition to Cochiti Lake. The data for both manual and automated sampling are used to calculate mass loads from Los Alamos Canyon on a yearly basis. The automated samples show mass loading 200-500 percent greater for Pu-238 and 300-700 percent greater for Pu-239 than the manual samples. Using the mean manual flow volume for mass loading calculations, the automated samples are over 900 percent greater for Pu-238 and over 1800 percent greater for Pu-239. Evaluating the Pu-238 and Pu-239 activities as a percentage of deposition to Cochiti Lake indicates that the automated samples are 700-1300 percent greater for Pu-238 and 200-500 percent greater for Pu-239. The variance was calculated by two methods. The first method calculates the variance for each sample event. The second method calculates the variances by the total volume of water discharged in Los Alamos Canyon for the year.
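
The annual mass-loading calculation itself is a concentration-times-volume sum over runoff events. A minimal sketch with hypothetical event data (not the actual Los Alamos Canyon measurements):

```python
def annual_mass_load(event_concentrations, event_volumes):
    """Annual load as the sum over runoff events of concentration * volume.
    With concentrations in pCi/L and volumes in L, returns total activity in pCi."""
    return sum(c * v for c, v in zip(event_concentrations, event_volumes))

# Hypothetical Pu-239 event data: (pCi/L, L) for three runoff events in one year
conc = [0.12, 0.30, 0.08]
vols = [5.0e6, 2.0e6, 8.0e6]
load_pci = annual_mass_load(conc, vols)
```

The two variance methods described above differ in what is held fixed: per-event variances propagate through this sum term by term, whereas the annual-volume method applies the total yearly discharge to a single mean concentration.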

  20. SCRAM: a program for calculating scram times

    International Nuclear Information System (INIS)

    Bourquin, R.D.; Birney, K.R.

    1975-01-01

    Prediction of scram times is one facet of design analysis for control rod assemblies. Parameters for the entire control rod sub-system must be considered in such analyses and experimental verification is used when it is available. The SCRAM computer program was developed for design analysis of improved control rod assemblies for the Fast Flux Test Facility (FFTF). A description of the evolution of the program from basic equations to a functional design analysis tool is presented