WorldWideScience

Sample records for programmed sample calculations

  1. Program for TI programmable 59 calculator for calculation of 3H concentration of water samples

    International Nuclear Information System (INIS)

    Hussain, S.D.; Asghar, G.

    1982-09-01

A program has been developed for the TI Programmable 59 calculator (Texas Instruments Inc.) to calculate the 3H (tritium) concentration of water samples, processed with or without prior electrolytic enrichment, from observed parameters such as the count rate. The procedure for using the program is described in detail, and a brief description of the laboratory treatment of samples and of the mathematical equations used in the calculations is given. (orig./A.B.)

  2. Program GWPROB: Calculation of inflow to groundwater measuring points during sampling

    International Nuclear Information System (INIS)

    Kaleris, V.

    1990-01-01

The program GWPROB was developed by the DFG task group for modelling of large-area heat and pollutant transport in groundwater at the Institute of Hydrological Engineering, Hydraulics and Groundwater Department. The project was funded by the Deutsche Forschungsgemeinschaft. (BBR)

  3. Computer programs for lattice calculations

    International Nuclear Information System (INIS)

    Keil, E.; Reich, K.H.

    1984-01-01

The aim of the workshop was to find out whether some standardisation could be achieved for future work in this field. A certain amount of useful information was unearthed, and desirable features of a "standard" program emerged. Progress is not expected to be breathtaking, although participants (from practically all interested US, Canadian and European accelerator laboratories) agreed that the mathematics of the existing programs is more or less the same. Apart from the NIH (not invented here) effect, there is an understandable tendency to stay with a program one knows, adding to it only if unavoidable, rather than to start using a new one. Users of the well-supported program TRANSPORT (designed for beam-line calculations) would prefer to have it fully extended for lattice calculations (to some extent already possible now), while SYNCH users wish to see that program provided with a user-friendly input rather than spend time and effort mastering a new program

  4. Calculation methods in program CCRMN

    Energy Technology Data Exchange (ETDEWEB)

    Chonghai, Cai [Nankai Univ., Tianjin (China). Dept. of Physics; Qingbiao, Shen [Chinese Nuclear Data Center, Beijing, BJ (China)

    1996-06-01

CCRMN is a program for calculating complex reactions of a medium-heavy nucleus with six light particles. In CCRMN, the incoming particles can be neutrons, protons, 4He, deuterons, tritons and 3He. The CCRMN code is constructed within the framework of the optical model, pre-equilibrium statistical theory based on the exciton model, and the evaporation model. CCRMN is valid in the energy region from about 1 MeV; it gives correct results for optical-model quantities and all kinds of reaction cross sections. The program has been applied in practical calculations and has yielded reasonable results.

  5. SNS Sample Activation Calculator Flux Recommendations and Validation

    Energy Technology Data Exchange (ETDEWEB)

    McClanahan, Tucker C. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Spallation Neutron Source (SNS); Gallmeier, Franz X. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Spallation Neutron Source (SNS); Iverson, Erik B. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Spallation Neutron Source (SNS); Lu, Wei [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Spallation Neutron Source (SNS)

    2015-02-01

    The Spallation Neutron Source (SNS) at Oak Ridge National Laboratory (ORNL) uses the Sample Activation Calculator (SAC) to calculate the activation of a sample after the sample has been exposed to the neutron beam in one of the SNS beamlines. The SAC webpage takes user inputs (choice of beamline, the mass, composition and area of the sample, irradiation time, decay time, etc.) and calculates the activation for the sample. In recent years, the SAC has been incorporated into the user proposal and sample handling process, and instrument teams and users have noticed discrepancies in the predicted activation of their samples. The Neutronics Analysis Team validated SAC by performing measurements on select beamlines and confirmed the discrepancies seen by the instrument teams and users. The conclusions were that the discrepancies were a result of a combination of faulty neutron flux spectra for the instruments, improper inputs supplied by SAC (1.12), and a mishandling of cross section data in the Sample Activation Program for Easy Use (SAPEU) (1.1.2). This report focuses on the conclusion that the SAPEU (1.1.2) beamline neutron flux spectra have errors and are a significant contributor to the activation discrepancies. The results of the analysis of the SAPEU (1.1.2) flux spectra for all beamlines will be discussed in detail. The recommendations for the implementation of improved neutron flux spectra in SAPEU (1.1.3) are also discussed.
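
    The activation physics behind such a calculator can be sketched with the standard single-nuclide, one-energy-group activation equation. This is an illustrative simplification, not the SAC/SAPEU implementation, and all parameter values below are hypothetical.

```python
import math

def activation_bq(n_atoms, sigma_cm2, flux, t_irr, t_decay, half_life):
    """Single-nuclide, one-group activation estimate in Bq.

    A = N * sigma * phi * (1 - exp(-lambda*t_irr)) * exp(-lambda*t_decay)
    Times in seconds, cross section in cm^2, flux in n/cm^2/s.
    """
    lam = math.log(2) / half_life                # decay constant, 1/s
    production = n_atoms * sigma_cm2 * flux      # production rate, atoms/s
    return production * (1 - math.exp(-lam * t_irr)) * math.exp(-lam * t_decay)

# Hypothetical sample: 1e20 target atoms, 1 barn capture cross section,
# 1e8 n/cm^2/s beamline flux, 1 h irradiation, 1 h decay, 15 h half-life.
activity = activation_bq(1e20, 1e-24, 1e8, 3600.0, 3600.0, 15 * 3600.0)
```

    A faulty flux spectrum enters this estimate directly through the `flux` (and, in a multigroup version, the group-wise cross section) terms, which is why spectrum errors propagate straight into the predicted activity.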

  6. Sample size calculation in metabolic phenotyping studies.

    Science.gov (United States)

    Billoir, Elise; Navratil, Vincent; Blaise, Benjamin J

    2015-09-01

The number of samples needed to identify significant effects is a key question in biomedical studies, with consequences for experimental designs, costs and potential discoveries. In metabolic phenotyping studies, sample size determination remains a complex step, due in particular to the multiple hypothesis-testing framework and the top-down, hypothesis-free approach with no a priori known metabolic target. Until now, no standard procedure has been available for this purpose. In this review, we discuss sample size estimation procedures for metabolic phenotyping studies. We release an automated implementation of the Data-driven Sample size Determination (DSD) algorithm for MATLAB and GNU Octave. Original research concerning DSD was published elsewhere. DSD allows the determination of an optimized sample size in metabolic phenotyping studies. The procedure uses analytical data from only a small pilot cohort to generate an expanded data set. The statistical recoupling of variables procedure is used to identify metabolic variables, and their intensity distributions are estimated by kernel smoothing or log-normal density fitting. Statistically significant metabolic variations are evaluated using the Benjamini-Yekutieli correction and processed for data sets of various sizes. Optimal sample size determination is achieved in a context of biomarker discovery (at least one statistically significant variation) or metabolic exploration (a maximum number of statistically significant variations). The DSD toolbox is encoded in MATLAB R2008A (Mathworks, Natick, MA) for kernel and log-normal estimates, and in GNU Octave for log-normal estimates (kernel density estimates are not robust enough in GNU Octave). It is available at http://www.prabi.fr/redmine/projects/dsd/repository, with a tutorial at http://www.prabi.fr/redmine/projects/dsd/wiki. © The Author 2015. Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
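
    The Benjamini-Yekutieli step-up correction mentioned above can be sketched as follows; this is a generic implementation of the published procedure, not code from the DSD toolbox.

```python
def benjamini_yekutieli(pvals, q=0.05):
    """Indices of hypotheses rejected by the Benjamini-Yekutieli step-up
    procedure, which controls the false discovery rate under arbitrary
    dependence among the tests."""
    m = len(pvals)
    c_m = sum(1.0 / i for i in range(1, m + 1))    # harmonic correction factor
    order = sorted(range(m), key=lambda i: pvals[i])
    k = 0
    for rank, idx in enumerate(order, start=1):
        if pvals[idx] <= rank * q / (m * c_m):
            k = rank                               # largest rank passing the test
    return set(order[:k])

rejected = benjamini_yekutieli([0.0001, 0.0005, 0.02, 0.2, 0.9])
```

    The harmonic factor c(m) is what makes this correction valid under arbitrary dependence, at the cost of conservatism relative to Benjamini-Hochberg.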

  7. 10 CFR Appendix to Part 474 - Sample Petroleum-Equivalent Fuel Economy Calculations

    Science.gov (United States)

    2010-01-01

... 10 Energy 3 2010-01-01 Sample Petroleum-Equivalent Fuel Economy Calculations ..., DEVELOPMENT, AND DEMONSTRATION PROGRAM; PETROLEUM-EQUIVALENT FUEL ECONOMY CALCULATION; Pt. 474, App. Appendix to Part 474—Sample Petroleum-Equivalent Fuel Economy Calculations. Example 1: An electric vehicle is...

  8. Some calculator programs for particle physics

    International Nuclear Information System (INIS)

    Wohl, C.G.

    1982-01-01

Seven calculator programs that do simple chores arising in elementary particle physics are given. LEGENDRE evaluates the Legendre polynomial series Σ a_n P_n(x) at a series of values of x. ASSOCIATED LEGENDRE evaluates the first associated Legendre polynomial series Σ b_n P_n^1(x) at a series of values of x. CONFIDENCE calculates confidence levels for χ², Gaussian, or Poisson probability distributions. TWO BODY calculates the c.m. energy, the initial- and final-state c.m. momenta, and the extreme values of t and u for a 2-body reaction. ELLIPSE calculates coordinates of points for drawing an ellipse plot showing the kinematics of a 2-body reaction or decay. DALITZ RECTANGULAR calculates coordinates of points on the boundary of a rectangular Dalitz plot. DALITZ TRIANGULAR calculates coordinates of points on the boundary of a triangular Dalitz plot. There are short versions of CONFIDENCE (EVEN N and POISSON) that calculate confidence levels for the even-degree-of-freedom χ² and Poisson cases, and there is a short version of TWO BODY (CM) that calculates just the c.m. energy and initial-state momentum. The programs are written for the HP-97 calculator
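
    As an illustration, the core of a TWO BODY-style computation (c.m. energy and initial-state c.m. momentum for a fixed-target reaction) fits in a few lines. The reaction and units below are illustrative, not taken from the HP-97 listings.

```python
import math

def cm_energy_and_momentum(m_beam, m_target, p_lab):
    """c.m. energy sqrt(s) and initial-state c.m. momentum for a
    fixed-target 2-body reaction (natural units, c = 1)."""
    e_lab = math.hypot(p_lab, m_beam)        # beam total energy
    s = m_beam**2 + m_target**2 + 2.0 * m_target * e_lab
    sqrt_s = math.sqrt(s)
    # Kallen (triangle) function gives the c.m. momentum
    lam = (s - (m_beam + m_target)**2) * (s - (m_beam - m_target)**2)
    p_cm = math.sqrt(lam) / (2.0 * sqrt_s)
    return sqrt_s, p_cm

# Example: pi- on a proton at 10 GeV/c beam momentum (masses in GeV)
sqrt_s, p_cm = cm_energy_and_momentum(0.13957, 0.93827, 10.0)
```

    For a fixed target the result reduces to the identity p_cm = m_target · p_lab / √s, a handy check on the implementation.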

  9. Preeminence and prerequisites of sample size calculations in clinical trials

    OpenAIRE

    Richa Singhal; Rakesh Rana

    2015-01-01

The key components while planning a clinical study are the study design, study duration, and sample size. These features are an integral part of planning a clinical trial efficiently, ethically, and cost-effectively. This article describes some of the prerequisites for sample size calculation. It also explains that sample size calculation differs between study designs. The article describes in detail the sample size calculation for a randomized controlled trial when the primary outcome is a continuous variable and when it is a proportion or a qualitative variable.

  10. Program for the surface muon spectra calculation

    International Nuclear Information System (INIS)

    Arkatov, Yu.M.; Voloshchuk, V.I.; Zolenko, V.A.; Prokhorets, I.M.; Soldatov, S.A.

    1987-01-01

A program for calculating the "surface" muon spectrum is described. The algorithm is based on Monte Carlo simulation of the coordinates of the π-meson birth point and the direction of its escape from the meson-forming target (MFT), according to the angular distribution. Ionization losses of π- and μ-mesons in the target are taken into account. The "surface" muon spectrum is calculated for electron energies from 150 MeV up to 1000 MeV. Spectra of π-mesons are calculated both with and without account of ionization losses in the target. Distributions over the lengths of π-meson paths in the MFT, and the contribution of separate sections of the target to the pion flux at the outlet of the meson channel, are calculated as well. The meson-forming target can be made of any material, and the program provides for using the MFT itself as a photon converter or placing a photon converter in front of the target. The program is composed of 13 subprograms, 2 of which are generators of pseudorandom numbers: one uniform in the range from 0 to 1, and one with a Gaussian distribution. An example calculation is presented for a copper target of 3 cm length, an electron beam current of 1 μA, and an energy of 300 MeV

  11. Calculator Programming Engages Visual and Kinesthetic Learners

    Science.gov (United States)

    Tabor, Catherine

    2014-01-01

    Inclusion and differentiation--hallmarks of the current educational system--require a paradigm shift in the way that educators run their classrooms. This article enumerates the need for techno-kinesthetic, visually based activities and offers an example of a calculator-based programming activity that addresses that need. After discussing the use…

  12. Simple Calculation Programs for Biology Other Methods

    Indian Academy of Sciences (India)

Simple Calculation Programs for Biology - Other Methods. Hemolytic potency of drugs: Raghava et al. (1994) Biotechniques 17: 1148. FPMAP: methods for classification and identification of microorganisms (16S rRNA); graphical display of restriction and fragment map of ...

  13. Simple Calculation Programs for Biology Immunological Methods

    Indian Academy of Sciences (India)

Simple Calculation Programs for Biology - Immunological Methods. Computation of Ab/Ag concentration from ELISA data (graphical method): Raghava et al. (1992) J. Immunol. Methods 153: 263. Determination of affinity of monoclonal antibody using non-competitive ...

  14. GENMOD - A program for internal dosimetry calculations

    International Nuclear Information System (INIS)

    Dunford, D.W.; Johnson, J.R.

    1987-12-01

The computer code GENMOD was created to calculate the retention, excretion, and integrated retention for selected radionuclides under a variety of exposure conditions. Since its creation, new models have been developed and interfaced to GENMOD. This report describes the models now included in GENMOD and the dosimetry factors database, and gives a brief description of the GENMOD program

  15. Calculation program development for spinning reserve

    International Nuclear Information System (INIS)

    1979-01-01

This study concerns the optimal holding and optimal operation of spinning reserve. It covers the purpose and contents of the study, an introduction to spinning reserve electricity, the special characteristics of spinning reserve power, calculation results, an analysis of methods for limiting optimum load, calculation of the spinning reserve requirement, and an analysis of system stability measurement, including its purpose, the causes and impact of accidents, and the basics of spinning reserve measurement, with conclusions. Reference material is included on the design of a spinning reserve power program and on the use of and trends in spinning reserve power in Korea.

  16. Preeminence and prerequisites of sample size calculations in clinical trials

    Directory of Open Access Journals (Sweden)

    Richa Singhal

    2015-01-01

The key components while planning a clinical study are the study design, study duration, and sample size. These features are an integral part of planning a clinical trial efficiently, ethically, and cost-effectively. This article describes some of the prerequisites for sample size calculation. It also explains that sample size calculation differs between study designs. The article describes in detail the sample size calculation for a randomized controlled trial when the primary outcome is a continuous variable and when it is a proportion or a qualitative variable.
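
    For the continuous-outcome and proportion cases described, the classical normal-approximation formulae can be sketched as follows. The effect sizes used in the example are hypothetical, not values from the article.

```python
import math
from statistics import NormalDist

def n_per_arm_continuous(sigma, delta, alpha=0.05, power=0.80):
    """Per-arm n for comparing two means: n = 2(z_a + z_b)^2 sigma^2 / delta^2."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided significance
    z_b = NormalDist().inv_cdf(power)
    return math.ceil(2 * (z_a + z_b) ** 2 * sigma ** 2 / delta ** 2)

def n_per_arm_proportions(p1, p2, alpha=0.05, power=0.80):
    """Per-arm n for comparing two proportions (unpooled variance)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    var = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_a + z_b) ** 2 * var / (p1 - p2) ** 2)

# Hypothetical trial: detect a 5-unit difference in means (SD 10),
# or an improvement in response rate from 25% to 40%.
n_cont = n_per_arm_continuous(sigma=10.0, delta=5.0)
n_prop = n_per_arm_proportions(0.40, 0.25)
```

    Both formulae assume large-sample normal approximations; exact or continuity-corrected methods give slightly larger numbers for small trials.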

  17. Brine Sampling and Evaluation Program

    International Nuclear Information System (INIS)

    Deal, D.E.; Case, J.B.; Deshler, R.M.; Drez, P.E.; Myers, J.; Tyburski, J.R.

    1987-12-01

The Brine Sampling and Evaluation Program (BSEP) Phase II Report is an interim report that updates the data released in the BSEP Phase I Report. Direct measurements and observations of the brine that seeps into the WIPP repository excavations were continued through the period between August 1986 and July 1987. Those data are included in Appendix A, which extends the observation period for some locations to approximately 900 days. Brine observations at 87 locations are presented in this report. Although the WIPP underground workings are considered "dry," small amounts of brine are present. Part of that brine migrates into the repository in response to pressure gradients under essentially isothermal conditions. The data presented in this report are a continuation of moisture content studies of the WIPP facility horizon that were initiated in 1982, as soon as underground drifts began to be excavated. Brine seepages are manifested by salt efflorescences, moist areas, and fluid accumulations in drillholes. 35 refs., 6 figs., 11 tabs

  18. Dose Rate Calculations for Rotary Mode Core Sampling Exhauster

    CERN Document Server

    Foust, D J

    2000-01-01

This document provides calculated estimates of the dose rates at three external locations on the Rotary Mode Core Sampling (RMCS) exhauster HEPA filter housing, per the request of Characterization Field Engineering.

  19. Dose Rate Calculations for Rotary Mode Core Sampling Exhauster

    International Nuclear Information System (INIS)

    FOUST, D.J.

    2000-01-01

This document provides calculated estimates of the dose rates at three external locations on the Rotary Mode Core Sampling (RMCS) exhauster HEPA filter housing, per the request of Characterization Field Engineering

  20. How to Deal with FFT Sampling Influences on ADEV Calculations

    National Research Council Canada - National Science Library

    Chang, Po-Cheng

    2007-01-01

    ...) values while the numerical integration is used for the time and frequency (T&F) conversion. These ADEV errors occur because parts of the FFT sampling have no contributions to the ADEV calculation...

  1. TRU waste-sampling program

    International Nuclear Information System (INIS)

    Warren, J.L.; Zerwekh, A.

    1985-08-01

As part of a TRU waste-sampling program, Los Alamos National Laboratory retrieved and examined 44 drums of 238Pu- and 239Pu-contaminated waste. The drums ranged in age from 8 months to 9 years. The majority of drums were tested for pressure, and gas samples withdrawn from the drums were analyzed by a mass spectrometer. Real-time radiography and visual examination were used to determine both void volumes and waste content. Drum walls were measured for deterioration, and selected drum contents were reassayed for comparison with original assays and WIPP criteria. Each drum tested was at atmospheric pressure. Mass spectrometry revealed no problem with the 239Pu-contaminated waste, but three 8-month-old drums of 238Pu-contaminated waste contained a potentially hazardous gas mixture. Void volumes fell within the 81 to 97% range. Measurements of drum walls showed no significant corrosion or deterioration. All reassayed contents were within WIPP waste acceptance criteria. Five of the drums opened and examined (15%) could not be certified as packaged: three contained free liquids, one had corrosive materials, and one had too much unstabilized particulate. Eleven drums had the wrong (or not the most appropriate) waste code. In many cases, disposal volumes had been used inefficiently. 2 refs., 23 figs., 7 tabs

  2. Data calculation program for RELAP 5 code

    International Nuclear Information System (INIS)

    Silvestre, Larissa J.B.; Sabundjian, Gaiane

    2015-01-01

As the criteria and requirements for a nuclear power plant are extremely rigid, computer programs for simulation and safety analysis are required for certifying and licensing a plant. In this scenario, sophisticated computational tools have been used, such as the Reactor Excursion and Leak Analysis Program (RELAP5), the most widely used code for thermal-hydraulic analysis of accidents and transients in nuclear reactors. A major difficulty in simulation with the RELAP5 code is the amount of information required to simulate thermal-hydraulic accidents or transients. Preparing the input data involves a very large number of mathematical operations for calculating the geometry of the components. Therefore, a user-friendly mathematical preprocessor was developed to perform these calculations and prepare RELAP5 input data. Visual Basic for Applications (VBA) combined with Microsoft Excel proved to be an efficient tool for a number of tasks in the development of the program. Because necessary information about some RELAP5 components was absent, this work makes improvements to the Mathematic Preprocessor for the RELAP5 code (PREREL5). For the new version of the preprocessor, screens were designed for components that were not programmed in the original version; moreover, screens of pre-existing components were redesigned to improve the program. In addition, an English version of the new PREREL5 was provided. The new design of PREREL5 saves time and minimizes mistakes made by users of the RELAP5 code. The final version of this preprocessor will be applied to Angra 2. (author)

  3. Data calculation program for RELAP 5 code

    Energy Technology Data Exchange (ETDEWEB)

    Silvestre, Larissa J.B.; Sabundjian, Gaiane, E-mail: larissajbs@usp.br, E-mail: gdjian@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2015-07-01

As the criteria and requirements for a nuclear power plant are extremely rigid, computer programs for simulation and safety analysis are required for certifying and licensing a plant. In this scenario, sophisticated computational tools have been used, such as the Reactor Excursion and Leak Analysis Program (RELAP5), the most widely used code for thermal-hydraulic analysis of accidents and transients in nuclear reactors. A major difficulty in simulation with the RELAP5 code is the amount of information required to simulate thermal-hydraulic accidents or transients. Preparing the input data involves a very large number of mathematical operations for calculating the geometry of the components. Therefore, a user-friendly mathematical preprocessor was developed to perform these calculations and prepare RELAP5 input data. Visual Basic for Applications (VBA) combined with Microsoft Excel proved to be an efficient tool for a number of tasks in the development of the program. Because necessary information about some RELAP5 components was absent, this work makes improvements to the Mathematic Preprocessor for the RELAP5 code (PREREL5). For the new version of the preprocessor, screens were designed for components that were not programmed in the original version; moreover, screens of pre-existing components were redesigned to improve the program. In addition, an English version of the new PREREL5 was provided. The new design of PREREL5 saves time and minimizes mistakes made by users of the RELAP5 code. The final version of this preprocessor will be applied to Angra 2. (author)

  4. An adaptive sampling scheme for deep-penetration calculation

    International Nuclear Information System (INIS)

    Wang, Ruihong; Ji, Zhicheng; Pei, Lucheng

    2013-01-01

The deep-penetration problem has for several decades been one of the important and difficult problems in shielding calculations with the Monte Carlo method. In this paper, an adaptive Monte Carlo method for shielding calculations is investigated that uses the emission point as a sampling station. The numerical results show that the adaptive method may improve the efficiency of shielding calculations and may, to some degree, overcome the underestimation problem that easily arises in deep-penetration calculations

  5. Sample size calculations for case-control studies

    Science.gov (United States)

This R package can be used to calculate the required sample size for unconditional multivariate analyses of unmatched case-control studies. The sample sizes are for a scalar exposure effect, such as a binary, ordinal or continuous exposure. Sample sizes can also be computed for scalar interaction effects. The analyses account for the effects of potential confounder variables that are also included in the multivariate logistic model.

  6. HTC Experimental Program: Validation and Calculational Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Fernex, F.; Ivanova, T.; Bernard, F.; Letang, E. [Inst Radioprotect and Surete Nucl, F-92262 Fontenay Aux Roses (France); Fouillaud, P. [CEA Valduc, Serv Rech Neutron and Critcite, 21 - Is-sur-Tille (France); Thro, J. F. [AREVA NC, F-78000 Versailles (France)

    2009-05-15

In the 1980s, a series of Haut Taux de Combustion (HTC) critical experiments with fuel pins in a water-moderated lattice was conducted at the Apparatus B experimental facility in Valduc (Commissariat à l'Energie Atomique, France) with the support of the Institut de Radioprotection et de Sûreté Nucléaire and AREVA NC. Four series of experiments were designed to assess the benefit associated with actinide-only burnup credit in criticality safety evaluations for fuel handling, pool storage, and spent-fuel cask conditions. The HTC rods, specifically fabricated for the experiments, simulated typical pressurized water reactor uranium oxide spent fuel with an initial enrichment of 4.5 wt% 235U burned to 37.5 GWd/tonne U. The configurations have been modeled with the CRISTAL criticality package and the SCALE 5.1 code system. Sensitivity/uncertainty analysis has been employed to evaluate the HTC experiments and to study their applicability for validation of burnup credit calculations. This paper presents the experimental program, the principal results of the experiment evaluation, and the modeling. The applicability of the HTC data to burnup credit validation is demonstrated with an example of spent-fuel storage models. (authors)

  7. Sample size calculation for comparing two negative binomial rates.

    Science.gov (United States)

    Zhu, Haiyuan; Lakkis, Hassan

    2014-02-10

The negative binomial model has been increasingly used to model count data in recent clinical trials. It is frequently chosen over the Poisson model for the overdispersed count data commonly seen in clinical trials. One of the challenges of applying the negative binomial model in clinical trial design is sample size estimation. In practice, simulation methods have frequently been used for sample size estimation. In this paper, an explicit formula is developed to calculate sample size based on the negative binomial model. Depending on the approach used to estimate the variance under the null hypothesis, three variations of the sample size formula are proposed and discussed. Important characteristics of the formula include its accuracy and its ability to explicitly incorporate the dispersion parameter and exposure time. The performance of each variation of the formula is assessed using simulations. Copyright © 2013 John Wiley & Sons, Ltd.
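
    One common form of such a formula can be sketched as follows, using the asymptotic variance of the log rate ratio evaluated under the alternative. This is a simplification of the paper's variations, not a reproduction of them, and all parameter values in the example are hypothetical.

```python
import math
from statistics import NormalDist

def nb_sample_size(r0, r1, k, t, alpha=0.05, power=0.80):
    """Per-group sample size for comparing two negative binomial event rates.

    Asymptotic variance of the log rate ratio under the alternative:
    Var = (1/(t*r0) + k) + (1/(t*r1) + k), with k the dispersion parameter
    and t the common exposure time per subject.
    """
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    var = (1.0 / (t * r0) + k) + (1.0 / (t * r1) + k)
    return math.ceil((z_a + z_b) ** 2 * var / math.log(r1 / r0) ** 2)

# Hypothetical design: reduce the annual event rate from 1.0 to 0.7,
# dispersion 0.8, one year of exposure per subject.
n = nb_sample_size(r0=1.0, r1=0.7, k=0.8, t=1.0)
```

    The dispersion term k dominates the variance when exposure is long, which is why overdispersed trials need substantially more subjects than a Poisson calculation would suggest.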

  8. Means and method of sampling flow related variables from a waterway in an accurate manner using a programmable calculator

    Science.gov (United States)

    Rand E. Eads; Mark R. Boolootian; Steven C. [Inventors] Hankin

    1987-01-01

    Abstract - A programmable calculator is connected to a pumping sampler by an interface circuit board. The calculator has a sediment sampling program stored therein and includes a timer to periodically wake up the calculator. Sediment collection is controlled by a Selection At List Time (SALT) scheme in which the probability of taking a sample is proportional to its...
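
    The SALT idea, taking a sample with probability proportional to a flow-based auxiliary variable so that high-load intervals dominate the sample, can be sketched generically. The rate constant and load values below are hypothetical, not from the patent.

```python
import random

def salt_decision(predicted_load, rate, rng):
    """Decide whether to trigger the pumping sampler for one time interval.

    SALT-style rule: the probability of taking a sample is proportional
    to the interval's predicted sediment load (capped at 1).
    """
    p = min(1.0, rate * predicted_load)
    return rng.random() < p

rng = random.Random(42)
# Synthetic record: 10,000 low-flow and 10,000 high-flow intervals.
low = sum(salt_decision(0.1, 0.4, rng) for _ in range(10_000))
high = sum(salt_decision(2.0, 0.4, rng) for _ in range(10_000))
# High-flow intervals, which carry most of the sediment, are sampled far more often.
```

    Because inclusion probabilities are known, unbiased load totals can later be recovered by weighting each sample by the reciprocal of its probability, the same estimator used in probability-proportional-to-size surveys.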

  9. Probabilistic Requirements (Partial) Verification Methods Best Practices Improvement. Variables Acceptance Sampling Calculators: Empirical Testing. Volume 2

    Science.gov (United States)

    Johnson, Kenneth L.; White, K. Preston, Jr.

    2012-01-01

    The NASA Engineering and Safety Center was requested to improve on the Best Practices document produced for the NESC assessment, Verification of Probabilistic Requirements for the Constellation Program, by giving a recommended procedure for using acceptance sampling by variables techniques as an alternative to the potentially resource-intensive acceptance sampling by attributes method given in the document. In this paper, the results of empirical tests intended to assess the accuracy of acceptance sampling plan calculators implemented for six variable distributions are presented.

  10. Acceleration of intensity-modulated radiotherapy dose calculation by importance sampling of the calculation matrices

    International Nuclear Information System (INIS)

    Thieke, Christian; Nill, Simeon; Oelfke, Uwe; Bortfeld, Thomas

    2002-01-01

In inverse planning for intensity-modulated radiotherapy, the dose calculation is a crucial element limiting both the maximum achievable plan quality and the speed of the optimization process. One way to integrate accurate dose calculation algorithms into inverse planning is to precalculate the dose contribution of each beam element to each voxel for unit fluence. These precalculated values are stored in a big dose calculation matrix. The dose calculation during the iterative optimization process then consists merely of matrix look-up and multiplication with the actual fluence values. However, because the dose calculation matrix can become very large, this ansatz requires a lot of computer memory and is still very time consuming, making it impractical for clinical routine without further modifications. In this work we present a new method to significantly reduce the number of entries in the dose calculation matrix. The method utilizes the fact that a photon pencil beam has a rapid radial dose falloff and very small dose values for the most part. In this low-dose part of the pencil beam, the dose contribution to a voxel is only integrated into the dose calculation matrix with a certain probability. Normalization with the reciprocal of this probability preserves the total energy, even though many matrix elements are omitted. Three probability distributions were tested to find the most accurate one for a given memory size. The sampling method is compared with the use of a fully filled matrix and with the well-known method of simply cutting off the pencil beam at a certain lateral distance. A clinical example of a head and neck case is presented. It turns out that a sampled dose calculation matrix with only 1/3 of the entries of the fully filled matrix does not sacrifice the quality of the resulting plans, whereas the cutoff method results in a suboptimal treatment plan
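
    The reweighting trick described, storing a low-dose matrix entry only with probability p and scaling it by 1/p so the expected total energy is preserved, can be sketched generically (the threshold, keep probability, and dose values below are illustrative):

```python
import random

def thin_pencil_beam(doses, threshold, p_keep, rng):
    """Sparsify one pencil-beam column of the dose calculation matrix.

    High-dose entries are always stored; entries below `threshold` are
    stored with probability `p_keep` and reweighted by 1/p_keep, so the
    expected total deposited energy is unchanged.
    """
    kept = {}
    for voxel, dose in enumerate(doses):
        if dose >= threshold:
            kept[voxel] = dose
        elif rng.random() < p_keep:
            kept[voxel] = dose / p_keep   # unbiased reweighting
    return kept

rng = random.Random(0)
doses = [1.0] * 10 + [0.01] * 10_000      # beam core plus a long low-dose tail
kept = thin_pencil_beam(doses, threshold=0.1, p_keep=1 / 3, rng=rng)
# Roughly 1/3 of the tail entries survive, yet sum(kept.values()) stays
# close to sum(doses) = 110 in expectation.
```

    Unlike a hard lateral cutoff, this keeps the tail's contribution unbiased on average, which is consistent with the paper's finding that sampling preserves plan quality where the cutoff method does not.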

  11. Elementary function calculation programs for the central processor-6

    International Nuclear Information System (INIS)

    Dobrolyubov, L.V.; Ovcharenko, G.A.; Potapova, V.A.

    1976-01-01

Subprograms for calculating elementary functions are given for the central processor (CP AS-6). A procedure is described for obtaining the calculation formulae that represent the elementary functions as polynomials. Standard programs for random numbers are also considered. All the programs described are based upon the algorithms of the corresponding programs for the BESM computer

  12. NASA Lunar and Meteorite Sample Disk Program

    Science.gov (United States)

    Foxworth, Suzanne

    2017-01-01

    The Lunar and Meteorite Sample Disk Program is designed for K-12 classroom educators who work in K-12 schools, museums, libraries, or planetariums. Educators have to be certified to borrow the Lunar and Meteorite Sample Disks by attending a NASA Certification Workshop provided by a NASA Authorized Sample Disk Certifier.

  13. Calculation of the effective D-d neutron energy distribution incident on a cylindrical shell sample

    International Nuclear Information System (INIS)

    Gotoh, Hiroshi

    1977-07-01

A method is proposed to calculate the effective energy distribution of neutrons incident on a cylindrical shell sample placed perpendicular to the direction of the deuteron beam bombarding a deuterium metal target. The Monte Carlo method is used, and the Fortran program is included. (auth.)

  14. Sample Size Calculation for Controlling False Discovery Proportion

    Directory of Open Access Journals (Sweden)

    Shulian Shang

    2012-01-01

The false discovery proportion (FDP), the proportion of incorrect rejections among all rejections, is a direct measure of the abundance of false positive findings in multiple testing. Many methods have been proposed to control the FDP, but they are too conservative to be useful for power analysis. Study designs controlling the mean of the FDP, which is the false discovery rate, have been commonly used. However, there has been little attempt to design a study with direct FDP control to achieve a certain level of efficiency. We provide a sample size calculation method that uses the variance formula of the FDP under weak-dependence assumptions to achieve the desired overall power. The relationship between design parameters and sample size is explored. The adequacy of the procedure is assessed by simulation. We illustrate the method using estimated correlations from a prostate cancer dataset.

  15. SCRAM: a program for calculating scram times

    International Nuclear Information System (INIS)

    Bourquin, R.D.; Birney, K.R.

    1975-01-01

    Prediction of scram times is one facet of design analysis for control rod assemblies. Parameters for the entire control rod sub-system must be considered in such analyses and experimental verification is used when it is available. The SCRAM computer program was developed for design analysis of improved control rod assemblies for the Fast Flux Test Facility (FFTF). A description of the evolution of the program from basic equations to a functional design analysis tool is presented

  16. HP-67 calculator programs for thermodynamic data and phase diagram calculations

    International Nuclear Information System (INIS)

    Brewer, L.

    1978-01-01

    This report is a supplement to a tabulation of the thermodynamic and phase data for the 100 binary systems of Mo with the elements from H to Lr. The calculations of thermodynamic data and phase equilibria were carried out from 5000 K down to low temperatures. This report presents the methods of calculation used. The thermodynamics involved is rather straightforward and the reader is referred to any advanced thermodynamics text. The calculations were largely carried out using an HP-65 programmable calculator. In this report, those programs are reformulated for use with the HP-67 calculator; this greatly reduces the number of programs required to carry out the calculation.

  17. A semi-empirical approach to calculate gamma activities in environmental samples

    International Nuclear Information System (INIS)

    Palacios, D.; Barros, H.; Alfonso, J.; Perez, K.; Trujillo, M.; Losada, M.

    2006-01-01

    We propose a semi-empirical method to calculate radionuclide concentrations in environmental samples without the use of reference material and avoiding the typical complexity of Monte-Carlo codes. The calculation of total efficiencies was carried out from a relative efficiency curve (obtained from the gamma spectra data), and the geometric (simulated by Monte-Carlo), absorption, sample and intrinsic efficiencies at energies between 130 and 3000 keV. The absorption and sample efficiencies were determined from the mass absorption coefficients, obtained by the web program XCOM. Deviations between computed results and measured efficiencies for the RGTh-1 reference material are mostly within 10%. Radionuclide activities in marine sediment samples calculated by the proposed method and by the experimental relative method were in satisfactory agreement. The developed method can be used for routine environmental monitoring when efficiency uncertainties of 10% can be sufficient. (Author)
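
    The product-of-efficiencies idea behind such methods can be sketched as below. This is a hedged illustration under assumed values: the attenuation coefficients are placeholders for what would in practice be taken from XCOM, and the slab-averaged self-absorption factor is one common approximation.

```python
import math

def attenuation_factor(mu_rho, rho, t):
    """Slab-averaged self-absorption factor (1 - exp(-mu*t)) / (mu*t) for a
    sample of thickness t (cm), density rho (g/cm3) and mass attenuation
    coefficient mu_rho (cm2/g)."""
    mu_t = mu_rho * rho * t
    return (1.0 - math.exp(-mu_t)) / mu_t

def total_efficiency(eps_geo, eps_int, mu_rho_window, rho_w, t_w,
                     mu_rho_sample, rho_s, t_s):
    """Total efficiency as a product of geometric, absorption (window),
    sample self-absorption and intrinsic factors."""
    eps_abs = math.exp(-mu_rho_window * rho_w * t_w)
    eps_sample = attenuation_factor(mu_rho_sample, rho_s, t_s)
    return eps_geo * eps_abs * eps_sample * eps_int

# all numerical values are illustrative placeholders
eff = total_efficiency(eps_geo=0.3, eps_int=0.5,
                       mu_rho_window=0.06, rho_w=2.7, t_w=0.1,
                       mu_rho_sample=0.08, rho_s=1.6, t_s=1.0)
```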

  18. Computer program for calculation of ideal gas thermodynamic data

    Science.gov (United States)

    Gordon, S.; Mc Bride, B. J.

    1968-01-01

    Computer program calculates ideal gas thermodynamic properties for any species for which molecular constant data are available. Partition functions and derivatives from formulas based on statistical mechanics are provided by the program, which is written in FORTRAN 4 and MAP.
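
    A minimal sketch of the statistical-mechanics route for one simple case, a diatomic in the rigid-rotor/harmonic-oscillator approximation, is shown below. The CO characteristic temperatures are assumed textbook values; the NASA program handles arbitrary species and many more properties than this.

```python
import math

KB = 1.380649e-23      # Boltzmann constant, J/K
H = 6.62607015e-34     # Planck constant, J*s
NA = 6.02214076e23     # Avogadro number, 1/mol
R = 8.314462618        # gas constant, J/(mol*K)

def diatomic_ideal_gas(M_gmol, theta_rot, theta_vib, sigma, T, p=101325.0):
    """Cp and S of a diatomic ideal gas from its partition function
    (translation: Sackur-Tetrode; rotation: classical rigid rotor;
    vibration: harmonic oscillator)."""
    m = M_gmol * 1e-3 / NA
    q_trans = (2.0 * math.pi * m * KB * T / H**2) ** 1.5 * (KB * T / p)
    S_trans = R * (math.log(q_trans) + 2.5)
    S_rot = R * (math.log(T / (sigma * theta_rot)) + 1.0)
    x = theta_vib / T
    S_vib = R * (x / math.expm1(x) - math.log(-math.expm1(-x)))
    Cp = R * (3.5 + x**2 * math.exp(x) / math.expm1(x) ** 2)
    return Cp, S_trans + S_rot + S_vib

# CO at 298.15 K; theta_rot = 2.78 K and theta_vib = 3103 K are assumed constants
cp, s = diatomic_ideal_gas(28.01, 2.78, 3103.0, 1, 298.15)
```

    The result reproduces the tabulated standard entropy of CO (about 197.7 J/mol/K) to within a few tenths, which is the usual accuracy of the RRHO approximation.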

  19. FORTRAN programs for transient eddy current calculations using a perturbation-polynomial expansion technique

    International Nuclear Information System (INIS)

    Carpenter, K.H.

    1976-11-01

    A description is given of FORTRAN programs for transient eddy current calculations in thin, non-magnetic conductors using a perturbation-polynomial expansion technique. Basic equations are presented as well as flow charts for the programs implementing them. The implementation is in two steps--a batch program to produce an intermediate data file and interactive programs to produce graphical output. FORTRAN source listings are included for all program elements, and sample inputs and outputs are given for the major programs

  20. Calculating ensemble averaged descriptions of protein rigidity without sampling.

    Directory of Open Access Journals (Sweden)

    Luis C González

    Previous works have demonstrated that protein rigidity is related to thermodynamic stability, especially under conditions that favor formation of native structure. Mechanical network rigidity properties of a single conformation are efficiently calculated using the integer body-bar Pebble Game (PG) algorithm. However, thermodynamic properties require averaging over many samples from the ensemble of accessible conformations to accurately account for fluctuations in network topology. We have developed a mean field Virtual Pebble Game (VPG) that represents the ensemble of networks by a single effective network. That is, the number of distance constraints (or bars) that can form between a pair of rigid bodies is replaced by the average number. The resulting effective network is viewed as having weighted edges, where the weight of an edge quantifies its capacity to absorb degrees of freedom. The VPG is interpreted as a flow problem on this effective network, which eliminates the need to sample. Across a nonredundant dataset of 272 protein structures, we apply the VPG to proteins for the first time. Our results show numerically and visually that the rigidity characterizations of the VPG accurately reflect the ensemble averaged [Formula: see text] properties. This result positions the VPG as an efficient alternative to understand the mechanical role that chemical interactions play in maintaining protein stability.

  1. Calculating ensemble averaged descriptions of protein rigidity without sampling.

    Science.gov (United States)

    González, Luis C; Wang, Hui; Livesay, Dennis R; Jacobs, Donald J

    2012-01-01

    Previous works have demonstrated that protein rigidity is related to thermodynamic stability, especially under conditions that favor formation of native structure. Mechanical network rigidity properties of a single conformation are efficiently calculated using the integer body-bar Pebble Game (PG) algorithm. However, thermodynamic properties require averaging over many samples from the ensemble of accessible conformations to accurately account for fluctuations in network topology. We have developed a mean field Virtual Pebble Game (VPG) that represents the ensemble of networks by a single effective network. That is, the number of distance constraints (or bars) that can form between a pair of rigid bodies is replaced by the average number. The resulting effective network is viewed as having weighted edges, where the weight of an edge quantifies its capacity to absorb degrees of freedom. The VPG is interpreted as a flow problem on this effective network, which eliminates the need to sample. Across a nonredundant dataset of 272 protein structures, we apply the VPG to proteins for the first time. Our results show numerically and visually that the rigidity characterizations of the VPG accurately reflect the ensemble averaged [Formula: see text] properties. This result positions the VPG as an efficient alternative to understand the mechanical role that chemical interactions play in maintaining protein stability.
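
    The mean-field averaging idea can be caricatured in a few lines. The sketch below is plain Maxwell counting over an averaged, capacity-capped body-bar network, a crude stand-in for the actual flow-based VPG (which additionally detects redundant constraints); the function and its inputs are illustrative, not the authors' implementation.

```python
def mean_field_dof(n_bodies, avg_bars):
    """Mean-field count of residual degrees of freedom for a body-bar network.

    n_bodies: number of rigid bodies (6 DOF each).
    avg_bars: {(i, j): ensemble-averaged number of bars between bodies i, j};
    each weighted edge can absorb at most 6 DOF, so its capacity is
    min(average, 6). At least the 6 trivial rigid-body DOF always remain.
    """
    capacity = sum(min(b, 6.0) for b in avg_bars.values())
    return max(6.0, 6.0 * n_bodies - capacity)
```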

  2. Finite difference program for calculating hydride bed wall temperature profiles

    International Nuclear Information System (INIS)

    Klein, J.E.

    1992-01-01

    A QuickBASIC finite difference program was written for calculating one-dimensional temperature profiles in up to two media with flat, cylindrical, or spherical geometries. The development of the program was motivated by the need to calculate maximum temperature differences across the walls of the tritium metal hydride beds for thermal fatigue analysis. The purpose of this report is to document the equations and the computer program used to calculate transient wall temperatures in stainless steel hydride vessels.
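
    The kind of scheme such a program uses can be sketched with an explicit finite-difference loop in Python. The flat-wall geometry, boundary conditions, dimensions, and diffusivity below are illustrative assumptions, not the hydride bed design values.

```python
def wall_profile(t_end, n=21, L=0.01, alpha=4.0e-6, T0=300.0, T_inner=400.0):
    """Explicit 1D finite-difference conduction through a flat wall of
    thickness L (m) and thermal diffusivity alpha (m2/s). The inner face
    is held at T_inner (K); the outer face is insulated."""
    dx = L / (n - 1)
    dt = 0.4 * dx**2 / alpha          # inside the stability limit dt <= dx^2/(2*alpha)
    T = [T0] * n
    T[0] = T_inner
    for _ in range(int(t_end / dt)):
        Tn = T[:]
        for i in range(1, n - 1):
            Tn[i] = T[i] + alpha * dt / dx**2 * (T[i+1] - 2.0*T[i] + T[i-1])
        Tn[-1] = Tn[-2]               # zero-flux (insulated) outer face
        Tn[0] = T_inner
        T = Tn
    return T

profile = wall_profile(t_end=10.0)
dT_max = profile[0] - profile[-1]     # temperature difference across the wall
```

    The explicit scheme is chosen for transparency; an implicit scheme would remove the time-step restriction at the cost of a tridiagonal solve per step.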

  3. The effect of clustering on lot quality assurance sampling: a probabilistic model to calculate sample sizes for quality assessments.

    Science.gov (United States)

    Hedt-Gauthier, Bethany L; Mitsunaga, Tisha; Hund, Lauren; Olives, Casey; Pagano, Marcello

    2013-10-26

    Traditional Lot Quality Assurance Sampling (LQAS) designs assume observations are collected using simple random sampling. Alternatively, randomly sampling clusters of observations and then individuals within clusters reduces costs but decreases the precision of the classifications. In this paper, we develop a general framework for designing the cluster(C)-LQAS system and illustrate the method with the design of data quality assessments for the community health worker program in Rwanda. To determine sample size and decision rules for C-LQAS, we use the beta-binomial distribution to account for inflated risk of errors introduced by sampling clusters at the first stage. We present general theory and code for sample size calculations. The C-LQAS sample sizes provided in this paper constrain misclassification risks below user-specified limits. Multiple C-LQAS systems meet the specified risk requirements, but numerous considerations, including per-cluster versus per-individual sampling costs, help identify optimal systems for distinct applications. We show the utility of C-LQAS for data quality assessments, but the method generalizes to numerous applications. This paper provides the necessary technical detail and supplemental code to support the design of C-LQAS for specific programs.
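
    The beta-binomial step can be sketched as follows: each cluster count is beta-binomial, cluster totals are convolved, and the acceptance probability under a simple threshold rule falls out. This is a hedged illustration, not the authors' supplemental code; the prevalence, correlation, cluster counts, and decision threshold are placeholders.

```python
import math

def betabinom_pmf(k, n, a, b):
    """Beta-binomial pmf computed via log-gamma (no external libraries)."""
    lp = (math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
          + math.lgamma(k + a) + math.lgamma(n - k + b) - math.lgamma(n + a + b)
          + math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b))
    return math.exp(lp)

def prob_accept(p, rho, clusters, m, d):
    """P(total successes >= d) when drawing `clusters` clusters of size m
    with true success proportion p and intra-cluster correlation rho.
    Each cluster count is beta-binomial; cluster totals are convolved."""
    a = p * (1.0 - rho) / rho
    b = (1.0 - p) * (1.0 - rho) / rho
    one = [betabinom_pmf(k, m, a, b) for k in range(m + 1)]
    dist = [1.0]
    for _ in range(clusters):
        new = [0.0] * (len(dist) + m)
        for i, pi in enumerate(dist):
            for k, pk in enumerate(one):
                new[i + k] += pi * pk
        dist = new
    return sum(dist[d:])

p_low_icc = prob_accept(p=0.8, rho=0.001, clusters=5, m=10, d=35)
p_high_icc = prob_accept(p=0.8, rho=0.3, clusters=5, m=10, d=35)
```

    As expected, clustering inflates the variance of the total, so the same decision rule accepts less often at the same true proportion; that inflation is exactly what drives the larger C-LQAS sample sizes.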

  4. Newnes circuit calculations pocket book with computer programs

    CERN Document Server

    Davies, Thomas J

    2013-01-01

    Newnes Circuit Calculations Pocket Book: With Computer Programs presents equations, examples, and problems in circuit calculations. The text includes 300 computer programs that help solve the problems presented. The book is comprised of 20 chapters that tackle different aspects of circuit calculation. The coverage of the text includes dc voltage, dc circuits, and network theorems. The book also covers oscillators, phasors, and transformers. The text will be useful to electrical engineers and other professionals whose work involves electronic circuitry.

  5. Building an IDE for the Calculational Derivation of Imperative Programs

    Directory of Open Access Journals (Sweden)

    Dipak L. Chaudhari

    2015-08-01

    In this paper, we describe an IDE called CAPS (Calculational Assistant for Programming from Specifications) for the interactive, calculational derivation of imperative programs. In building CAPS, our aim has been to make the IDE accessible to non-experts while retaining the overall flavor of the pen-and-paper calculational style. We discuss the overall architecture of the CAPS system, the main features of the IDE, the GUI design, and the trade-offs involved.

  6. Controlled sample program publication No. 1: characterization of rock samples

    International Nuclear Information System (INIS)

    Ames, L.L.

    1978-10-01

    A description is presented of the methodology used and the geologic parameters measured on several rocks which are being used in round-robin laboratory and nuclide adsorption methodology experiments. Presently investigators from various laboratories are determining nuclide distribution coefficients utilizing numerous experimental techniques. Unfortunately, it appears that often the resultant data are dependent not only on the type of groundwater and rock utilized, but also on the experimenter or method used. The Controlled Sample Program is a WISAP (Waste Isolation Safety Assessment Program) attempt to resolve the apparent method dependencies and to identify individual experimenter bias. The rock samples characterized in an interlaboratory Kd methodology comparison program include Westerly granite, Argillaceous shale, Oolitic limestone, Sentinel Gap basalt, Conasauga shale, Climax Stock granite, anhydrite, Magenta dolomite and Culebra dolomite. Techniques used in the characterization include whole rock chemical analysis, X-ray diffraction, optical examination, electron microprobe elemental mapping, and chemical analysis of specific mineral phases. Surface areas were determined by the B.E.T. and ethylene glycol sorption methods. Cation exchange capacities were determined with 85 Sr, but were of questionable value for the high calcium rocks. A quantitative mineralogy was also estimated for each rock. Characteristics which have the potential of strongly affecting radionuclide Kd values such as the presence of sulfides, water-soluble, pH-buffering carbonates, glass, and ferrous iron were listed for each rock sample

  7. A nonproprietary, nonsecret program for calculating Stirling cryocoolers

    Science.gov (United States)

    Martini, W. R.

    1985-01-01

    A design program for an integrated Stirling cycle cryocooler was written on an IBM-PC computer. The program is easy to use and shows the trends and itemizes the losses. The calculated results were compared with some measured performance values. The program predicts somewhat optimistic performance and needs to be calibrated more with experimental measurements. Adding a multiplier to the friction factor can bring the calculated results in line with the limited test results so far available. The program is offered as a good framework on which to build a truly useful design program for all types of cryocoolers.

  8. Isochronous cyclotron closed equilibrium orbit calculation program description

    International Nuclear Information System (INIS)

    Kiyan, I.N.; Vorozhtsov, S.B.; Tarashkevich, R.

    2003-01-01

    The Equilibrium Orbit Research Program - EORP, written in C++ with the use of Visual C++, is described. The program is intended for the calculation of the particle rotation frequency and particle kinetic energy on the closed equilibrium orbits of an isochronous cyclotron, where the closed equilibrium orbits are described through the radius and particle momentum angle: r_eo(θ) and φ_p(θ). The program algorithm was developed on the basis of articles, lecture notes and original analytic calculations. The results of calculations by the EORP were checked and confirmed using the results of calculations by numerical methods. The discrepancies between the EORP results and the numerical method results for the particle rotation frequency and particle kinetic energy are within the limits of ±1·10^-4. The EORP results and the numerical method results for r_eo(θ) and φ_p(θ) practically coincide. All this proves the accuracy of calculations by the EORP for isochronous cyclotrons with azimuthally varied fields. As is evident from the results of calculations, the program can be used for the calculations of both straight-sector and spiral-sector isochronous cyclotrons. (author)
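
    The basic isochronism relation behind such calculations, f = qB/(2πγm), can be sketched in a few lines; the field value and kinetic energy below are placeholders, and a real equilibrium-orbit code like EORP of course solves for the orbit in an azimuthally varying field rather than assuming a uniform one.

```python
import math

C = 299792458.0            # speed of light, m/s
QE = 1.602176634e-19       # elementary charge, C
MP = 1.67262192369e-27     # proton mass, kg

def revolution_frequency(B_tesla, T_kin_MeV):
    """Relativistic revolution frequency f = q*B / (2*pi*gamma*m) of a
    proton on an equilibrium orbit in average magnetic field B."""
    rest_MeV = MP * C**2 / QE / 1e6        # proton rest energy, ~938.27 MeV
    gamma = 1.0 + T_kin_MeV / rest_MeV
    return QE * B_tesla / (2.0 * math.pi * gamma * MP)

f = revolution_frequency(1.5, 100.0)       # Hz; field and energy are placeholders
```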

  9. Wilsonville wastewater sampling program. Final report

    Energy Technology Data Exchange (ETDEWEB)

    None

    1983-10-01

    As part of its contract to design, build and operate the SRC-1 Demonstration Plant in cooperation with the US Department of Energy (DOE), International Coal Refining Company (ICRC) was required to collect and evaluate data related to wastewater streams and wastewater treatment procedures at the SRC-1 Pilot Plant facility. The pilot plant is located at Wilsonville, Alabama and is operated by Catalytic, Inc. under the direction of Southern Company Services. The plant is funded in part by the Electric Power Research Institute and the DOE. ICRC contracted with Catalytic, Inc. to conduct wastewater sampling. Tasks 1 through 5 included sampling and analysis of various wastewater sources and points of different steps in the biological treatment facility at the plant. The sampling program ran from May 1 to July 31, 1982. Also included in the sampling program was the generation and analysis of leachate from SRC product using standard laboratory leaching procedures. For Task 6, available plant wastewater data covering the period from February 1978 to December 1981 was analyzed to gain information that might be useful for a demonstration plant design basis. This report contains a tabulation of the analytical data, a summary tabulation of the historical operating data that was evaluated and comments concerning the data. The procedures used during the sampling program are also documented.

  10. NLOM - a program for nonlocal optical model calculations

    International Nuclear Information System (INIS)

    Kim, B.T.; Kyum, M.C.; Hong, S.W.; Park, M.H.; Udagawa, T.

    1992-01-01

    A FORTRAN program NLOM for nonlocal optical model calculations is described. It is based on a method recently developed by Kim and Udagawa, which utilizes the Lanczos technique for solving integral equations derived from the nonlocal Schroedinger equation. (orig.)

  11. The WIPP Water Quality Sampling Program

    International Nuclear Information System (INIS)

    Uhland, D.; Morse, J.G.; Colton, D.

    1986-01-01

    The Waste Isolation Pilot Plant (WIPP), a Department of Energy facility, will be used for the underground disposal of wastes. The Water Quality Sampling Program (WQSP) is designed to obtain representative and reproducible water samples to depict accurate water composition data for characterization and monitoring programs in the vicinity of the WIPP. The WQSP is designed to input data into four major programs for the WIPP project: Geochemical Site Characterization, Radiological Baseline, Environmental Baseline, and Performance Assessment. The water-bearing units of interest are the Culebra and Magenta Dolomite Members of the Rustler Formation, units in the Dewey Lake Redbeds, and the Bell Canyon Formation. At least two chemically distinct types of water occur in the Culebra, one being a sodium/potassium chloride water and the other being a calcium/magnesium sulfate water. Water from the Culebra wells to the south of the WIPP site is distinctly fresher and tends to be of the calcium/magnesium sulfate type. Water in the Culebra in the north and around the WIPP site tends to be of the sodium/potassium chloride type and is much higher in total dissolved solids. The program, which is currently 1 year old, will continue throughout the life of the facility as part of the Environmental Monitoring Program

  12. Calculation of cosmic ray induced single event upsets: Program CRUP (Cosmic Ray Upset Program)

    Science.gov (United States)

    Shapiro, P.

    1983-09-01

    This report documents PROGRAM CRUP, COSMIC RAY UPSET PROGRAM. The computer program calculates cosmic ray induced single-event error rates in microelectronic circuits exposed to several representative cosmic-ray environments.

  13. Development of sample size allocation program using hypergeometric distribution

    International Nuclear Information System (INIS)

    Kim, Hyun Tae; Kwack, Eun Ho; Park, Wan Soo; Min, Kyung Soo; Park, Chan Sik

    1996-01-01

    The objective of this research is the development of a sample allocation program using the hypergeometric distribution with an object-oriented method. When the IAEA (International Atomic Energy Agency) performs inspection, it simply applies a standard binomial distribution, which describes sampling with replacement, instead of a hypergeometric distribution, which describes sampling without replacement, in sample allocation to up to three verification methods. The objective of the IAEA inspection is the timely detection of diversion of significant quantities of nuclear material, therefore game theory is applied to its sampling plan. It is necessary to use the hypergeometric distribution directly or an approximate distribution to secure statistical accuracy. The improved binomial approximation developed by J. L. Jaech and the correctly applied binomial approximation are closer to the hypergeometric distribution in sample size calculation than the simply applied binomial approximation of the IAEA. Object-oriented programs for 1. sample approximate-allocation with the correctly applied standard binomial approximation, 2. sample approximate-allocation with the improved binomial approximation, and 3. sample approximate-allocation with the hypergeometric distribution were developed with Visual C++, and corresponding programs were developed with EXCEL (using Visual Basic for Applications). 8 tabs., 15 refs. (Author)
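
    The with-replacement versus without-replacement difference can be made concrete with a small detection-probability sketch; the population, defect count, and risk level below are illustrative, not the IAEA's actual inspection parameters.

```python
from math import comb

def p_detect_hypergeometric(N, D, n):
    """P(sample of n, drawn without replacement from N items of which D
    are defective, contains at least one defective)."""
    return 1.0 - comb(N - D, n) / comb(N, n)

def p_detect_binomial(N, D, n):
    """Same probability under the with-replacement (binomial) approximation."""
    return 1.0 - (1.0 - D / N) ** n

def min_sample_size(N, D, beta=0.05):
    """Smallest n with exact (hypergeometric) detection probability >= 1 - beta."""
    for n in range(1, N + 1):
        if p_detect_hypergeometric(N, D, n) >= 1.0 - beta:
            return n
    return N
```

    Sampling without replacement always detects at least as easily, so the exact hypergeometric calculation yields a smaller required sample than the binomial approximation (25 versus 29 for N=100, D=10, 95% detection).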

  14. Simple Calculation Programs for Biology Methods in Molecular ...

    Indian Academy of Sciences (India)

    Simple Calculation Programs for Biology Methods in Molecular Biology. GMAP: A program for mapping potential restriction sites. RE sites in ambiguous and non-ambiguous DNA sequence; Minimum number of silent mutations required for introducing a RE site; Set ...
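
    The core of such restriction-site mapping, matching an IUPAC-ambiguous recognition sequence against a DNA string, can be sketched as below; the sequences and the HinfI-style GANTC site are illustrative, and GMAP itself does considerably more (silent-mutation searches and so on).

```python
# IUPAC nucleotide ambiguity codes mapped to the bases they stand for
IUPAC = {"A": "A", "C": "C", "G": "G", "T": "T",
         "R": "AG", "Y": "CT", "S": "CG", "W": "AT", "K": "GT", "M": "AC",
         "B": "CGT", "D": "AGT", "H": "ACT", "V": "ACG", "N": "ACGT"}

def find_sites(seq, recognition):
    """Return 0-based positions where `recognition` (an IUPAC-ambiguous
    restriction site, e.g. GANTC for HinfI) matches `seq`."""
    seq = seq.upper()
    k = len(recognition)
    hits = []
    for i in range(len(seq) - k + 1):
        if all(seq[i + j] in IUPAC[c] for j, c in enumerate(recognition)):
            hits.append(i)
    return hits
```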

  15. TableSim--A program for analysis of small-sample categorical data.

    Science.gov (United States)

    David J. Rugg

    2003-01-01

    Documents a computer program for calculating correct P-values of 1-way and 2-way tables when sample sizes are small. The program is written in Fortran 90; the executable code runs in 32-bit Microsoft command-line environments.
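
    For the 2x2 case, the exact small-sample P-value is the Fisher exact test, which enumerates all tables with the observed margins; a compact sketch follows. This is presumably the sort of computation TableSim performs, shown here in Python rather than the program's Fortran 90.

```python
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher exact P-value for the 2x2 table [[a, b], [c, d]]:
    sum the hypergeometric probabilities of all tables with the same
    margins that are no more likely than the observed one."""
    r1, r2, c1 = a + b, c + d, a + c
    n = r1 + r2
    def p_table(x):                       # probability that cell (1,1) equals x
        return comb(r1, x) * comb(r2, c1 - x) / comb(n, c1)
    p_obs = p_table(a)
    lo, hi = max(0, c1 - r2), min(r1, c1)
    return sum(p for x in range(lo, hi + 1)
               if (p := p_table(x)) <= p_obs * (1.0 + 1e-9))
```

    The classic "lady tasting tea" table [[3, 1], [1, 3]] gives the well-known two-sided P-value of 0.4857.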

  16. Calculations in cytogenetic dosimetry by means of the dosgen program

    International Nuclear Information System (INIS)

    Garcia Lima, O.; Zerquera, J.T.

    1996-01-01

    The DOSGEN program sums up the different calculation routines that are most often used in cytogenetic dosimetry. It can be implemented on a compatible IBM PC by cytogenetic experts having a basic knowledge of computing. The program has been successfully applied using experimental data and its advantages have been acknowledged by Latin American and Asian laboratories dealing with this medical branch. The program is written in the Pascal language and requires 42 Kbytes.
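
    The standard routine in cytogenetic dosimetry is inverting a linear-quadratic dicentric calibration curve Y = c + αD + βD² for the dose; a minimal sketch follows. The coefficient values are only illustrative orders of magnitude for gamma rays, not DOSGEN's calibration data.

```python
import math

def dose_from_dicentrics(dicentrics, cells, c=0.001, alpha=0.03, beta=0.06):
    """Invert the linear-quadratic calibration Y = c + alpha*D + beta*D^2
    for the absorbed dose D (Gy), given a scored dicentric yield.
    A real laboratory would insert its own fitted curve coefficients."""
    y = dicentrics / cells
    disc = alpha**2 + 4.0 * beta * (y - c)
    if disc < 0.0:
        return 0.0
    return max(0.0, (-alpha + math.sqrt(disc)) / (2.0 * beta))
```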

  17. Injection Molding Parameters Calculations by Using Visual Basic (VB) Programming

    Science.gov (United States)

    Tony, B. Jain A. R.; Karthikeyen, S.; Alex, B. Jeslin A. R.; Hasan, Z. Jahid Ali

    2018-03-01

    Manufacturing industry now plays a vital role in production sectors. Fabricating a component requires many design calculations, and there is a chance of human error during those calculations. The aim of this project is to create a special module using Visual Basic (VB) programming to calculate injection molding parameters and thereby avoid human errors. To create an injection mold for a spur gear component, the following parameters have to be calculated: cooling capacity, cooling channel diameter, cooling channel length, runner length and runner diameter, and gate diameter and gate pressure. To calculate these injection molding parameters, a separate module has been created using Visual Basic (VB) programming to reduce human errors. The outputs of the module are the dimensions of injection molding components such as the mold cavity and core design and the ejector plate design.

  18. Method and program for complex calculation of heterogeneous reactor

    International Nuclear Information System (INIS)

    Kalashnikov, A.G.; Glebov, A.P.; Elovskaya, L.F.; Kuznetsova, L.I.

    1988-01-01

    An algorithm and the GITA program for complex one-dimensional calculation of a heterogeneous reactor, which permit calculations for the reactor and its cell to be conducted simultaneously using the same algorithm, are described. Multigroup macro cross sections for reactor zones in the thermal energy range are determined according to the technique for calculating a cell with a complicated structure, and then the continuous multigroup calculation of the reactor in the thermal energy range and in the range of neutron thermalization is made. The kinetic equation is solved using the P1 and DSn approximations. (fr)

  19. Radiation damage calculations for the APT materials test program

    International Nuclear Information System (INIS)

    Corzine, R.K.; Wechsler, M.S.; Dudziak, D.J.; Ferguson, P.D.; James, M.R.

    1999-01-01

    A materials irradiation was performed at the Los Alamos Neutron Science Center (LANSCE) in the fall of 1996 and spring of 1997 in support of the Accelerator Production of Tritium (APT) program. Testing of the irradiated materials is underway. In the proposed APT design, materials in the target and blanket are to be exposed to protons and neutrons over a wide range of energies. The irradiation and testing program was undertaken to enlarge the very limited direct knowledge presently available of the effects of medium-energy protons (∼1 GeV) on the properties of engineering materials. APT candidate materials were placed in or near the LANSCE accelerator 800-MeV, 1-mA proton beam and received roughly the same proton current density in the center of the beam as would be the case for the APT facility. As a result, the proton fluences achieved in the irradiation were expected to approach the APT prototypic full-power-year values. To predict accurately the performance of materials in APT, radiation damage parameters for the materials experiment must be determined. By modeling the experiment, calculations for atomic displacement, helium and hydrogen cross sections and for proton and neutron fluences were done for representative samples in the 17A, 18A, and 18C areas. The LAHET code system (LCS) was used to model the irradiation program; LAHET 2.82 within LCS transports protons >1 MeV and neutrons >20 MeV. A modified version of MCNP for use in LCS, HMCNP 4A, was employed to tally neutrons of energies <20 MeV

  20. Composition calculations by the KARATE code system for the spent-fuel samples from the Novovoronezh reactor

    International Nuclear Information System (INIS)

    Hordosy, G.

    2006-01-01

    KARATE is a code system developed in KFKI AERI. It is routinely used for core calculations. Its depletion module is now tested against radiochemical measurements of spent fuel samples from Novovoronezh Unit IV, performed in RIAR, Dimitrovgrad. Due to insufficient knowledge of the operational history of the unit, the irradiation history of the samples was taken from formerly published Russian calculations. The calculation of isotopic composition was performed by the MULTICEL module of the program system. The agreement between the calculated and measured concentrations of the most important actinides and fission products is investigated. (Authors)

  1. Inventory calculations in sediment samples with heterogeneous plutonium activity distribution

    International Nuclear Information System (INIS)

    Eriksson, M.; Dahlgaard, H.

    2002-01-01

    A method to determine the total inventory of a heterogeneously distributed contamination of marine sediments is described. The study site is the Bylot Sound off the Thule Airbase, NW Greenland, where marine sediments became contaminated with plutonium in 1968 after a nuclear weapons accident. The calculation is based on a gamma spectrometric screening of the 241 Am concentration in 450 one-gram aliquots from 6 sediment cores. A Monte Carlo programme then simulates a probable distribution of the activity, and based on that, a total inventory is estimated by integrating a double exponential function. The present data indicate a total inventory around 3.5 kg, which is 7 times higher than earlier estimates (0.5 kg). The difference is partly explained by the inclusion of hot particles in the present calculation. A large uncertainty is connected to this estimate, and it should be regarded as preliminary. (au)

  2. TRING: a computer program for calculating radionuclide transport in groundwater

    International Nuclear Information System (INIS)

    Maul, P.R.

    1984-12-01

    The computer program TRING is described which enables the transport of radionuclides in groundwater to be calculated for use in long term radiological assessments using methods described previously. Examples of the areas of application of the program are activity transport in groundwater associated with accidental spillage or leakage of activity, the shutdown of reactors subject to delayed decommissioning, shallow land burial of intermediate level waste and geologic disposal of high level waste. Some examples of the use of the program are given, together with full details to enable users to run the program. (author)
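
    A minimal analytic example of the kind of transport such codes compute is the Ogata-Banks solution for 1D advection-dispersion from a continuous source; it is sketched below as a hedged illustration, with arbitrary parameter values, and it omits the retardation and decay terms a full assessment code like TRING would include.

```python
import math

def ogata_banks(x, t, v, D, c0=1.0):
    """Ogata-Banks analytic solution of 1D advection-dispersion with a
    continuous source at x = 0: relative concentration at distance x and
    time t. v: pore velocity, D: dispersion coefficient (consistent
    units). Note exp(v*x/D) can overflow for very large v*x/D."""
    s = 2.0 * math.sqrt(D * t)
    term1 = math.erfc((x - v * t) / s)
    term2 = math.exp(v * x / D) * math.erfc((x + v * t) / s)
    return 0.5 * c0 * (term1 + term2)
```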

  3. MONO: A program to calculate synchrotron beamline monochromator throughputs

    International Nuclear Information System (INIS)

    Chapman, D.

    1989-01-01

    A set of Fortran programs have been developed to calculate the expected throughput of x-ray monochromators with a filtered synchrotron source and is applicable to bending magnet and wiggler beamlines. These programs calculate the normalized throughput and filtered synchrotron spectrum passed by multiple element, flat unfocussed monochromator crystals of the Bragg or Laue type as a function of incident beam divergence, energy and polarization. The reflected and transmitted beam of each crystal is calculated using the dynamical theory of diffraction. Multiple crystal arrangements in the dispersive and non-dispersive mode are allowed as well as crystal asymmetry and energy or angle offsets. Filters or windows of arbitrary elemental composition may be used to filter the incident synchrotron beam. This program should be useful to predict the intensities available from many beamline configurations as well as assist in the design of new monochromator and analyzer systems. 6 refs., 3 figs
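
    At the heart of any such throughput calculation is the Bragg condition nλ = 2d sinθ, sketched here; the Si(111) d-spacing is an assumed standard value, and a real throughput code layers dynamical diffraction on top of this geometry.

```python
import math

HC_KEV_A = 12.39842        # h*c in keV*Angstrom

def bragg_angle_deg(energy_keV, d_spacing_A, order=1):
    """Bragg angle (degrees) from n*lambda = 2*d*sin(theta)."""
    wavelength = HC_KEV_A / energy_keV
    s = order * wavelength / (2.0 * d_spacing_A)
    if s > 1.0:
        raise ValueError("photon energy below the cutoff for this reflection")
    return math.degrees(math.asin(s))

theta = bragg_angle_deg(10.0, 3.1356)   # Si(111), d-spacing assumed 3.1356 A
```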

  4. Computer program 'TRIO' for third order calculation of ion trajectory

    International Nuclear Information System (INIS)

    Matsuo, Takekiyo; Matsuda, Hisashi; Fujita, Yoshitaka; Wollnik, H.

    1976-01-01

    A computer program for the calculation of ion trajectories is described. This program, ''TRIO'' (Third Order Ion Optics), is applicable to any ion optical system consisting of drift spaces, cylindrical or toroidal electric sector fields, homogeneous or inhomogeneous magnetic sector fields, and magnetic and electrostatic Q-lenses. The influence of the fringing field is taken into consideration. A special device is introduced into the method of matrix multiplication to shorten the calculation time; as a result, the required time proves to be about 40 times shorter than with the ordinary method. The trajectory calculation can be executed with accuracy up to third order. Any one of three dispersion bases (momentum, energy, or mass and energy) can be selected. A full listing of the computer program and an example are given. (auth.)
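
    The matrix method such codes are built on can be sketched at first order (TRIO itself works to third order, where the "matrices" become tensors): each element contributes a transfer matrix, and the system matrix is their product. The element lengths and angles below are arbitrary.

```python
import math

def drift(L):
    """First-order (x, x') transfer matrix of a field-free drift of length L."""
    return [[1.0, L], [0.0, 1.0]]

def magnetic_sector(rho, phi):
    """First-order (x, x') matrix of a homogeneous magnetic sector with
    bending radius rho and bending angle phi (radians)."""
    c, s = math.cos(phi), math.sin(phi)
    return [[c, rho * s], [-s / rho, c]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# drift -> 90 degree sector -> drift; matrices are applied right to left
system = matmul(drift(0.5), matmul(magnetic_sector(1.0, math.pi / 2.0), drift(0.5)))
```

    The unit determinant of the system matrix (phase-space conservation) is a standard check on any such composition.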

  5. A Variational Approach to Enhanced Sampling and Free Energy Calculations

    Science.gov (United States)

    Parrinello, Michele

    2015-03-01

    The presence of kinetic bottlenecks severely hampers the ability of widely used sampling methods like molecular dynamics or Monte Carlo to explore complex free energy landscapes. One of the most popular methods for addressing this problem is umbrella sampling which is based on the addition of an external bias which helps overcoming the kinetic barriers. The bias potential is usually taken to be a function of a restricted number of collective variables. However constructing the bias is not simple, especially when the number of collective variables increases. Here we introduce a functional of the bias which, when minimized, allows us to recover the free energy. We demonstrate the usefulness and the flexibility of this approach on a number of examples which include the determination of a six dimensional free energy surface. Besides the practical advantages, the existence of such a variational principle allows us to look at the enhanced sampling problem from a rather convenient vantage point.

  6. Variational Approach to Enhanced Sampling and Free Energy Calculations

    Science.gov (United States)

    Valsson, Omar; Parrinello, Michele

    2014-08-01

    The ability of widely used sampling methods, such as molecular dynamics or Monte Carlo simulations, to explore complex free energy landscapes is severely hampered by the presence of kinetic bottlenecks. A large number of solutions have been proposed to alleviate this problem. Many are based on the introduction of a bias potential which is a function of a small number of collective variables. However, constructing such a bias is not simple. Here we introduce a functional of the bias potential and an associated variational principle. The bias that minimizes the functional relates in a simple way to the free energy surface. This variational principle can be turned into a practical, efficient, and flexible sampling method. A number of numerical examples are presented, which include the determination of a three-dimensional free energy surface. We argue that, besides being numerically advantageous, our variational approach provides a convenient and novel standpoint for looking at the sampling problem.
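    The functional described in the two records above can be sketched as follows, in the notation of the published method (β is the inverse temperature, s the collective variables, F(s) the free energy surface, and p(s) a chosen target distribution):

```latex
\Omega[V] = \frac{1}{\beta}
  \log \frac{\int \mathrm{d}s \, e^{-\beta \left[ F(s) + V(s) \right]}}
            {\int \mathrm{d}s \, e^{-\beta F(s)}}
  + \int \mathrm{d}s \, p(s) \, V(s)
```

    At its minimum the bias relates simply to the free energy, V(s) = -F(s) - (1/β) log p(s) up to an irrelevant constant, which is the sense in which minimizing the functional recovers the free energy surface.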

  7. Helical tomotherapy shielding calculation for an existing LINAC treatment room: sample calculation and cautions

    International Nuclear Information System (INIS)

    Wu Chuan; Guo Fanqing; Purdy, James A

    2006-01-01

    This paper reports a step-by-step shielding calculation recipe for a helical tomotherapy unit (TomoTherapy Inc., Madison, WI, USA) recently installed in an existing Varian 600C treatment room. Both primary and secondary radiation (leakage and scatter) are explicitly considered. A typical patient load is assumed. The use factor is calculated from an analytical formula derived from the tomotherapy rotational beam delivery geometry. Leakage and scatter are included in the calculation based on the corresponding measurement data documented by TomoTherapy Inc. Our calculation shows that, except for a small area by the therapists' console, most of the existing Varian 600C shielding is sufficient for the new tomotherapy unit. This work cautions other institutions facing a similar situation, in which an HT unit is considered for an existing LINAC treatment room: more secondary shielding may need to be considered at some locations, owing to the significantly increased secondary shielding requirement of HT. (note)

  8. Blow.MOD2: a program for blowdown transient calculations

    International Nuclear Information System (INIS)

    Doval, A.

    1990-01-01

    The BLOW.MOD2 program has been developed to calculate the blowdown phase in a pressurized vessel after a break/valve is opened. It is a one-volume model in which break height and flow area are specified. The Moody critical flow model was adopted under saturation conditions for flow calculation through the break. Heat transfer from structures and internals has been taken into account. Long-term depressurization results compare satisfactorily with those of a more complex model. (Author)

  9. 7 CFR 51.308 - Methods of sampling and calculation of percentages.

    Science.gov (United States)

    2010-01-01

    ..., CERTIFICATION, AND STANDARDS) United States Standards for Grades of Apples Methods of Sampling and Calculation of Percentages § 51.308 Methods of sampling and calculation of percentages. (a) When the numerical... 7 Agriculture 2 2010-01-01 2010-01-01 false Methods of sampling and calculation of percentages. 51...

  10. 40 CFR Appendix II to Part 600 - Sample Fuel Economy Calculations

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 29 2010-07-01 2010-07-01 false Sample Fuel Economy Calculations II... FUEL ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF MOTOR VEHICLES Pt. 600, App. II Appendix II to Part 600—Sample Fuel Economy Calculations (a) This sample fuel economy calculation is applicable to...

  11. Mass: Fortran program for calculating mass-absorption coefficients

    International Nuclear Information System (INIS)

    Nielsen, Aa.; Svane Petersen, T.

    1980-01-01

    Determination of mass-absorption coefficients in the x-ray analysis of trace elements is an important and time-consuming part of the arithmetic calculation. In the course of time different methods have been used. The program MASS calculates the mass-absorption coefficients from a given major-element analysis at the x-ray wavelengths normally used in trace-element determinations and lists the chemical analysis and the mass-absorption coefficients. The program is coded in FORTRAN IV and is operational on the IBM 370/165, the UNIVAC 1110, and the PDP 11/05. (author)
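    The underlying mixture rule is simple: the mass-absorption coefficient of a sample is the weight-fraction-weighted sum of the coefficients of its constituents at the analysis wavelength. A minimal sketch in Python (the coefficient values below are hypothetical placeholders, not tabulated data):

```python
# Mixture rule: mu/rho of the sample = sum over constituents of
# (weight fraction) * (mu/rho of constituent) at the given wavelength.

def mixture_mass_absorption(weight_fractions, coefficients):
    """Return mu/rho of the mixture in cm^2/g.

    weight_fractions: dict constituent -> weight fraction (should sum to ~1)
    coefficients:     dict constituent -> mu/rho of the constituent, cm^2/g
    """
    total = sum(weight_fractions.values())
    return sum(w / total * coefficients[c] for c, w in weight_fractions.items())

# Hypothetical major-element analysis and coefficients at one wavelength:
analysis = {"SiO2": 0.60, "Al2O3": 0.25, "FeO": 0.15}
mu_rho = {"SiO2": 35.0, "Al2O3": 30.0, "FeO": 60.0}  # placeholder values
print(round(mixture_mass_absorption(analysis, mu_rho), 2))  # 37.5
```

    Normalizing by the sum of the weight fractions makes the result insensitive to an analysis that does not total exactly 100%.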

  12. Isochronous Cyclotron Closed Equilibrium Orbit Calculation Program Description

    CERN Document Server

    Kian, I N; Tarashkevich, R

    2003-01-01

    The Equilibrium Orbit Research Program (EORP), written in C++ with the use of Visual C++, is described. The program is intended for the calculation of the particle rotation frequency and particle kinetic energy in the closed equilibrium orbits of an isochronous cyclotron, where the closed equilibrium orbits are described through the radius and particle momentum angle, r_eo(θ) and φ_p(θ). The program algorithm was developed on the basis of articles, lecture notes, and original analytic calculations. The results of calculations by the EORP were checked and confirmed against the results of numerical methods. The discrepancies between the EORP results and the numerical-method results for the particle rotation frequency and particle kinetic energy are within the limits of ±1·10⁻⁴. The EORP results and the numerical-method results for r_eo(θ) and φ_p(θ) practically coincide. All this proves the accuracy of ca...

  13. MP.EXE Microphone pressure sensitivity calibration calculation program

    DEFF Research Database (Denmark)

    Rasmussen, Knud

    1999-01-01

    MP.EXE is a program which calculates the pressure sensitivity of LS1 microphones as defined in IEC 61094-1, based on measurement results obtained as laid down in IEC 61094-2. A very early program was developed and written by K. Rasmussen. The code of the present, heavily extended version is written by E.S. Olsen. The present manual is written by K. Rasmussen and E.S. Olsen.

  14. GRUCAL, a computer program for calculating macroscopic group constants

    International Nuclear Information System (INIS)

    Woll, D.

    1975-06-01

    Nuclear reactor calculations require material- and composition-dependent, energy-averaged nuclear data to describe the interaction of neutrons with individual isotopes in the material compositions of reactor zones. The code GRUCAL calculates these macroscopic group constants for given compositions from the material-dependent data of the group constant library GRUBA. The instructions for calculating group constants are not fixed in the program but are read at execution time from a separate instruction file. This makes it possible to adapt GRUCAL to various problems or different group-constant concepts. (orig.) [de

  15. Improvement of correlated sampling Monte Carlo methods for reactivity calculations

    International Nuclear Information System (INIS)

    Nakagawa, Masayuki; Asaoka, Takumi

    1978-01-01

    Two correlated Monte Carlo methods, the similar flight path and the identical flight path methods, have been improved to evaluate the reactivity perturbation up to the second-order change. Secondary fission neutrons produced by neutrons that have passed through perturbed regions in both the unperturbed and perturbed systems are followed in such a way as to maintain a strong correlation between the secondary neutrons in both systems. These techniques are incorporated into the general-purpose Monte Carlo code MORSE so that the statistical error of the calculated reactivity change can also be estimated. The control rod worths measured in the FCA V-3 assembly are analyzed with the present techniques, which are shown to predict the measured values within the standard deviations. The identical flight path method has proved more useful than the similar flight path method for the analysis of the control rod worth. (auth.)
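    The variance-reduction idea behind correlated sampling can be illustrated with a toy example (this is only the common-random-numbers principle, not the MORSE implementation): when the same random histories are reused in the unperturbed and perturbed systems, the two estimates are strongly correlated and their small difference is estimated with far less variance than with independent histories.

```python
import random

def response(xs, absorption):
    # Hypothetical "system response": fraction of histories surviving
    return sum(1.0 for x in xs if x > absorption) / len(xs)

def estimate_diff(correlated, n=1000, trials=200, seed=1):
    """Estimate the perturbed-minus-unperturbed response over many trials."""
    rng = random.Random(seed)
    diffs = []
    for _ in range(trials):
        xs1 = [rng.random() for _ in range(n)]
        # Correlated: reuse the same histories; independent: draw fresh ones.
        xs2 = xs1 if correlated else [rng.random() for _ in range(n)]
        diffs.append(response(xs2, 0.21) - response(xs1, 0.20))
    mean = sum(diffs) / trials
    var = sum((d - mean) ** 2 for d in diffs) / (trials - 1)
    return mean, var

m_corr, v_corr = estimate_diff(correlated=True)
m_ind, v_ind = estimate_diff(correlated=False)
print(v_corr < v_ind)  # identical histories give a much smaller variance
```

    With identical histories the difference depends only on the few histories falling between the two absorption thresholds, which is exactly why the identical flight path method resolves small reactivity changes well.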

  16. Brine Sampling and Evaluation Program, 1991 report

    Energy Technology Data Exchange (ETDEWEB)

    Deal, D.E.; Abitz, R.J.; Myers, J.; Martin, M.L.; Milligan, D.J.; Sobocinski, R.W.; Lipponer, P.P.J. [International Technology Corp., Albuquerque, NM (United States); Belski, D.S. [Westinghouse Electric Corp., Carlsbad, NM (United States). Waste Isolation Div.

    1993-09-01

    The data presented in this report are the result of Brine Sampling and Evaluation Program (BSEP) activities at the Waste Isolation Pilot Plant (WIPP) during 1991. These BSEP activities document and investigate the origins, hydraulic characteristics, extent, and composition of brine occurrences in the Permian Salado Formation and seepage of that brine into the excavations at the WIPP. When excavations began at the WIPP in 1982, small brine seepages (weeps) were observed on the walls. Brine studies began as part of the Site Validation Program and were formalized as a program in its own right in 1985. During nine years of observations (1982--1991), evidence has mounted that the amount of brine seeping into the WIPP excavations is limited, local, and only a small fraction of that required to produce hydrogen gas by corroding the metal in the waste drums and waste inventory. The data through 1990 are discussed in detail and summarized by Deal and others (1991). The data presented in this report describe progress made during the calendar year 1991 and focus on four major areas: (1) quantification of the amount of brine seeping across vertical surfaces in the WIPP excavations (brine ``weeps''); (2) monitoring of brine inflow, e.g., measuring brines recovered from holes drilled downward from the underground drifts (downholes), upward from the underground drifts (upholes), and from subhorizontal holes; (3) further characterization of brine geochemistry; and (4) preliminary quantification of the amount of brine that might be released by squeezing the underconsolidated clays present in the Salado Formation.

  17. Computer Programs for Calculating and Plotting the Stability Characteristics of a Balloon Tethered in a Wind

    Science.gov (United States)

    Bennett, R. M.; Bland, S. R.; Redd, L. T.

    1973-01-01

    Computer programs for calculating the stability characteristics of a balloon tethered in a steady wind are presented. Equilibrium conditions, characteristic roots, and modal ratios are calculated for a range of discrete values of velocity for a fixed tether-line length. Separate programs are used: (1) to calculate longitudinal stability characteristics, (2) to calculate lateral stability characteristics, (3) to plot the characteristic roots versus velocity, (4) to plot the characteristic roots in root-locus form, (5) to plot the longitudinal modes of motion, and (6) to plot the lateral modes of motion. The basic equations, program listings, and the input and output data for sample cases are presented, with a brief discussion of the overall operation and limitations. The programs are based on a linearized, stability-derivative type of analysis, including balloon aerodynamics, apparent mass, buoyancy effects, and static forces which result from the tether line.

  18. Programmable calculator programs to solve softwood volume and value equations.

    Science.gov (United States)

    Janet K. Ayer Sachet

    1982-01-01

    This paper presents product value and product volume equations as programs for handheld calculators. These tree equations are for inland Douglas-fir, young-growth Douglas-fir, western white pine, ponderosa pine, and western larch. Operating instructions and an example are included.

  19. RADSHI: shielding calculation program for different geometries sources

    International Nuclear Information System (INIS)

    Gelen, A.; Alvarez, I.; Lopez, H.; Manso, M.

    1996-01-01

    A computer code written in Pascal for the IBM PC is described. The program calculates the optimum thickness of a slab shield for sources of different geometries. The Point Kernel Method is employed, which yields the ionizing-radiation flux density. The calculation takes into account possible self-absorption in the source. The air kerma rate for gamma radiation is determined, and the shield is obtained through the concept of attenuation length, via the equivalent attenuation length. Scattering and exponential attenuation inside the shield material are considered in the program. The shield materials can be concrete, water, iron, or lead. The program also calculates shields for a point isotropic neutron source, using paraffin, concrete, or water as shield materials. (authors). 13 refs
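    For the simplest case RADSHI handles, a point isotropic gamma source, the point-kernel logic reduces to inverse-square falloff plus exponential attenuation, and the required slab thickness follows by inverting the exponential. A minimal sketch (buildup from scatter is neglected here, though RADSHI accounts for it; the numbers are illustrative, not tabulated data):

```python
import math

def unshielded_kerma(source_strength, distance_m):
    """Kerma rate at distance r for an isotropic point source: S / (4*pi*r^2)."""
    return source_strength / (4.0 * math.pi * distance_m ** 2)

def slab_thickness(unshielded, limit, mu_per_cm):
    """Thickness t (cm) such that unshielded * exp(-mu * t) == limit."""
    return math.log(unshielded / limit) / mu_per_cm

k0 = unshielded_kerma(source_strength=1.0e6, distance_m=2.0)  # arbitrary units
t = slab_thickness(k0, limit=10.0, mu_per_cm=0.5)             # concrete-like mu, assumed
print(round(t, 2))  # thickness in cm that brings the rate down to the limit
```

    Adding a buildup factor B(mu*t) on the attenuated term, as a full point-kernel code does, turns the closed-form inversion into a short iteration.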

  20. HEINBE; the calculation program for helium production in beryllium under neutron irradiation

    International Nuclear Information System (INIS)

    Shimakawa, Satoshi; Ishitsuka, Etsuo; Sato, Minoru

    1992-11-01

    HEINBE is a personal-computer program for calculating helium production in beryllium under neutron irradiation. The program can also calculate tritium production in beryllium. By considering many nuclear reactions and their multi-step reactions, helium and tritium production in beryllium materials irradiated in a fusion or fission reactor can be calculated with high accuracy. The calculation method, user's manual, calculated examples, and comparisons with experimental data are described. This report also describes a neutronics simulation method to generate additional data on swelling of beryllium, in the 3,000-15,000 appm helium range, for the end-of-life of the proposed fusion-blanket design of the ITER. The calculation results indicate that a helium production of 2,000-8,000 appm could be achieved for a lithium-doped beryllium sample by 50 days of irradiation in a fission reactor such as the JMTR. (author)

  1. FORTRAN program for calculating liquid-phase and gas-phase thermal diffusion column coefficients

    International Nuclear Information System (INIS)

    Rutherford, W.M.

    1980-01-01

    A computer program (COLCO) was developed for calculating thermal diffusion column coefficients from theory. The program, which is written in FORTRAN IV, can be used for both liquid-phase and gas-phase thermal diffusion columns. Column coefficients for the gas phase can be based on gas properties calculated from kinetic theory using tables of omega integrals, or on tables of compiled physical properties as functions of temperature. Column coefficients for the liquid phase can be based on compiled physical property tables. Program listings, test data, sample output, and a user's manual are supplied as appendices.

  2. Binomial Distribution Sample Confidence Intervals Estimation 1. Sampling and Medical Key Parameters Calculation

    Directory of Open Access Journals (Sweden)

    Tudor DRUGAN

    2003-08-01

    Full Text Available. The aim of the paper was to present the usefulness of the binomial distribution in the study of contingency tables, and the problems of approximating the binomial distribution to normality (its limits, advantages, and disadvantages). Classifying the medical key parameters reported in the medical literature and expressing them in contingency-table units, based on their mathematical expressions, reduces the discussion of confidence intervals from 34 parameters to 9 mathematical expressions. The problem of obtaining different information from the computed confidence interval for a specified method (confidence-interval boundaries, percentages of the experimental errors, the standard deviation of the experimental errors, and the deviation relative to the significance level) was solved through the implementation of original algorithms in the PHP programming language. Expressions containing two binomial variables were treated separately. An original method of computing the confidence interval for a two-variable expression was proposed and implemented. Graphical representation of expressions of two binomial variables, in which the variation domain of one variable depends on the other, was a real problem, because most software uses interpolation in graphical representation, so the surface maps come out quadratic instead of triangular. Based on an original algorithm, a module was implemented in PHP to represent the triangular surface plots graphically. All the implementations described above were used in computing the confidence intervals and estimating their performance for binomial-distribution sample sizes and variables.

  3. Brine Sampling and Evaluation Program, 1990 report

    International Nuclear Information System (INIS)

    Deal, D.E.; Abitz, R.J.; Myers, J.; Case, J.B.; Martin, M.L.; Roggenthen, W.M.; Belski, D.S.

    1991-08-01

    The data presented in this report are the result of Brine Sampling and Evaluation Program (BSEP) activities at the Waste Isolation Pilot Plant (WIPP) during 1990. When excavations began in 1982, small brine seepages (weeps) were observed on the walls. These brine occurrences were initially described as part of the Site Validation Program. Brine studies were formalized in 1985. The BSEP activities document and investigate the origins, hydraulic characteristics, extent, and composition of brine occurrences in the Permian Salado Formation and seepage of that brine into the excavations at the WIPP. The brine chemistry is important because it assists in understanding the origin of the brine and because it may affect possible chemical reactions in the buried waste after sealing the repository. The volume of brine and the hydrologic system that drives the brine seepage also need to be understood to assess the long-term performance of the repository. After more than eight years of observations (1982--1990), no credible evidence exists to indicate that enough naturally occurring brine will seep into the WIPP excavations to be of practical concern. The detailed observations and analyses summarized herein and in previous BSEP reports confirm the evidence apparent during casual visits to the underground workings -- that the excavations are remarkably dry

  4. Brine Sampling and Evaluation Program, 1990 report

    Energy Technology Data Exchange (ETDEWEB)

    Deal, D.E.; Abitz, R.J.; Myers, J.; Case, J.B.; Martin, M.L.; Roggenthen, W.M. [International Technology Corp., Albuquerque, NM (United States); Belski, D.S. [Westinghouse Electric Corp., Carlsbad, NM (United States). Waste Isolation Div.

    1991-08-01

    The data presented in this report are the result of Brine Sampling and Evaluation Program (BSEP) activities at the Waste Isolation Pilot Plant (WIPP) during 1990. When excavations began in 1982, small brine seepages (weeps) were observed on the walls. These brine occurrences were initially described as part of the Site Validation Program. Brine studies were formalized in 1985. The BSEP activities document and investigate the origins, hydraulic characteristics, extent, and composition of brine occurrences in the Permian Salado Formation and seepage of that brine into the excavations at the WIPP. The brine chemistry is important because it assists in understanding the origin of the brine and because it may affect possible chemical reactions in the buried waste after sealing the repository. The volume of brine and the hydrologic system that drives the brine seepage also need to be understood to assess the long-term performance of the repository. After more than eight years of observations (1982--1990), no credible evidence exists to indicate that enough naturally occurring brine will seep into the WIPP excavations to be of practical concern. The detailed observations and analyses summarized herein and in previous BSEP reports confirm the evidence apparent during casual visits to the underground workings -- that the excavations are remarkably dry.

  5. 46 CFR 280.11 - Example of calculation and sample report.

    Science.gov (United States)

    2010-10-01

    ... 46 Shipping 8 2010-10-01 2010-10-01 false Example of calculation and sample report. 280.11 Section... OPERATORS § 280.11 Example of calculation and sample report. (a) Example of calculation. The provisions of this part may be illustrated by the following example: Company A operates several vessels engaged in...

  6. Programming PHREEQC calculations with C++ and Python: a comparative study

    Science.gov (United States)

    Charlton, Scott R.; Parkhurst, David L.; Muller, Mike

    2011-01-01

    The new IPhreeqc module provides an application programming interface (API) to facilitate coupling of other codes with the U.S. Geological Survey geochemical model PHREEQC. Traditionally, loose coupling of PHREEQC with other applications required methods to create PHREEQC input files, start external PHREEQC processes, and process PHREEQC output files. IPhreeqc eliminates most of this effort by providing direct access to PHREEQC capabilities through a component object model (COM), a library, or a dynamically linked library (DLL). Input and calculations can be specified through internally programmed strings, and all data exchange between an application and the module can occur in computer memory. This study compares simulations programmed in C++ and Python that are tightly coupled with IPhreeqc modules to the traditional simulations that are loosely coupled to PHREEQC. The study compares performance, quantifies effort, and evaluates lines of code and the complexity of the design. The comparisons show that IPhreeqc offers a more powerful and simpler approach for incorporating PHREEQC calculations into transport models and other applications that need to perform PHREEQC calculations. The IPhreeqc module facilitates the design of coupled applications and significantly reduces run times. Even a moderate knowledge of one of the supported programming languages allows more efficient use of PHREEQC than the traditional loosely coupled approach.

  7. Validation experience with the core calculation program karate

    International Nuclear Information System (INIS)

    Hegyi, Gy.; Hordosy, G.; Kereszturi, A.; Makai, M.; Maraczy, Cs.

    1995-01-01

    A relatively fast and easy-to-handle modular code system named KARATE-440 has been elaborated for steady-state operational calculations of VVER-440 type reactors. It is built up from cell, assembly, and global calculations. Within the framework of the program, the neutron-physical and thermohydraulic processes of the core at normal startup, in steady state, and during slow transients can be simulated. The verification and validation of the global code have been performed recently. The test cases include mathematical benchmarks and measurements on operating VVER-440 units. A summary of the results, such as startup parameters, boron letdown curves, and radial and axial power distributions of some cycles of the Paks NPP, is presented. (author)

  8. Development of internal dose calculation programing via food ingestion

    International Nuclear Information System (INIS)

    Kim, H. J.; Lee, W. K.; Lee, M. S.

    1998-01-01

    Most dose calculations for the public via the ingestion pathway consider several pathways, starting from the radioactive material released from a nuclear power plant through its diffusion and migration. To model these complicated pathways mathematically, some assumptions are essential and many pathway-related input data are demanded. Since these assumptions and input data carry environmental uncertainty, the accuracy of the calculated dose is not reliable. To reduce these uncertain assumptions and inputs, this paper presents a method of calculating the exposure dose from the activity of environmental samples detected in any pathway. The dose calculation is applied to people around the KORI nuclear power plant, using the dose conversion factors recommended in ICRP Publication 60.
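    The measurement-based approach described above reduces to a simple sum: committed dose equals measured activity concentration times annual intake times a dose coefficient, summed over nuclides. A minimal sketch (the concentrations, intakes, and coefficients below are illustrative placeholders; real coefficients come from ICRP tabulations):

```python
def ingestion_dose(samples):
    """Committed dose (Sv/y) = sum over nuclides of
    concentration (Bq/kg) * annual intake (kg/y) * dose coefficient (Sv/Bq)."""
    return sum(conc * intake * dcf for (conc, intake, dcf) in samples.values())

foods = {
    # nuclide: (measured Bq/kg, annual intake kg/y, Sv/Bq) -- illustrative only
    "Cs-137": (1.2, 50.0, 1.3e-8),
    "Sr-90":  (0.4, 50.0, 2.8e-8),
}
print(f"{ingestion_dose(foods):.2e}")  # committed dose in Sv per year
```

    Starting from measured activities in food samples sidesteps the release, dispersion, and transfer modeling steps whose assumptions the abstract identifies as the main source of uncertainty.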

  9. Sampling of Stochastic Input Parameters for Rockfall Calculations and for Structural Response Calculations Under Vibratory Ground Motion

    International Nuclear Information System (INIS)

    M. Gross

    2004-01-01

    The purpose of this scientific analysis is to define the sampled values of stochastic (random) input parameters for (1) rockfall calculations in the lithophysal and nonlithophysal zones under vibratory ground motions, and (2) structural response calculations for the drip shield and waste package under vibratory ground motions. This analysis supplies: (1) Sampled values of ground motion time history and synthetic fracture pattern for analysis of rockfall in emplacement drifts in nonlithophysal rock (Section 6.3 of ''Drift Degradation Analysis'', BSC 2004 [DIRS 166107]); (2) Sampled values of ground motion time history and rock mechanical properties category for analysis of rockfall in emplacement drifts in lithophysal rock (Section 6.4 of ''Drift Degradation Analysis'', BSC 2004 [DIRS 166107]); (3) Sampled values of ground motion time history and metal to metal and metal to rock friction coefficient for analysis of waste package and drip shield damage due to vibratory motion in ''Structural Calculations of Waste Package Exposed to Vibratory Ground Motion'' (BSC 2004 [DIRS 167083]) and in ''Structural Calculations of Drip Shield Exposed to Vibratory Ground Motion'' (BSC 2003 [DIRS 163425]). The sampled values are indices representing the number of ground motion time histories, number of fracture patterns and rock mass properties categories. These indices are translated into actual values within the respective analysis and model reports or calculations. This report identifies the uncertain parameters and documents the sampled values for these parameters. The sampled values are determined by GoldSim V6.04.007 [DIRS 151202] calculations using appropriate distribution types and parameter ranges. No software development or model development was required for these calculations. The calculation of the sampled values allows parameter uncertainty to be incorporated into the rockfall and structural response calculations that support development of the seismic scenario for the

  10. User's Guide to Handlens - A Computer Program that Calculates the Chemistry of Minerals in Mixtures

    Science.gov (United States)

    Eberl, D.D.

    2008-01-01

    HandLens is a computer program, written in Excel macro language, that calculates the chemistry of minerals in mineral mixtures (for example, in rocks, soils and sediments) for related samples from inputs of quantitative mineralogy and chemistry. For best results, the related samples should contain minerals having the same chemical compositions; that is, the samples should differ only in the proportions of minerals present. This manual describes how to use the program, discusses the theory behind its operation, and presents test results of the program's accuracy. Required input for HandLens includes quantitative mineralogical data, obtained, for example, by RockJock analysis of X-ray diffraction (XRD) patterns, and quantitative chemical data, obtained, for example, by X-ray fluorescence (XRF) analysis of the same samples. Other quantitative data, such as sample depth, temperature, and surface area, can also be entered. The minerals present in the samples are selected from a list, and the program is started. The results of the calculation include: (1) a table of linear coefficients of determination (r2's) which relate pairs of input data (for example, Si versus quartz weight percents); (2) a utility for plotting all input data, either as pairs of variables, or as sums of up to eight variables; (3) a table that presents the calculated chemical formulae for minerals in the samples; (4) a table that lists the calculated concentrations of major, minor, and trace elements in the various minerals; and (5) a table that presents chemical formulae for the minerals that have been corrected for possible systematic errors in the mineralogical and/or chemical analyses. In addition, the program contains a method for testing the assumption of constant chemistry of the minerals within a sample set.
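    The core assumption above, that related samples differ only in mineral proportions, makes each bulk element concentration a linear mix of fixed per-mineral concentrations, which several samples then determine. A toy two-mineral, two-sample version of that inversion (HandLens solves the general case; all names and numbers here are synthetic):

```python
def solve_2x2(a11, a12, a21, a22, b1, b2):
    """Cramer's rule for [[a11, a12], [a21, a22]] x = [b1, b2]."""
    det = a11 * a22 - a12 * a21
    return (b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det

# Rows: samples; columns: weight fractions of quartz and clay (as from XRD).
fractions = [(0.8, 0.2), (0.4, 0.6)]
# Bulk Si concentration (wt %, as from XRF), built here from an assumed
# 46.7% Si in quartz and 21.0% Si in clay so the answer is known:
bulk_si = [0.8 * 46.7 + 0.2 * 21.0, 0.4 * 46.7 + 0.6 * 21.0]

si_quartz, si_clay = solve_2x2(fractions[0][0], fractions[0][1],
                               fractions[1][0], fractions[1][1],
                               bulk_si[0], bulk_si[1])
print(round(si_quartz, 1), round(si_clay, 1))  # recovers 46.7 and 21.0
```

    With more samples than minerals the same system is overdetermined and is solved by least squares, which is also what makes the r2 tables between element and mineral abundances meaningful.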

  11. Brine Sampling and Evaluation Program: 1988 report

    International Nuclear Information System (INIS)

    Deal, D.E.; Abitz, R.J.; Case, J.B.; Crawley, M.E.; Deshler, R.M.; Drez, P.E.; Givens, C.A.; King, R.B.; Myers, J.; Pietz, J.M.; Roggenthen, W.M.; Tyburski, J.R.; Belski, D.S.; Niou, S.; Wallace, M.G.

    1989-12-01

    The data presented in this report are the result of Brine Sampling and Evaluation Program (BSEP) activities at the Waste Isolation Pilot Plant (WIPP) during 1988. These activities, which are a continuation and update of studies that began in 1982 as part of the Site Validation Program, were formalized as the BSEP in 1985 to document and investigate the origins, hydraulic characteristics, extent, and composition of brine occurrences in the Permian Salado Formation, and seepage of that brine into the excavations at the WIPP. Previous BSEP reports (Deal and Case, 1987; Deal and others, 1987) described the results of ongoing activities that monitor brine inflow into boreholes in the facility, moisture content of the Salado Formation, brine geochemistry, and brine weeps and crusts. The information provided in this report updates past work and describes progress made during the calendar year 1988. During 1988, BSEP activities focused on four major areas to describe and quantify brine activity: (1) monitoring of brine inflow parameters, e.g., measuring brines recovered from holes drilled upward from the underground drifts (upholes), downward from the underground drifts (downholes), and near-horizontal holes; (2) characterizing the brine, e.g., the geochemistry of the brine and the presence of bacteria and their possible interactions with experiments and operations; (3) characterizing formation properties associated with the occurrence of brine; e.g., determining the water content of various geologic units, examining these units in boreholes using a video camera system, and measuring their resistivity (conductivity); and (4) modeling to examine the interaction of salt deformation near the workings and brine seepage through the deforming salt. 77 refs., 48 figs., 32 tabs

  12. PCRELAP5: data calculation program for RELAP 5 code

    International Nuclear Information System (INIS)

    Silvestre, Larissa Jacome Barros

    2016-01-01

    Nuclear accidents around the world have led to the establishment of rigorous criteria and requirements for nuclear power plant operation by the international regulatory bodies. Simulations, using specific computer programs, of the various accidents and transients likely to occur at a nuclear power plant are required for its certification and licensing. In this context, sophisticated computational tools are used, such as the Reactor Excursion and Leak Analysis Program (RELAP5), the most widely used code for the thermo-hydraulic analysis of accidents and transients in nuclear reactors in Brazil and worldwide. A major difficulty in simulation with the RELAP5 code is the amount of information required, as preparing the input data involves a great number of mathematical operations to calculate the geometry of the components. Thus, to perform those calculations and prepare the RELAP5 input data, a friendly mathematical preprocessor was designed. Visual Basic for Applications (VBA) for Microsoft Excel proved to be an effective tool for a number of tasks in the development of the program. To meet the needs of RELAP5 users, the RELAP5 Calculation Program (Programa de Calculo do RELAP5 - PCRELAP5) was designed. The components of the code were codified, and all input cards, including the optional cards of each one, have been programmed. In addition, an English version of PCRELAP5 was provided. Furthermore, a friendly design was developed to minimize the preparation time for input data and the errors committed by users. In this work, the final version of this preprocessor was successfully applied to the Safety Injection System (SIS) of Angra 2. (author)

  13. HEXANN-EVALU - a Monte Carlo program system for pressure vessel neutron irradiation calculation

    International Nuclear Information System (INIS)

    Lux, Ivan

    1983-08-01

    The Monte Carlo program HEXANN and the evaluation program EVALU are intended to calculate Monte Carlo estimates of reaction rates and currents in segments of concentric annular regions around a hexagonal reactor-core region. The report describes the theoretical basis, structure and operation of the programs. Input data preparation guides and a sample problem are also included. Theoretical considerations as well as numerical experimental results suggest to the user a nearly optimal way of using the Monte Carlo efficiency-increasing options included in the program

  14. 40 CFR Appendix III to Part 600 - Sample Fuel Economy Label Calculation

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 29 2010-07-01 2010-07-01 false Sample Fuel Economy Label Calculation...) ENERGY POLICY FUEL ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF MOTOR VEHICLES Pt. 600, App. III Appendix III to Part 600—Sample Fuel Economy Label Calculation Suppose that a manufacturer called Mizer...

  15. Monte Carlo sampling on technical parameters in criticality and burn-up-calculations

    International Nuclear Information System (INIS)

    Kirsch, M.; Hannstein, V.; Kilger, R.

    2011-01-01

    The increase in computing power over recent years allows for the introduction of Monte Carlo sampling techniques for sensitivity and uncertainty analyses in criticality safety and burn-up calculations. With these techniques it is possible to assess the influence of a variation of the input parameters, within their measured or estimated uncertainties, on the final value of a calculation. The probabilistic result of a statistical analysis can thus complement the traditional method of determining both the nominal (best-estimate) and the bounding case of the neutron multiplication factor (k_eff) in criticality safety analyses, e.g. by calculating the uncertainty of k_eff or tolerance limits. Furthermore, the sampling method provides sensitivity information, i.e. it allows identifying which of the uncertain input parameters contribute most to the uncertainty of the system. The application of Monte Carlo sampling methods has become common practice in both industry and research institutes. Within this approach, two main paths are currently under investigation: the variation of nuclear data used in a calculation and the variation of technical parameters such as manufacturing tolerances. This contribution concentrates on the latter case. The newly developed SUnCISTT (Sensitivities and Uncertainties in Criticality Inventory and Source Term Tool) is introduced. It defines an interface to the well-established GRS tool for sensitivity and uncertainty analyses, SUSA, which provides the statistical methods necessary for sampling-based analyses. The interfaced codes are programs used to simulate aspects of the nuclear fuel cycle, such as the criticality safety analysis sequence CSAS5 of the SCALE code system, developed by Oak Ridge National Laboratory, or the GRS burn-up system OREST. In the following, first the implementation of SUnCISTT is presented, then, results of its application in an exemplary evaluation of the neutron
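
    The sampling idea described above can be sketched in a few lines (this is a toy illustration, not the SUnCISTT/SUSA implementation): draw a technical parameter, here a hypothetical pellet radius, uniformly within its manufacturing tolerance and propagate each draw through a surrogate for k_eff. The surrogate model and all numbers are assumptions for illustration only.

```python
import random
import statistics

def k_eff_model(radius_cm):
    """Toy surrogate for a criticality calculation: k_eff as a simple
    increasing function of fuel pellet radius (illustration only)."""
    return 0.90 + 0.05 * radius_cm

def sample_k_eff(nominal_radius, tolerance, n_samples=1000, seed=42):
    """Sample the radius uniformly within its tolerance band and
    propagate each draw through the surrogate; return mean and
    standard deviation of the resulting k_eff population."""
    rng = random.Random(seed)
    draws = [k_eff_model(rng.uniform(nominal_radius - tolerance,
                                     nominal_radius + tolerance))
             for _ in range(n_samples)]
    return statistics.mean(draws), statistics.stdev(draws)

mean_k, sigma_k = sample_k_eff(nominal_radius=1.0, tolerance=0.05)
```

    A real analysis would replace the surrogate with a full CSAS5 or OREST run per sample and sample every uncertain parameter jointly; the statistics on the output population are computed the same way.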

  16. 'BLOC' program for elasto-plastic calculation of fissured media

    International Nuclear Information System (INIS)

    Pouyet, P.; Picaut, J.; Costaz, J.L.; Dulac, J.

    1983-01-01

    The method described is used to test failure mechanisms and to calculate the corresponding ultimate loads. The main advantages it offers are simple modelling, the possibility of representing all the prestressing and reinforcement steels simply and correctly, and fewer degrees of freedom, hence lower cost (the program can be run on a microcomputer). However, the model is sensitive to the arrangement of the interface elements, presupposing a given failure mechanism. This normally means testing several different models with different kinematically possible failure patterns. But the ease of modelling and low costs are ideal for this type of approach. (orig./RW)

  17. BASIC Program for the calculation of radioactive activities

    International Nuclear Information System (INIS)

    Cortes P, A.; Tejera R, A.; Becerril V, A.

    1990-04-01

    When measuring radioactive activity with a detection system based on a gamma-radiation detector (Ge or NaI(Tl)), several parameters and correction factors must be taken into account. This makes the calculations difficult and tedious, consumes a considerable amount of the time of the person carrying out the measurements, and frequently leads to erroneous results. This work presents a computer program in BASIC that solves this problem. (Author)

  18. Tegen - an onedimensional program to calculate a thermoelectric generator

    International Nuclear Information System (INIS)

    Rosa, M.A.P.; Ferreira, P.A.; Castro Lobo, P.D. de.

    1990-01-01

    A computer program is presented for the solution of the one-dimensional, steady-state temperature equation in the arms of a thermoelectric generator. The discretized equations obtained through a finite-difference scheme are solved by Gaussian elimination. Because the temperature dependence of the coefficients makes the equations nonlinear, an iterative procedure is used to obtain the temperature distribution in the arms. These distributions are used in the calculation of the efficiency, electric power, load voltage and other parameters relevant to the design of a thermoelectric generator. (author)
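
    The finite-difference solve described above can be sketched as follows. This is an illustrative implementation of the Thomas algorithm (Gaussian elimination specialized to tridiagonal systems) applied to the linear, constant-coefficient case T'' = 0 with fixed end temperatures; it is not the TEGEN code itself, and the node count and temperatures are assumptions.

```python
def solve_tridiagonal(a, b, c, d):
    """Thomas algorithm: solve a tridiagonal system with sub-diagonal a,
    main diagonal b, super-diagonal c and right-hand side d."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):          # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):  # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def steady_temperature(t_hot, t_cold, n=11):
    """Discretize T'' = 0 on n nodes with fixed end temperatures;
    interior rows use the standard [1, -2, 1] stencil."""
    a = [0.0] + [1.0] * (n - 2) + [0.0]
    b = [1.0] + [-2.0] * (n - 2) + [1.0]
    c = [0.0] + [1.0] * (n - 2) + [0.0]
    d = [t_hot] + [0.0] * (n - 2) + [t_cold]
    return solve_tridiagonal(a, b, c, d)

profile = steady_temperature(500.0, 300.0)
```

    With constant coefficients no iteration is needed and the profile is exactly linear; the temperature-dependent case the abstract describes would wrap this solve in an outer iteration that updates the coefficients from the latest profile.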

  19. A microcomputer program for coupled cycle burnup calculations

    International Nuclear Information System (INIS)

    Driscoll, M.J.; Downar, T.J.; Taylor, E.L.

    1986-01-01

    A program, designated BRACC (Burnup, Reactivity, And Cycle Coupling), has been developed for fuel management scoping calculations, and coded in the BASIC language in an interactive format for use with microcomputers. BRACC estimates batch and cycle burnups for sequential reloads for a variety of initial core conditions, and permits the user to specify either reload batch properties (enrichment, burnable poison reactivity) or the target cycle burnup. Most important fuel management tactics (out-in or low-leakage loading, coastdown, variation in number of assemblies charged) can be simulated

  20. Parallelization for first principles electronic state calculation program

    International Nuclear Information System (INIS)

    Watanabe, Hiroshi; Oguchi, Tamio.

    1997-03-01

    In this report we study the parallelization of a first-principles electronic state calculation program. The target machines are the NEC SX-4 for shared-memory parallelization and the FUJITSU VPP300 for distributed-memory parallelization. The features of each parallel machine are surveyed, and parallelization methods suitable for each are proposed. It is shown that a 1.60-times speedup is achieved with 2-CPU parallelization on the SX-4 and a 4.97-times speedup with 12-PE parallelization on the VPP300. (author)

  1. Activity computer program for calculating ion irradiation activation

    Science.gov (United States)

    Palmer, Ben; Connolly, Brian; Read, Mark

    2017-07-01

    A computer program, Activity, was developed to predict the activity and gamma lines of materials irradiated with an ion beam. It uses the TENDL (Koning and Rochman, 2012) [1] proton reaction cross section database, the Stopping and Range of Ions in Matter (SRIM) (Biersack et al., 2010) code, a Nuclear Data Services (NDS) radioactive decay database (Sonzogni, 2006) [2] and an ENDF gamma decay database (Herman and Chadwick, 2006) [3]. An extended version of Bateman's equation is used to calculate the activity at time t, and this equation is solved analytically, with the option to also solve by numeric inverse Laplace Transform as a failsafe. The program outputs the expected activity and gamma lines of the activated material.
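
    For a flavour of the Bateman solution the program builds on, here is the textbook analytic result for a two-member chain parent -> daughter -> (stable); the Activity code handles extended chains and numeric inverse Laplace transforms, which this sketch does not, and the half-lives below are illustrative.

```python
import math

def bateman_daughter(n1_0, lam1, lam2, t):
    """Analytic Bateman solution for the daughter population N2(t) in a
    two-member chain, N2(t) = N1(0) * lam1/(lam2-lam1) * (e^-lam1*t - e^-lam2*t),
    valid for lam1 != lam2."""
    return n1_0 * lam1 / (lam2 - lam1) * (math.exp(-lam1 * t)
                                          - math.exp(-lam2 * t))

# Illustrative chain: parent half-life 10 h, daughter half-life 1 h
lam1 = math.log(2) / 10.0   # per hour
lam2 = math.log(2) / 1.0    # per hour
n2 = bateman_daughter(1.0e6, lam1, lam2, t=5.0)
```

    The daughter activity follows as lam2 * N2(t); the extended form used by the program generalizes this sum over all ancestors in each chain.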

  2. Evaluation of analytical results on DOE Quality Assessment Program Samples

    International Nuclear Information System (INIS)

    Jaquish, R.E.; Kinnison, R.R.; Mathur, S.P.; Sastry, R.

    1985-01-01

    Criteria were developed for evaluating the participants analytical results in the DOE Quality Assessment Program (QAP). Historical data from previous QAP studies were analyzed using descriptive statistical methods to determine the interlaboratory precision that had been attained. Performance criteria used in other similar programs were also reviewed. Using these data, precision values and control limits were recommended for each type of analysis performed in the QA program. Results of the analysis performed by the QAP participants on the November 1983 samples were statistically analyzed and evaluated. The Environmental Measurements Laboratory (EML) values were used as the known values and 3-sigma precision values were used as control limits. Results were submitted by 26 participating laboratories for 49 different radionuclide media combinations. The participants reported 419 results and of these, 350 or 84% were within control limits. Special attention was given to the data from gamma spectral analysis of air filters and water samples. both normal probability and box plots were prepared for each nuclide to help evaluate the distribution of the data. Results that were outside the expected range were identified and suggestions made that laboratories check calculations, and procedures on these results

  3. An application program for fission product decay heat calculations

    International Nuclear Information System (INIS)

    Pham, Ngoc Son; Katakura, Jun-ichi

    2007-10-01

    The precise knowledge of decay heat is one of the most important factors in the safety design and operation of nuclear power facilities. Decay heat data also play an important role in the design of fuel discharges, fuel storage and transport flasks, and in spent fuel management and processing. In this study, a new application program, called DHP (Decay Heat Power program), has been developed for exact decay heat summation calculations, uncertainty analysis, and determination of the individual contribution of each fission product. The analytical methods were applied in the program without any simplification or approximation: all linear and non-linear decay chains and 12 decay modes, including ground and meta-stable states, are automatically identified and processed using a decay data library and a fission yield data file, both in ENDF/B-VI format. The window interface of the program is designed with options that make the code easy for users to run. (author)
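
    The summation the abstract refers to has the standard form P(t) = sum_i lambda_i * E_i * N_i(t). A minimal sketch for independent (non-chained) nuclides, with an entirely hypothetical two-nuclide inventory, might look like this; a real summation code couples the N_i(t) through the decay chains, as DHP does.

```python
import math

def decay_heat(inventory, t):
    """Decay heat P(t) = sum_i lambda_i * N_i(0) * exp(-lambda_i*t) * E_i,
    for independent (non-chained) nuclides.
    inventory: list of (N0_atoms, half_life_s, energy_per_decay_MeV)."""
    mev_per_s_to_watt = 1.602e-13  # 1 MeV/s expressed in watts
    power = 0.0
    for n0, t_half, e_mev in inventory:
        lam = math.log(2) / t_half
        power += lam * n0 * math.exp(-lam * t) * e_mev * mev_per_s_to_watt
    return power

# Hypothetical inventory: (atoms, half-life in s, MeV per decay)
inv = [(1.0e20, 3600.0, 1.0), (5.0e19, 86400.0, 0.5)]
p0 = decay_heat(inv, 0.0)      # power at shutdown
p1 = decay_heat(inv, 3600.0)   # power one hour later
```
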

  4. An improved correlated sampling method for calculating correction factor of detector

    International Nuclear Information System (INIS)

    Wu Zhen; Li Junli; Cheng Jianping

    2006-01-01

    In the case of a small detector lying inside a bulk medium, there are two problems in calculating the correction factors of the detector: first, the detector is too small for particles to arrive at and collide in; second, the ratio of the two quantities is not accurate enough. The method discussed in this paper, which combines correlated sampling with modified particle-collision auto-importance sampling and has been implemented on the MCNP-4C platform, can solve both problems. In addition, three other variance-reduction techniques are each combined with correlated sampling to calculate a simple model of the detector correction factors. The results prove that, although all the variance-reduction techniques combined with correlated sampling improve the calculating efficiency, the combination of modified particle-collision auto-importance sampling with correlated sampling is the most efficient one. (authors)
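
    The core idea of correlated sampling, reusing the same random numbers for the perturbed and unperturbed cases so that their ratio has low variance, can be shown with a toy slab-transmission problem. This is an assumption-laden sketch, not the MCNP-4C implementation: the geometry, thicknesses and batch sizes are all illustrative.

```python
import math
import random
import statistics

def transmission(thickness, u):
    """One-sample transmission indicator: 1 if the optical path length
    sampled from the uniform number u exceeds the slab thickness."""
    return 1.0 if -math.log(1.0 - u) > thickness else 0.0

def ratio_estimates(n_batches=200, n_per_batch=500, correlated=True, seed=1):
    """Batch estimates of the ratio R = T(1.1)/T(1.0). Correlated
    sampling reuses the same random number for both thicknesses."""
    rng = random.Random(seed)
    ratios = []
    for _ in range(n_batches):
        t1 = t2 = 0.0
        for _ in range(n_per_batch):
            u1 = rng.random()
            u2 = u1 if correlated else rng.random()
            t1 += transmission(1.0, u1)
            t2 += transmission(1.1, u2)
        ratios.append(t2 / t1)
    return ratios

corr = ratio_estimates(correlated=True)
indep = ratio_estimates(correlated=False)
```

    Because a particle transmitted through the thicker slab is always transmitted through the thinner one when the same random number is used, the two tallies are strongly correlated and the spread of the correlated ratio estimates is markedly smaller than the independent one.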

  5. 50 CFR 222.404 - Observer program sampling.

    Science.gov (United States)

    2010-10-01

    ... 50 Wildlife and Fisheries 7 2010-10-01 2010-10-01 false Observer program sampling. 222.404 Section 222.404 Wildlife and Fisheries NATIONAL MARINE FISHERIES SERVICE, NATIONAL OCEANIC AND ATMOSPHERIC... Requirement § 222.404 Observer program sampling. (a) During the program design, NMFS would be guided by the...

  6. Caution regarding the choice of standard deviations to guide sample size calculations in clinical trials.

    Science.gov (United States)

    Chen, Henian; Zhang, Nanhua; Lu, Xiaosun; Chen, Sophie

    2013-08-01

    The method used to determine choice of standard deviation (SD) is inadequately reported in clinical trials. Underestimations of the population SD may result in underpowered clinical trials. This study demonstrates how using the wrong method to determine population SD can lead to inaccurate sample sizes and underpowered studies, and offers recommendations to maximize the likelihood of achieving adequate statistical power. We review the practice of reporting sample size and its effect on the power of trials published in major journals. Simulated clinical trials were used to compare the effects of different methods of determining SD on power and sample size calculations. Prior to 1996, sample size calculations were reported in just 1%-42% of clinical trials. This proportion increased from 38% to 54% after the initial Consolidated Standards of Reporting Trials (CONSORT) was published in 1996, and from 64% to 95% after the revised CONSORT was published in 2001. Nevertheless, underpowered clinical trials are still common. Our simulated data showed that all minimal and 25th-percentile SDs fell below 44 (the population SD), regardless of sample size (from 5 to 50). For sample sizes 5 and 50, the minimum sample SDs underestimated the population SD by 90.7% and 29.3%, respectively. If only one sample was available, there was less than 50% chance that the actual power equaled or exceeded the planned power of 80% for detecting a median effect size (Cohen's d = 0.5) when using the sample SD to calculate the sample size. The proportions of studies with actual power of at least 80% were about 95%, 90%, 85%, and 80% when we used the larger SD, 80% upper confidence limit (UCL) of SD, 70% UCL of SD, and 60% UCL of SD to calculate the sample size, respectively. When more than one sample was available, the weighted average SD resulted in about 50% of trials being underpowered; the proportion of trials with power of 80% increased from 90% to 100% when the 75th percentile and the
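
    The underpowering effect described above is easy to reproduce with the standard two-sample normal-approximation formula n = 2 * ((z_{1-alpha/2} + z_{1-beta}) * SD / delta)^2 per group. The numbers below are illustrative, using the paper's population SD of 44 and an assumed detectable difference of 22 (Cohen's d = 0.5).

```python
import math

def sample_size_two_group(sd, delta, z_alpha=1.959964, z_beta=0.841621):
    """Per-group n for a two-sample comparison of means (normal
    approximation): n = 2 * ((z_alpha + z_beta) * sd / delta)^2.
    Defaults: two-sided alpha = 0.05, power = 0.80."""
    return math.ceil(2.0 * ((z_alpha + z_beta) * sd / delta) ** 2)

# Planning with the true population SD of 44 versus an underestimated
# pilot-sample SD of 30 (hypothetical value):
n_true = sample_size_two_group(sd=44.0, delta=22.0)
n_under = sample_size_two_group(sd=30.0, delta=22.0)
```

    Planning with the underestimated SD roughly halves the sample size, which is exactly the mechanism by which trials end up underpowered; using a larger SD or an upper confidence limit of the SD inflates n in the conservative direction.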

  7. RAFT: a computer program for fault tree risk calculations

    International Nuclear Information System (INIS)

    Seybold, G.D.

    1977-11-01

    A description and user instructions are presented for RAFT, a FORTRAN computer code for calculation of a risk measure for fault tree cut sets. RAFT calculates release quantities and a risk measure based on the product of probability and release quantity for cut sets of fault trees modeling the accidental release of radioactive material from a nuclear fuel cycle facility. Cut sets and their probabilities are supplied as input to RAFT from an external fault tree analysis code. Using the total inventory available of radioactive material, along with release fractions for each event in a cut set, the release terms are calculated for each cut set. Each release term is multiplied by the cut set probability to yield the cut set risk measure. RAFT orders the dominant cut sets on the risk measure. The total risk measure of processed cut sets and their fractional contributions are supplied as output. Input options are available to eliminate redundant cut sets, apply threshold values on cut set probability and risk, and control the total number of cut sets output. Hash addressing is used to remove redundant cut sets from the analysis. Computer hardware and software restrictions are given along with a sample problem and cross-reference table of the code. Except for the use of file management utilities, RAFT is written exclusively in FORTRAN and is operational on a Control Data CYBER 74-18-series computer system. 4 figures
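
    The risk measure RAFT computes can be sketched as follows. This is a simplified illustration of the probability-times-release product and the ordering step, with made-up cut sets and inventory; it is not the RAFT code, and the event names are hypothetical.

```python
def cut_set_risk(inventory, cut_sets):
    """For each cut set: release = inventory * product of the release
    fractions of its events; risk = probability * release. Returns cut
    sets ordered by descending risk with their fractional contribution
    to the total risk measure.
    cut_sets: list of (name, probability, [release fractions])."""
    scored = []
    for name, prob, fractions in cut_sets:
        release = inventory
        for f in fractions:
            release *= f
        scored.append((name, prob * release))
    total = sum(risk for _, risk in scored)
    scored.sort(key=lambda item: item[1], reverse=True)
    return [(name, risk, risk / total) for name, risk in scored]

ranked = cut_set_risk(
    inventory=1000.0,  # material available for release (illustrative units)
    cut_sets=[("CS1", 1e-4, [0.5, 0.1]),   # two events, fractions 0.5 and 0.1
              ("CS2", 1e-3, [0.05])],      # one event, fraction 0.05
)
```
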

  8. Determination of the burn-up of TRIGA fuel elements by calculation with new TRIGLAV program

    International Nuclear Information System (INIS)

    Zagar, T.; Ravnik, M.

    1996-01-01

    The results of fuel element burn-up calculations with the new TRIGLAV program are presented. The TRIGLAV program uses a two-dimensional model; its results are compared with those calculated by the earlier program, which uses a one-dimensional model. The results of fuel element burn-up measurements with the reactivity method are also presented and compared with the calculated results. (author)

  9. Sample size calculations for cluster randomised crossover trials in Australian and New Zealand intensive care research.

    Science.gov (United States)

    Arnup, Sarah J; McKenzie, Joanne E; Pilcher, David; Bellomo, Rinaldo; Forbes, Andrew B

    2018-06-01

    The cluster randomised crossover (CRXO) design provides an opportunity to conduct randomised controlled trials to evaluate low risk interventions in the intensive care setting. Our aim is to provide a tutorial on how to perform a sample size calculation for a CRXO trial, focusing on the meaning of the elements required for the calculations, with application to intensive care trials. We use all-cause in-hospital mortality from the Australian and New Zealand Intensive Care Society Adult Patient Database clinical registry to illustrate the sample size calculations. We show sample size calculations for a two-intervention, two 12-month period, cross-sectional CRXO trial. We provide the formulae, and examples of their use, to determine the number of intensive care units required to detect a risk ratio (RR) with a designated level of power between two interventions for trials in which the elements required for sample size calculations remain constant across all ICUs (unstratified design); and in which there are distinct groups (strata) of ICUs that differ importantly in the elements required for sample size calculations (stratified design). The CRXO design markedly reduces the sample size requirement compared with the parallel-group, cluster randomised design for the example cases. The stratified design further reduces the sample size requirement compared with the unstratified design. The CRXO design enables the evaluation of routinely used interventions that can bring about small, but important, improvements in patient care in the intensive care setting.
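
    One way to see the sample-size reduction is through design effects. The sketch below uses a commonly quoted design effect for a two-period cross-sectional cluster crossover design, 1 + (m-1)*rho - m*eta, where rho is the within-cluster within-period correlation and eta the within-cluster between-period correlation; this formulation and all numbers are assumptions for illustration, not the paper's exact formulas.

```python
def design_effect_parallel_crt(m, icc_within):
    """Standard design effect for a parallel-group cluster randomised
    trial with m observations per cluster."""
    return 1 + (m - 1) * icc_within

def design_effect_crxo(m, icc_within, icc_between):
    """Assumed design effect for a two-period cross-sectional cluster
    crossover trial; icc_between is the within-cluster between-period
    correlation (illustrative formulation)."""
    return 1 + (m - 1) * icc_within - m * icc_between

m = 100  # patients per ICU per period (illustrative)
de_crt = design_effect_parallel_crt(m, icc_within=0.05)
de_crxo = design_effect_crxo(m, icc_within=0.05, icc_between=0.04)
```

    Multiplying the individually randomised sample size by each design effect shows the point of the tutorial: because each cluster acts as its own control, a high between-period correlation sharply reduces the inflation relative to the parallel-group cluster design.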

  10. [Formal sample size calculation and its limited validity in animal studies of medical basic research].

    Science.gov (United States)

    Mayer, B; Muche, R

    2013-01-01

    Animal studies are highly relevant for basic medical research, although their usage is discussed controversially in public. Thus, an optimal sample size for these projects should be aimed at from a biometrical point of view. Statistical sample size calculation is usually the appropriate methodology in planning medical research projects. However, required information is often not valid or only available during the course of an animal experiment. This article critically discusses the validity of formal sample size calculation for animal studies. Within the discussion, some requirements are formulated to fundamentally regulate the process of sample size determination for animal experiments.

  11. MPBEC, a Matlab Program for Biomolecular Electrostatic Calculations.

    Science.gov (United States)

    Vergara-Perez, Sandra; Marucho, Marcelo

    2016-01-01

    One of the most widely used and efficient approaches to computing electrostatic properties of biological systems is to numerically solve the Poisson-Boltzmann (PB) equation. There are several software packages available that solve the PB equation for molecules in aqueous electrolyte solutions. Most of these software packages are useful for scientists with specialized training and expertise in computational biophysics. However, the user is usually required to make several important choices manually, depending on the complexity of the biological system, to successfully obtain the numerical solution of the PB equation. This may become an obstacle for researchers, experimentalists, and even students with no special training in computational methodologies. Aiming to overcome this limitation, in this article we present MPBEC, a free, cross-platform, open-source software that provides non-experts in the field an easy and efficient way to perform biomolecular electrostatic calculations on single-processor computers. MPBEC is a Matlab script based on the Adaptive Poisson-Boltzmann Solver (APBS), one of the most popular approaches used to solve the PB equation. MPBEC does not require any user programming, text editing or extensive statistical skills, and comes with detailed user-guide documentation. As a unique feature, MPBEC includes a useful graphical user interface (GUI) application which helps and guides users to configure and set up the optimal parameters and approximations to successfully perform the required biomolecular electrostatic calculations. The GUI also incorporates visualization tools to facilitate users' pre- and post-analysis of structural and electrical properties of biomolecules.

  13. Air and smear sample calculational tool for Fluor Hanford Radiological control

    International Nuclear Information System (INIS)

    BAUMANN, B.L.

    2003-01-01

    A spreadsheet calculation tool was developed to automate the calculations performed for determining the concentration of airborne radioactivity and for smear counting, as outlined in HNF-13536, Section 5.2.7, ''Analyzing Air and Smear Samples''. This document reports on the design and testing of the calculation tool. Radiological Control Technicians (RCTs) will save time and reduce handwritten and calculation errors by using an electronic form for documenting and calculating workplace air samples. Currently, an RCT collects an air-sample filter, or takes a smear for surface contamination, surveys the filter for gross alpha and beta/gamma radioactivity, and then determines the activity on the filter from the gross counts either by hand calculation or with a calculator. The electronic form allows the RCT, with a few keystrokes, to document the individual's name, payroll number, gross counts and instrument identifiers, and to produce an error-free record. This productivity gain is realized by the enhanced ability to perform mathematical calculations electronically (reducing errors) while at the same time documenting the air sample
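
    A generic version of the airborne-concentration arithmetic such a form automates might look like the following. The formula structure (net counts, counting efficiency, 2.22e6 dpm per microcurie, sampled air volume) and all numbers are assumptions for illustration, not the HNF-13536 procedure itself.

```python
def airborne_concentration_uci_per_ml(gross_cpm, bkg_cpm, eff,
                                      flow_lpm, minutes):
    """Assumed generic airborne-activity calculation: net count rate ->
    disintegrations per minute -> microcuries, divided by the sampled
    air volume in millilitres."""
    net_dpm = (gross_cpm - bkg_cpm) / eff   # correct for counting efficiency
    activity_uci = net_dpm / 2.22e6         # 2.22e6 dpm per microcurie
    volume_ml = flow_lpm * minutes * 1000.0 # litres sampled -> mL
    return activity_uci / volume_ml

# Illustrative inputs: 250 cpm gross, 50 cpm background, 25% efficiency,
# 56.6 L/min (2 cfm) sampler run for 10 minutes
c = airborne_concentration_uci_per_ml(
    gross_cpm=250.0, bkg_cpm=50.0, eff=0.25, flow_lpm=56.6, minutes=10.0)
```
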

  14. 40 CFR 600.211-08 - Sample calculation of fuel economy values for labeling.

    Science.gov (United States)

    2010-07-01

    ... AGENCY (CONTINUED) ENERGY POLICY FUEL ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF MOTOR VEHICLES Fuel Economy Regulations for 1977 and Later Model Year Automobiles-Procedures for Calculating Fuel Economy... 40 Protection of Environment 29 2010-07-01 2010-07-01 false Sample calculation of fuel economy...

  15. Calculation of sample problems related to two-phase flow blowdown transients in pressure relief piping of a PWR pressurizer

    International Nuclear Information System (INIS)

    Shin, Y.W.; Wiedermann, A.H.

    1984-02-01

    A method was published, based on the integral method of characteristics, by which the junction and boundary conditions needed in the computation of a flow in a piping network can be accurately formulated. This formulation of the junction and boundary conditions, together with the two-step Lax-Wendroff scheme, is used in a computer program; the program, in turn, is used here to calculate sample problems related to the blowdown transient of a two-phase flow in the piping network downstream of a PWR pressurizer. Independent, nearly exact analytical solutions are also obtained for the sample problems. Comparison of the results obtained by the hybrid numerical technique with the analytical solutions showed generally good agreement. The good numerical accuracy shown by the results suggests that the hybrid numerical technique is suitable for both benchmark and design calculations of PWR pressurizer blowdown transients

  16. Gamma self-shielding correction factors calculation for aqueous bulk sample analysis by PGNAA technique

    International Nuclear Information System (INIS)

    Nasrabadi, M.N.; Mohammadi, A.; Jalali, M.

    2009-01-01

    In this paper bulk sample prompt gamma neutron activation analysis (BSPGNAA) was applied to aqueous sample analysis using a relative method. For elemental analysis of an unknown bulk sample, gamma self-shielding coefficient was required. Gamma self-shielding coefficient of unknown samples was estimated by an experimental method and also by MCNP code calculation. The proposed methodology can be used for the determination of the elemental concentration of unknown aqueous samples by BSPGNAA where knowledge of the gamma self-shielding within the sample volume is required.
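
    For a slab geometry there is a standard analytic self-absorption factor, f = (1 - e^(-mu*t)) / (mu*t), which can serve as a sanity check on an experimentally or MCNP-derived self-shielding coefficient. This is an illustrative textbook approximation, not the method of the paper, and the attenuation coefficient and thickness below are assumed values.

```python
import math

def slab_self_shielding(mu_cm, thickness_cm):
    """Analytic self-absorption factor for a uniformly distributed
    source in a slab: f = (1 - exp(-mu*t)) / (mu*t).
    f -> 1 as the optical thickness mu*t -> 0 (no self-shielding)."""
    x = mu_cm * thickness_cm
    if x < 1e-9:
        return 1.0  # optically thin limit
    return (1.0 - math.exp(-x)) / x

# Illustrative: mu = 0.1 /cm (gamma attenuation in the sample), t = 5 cm
f = slab_self_shielding(mu_cm=0.1, thickness_cm=5.0)
```
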

  17. Quantification of errors in ordinal outcome scales using shannon entropy: effect on sample size calculations.

    Science.gov (United States)

    Mandava, Pitchaiah; Krumpelman, Chase S; Shah, Jharna N; White, Donna L; Kent, Thomas A

    2013-01-01

    Clinical trial outcomes often involve an ordinal scale of subjective functional assessments but the optimal way to quantify results is not clear. In stroke, the most commonly used scale, the modified Rankin Score (mRS), a range of scores ("Shift") is proposed as superior to dichotomization because of greater information transfer. The influence of known uncertainties in mRS assessment has not been quantified. We hypothesized that errors caused by uncertainties could be quantified by applying information theory. Using Shannon's model, we quantified errors of the "Shift" compared to dichotomized outcomes using published distributions of mRS uncertainties and applied this model to clinical trials. We identified 35 randomized stroke trials that met inclusion criteria. Each trial's mRS distribution was multiplied with the noise distribution from published mRS inter-rater variability to generate an error percentage for "shift" and dichotomized cut-points. For the SAINT I neuroprotectant trial, considered positive by "shift" mRS while the larger follow-up SAINT II trial was negative, we recalculated sample size required if classification uncertainty was taken into account. Considering the full mRS range, error rate was 26.1%±5.31 (Mean±SD). Error rates were lower for all dichotomizations tested using cut-points (e.g. mRS 1; 6.8%±2.89; overall pdecrease in reliability. The resultant errors need to be considered since sample size may otherwise be underestimated. In principle, we have outlined an approach to error estimation for any condition in which there are uncertainties in outcome assessment. We provide the user with programs to calculate and incorporate errors into sample size estimation.

  18. CREST : a computer program for the calculation of composition dependent self-shielded cross-sections

    International Nuclear Information System (INIS)

    Kapil, S.K.

    1977-01-01

    A computer program, CREST, for the calculation of composition- and temperature-dependent self-shielded cross-sections using the shielding-factor approach is described. The code includes the editing and formation of the data library, calculation of the effective shielding factors and cross-sections, and a fundamental-mode calculation to generate the neutron spectrum of the system, which is further used to calculate the effective elastic removal cross-sections. Studies to explore the sensitivity of reactor parameters to changes in group cross-sections can also be carried out using the facility available in the code to temporarily change the desired constants. The final self-shielded and transport-corrected group cross-sections can be dumped on cards or magnetic tape in a form suitable for direct use in a transport or diffusion theory code for detailed reactor calculations. The program is written in FORTRAN and can be accommodated in a computer with 32 K words of memory. The input preparation details, sample problem and the listing of the program are given. (author)

  19. Fission product inventory calculation by a CASMO/ORIGEN coupling program

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Do Heon; Kim, Jong Kyung [Hanyang University, Seoul (Korea, Republic of); Choi, Hang Bok; Roh, Gyu Hong; Jung, In Ha [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1997-12-31

    A CASMO/ORIGEN coupling utility program was developed to predict the composition of all the fission products in spent PWR fuels. The coupling program reads the CASMO output file, modifies the ORIGEN cross section library and reconstructs the ORIGEN input file at each depletion step. In ORIGEN, the burnup equation is solved for actinides and fission products based on the fission reaction rates and depletion flux of CASMO. A sample calculation has been performed using a 14 x 14 PWR fuel assembly and the results are given in this paper. 3 refs., 1 fig., 1 tab. (Author)

  1. Sample results from the interim salt disposition program macrobatch 9 tank 21H qualification samples

    Energy Technology Data Exchange (ETDEWEB)

    Peters, T. B. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2015-11-01

    Savannah River National Laboratory (SRNL) analyzed samples from Tank 21H in support of qualification of Macrobatch (Salt Batch) 9 for the Interim Salt Disposition Program (ISDP). This document reports characterization data on the samples of Tank 21H.

  2. Calculating Confidence, Uncertainty, and Numbers of Samples When Using Statistical Sampling Approaches to Characterize and Clear Contaminated Areas

    Energy Technology Data Exchange (ETDEWEB)

    Piepel, Gregory F.; Matzke, Brett D.; Sego, Landon H.; Amidan, Brett G.

    2013-04-27

    This report discusses the methodology, formulas, and inputs needed to make characterization and clearance decisions for Bacillus anthracis-contaminated and uncontaminated (or decontaminated) areas using a statistical sampling approach. Specifically, the report includes the methods and formulas for calculating the
    • number of samples required to achieve a specified confidence in characterization and clearance decisions
    • confidence in making characterization and clearance decisions for a specified number of samples
    for two common statistically based environmental sampling approaches. In particular, the report addresses an issue raised by the Government Accountability Office by providing methods and formulas to calculate the confidence that a decision area is uncontaminated (or successfully decontaminated) if all samples collected according to a statistical sampling approach have negative results. Key to addressing this topic is the probability that an individual sample result is a false negative, which is commonly referred to as the false negative rate (FNR). The two statistical sampling approaches currently discussed in this report are 1) hotspot sampling to detect small isolated contaminated locations during the characterization phase, and 2) combined judgment and random (CJR) sampling during the clearance phase. Typically, if contamination is widely distributed in a decision area, it will be detectable via judgment sampling during the characterization phase. Hotspot sampling is appropriate for characterization situations where contamination is not widely distributed and may not be detected by judgment sampling. CJR sampling is appropriate during the clearance phase when it is desired to augment judgment samples with statistical (random) samples. The hotspot and CJR statistical sampling approaches are discussed in the report for four situations: 1. qualitative data (detect and non-detect) when the FNR = 0 or when using statistical sampling methods that account
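
    The confidence formula for all-negative results can be illustrated with a simple binomial model. The functions and the independence assumption below are a hypothetical sketch, not the report's exact hotspot or CJR formulas.

```python
import math

def samples_required(confidence, hotspot_fraction, fnr=0.0):
    """Number of random samples, all negative, needed to claim with the given
    confidence that no hotspot of the given areal fraction exists.  Simplified
    model: each sample independently lands in the hotspot with probability
    hotspot_fraction and, if it does, detects contamination with
    probability (1 - fnr)."""
    p_detect = hotspot_fraction * (1.0 - fnr)
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - p_detect))

def confidence_achieved(n, hotspot_fraction, fnr=0.0):
    """Confidence from n all-negative samples under the same model."""
    p_detect = hotspot_fraction * (1.0 - fnr)
    return 1.0 - (1.0 - p_detect) ** n

n = samples_required(0.95, 0.01)               # FNR = 0, hotspot covers 1% of area
print(n, confidence_achieved(n, 0.01))
print(samples_required(0.95, 0.01, fnr=0.5))   # a nonzero FNR raises the sample count
```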

  3. Effects of sample size on estimates of population growth rates calculated with matrix models.

    Directory of Open Access Journals (Sweden)

    Ian J Fiske

    Full Text Available BACKGROUND: Matrix models are widely used to study the dynamics and demography of populations. An important but overlooked issue is how the number of individuals sampled influences estimates of the population growth rate (lambda) calculated with matrix models. Even unbiased estimates of vital rates do not ensure unbiased estimates of lambda; Jensen's Inequality implies that even when the estimates of the vital rates are accurate, small sample sizes lead to biased estimates of lambda due to increased sampling variance. We investigated if sampling variability and the distribution of sampling effort among size classes lead to biases in estimates of lambda. METHODOLOGY/PRINCIPAL FINDINGS: Using data from a long-term field study of plant demography, we simulated the effects of sampling variance by drawing vital rates and calculating lambda for increasingly larger populations drawn from a total population of 3842 plants. We then compared these estimates of lambda with those based on the entire population and calculated the resulting bias. Finally, we conducted a review of the literature to determine the sample sizes typically used when parameterizing matrix models used to study plant demography. CONCLUSIONS/SIGNIFICANCE: We found significant bias at small sample sizes when survival was low (survival = 0.5), and that sampling with a more-realistic inverse J-shaped population structure exacerbated this bias. However, our simulations also demonstrate that these biases rapidly become negligible with increasing sample sizes or as survival increases. For many of the sample sizes used in demographic studies, matrix models are probably robust to the biases resulting from sampling variance of vital rates. However, this conclusion may depend on the structure of populations or the distribution of sampling effort in ways that are unexplored. We suggest more intensive sampling of populations when individual survival is low and greater sampling of stages with high elasticities.

  4. Effects of sample size on estimates of population growth rates calculated with matrix models.

    Science.gov (United States)

    Fiske, Ian J; Bruna, Emilio M; Bolker, Benjamin M

    2008-08-28

    Matrix models are widely used to study the dynamics and demography of populations. An important but overlooked issue is how the number of individuals sampled influences estimates of the population growth rate (lambda) calculated with matrix models. Even unbiased estimates of vital rates do not ensure unbiased estimates of lambda; Jensen's Inequality implies that even when the estimates of the vital rates are accurate, small sample sizes lead to biased estimates of lambda due to increased sampling variance. We investigated if sampling variability and the distribution of sampling effort among size classes lead to biases in estimates of lambda. Using data from a long-term field study of plant demography, we simulated the effects of sampling variance by drawing vital rates and calculating lambda for increasingly larger populations drawn from a total population of 3842 plants. We then compared these estimates of lambda with those based on the entire population and calculated the resulting bias. Finally, we conducted a review of the literature to determine the sample sizes typically used when parameterizing matrix models used to study plant demography. We found significant bias at small sample sizes when survival was low (survival = 0.5), and that sampling with a more-realistic inverse J-shaped population structure exacerbated this bias. However, our simulations also demonstrate that these biases rapidly become negligible with increasing sample sizes or as survival increases. For many of the sample sizes used in demographic studies, matrix models are probably robust to the biases resulting from sampling variance of vital rates. However, this conclusion may depend on the structure of populations or the distribution of sampling effort in ways that are unexplored. We suggest more intensive sampling of populations when individual survival is low and greater sampling of stages with high elasticities.
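
    The simulation idea, that vital rates estimated from n individuals yield a biased mean lambda at small n, can be sketched with a toy two-stage model. The matrix structure and rates below are hypothetical, not the study's demographic data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-stage model: juvenile survival s1, adult survival s2,
# adult fecundity f.  Values are illustrative only.
s1_true, s2_true, f = 0.5, 0.5, 1.2

def projection_matrix(s1, s2):
    return np.array([[0.0, f],
                     [s1,  s2]])

def dominant_eigenvalue(A):
    # Population growth rate lambda = dominant eigenvalue of the matrix.
    return max(np.linalg.eigvals(A).real)

lam_true = dominant_eigenvalue(projection_matrix(s1_true, s2_true))

def mean_lambda(n, trials=20000):
    # Average lambda when each survival rate is estimated from n individuals.
    lams = []
    for _ in range(trials):
        s1 = rng.binomial(n, s1_true) / n   # binomial sampling of vital rates
        s2 = rng.binomial(n, s2_true) / n
        lams.append(dominant_eigenvalue(projection_matrix(s1, s2)))
    return np.mean(lams)

bias_small = mean_lambda(10) - lam_true    # small samples: noticeable bias
bias_large = mean_lambda(500) - lam_true   # large samples: bias shrinks
print(bias_small, bias_large)
```

    Even though each sampled survival rate is an unbiased estimate, the nonlinearity of lambda in the vital rates (Jensen's Inequality) makes the small-sample mean of lambda drift away from the true value.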

  5. Failure Probability Calculation Method Using Kriging Metamodel-based Importance Sampling Method

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Seunggyu [Korea Aerospace Research Institute, Daejeon (Korea, Republic of); Kim, Jae Hoon [Chungnam Nat’l Univ., Daejeon (Korea, Republic of)

    2017-05-15

    The kernel density was determined based on sampling points obtained in a Markov chain simulation and was assumed to be an important sampling function. A Kriging metamodel was constructed in more detail in the vicinity of a limit state. The failure probability was calculated based on importance sampling, which was performed for the Kriging metamodel. A pre-existing method was modified to obtain more sampling points for a kernel density in the vicinity of a limit state. A stable numerical method was proposed to find a parameter of the kernel density. To assess the completeness of the Kriging metamodel, the possibility of changes in the calculated failure probability due to the uncertainty of the Kriging metamodel was calculated.
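
    The importance-sampling estimator at the core of the method can be sketched on an analytic limit state. The Kriging metamodel and Markov-chain kernel density construction are omitted here; the analytic limit state and the Gaussian instrumental density are stand-in assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Analytic limit state standing in for the Kriging metamodel:
# failure when g(x) <= 0, here g(x) = beta - x with x ~ N(0, 1),
# so the exact failure probability is Phi(-beta).
beta = 3.0
g = lambda x: beta - x

# Instrumental (importance) density centered on the limit state x = beta,
# a stand-in for the kernel density built from Markov-chain samples.
h = stats.norm(loc=beta, scale=1.0)
f = stats.norm(loc=0.0, scale=1.0)   # original density of x

n = 20000
x = h.rvs(n, random_state=rng)
weights = f.pdf(x) / h.pdf(x)              # likelihood ratio f/h
pf = np.mean((g(x) <= 0.0) * weights)      # importance-sampling estimator

print(pf, stats.norm.sf(beta))             # estimate vs exact Phi(-beta)
```

    Centering the instrumental density near the limit state is what lets a modest sample size resolve a failure probability of order 1e-3, which crude Monte Carlo would estimate very poorly.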

  6. SAMPL5: 3D-RISM partition coefficient calculations with partial molar volume corrections and solute conformational sampling.

    Science.gov (United States)

    Luchko, Tyler; Blinov, Nikolay; Limon, Garrett C; Joyce, Kevin P; Kovalenko, Andriy

    2016-11-01

    Implicit solvent methods for classical molecular modeling are frequently used to provide fast, physics-based hydration free energies of macromolecules. Less commonly considered is the transferability of these methods to other solvents. The Statistical Assessment of Modeling of Proteins and Ligands 5 (SAMPL5) distribution coefficient dataset and the accompanying explicit solvent partition coefficient reference calculations provide a direct test of solvent model transferability. Here we use the 3D reference interaction site model (3D-RISM) statistical-mechanical solvation theory, with a well tested water model and a new united atom cyclohexane model, to calculate partition coefficients for the SAMPL5 dataset. The cyclohexane model performed well in training and testing (R=0.98 for amino acid neutral side chain analogues) but only if a parameterized solvation free energy correction was used. In contrast, the same protocol, using single solute conformations, performed poorly on the SAMPL5 dataset, obtaining R=0.73 compared to the reference partition coefficients, likely due to the much larger solute sizes. Including solute conformational sampling through molecular dynamics coupled with 3D-RISM (MD/3D-RISM) improved agreement with the reference calculation to R=0.93. Since our initial calculations only considered partition coefficients and not distribution coefficients, solute sampling provided little benefit comparing against experiment, where ionized and tautomer states are more important. Applying a simple pK_a correction improved agreement with experiment from R=0.54 to R=0.66, despite a small number of outliers. Better agreement is possible by accounting for tautomers and improving the ionization correction.

  7. SAMPL5: 3D-RISM partition coefficient calculations with partial molar volume corrections and solute conformational sampling

    Science.gov (United States)

    Luchko, Tyler; Blinov, Nikolay; Limon, Garrett C.; Joyce, Kevin P.; Kovalenko, Andriy

    2016-11-01

    Implicit solvent methods for classical molecular modeling are frequently used to provide fast, physics-based hydration free energies of macromolecules. Less commonly considered is the transferability of these methods to other solvents. The Statistical Assessment of Modeling of Proteins and Ligands 5 (SAMPL5) distribution coefficient dataset and the accompanying explicit solvent partition coefficient reference calculations provide a direct test of solvent model transferability. Here we use the 3D reference interaction site model (3D-RISM) statistical-mechanical solvation theory, with a well tested water model and a new united atom cyclohexane model, to calculate partition coefficients for the SAMPL5 dataset. The cyclohexane model performed well in training and testing (R=0.98 for amino acid neutral side chain analogues) but only if a parameterized solvation free energy correction was used. In contrast, the same protocol, using single solute conformations, performed poorly on the SAMPL5 dataset, obtaining R=0.73 compared to the reference partition coefficients, likely due to the much larger solute sizes. Including solute conformational sampling through molecular dynamics coupled with 3D-RISM (MD/3D-RISM) improved agreement with the reference calculation to R=0.93. Since our initial calculations only considered partition coefficients and not distribution coefficients, solute sampling provided little benefit comparing against experiment, where ionized and tautomer states are more important. Applying a simple pK_a correction improved agreement with experiment from R=0.54 to R=0.66, despite a small number of outliers. Better agreement is possible by accounting for tautomers and improving the ionization correction.

  8. Time-dependent importance sampling in semiclassical initial value representation calculations for time correlation functions.

    Science.gov (United States)

    Tao, Guohua; Miller, William H

    2011-07-14

    An efficient time-dependent importance sampling method is developed for the Monte Carlo calculation of time correlation functions via the initial value representation (IVR) of semiclassical (SC) theory. A prefactor-free time-dependent sampling function weights the importance of a trajectory based on the magnitude of its contribution to the time correlation function, and global trial moves are used to facilitate efficient sampling of the phase space of initial conditions. The method can be generally applied to sampling rare events efficiently while avoiding being trapped in a local region of the phase space. Results presented in the paper for two system-bath models demonstrate the efficiency of this new importance sampling method for full SC-IVR calculations.

  9. Enhanced Sampling in Free Energy Calculations: Combining SGLD with the Bennett's Acceptance Ratio and Enveloping Distribution Sampling Methods.

    Science.gov (United States)

    König, Gerhard; Miller, Benjamin T; Boresch, Stefan; Wu, Xiongwu; Brooks, Bernard R

    2012-10-09

    One of the key requirements for the accurate calculation of free energy differences is proper sampling of conformational space. Especially in biological applications, molecular dynamics simulations are often confronted with rugged energy surfaces and high energy barriers, leading to insufficient sampling and, in turn, poor convergence of the free energy results. In this work, we address this problem by employing enhanced sampling methods. We explore the possibility of using self-guided Langevin dynamics (SGLD) to speed up the exploration process in free energy simulations. To obtain improved free energy differences from such simulations, it is necessary to account for the effects of the bias due to the guiding forces. We demonstrate how this can be accomplished for the Bennett's acceptance ratio (BAR) and the enveloping distribution sampling (EDS) methods. While BAR is considered among the most efficient methods available for free energy calculations, the EDS method developed by Christ and van Gunsteren is a promising development that reduces the computational costs of free energy calculations by simulating a single reference state. To evaluate the accuracy of both approaches in connection with enhanced sampling, EDS was implemented in CHARMM. For testing, we employ benchmark systems with analytical reference results and the mutation of alanine to serine. We find that SGLD with reweighting can provide accurate results for BAR and EDS where conventional molecular dynamics simulations fail. In addition, we compare the performance of EDS with other free energy methods. We briefly discuss the implications of our results and provide practical guidelines for conducting free energy simulations with SGLD.
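
    The self-consistent BAR condition can be sketched on synthetic Gaussian work distributions that satisfy the Crooks relation. The distributions and reduced units below are assumptions for illustration, not output of SGLD or EDS simulations.

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(2)
beta = 1.0  # 1/kT in reduced units

# Synthetic Gaussian work distributions consistent with the Crooks relation:
# forward work ~ N(mu, s2) implies dF = mu - beta*s2/2 and
# reverse work ~ N(-mu + beta*s2, s2).  Here dF_exact = 3.0 by construction.
mu, s2 = 5.0, 4.0
dF_exact = mu - beta * s2 / 2.0
W_F = rng.normal(mu, np.sqrt(s2), 5000)
W_R = rng.normal(-mu + beta * s2, np.sqrt(s2), 5000)

def fermi(x):
    return 1.0 / (1.0 + np.exp(x))

def bar_residual(dF):
    # Self-consistent BAR condition for equal forward/reverse sample sizes:
    # sum_i f(beta*(W_F_i - dF)) - sum_j f(beta*(W_R_j + dF)) = 0.
    return fermi(beta * (W_F - dF)).sum() - fermi(beta * (W_R + dF)).sum()

dF_bar = brentq(bar_residual, dF_exact - 5.0, dF_exact + 5.0)
print(dF_bar)
```

    The residual is monotone in dF, so a bracketing root finder recovers the free energy difference; with SGLD-generated work values the same condition applies once the guiding-force bias has been reweighted out.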

  10. Sample Size Calculations for Population Size Estimation Studies Using Multiplier Methods With Respondent-Driven Sampling Surveys.

    Science.gov (United States)

    Fearon, Elizabeth; Chabata, Sungai T; Thompson, Jennifer A; Cowan, Frances M; Hargreaves, James R

    2017-09-14

    While guidance exists for obtaining population size estimates using multiplier methods with respondent-driven sampling surveys, we lack specific guidance for making sample size decisions. This study aims to guide the design of multiplier method population size estimation studies using respondent-driven sampling surveys so as to reduce the random error around the estimate obtained. The population size estimate is obtained by dividing the number of individuals receiving a service or the number of unique objects distributed (M) by the proportion of individuals in a representative survey who report receipt of the service or object (P). We have developed an approach to sample size calculation, interpreting methods to estimate the variance around estimates obtained using multiplier methods in conjunction with research into design effects and respondent-driven sampling. We describe an application to estimate the number of female sex workers in Harare, Zimbabwe. There is high variance in estimates. Random error around the size estimate reflects uncertainty from M and P, particularly when the estimate of P in the respondent-driven sampling survey is low. As expected, sample size requirements are higher when the design effect of the survey is assumed to be greater. We suggest a method for investigating the effects of sample size on the precision of a population size estimate obtained using multiplier methods and respondent-driven sampling. Uncertainty in the size estimate is high, particularly when P is small, so balancing against other potential sources of bias, we advise researchers to consider longer service attendance reference periods and to distribute more unique objects, which is likely to result in a higher estimate of P in the respondent-driven sampling survey. ©Elizabeth Fearon, Sungai T Chabata, Jennifer A Thompson, Frances M Cowan, James R Hargreaves. Originally published in JMIR Public Health and Surveillance (http://publichealth.jmir.org), 14.09.2017.
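
    The multiplier estimate N = M / P and a delta-method sample size rule can be sketched as follows. The design effect, precision target, and the numbers used are illustrative assumptions, not the formulas or data of the Harare study.

```python
import math

def population_size(M, P):
    """Multiplier-method point estimate: objects distributed (M) divided by
    the proportion of survey respondents reporting receipt (P)."""
    return M / P

def rds_sample_size(P, rel_margin, deff=2.0, z=1.96):
    """Survey size n so the 95% CI of the size estimate stays within
    +/- rel_margin (relative), by the delta method, which gives
    relative SE of M/P ~= sqrt(deff * (1 - P) / (n * P))."""
    return math.ceil(deff * z**2 * (1.0 - P) / (P * rel_margin**2))

# Illustrative numbers (not from the Harare study): 5000 unique objects
# distributed, 25% of survey respondents report receiving one.
print(population_size(5000, 0.25))
print(rds_sample_size(0.25, 0.2))   # required n at P = 0.25
print(rds_sample_size(0.10, 0.2))   # smaller P demands a larger survey
```

    The 1/P dependence in the formula is the abstract's point that uncertainty blows up when P is small, which is why distributing more unique objects (raising P) is advised.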

  11. Nomogram for sample size calculation on a straightforward basis for the kappa statistic.

    Science.gov (United States)

    Hong, Hyunsook; Choi, Yunhee; Hahn, Seokyung; Park, Sue Kyung; Park, Byung-Joo

    2014-09-01

    Kappa is a widely used measure of agreement. However, it may not be straightforward in some situations, such as sample size calculation, due to the kappa paradox: high agreement but low kappa. Hence, it seems reasonable in sample size calculation that the level of agreement under a certain marginal prevalence be considered in terms of a simple proportion of agreement rather than a kappa value. Therefore, sample size formulae and nomograms using a simple proportion of agreement rather than kappa under certain marginal prevalences are proposed. A sample size formula was derived using the kappa statistic under the common correlation model and a goodness-of-fit statistic. The nomogram for the sample size formula was developed using SAS 9.3. The sample size formulae using a simple proportion of agreement instead of a kappa statistic, and nomograms to eliminate the inconvenience of using a mathematical formula, were produced. A nomogram for sample size calculation with a simple proportion of agreement should be useful in the planning stages when the focus of interest is on testing the hypothesis of interobserver agreement involving two raters and nominal outcome measures. Copyright © 2014 Elsevier Inc. All rights reserved.
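
    The relation between kappa, the simple proportion of agreement, and the marginal prevalence (for two raters assumed to share a common binary marginal) can be sketched directly; this minimal calculation also reproduces the kappa paradox the abstract mentions.

```python
def expected_agreement(prev):
    """Chance agreement for two raters sharing the same marginal prevalence."""
    return prev**2 + (1.0 - prev)**2

def kappa_from_agreement(po, prev):
    """kappa = (po - pe) / (1 - pe) for observed agreement po."""
    pe = expected_agreement(prev)
    return (po - pe) / (1.0 - pe)

def agreement_from_kappa(kappa, prev):
    """Invert the relation: observed agreement implied by a kappa value."""
    pe = expected_agreement(prev)
    return kappa * (1.0 - pe) + pe

# The kappa paradox: the same 90% raw agreement gives very different kappas
# depending on the marginal prevalence.
print(kappa_from_agreement(0.90, 0.50))   # prevalence 50%: kappa = 0.8
print(kappa_from_agreement(0.90, 0.95))   # prevalence 95%: kappa near zero
```

    This is why the authors parameterize their formulae by the proportion of agreement under a given prevalence rather than by kappa itself.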

  12. Calculating the dim light melatonin onset: the impact of threshold and sampling rate.

    Science.gov (United States)

    Molina, Thomas A; Burgess, Helen J

    2011-10-01

    The dim light melatonin onset (DLMO) is the most reliable circadian phase marker in humans, but the cost of assaying samples is relatively high. Therefore, the authors examined differences between DLMOs calculated from hourly versus half-hourly sampling and differences between DLMOs calculated with two recommended thresholds (a fixed threshold of 3 pg/mL and a variable "3k" threshold equal to the mean plus two standard deviations of the first three low daytime points). The authors calculated these DLMOs from salivary dim light melatonin profiles collected from 122 individuals (64 women) at baseline. DLMOs derived from hourly sampling occurred on average only 6-8 min earlier than the DLMOs derived from half-hourly saliva sampling, and they were highly correlated with each other (r ≥ 0.89, p < .001), although in some cases the hourly-derived DLMO deviated by > 30 min from the DLMO derived from half-hourly sampling. The 3 pg/mL threshold produced significantly less variable DLMOs than the 3k threshold. However, the 3k threshold was significantly lower than the 3 pg/mL threshold (p < .001). The DLMOs calculated with the 3k method were significantly earlier (by 22-24 min) than the DLMOs calculated with the 3 pg/mL threshold, regardless of sampling rate. These results suggest that in large research studies and clinical settings, the more affordable and practical option of hourly sampling is adequate for a reasonable estimate of circadian phase. Although the 3 pg/mL fixed threshold is less variable than the 3k threshold, it produces estimates of the DLMO that are further from the initial rise of melatonin.
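
    Both threshold rules can be sketched as a linear-interpolation crossing time. The melatonin profile below is hypothetical and the `dlmo` helper is an illustration, not the authors' analysis code.

```python
import numpy as np

def dlmo(times_h, melatonin_pg_ml, threshold):
    """Linearly interpolated clock time (hours) at which the melatonin
    profile first rises to the threshold."""
    t = np.asarray(times_h, float)
    m = np.asarray(melatonin_pg_ml, float)
    i = np.nonzero(m >= threshold)[0][0]   # first sample at/above threshold
    if i == 0:
        return t[0]
    # Interpolate between the bracketing samples.
    frac = (threshold - m[i - 1]) / (m[i] - m[i - 1])
    return t[i - 1] + frac * (t[i] - t[i - 1])

# Hypothetical half-hourly dim-light melatonin profile (pg/mL), 19:00-23:00.
times = np.arange(19.0, 23.5, 0.5)
mel = np.array([0.8, 1.2, 1.0, 1.1, 2.0, 4.0, 7.5, 12.0, 18.0])

fixed = 3.0                                    # fixed 3 pg/mL threshold
k3 = mel[:3].mean() + 2 * mel[:3].std(ddof=1)  # "3k": mean + 2 SD of first 3 points

print(dlmo(times, mel, fixed), dlmo(times, mel, k3))
```

    With this profile the 3k threshold sits lower than 3 pg/mL, so the 3k DLMO falls earlier, matching the direction of the difference reported in the abstract.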

  13. The study of importance sampling in Monte-carlo calculation of blocking dips

    International Nuclear Information System (INIS)

    Pan Zhengying; Zhou Peng

    1988-01-01

    Angular blocking dips around the axis in an Al single crystal, for α-particles of about 2 MeV produced at a depth of 0.2 μm, are calculated by a Monte Carlo simulation. The influence of the small solid angle of emission of the particles and of importance sampling within the emission solid angle has been investigated. By means of importance sampling, more reasonable results with high accuracy are obtained

  14. Quantification of errors in ordinal outcome scales using Shannon entropy: effect on sample size calculations.

    Directory of Open Access Journals (Sweden)

    Pitchaiah Mandava

    provide the user with programs to calculate and incorporate errors into sample size estimation.

  15. Experimental-calculation technique for Ksub(IC) determination using the samples of decreased dimensions

    International Nuclear Information System (INIS)

    Vinokurov, V.A.; Dymshits, A.V.; Pirusskij, M.V.; Ovsyannikov, B.M.; Kononov, V.V.

    1981-01-01

    A possibility to decrease the size of samples, which is necessary for the reliable determination of the fracture toughness Ksub(IC), is established. The dependences of crack-resistance characteristics on the sample dimensions are determined experimentally. The static bending tests are made using the 1251 model of the ''Instron'' installation with a specially designed device. Samples of 20KhNMF steel have been tested. It is shown that the Ksub(IC) value determined for the samples with the largest net cross section (50x100 mm) is considerably lower than the Ksub(IC) values determined for the samples of decreased sizes. It is shown that the developed experimental-calculation method of Ksub(IC) determination can be practically used for samples of decreased sizes with the introduction of a corresponding correction coefficient

  16. Youth exposure to violence prevention programs in a national sample.

    Science.gov (United States)

    Finkelhor, David; Vanderminden, Jennifer; Turner, Heather; Shattuck, Anne; Hamby, Sherry

    2014-04-01

    This paper assesses how many children and youth have had exposure to programs aimed at preventing various kinds of violence perpetration and victimization. Based on a national sample of children 5-17, 65% had ever been exposed to a violence prevention program, 55% in the past year. Most respondents (71%) rated the programs as very or somewhat helpful. Younger children (5-9) who had been exposed to higher quality prevention programs had lower levels of peer victimization and perpetration. But the association did not apply to older youth or youth exposed to lower quality programs. Disclosure to authorities was also more common for children with higher quality program exposure who had experienced peer victimizations or conventional crime victimizations. The findings are consistent with possible benefits from violence prevention education programs. However, they also suggest that too few programs currently include efficacious components. Copyright © 2014 Elsevier Ltd. All rights reserved.

  17. Power and Sample Size Calculations for Logistic Regression Tests for Differential Item Functioning

    Science.gov (United States)

    Li, Zhushan

    2014-01-01

    Logistic regression is a popular method for detecting uniform and nonuniform differential item functioning (DIF) effects. Theoretical formulas for the power and sample size calculations are derived for likelihood ratio tests and Wald tests based on the asymptotic distribution of the maximum likelihood estimators for the logistic regression model.…
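
    A normal-approximation version of the Wald power and sample size formulas can be sketched as follows. The unit per-subject variance and the effect size are illustrative assumptions; the paper's exact DIF formulas derive the variance from the logistic model's information matrix.

```python
import math
from scipy.stats import norm

def wald_power(beta, v, n, alpha=0.05):
    """Approximate power of a two-sided Wald test of H0: beta = 0 when the
    estimator is asymptotically N(beta, v/n)."""
    se = (v / n) ** 0.5
    z = norm.ppf(1.0 - alpha / 2.0)
    return norm.cdf(abs(beta) / se - z) + norm.cdf(-abs(beta) / se - z)

def wald_sample_size(beta, v, power=0.8, alpha=0.05):
    """Smallest n giving the target power, ignoring the negligible far tail:
    n = v * (z_{1-alpha/2} + z_{power})^2 / beta^2."""
    z_a = norm.ppf(1.0 - alpha / 2.0)
    z_b = norm.ppf(power)
    return math.ceil(v * (z_a + z_b) ** 2 / beta ** 2)

# Hypothetical uniform-DIF effect: log-odds ratio 0.4, unit per-subject variance.
n = wald_sample_size(0.4, 1.0)
print(n, wald_power(0.4, 1.0, n))
```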

  18. Sample Size Calculation for Estimating or Testing a Nonzero Squared Multiple Correlation Coefficient

    Science.gov (United States)

    Krishnamoorthy, K.; Xia, Yanping

    2008-01-01

    The problems of hypothesis testing and interval estimation of the squared multiple correlation coefficient of a multivariate normal distribution are considered. It is shown that available one-sided tests are uniformly most powerful, and the one-sided confidence intervals are uniformly most accurate. An exact method of calculating sample size to…

  19. [Sample size calculation in clinical post-marketing evaluation of traditional Chinese medicine].

    Science.gov (United States)

    Fu, Yingkun; Xie, Yanming

    2011-10-01

    In recent years, as the Chinese government and people pay more attention to post-marketing research on Chinese medicine, a number of traditional Chinese medicine products have begun, or are about to begin, post-marketing evaluation studies. In post-marketing evaluation design, sample size calculation plays a decisive role. It not only ensures the accuracy and reliability of the post-marketing evaluation, but also assures that the intended trials will have the desired power for correctly detecting a clinically meaningful difference between the medicines under study if such a difference truly exists. Up to now, there has been no systematic method of sample size calculation tailored to traditional Chinese medicine. In this paper, according to the basic methods of sample size calculation and the characteristics of clinical evaluation of traditional Chinese medicine, sample size calculation methods for the efficacy and safety of Chinese medicine are discussed respectively. We hope the paper will be beneficial to medical researchers and pharmaceutical scientists who are engaged in Chinese medicine research.

  20. Calculation code of heterogeneity effects for analysis of small sample reactivity worth

    International Nuclear Information System (INIS)

    Okajima, Shigeaki; Mukaiyama, Takehiko; Maeda, Akio.

    1988-03-01

    The discrepancy between experimental and calculated central reactivity worths has been one of the most significant concerns in the analysis of fast reactor critical experiments. Two effects have been pointed out that should be taken into account in the calculation as possible causes of the discrepancy: one is the local heterogeneity effect, which is associated with the measurement geometry; the other is the heterogeneity effect on the distribution of the intracell adjoint flux. In order to evaluate these effects in the analysis of FCA actinide sample reactivity worths, a calculation code based on the collision probability method was developed. The code can handle the sample size effect, which is one of the local heterogeneity effects, and also the intracell adjoint heterogeneity effect. (author)

  1. Experimental control of calculation model of scale factor during fracture of circular samples with cracks

    International Nuclear Information System (INIS)

    Gnyp, I.P.; Ganulich, B.K.; Pokhmurskij, V.I.

    1982-01-01

    Reliable methods for estimating the cracking resistance of low-strength plastic materials using notched samples acceptable for laboratory tests are analysed. Experimental data on the fracture of round notched samples for a number of steels are given. The close agreement between calculated and experimental data confirms the legitimacy of the proposed scheme for estimating the scale factor effect. The necessity of taking the strain hardening coefficient into account when choosing a sample size for determining the stress intensity factor is pointed out

  2. Seven health physics calculator programs for the HP-41CV

    International Nuclear Information System (INIS)

    Rittmann, P.D.

    1984-08-01

    Several user-oriented programs for the Hewlett-Packard HP-41CV are explained. The first program builds, stores, alters, and ages a list of radionuclides. This program only handles single- and double-decay chains. The second program performs convenient conversions for the six nuclides of concern in plutonium handling. The conversions are between mass, activity, and weight percents of the isotopes. The source can be aged and/or neutron generation rates can be computed. The third program is a timekeeping program that improves the process of manually estimating and tracking personnel exposure during high dose rate tasks by replacing the pencil, paper, and stopwatch method. This program requires a time module. The remaining four programs deal with computations of time-integrated air concentrations at various distances from an airborne release. Building wake effects, source depletion by ground deposition, and sector averaging can all be included in the final printout of the X/Q - Hanford and X/Q - Pasquill programs. The shorter versions of these, H/Q and P/Q, compute centerline or sector-averaged values and include a subroutine to facilitate dose estimation by entering dose factors and quantities released. The horizontal and vertical dispersion parameters in the Pasquill-Gifford programs were modeled with simple, two-parameter functions that agreed very well with the usual textbook graphs. 8 references, 7 appendices

  3. A program to calculate pulse transmission responses through transversely isotropic media

    Science.gov (United States)

    Li, Wei; Schmitt, Douglas R.; Zou, Changchun; Chen, Xiwei

    2018-05-01

    We provide a program (AOTI2D) to model responses of ultrasonic pulse transmission measurements through arbitrarily oriented transversely isotropic rocks. The program is built with the distributed point source method that treats the transducers as a series of point sources. The response of each point source is calculated according to the ray-tracing theory of elastic plane waves. The program could offer basic wave parameters including phase and group velocities, polarization, anisotropic reflection coefficients and directivity patterns, and model the wave fields, static wave beam, and the observed signals for pulse transmission measurements considering the material's elastic stiffnesses and orientations, sample dimensions, and the size and positions of the transmitters and the receivers. The program could be applied to exhibit the ultrasonic beam behaviors in anisotropic media, such as the skew and diffraction of ultrasonic beams, and analyze its effect on pulse transmission measurements. The program would be a useful tool to help design the experimental configuration and interpret the results of ultrasonic pulse transmission measurements through either isotropic or transversely isotropic rock samples.
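
    The phase velocities that underlie such ray-tracing can be sketched via the Christoffel equation, Gamma_ik = C_ijkl n_j n_l, whose eigenvalues are rho v^2. The stiffness values and density below are illustrative, and this sketch is not the AOTI2D code itself.

```python
import numpy as np

def christoffel_velocities(C_voigt, density, n):
    """Phase velocities of the three plane-wave modes propagating along unit
    vector n, from det(Gamma - rho*v^2 I) = 0, returned slowest to fastest."""
    # Expand the 6x6 Voigt stiffness matrix into the full 3x3x3x3 tensor.
    voigt = {(0,0):0,(1,1):1,(2,2):2,(1,2):3,(2,1):3,(0,2):4,(2,0):4,(0,1):5,(1,0):5}
    C = np.zeros((3,3,3,3))
    for i in range(3):
        for j in range(3):
            for k in range(3):
                for l in range(3):
                    C[i,j,k,l] = C_voigt[voigt[(i,j)], voigt[(k,l)]]
    gamma = np.einsum('ijkl,j,l->ik', C, n, n)   # Christoffel matrix (times rho*v^2)
    rho_v2 = np.linalg.eigvalsh(gamma)
    return np.sqrt(rho_v2 / density)

# Illustrative VTI stiffnesses in GPa (not from the paper) and density in g/cm^3;
# with these units the velocities come out in km/s.
c11, c33, c44, c66, c13 = 40.0, 30.0, 10.0, 12.0, 8.0
c12 = c11 - 2 * c66
C = np.array([[c11, c12, c13, 0,   0,   0  ],
              [c12, c11, c13, 0,   0,   0  ],
              [c13, c13, c33, 0,   0,   0  ],
              [0,   0,   0,   c44, 0,   0  ],
              [0,   0,   0,   0,   c44, 0  ],
              [0,   0,   0,   0,   0,   c66]])
rho = 2.5
print(christoffel_velocities(C, rho, np.array([0.0, 0.0, 1.0])))  # along symmetry axis
```

    Along the symmetry axis the two shear velocities coincide (sqrt(c44/rho)) and the P velocity is sqrt(c33/rho); off-axis directions split the shear modes, which is the anisotropy the pulse-transmission modeling has to capture.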

  4. Calculation of parameter failure probability of thermodynamic system by response surface and importance sampling method

    International Nuclear Information System (INIS)

    Shang Yanlong; Cai Qi; Chen Lisheng; Zhang Yangwei

    2012-01-01

    In this paper, the combined method of response surface and importance sampling was applied to the calculation of the parameter failure probability of a thermodynamic system. A mathematical model is presented for the parameter failure of the physical process in the thermodynamic system, by which the combined arithmetic model of response surface and importance sampling is established; the performance degradation model of the components and the simulation process of parameter failure in the physical process of the thermodynamic system are also presented. The parameter failure probability of the purification water system in a nuclear reactor was obtained by the combined method. The results show that the combined method is an effective method for calculating the parameter failure probability of a thermodynamic system with high dimensionality and non-linear characteristics, since it achieves satisfactory precision with less computing time than the direct sampling method while avoiding the drawbacks of the response surface method. (authors)

  5. Successive collision calculation of resonance absorption (AWBA Development Program)

    International Nuclear Information System (INIS)

    Schmidt, E.; Eisenhart, L.D.

    1980-07-01

    The successive collision method for calculating resonance absorption solves the neutron slowing down problem in reactor lattices numerically. A discrete energy mesh is used, with cross sections taken from a Monte Carlo library. The major physical approximation is isotropic scattering in both the laboratory and center-of-mass systems. The procedure is intended for day-to-day analysis calculations and has been incorporated into the current version of MUFT. The calculational model used for the analysis of the nuclear performance of LWBR includes this resonance absorption procedure. Test comparisons with RCPO1 results show very good agreement

  6. An adaptive Monte Carlo method under emission point as sampling station for deep penetration calculation

    International Nuclear Information System (INIS)

    Wang, Ruihong; Yang, Shulin; Pei, Lucheng

    2011-01-01

    The deep penetration problem has been one of the difficulties of shielding calculation with the Monte Carlo method for several decades. In this paper, an adaptive technique using the emission point as a sampling station is presented. Its main advantage is choosing the most suitable number of samples from the emission-point station so as to minimize the total cost of the random walk. A related importance sampling method is also derived; its main principle is to define an importance function of the response with respect to the particle state and to make the number of emission particles sampled proportional to this importance function. The numerical results show that the adaptive method using the emission point as a station can, to some degree, overcome the tendency to underestimate the result, and that the related importance sampling method also gives satisfactory results. (author)

  7. New sampling method in continuous energy Monte Carlo calculation for pebble bed reactors

    International Nuclear Information System (INIS)

    Murata, Isao; Takahashi, Akito; Mori, Takamasa; Nakagawa, Masayuki.

    1997-01-01

    A pebble bed reactor generally exhibits double heterogeneity arising from two kinds of spherical fuel element: the core contains many fuel balls piled up randomly at a high packing fraction, and each fuel ball contains many small fuel particles that are also distributed randomly. In this study, a new sampling method has been developed to enable precise neutron transport calculations for such reactors with the continuous energy Monte Carlo method. The method has been implemented in the general purpose Monte Carlo code MCNP to produce a modified version, MCNP-BALL. It was validated by calculating the inventory of spherical fuel elements arranged successively by sampling during the transport calculation, and by performing criticality calculations for ordered packing models. The results confirmed that the inventory of spherical fuel elements is reproduced by MCNP-BALL to within 0.2%, and the comparison of criticality calculations for ordered packing models between MCNP-BALL and the reference method shows excellent agreement in neutron spectrum as well as multiplication factor. MCNP-BALL enables precise analysis of pebble bed type cores such as PROTEUS with the continuous energy Monte Carlo method. (author)

  8. Adaptive sampling program support for expedited site characterization

    International Nuclear Information System (INIS)

    Johnson, R.

    1993-01-01

    Expedited site characterizations offer substantial savings in time and money when assessing hazardous waste sites. Key to some of these savings is the ability to adapt a sampling program to the ''real-time'' data generated by an expedited site characterization. This paper presents a two-pronged approach to supporting adaptive sampling programs: a specialized object-oriented database/geographical information system for data fusion, management and display; and combined Bayesian/geostatistical methods for estimating the extent of contamination and selecting sample locations

  9. Glass sampling program during DWPF Integrated Cold Runs

    International Nuclear Information System (INIS)

    Plodinec, M.J.

    1990-01-01

    The glass sampling program described here is designed to achieve two objectives: to demonstrate the Defense Waste Processing Facility's (DWPF) ability to control and verify the radionuclide release properties of the glass product; and to confirm DWPF's readiness to obtain glass samples during production, and SRL's readiness to analyze and test those samples remotely. The DWPF strategy for controlling the radionuclide release properties of the glass product, and for verifying its acceptability, is described in this report, and the basic approach of the test program is then defined

  10. Dose calculation for 40K ingestion in samples of beans using spectrometry and MCNP

    International Nuclear Information System (INIS)

    Garcez, R.W.D.; Lopes, J.M.; Silva, A.X.; Domingues, A.M.; Lima, M.A.F.

    2014-01-01

    A method based on gamma spectroscopy and on the use of voxel phantoms to calculate the dose due to ingestion of 40 K contained in bean samples is presented in this work. An HPGe detector was used to quantify the activity of the radionuclide, and the data were entered into the input file of the MCNP code. The highest equivalent dose was 7.83 μSv.y -1 , in the stomach, for white beans, whose activity of 452.4 Bq.kg -1 was the highest of the five samples analyzed. The tool proved appropriate for calculating organ doses due to the ingestion of food. (author)

  11. Unconstrained Enhanced Sampling for Free Energy Calculations of Biomolecules: A Review

    Science.gov (United States)

    Miao, Yinglong; McCammon, J. Andrew

    2016-01-01

    Free energy calculations are central to understanding the structure, dynamics and function of biomolecules. Yet insufficient sampling of biomolecular configurations is often regarded as one of the main sources of error. Many enhanced sampling techniques have been developed to address this issue. Notably, enhanced sampling methods based on biasing collective variables (CVs), including the widely used umbrella sampling, adaptive biasing force and metadynamics, have been discussed in a recent excellent review (Abrams and Bussi, Entropy, 2014). Here, we aim to review enhanced sampling methods that do not require predefined system-dependent CVs for biomolecular simulations and as such do not suffer from the hidden energy barrier problem encountered in the CV-biasing methods. These methods include, but are not limited to, replica exchange/parallel tempering, self-guided molecular/Langevin dynamics, essential energy space random walk and accelerated molecular dynamics. While it would be overwhelming to describe every detail of each method, we provide a summary of the methods along with their applications and offer our perspectives. We conclude with the challenges and prospects of the unconstrained enhanced sampling methods for accurate biomolecular free energy calculations. PMID:27453631
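As an illustration of one of the reviewed method families, the replica-exchange (parallel tempering) swap criterion is simple enough to state in code. This is a generic textbook sketch of the Metropolis swap rule, not code from the review:

```python
import math
import random

def swap_probability(E_i, E_j, T_i, T_j, k_B=1.0):
    """Metropolis acceptance probability for exchanging configurations of
    replicas held at temperatures T_i and T_j with potential energies
    E_i and E_j: min(1, exp[(1/kT_i - 1/kT_j)(E_i - E_j)])."""
    delta = (1.0 / (k_B * T_i) - 1.0 / (k_B * T_j)) * (E_i - E_j)
    return min(1.0, math.exp(delta))

def attempt_swap(E_i, E_j, T_i, T_j, rng=random.random):
    """Return True if the proposed replica swap is accepted."""
    return rng() < swap_probability(E_i, E_j, T_i, T_j)
```

When the colder replica holds the higher energy, the swap is always accepted; otherwise it is accepted with an exponentially decreasing probability, which preserves the canonical ensemble at every temperature.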

  12. Numerical calculation of elastohydrodynamic lubrication methods and programs

    CERN Document Server

    Huang, Ping

    2015-01-01

    The book offers scientists and engineers a clear, interdisciplinary introduction and orientation to all major EHL problems and their solutions and, most importantly, provides numerical programs for specific applications in engineering. It is a one-stop reference providing equations and their solutions for all major elastohydrodynamic lubrication (EHL) problems, together with concise programs for practical engineering application

  13. Code Betal to calculation Alpha/Beta activities in environmental samples

    International Nuclear Information System (INIS)

    Romero, L.; Travesi, A.

    1983-01-01

    A code, BETAL, written in FORTRAN IV, was developed to automate the calculation and presentation of the results of total alpha-beta activity measurements in environmental samples. The code transforms activities measured as total counts into pCi/l, taking into account the efficiency of the detector used and the other necessary parameters. Furthermore, it estimates the standard deviation of the result and calculates the lower limit of detection for each measurement. The code operates interactively through a screen-operator dialogue, prompting on screen for the data needed to calculate the activity in each case. It can be executed from any screen-and-keyboard terminal of a computer that accepts FORTRAN IV and has a printer attached. (Author) 5 refs
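The conversion a code like BETAL performs can be illustrated with a short sketch. The formulas below (net count rate, 1-sigma counting uncertainty, and a Currie-style lower limit of detection) are standard counting-statistics expressions, not BETAL's actual source, and the parameter names are hypothetical:

```python
import math

DPM_PER_PCI = 2.22  # disintegrations per minute per picocurie

def activity_pci_per_l(gross_counts, t_sample, bkg_counts, t_bkg,
                       efficiency, volume_l):
    """Convert gross and background counts to an activity concentration
    in pCi/l, with its 1-sigma counting uncertainty (illustrative only)."""
    net_cpm = gross_counts / t_sample - bkg_counts / t_bkg
    sigma_cpm = math.sqrt(gross_counts / t_sample**2 + bkg_counts / t_bkg**2)
    factor = efficiency * DPM_PER_PCI * volume_l
    return net_cpm / factor, sigma_cpm / factor

def lower_limit_of_detection(bkg_counts, t_sample, efficiency, volume_l):
    """Currie-style a priori LLD (pCi/l) for paired, equal-time counts:
    LLD counts = 2.71 + 4.65 * sqrt(background counts)."""
    lld_counts = 2.71 + 4.65 * math.sqrt(bkg_counts)
    return lld_counts / (t_sample * efficiency * DPM_PER_PCI * volume_l)
```

For example, 1200 gross counts in 100 min against 200 background counts in 100 min, at 25% efficiency for a 1 l sample, gives a net rate of 10 cpm and an activity of about 18 pCi/l, well above the corresponding LLD.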

  14. Calculation of the average radiological detriment of two samples from a breast screening programme

    International Nuclear Information System (INIS)

    Ramos, M.; Sanchez, A.M.; Verdu, G.; Villaescusa, J.I.; Salas, M.D.; Cuevas, M.D.

    2002-01-01

    The Breast Cancer Screening Programme started in the Comunidad Valenciana in 1992. The programme is oriented to asymptomatic women between 45 and 65 years old, with two mammograms of each breast at a woman's first participation and a single one in later rounds. Between November 2000 and March 2001, a first sample of 100 women's records was extracted from all units of the programme. The data extracted in each sample were the kV voltage, the X-ray tube load, the breast thickness and the age of the woman exposed, all used directly in the dose and detriment calculations. Using the MCNP-4B code, and in accordance with the European Protocol for the quality control of the physical and technical aspects of mammography screening, the average total and glandular doses were calculated and later compared

  15. Hanford high level waste: Sample Exchange/Evaluation (SEE) Program

    International Nuclear Information System (INIS)

    King, A.G.

    1994-08-01

    The Pacific Northwest Laboratory (PNL)/Analytical Chemistry Laboratory (ACL) and the Westinghouse Hanford Company (WHC)/Process Analytical Laboratory (PAL) provide analytical support services to various environmental restoration and waste management projects/programs at Hanford. In response to a US Department of Energy -- Richland Field Office (DOE-RL) audit, which questioned the comparability of the analytical methods employed at each laboratory, the Sample Exchange/Evaluation (SEE) program was initiated. The SEE Program is a self-assessment program designed to compare the analytical methods of the PAL and ACL laboratories using site-specific waste material. The SEE Program is managed by a collaborative, the Quality Assurance Triad (Triad), whose membership comprises representatives from the WHC/PAL, PNL/ACL, and WHC Hanford Analytical Services Management (HASM) organizations. The Triad works together to design, evaluate and implement each phase of the SEE Program

  16. Microdosimetry calculations for monoenergetic electrons using Geant4-DNA combined with a weighted track sampling algorithm.

    Science.gov (United States)

    Famulari, Gabriel; Pater, Piotr; Enger, Shirin A

    2017-07-07

    The aim of this study was to calculate microdosimetric distributions for low energy electrons simulated using the Monte Carlo track structure code Geant4-DNA. Tracks for monoenergetic electrons with kinetic energies ranging from 100 eV to 1 MeV were simulated in an infinite spherical water phantom using the Geant4-DNA extension included in Geant4 toolkit version 10.2 (patch 02). The microdosimetric distributions were obtained through random sampling of transfer points and overlaying scoring volumes within the associated volume of the tracks. Relative frequency distributions of energy deposition f(>E)/f(>0) and dose mean lineal energy (ȳ_D) values were calculated in nanometer-sized spherical and cylindrical targets. The effects of scoring volume and scoring techniques were examined. The results were compared with published data generated using MOCA8B and KURBUC. Geant4-DNA produces a lower frequency of higher energy deposits than MOCA8B. The ȳ_D values calculated with Geant4-DNA are smaller than those calculated using MOCA8B and KURBUC. The differences are mainly due to the lower ionization and excitation cross sections of Geant4-DNA for low energy electrons. To a lesser extent, discrepancies can also be attributed to the implementation in this study of a new and fast scoring technique that differs from that used in previous studies. For the same mean chord length (l̄), the ȳ_D values calculated in cylindrical volumes are larger than those calculated in spherical volumes. The discrepancies due to cross sections and scoring geometries increase with decreasing scoring site dimensions. A new set of ȳ_D values has been presented for monoenergetic electrons using a fast track sampling algorithm and the most recent physics models implemented in Geant4-DNA. This dataset can be combined with primary electron spectra to predict the radiation quality of photon and electron beams.
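The quantities reported in the abstract follow from simple definitions: the lineal energy of a single event is y = ε/l̄, and the frequency- and dose-mean values are moments of the sampled events. A minimal sketch, using Cauchy's mean chord length for a sphere (l̄ = 4V/S = 2d/3):

```python
def mean_chord_length_sphere(d):
    """Cauchy's theorem for a convex body, l_bar = 4V/S, gives 2d/3 for a
    sphere of diameter d."""
    return 2.0 * d / 3.0

def lineal_energy_means(energy_deposits_keV, l_bar_um):
    """Frequency-mean (y_F) and dose-mean (y_D) lineal energy in keV/um
    from equally weighted sampled single-event energy deposits:
    y_F = <y>, y_D = sum(y^2) / sum(y)."""
    y = [e / l_bar_um for e in energy_deposits_keV]
    y_f = sum(y) / len(y)
    y_d = sum(v * v for v in y) / sum(y)
    return y_f, y_d
```

Because y_D weights each event by its energy deposit, rare large deposits dominate it, which is why the lower ionization cross sections in Geant4-DNA pull y_D down relative to MOCA8B and KURBUC.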

  17. GUIDE TO CALCULATING TRANSPORT EFFICIENCY OF AEROSOLS IN OCCUPATIONAL AIR SAMPLING SYSTEMS

    Energy Technology Data Exchange (ETDEWEB)

    Hogue, M.; Hadlock, D.; Thompson, M.; Farfan, E.

    2013-11-12

    This report presents hand calculations for transport efficiency based on aspiration efficiency and particle deposition losses. Because the hand calculations become long and tedious, especially for lognormal distributions of aerosols, an R script (R 2011) is provided for each element examined. Calculations are provided for the most common elements in a remote air sampling system, including a thin-walled probe in ambient air, straight tubing, bends and a sample housing. One popular alternative approach would be to put such calculations in a spreadsheet, a thorough version of which is shared by Paul Baron via the Aerocalc spreadsheet (Baron 2012). To provide greater transparency and to avoid common spreadsheet vulnerabilities to errors (Burns 2012), this report uses R. The particle size is based on the concept of the activity median aerodynamic diameter (AMAD): the particle size in an aerosol at which fifty percent of the activity is associated with particles of larger aerodynamic diameter. This concept simplifies the transport efficiency calculations, since all particles are treated as spheres with the density of water (1 g cm-3). In reality, particle densities depend on the actual material involved, and particle geometries can be very complicated. Dynamic shape factors are provided by Hinds (Hinds 1999); some example factors are 1.00 for a sphere, 1.08 for a cube, 1.68 for a long cylinder (10 times as long as it is wide), 1.05 to 1.11 for bituminous coal, 1.57 for sand and 1.88 for talc. Revision 1 corrects an error in the original version of this report: the particle distributions are based on activity weighting of particles rather than on the number of particles of each size, so the mass correction made in the original version is removed from the text and the calculations, and the results affected by the change are updated.
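A building block of such deposition-loss calculations is the terminal settling velocity of a particle of a given aerodynamic diameter. The sketch below uses Stokes' law with a Cunningham slip correction; the slip-correction coefficients and air properties are typical textbook values, not necessarily those used in the report's R scripts:

```python
import math

def cunningham_slip(d_um, mfp_um=0.066):
    """Cunningham slip correction Cc = 1 + Kn*(1.257 + 0.4*exp(-1.1/Kn)),
    Kn = 2*lambda/d; classic Davies-style coefficients (an assumption)."""
    kn = 2.0 * mfp_um / d_um
    return 1.0 + kn * (1.257 + 0.4 * math.exp(-1.1 / kn))

def settling_velocity(d_um, rho=1000.0, mu=1.81e-5, g=9.81):
    """Stokes terminal settling velocity (m/s) of a sphere of aerodynamic
    diameter d_um (um); unit density (1 g/cm^3) per the AMAD convention:
    v_ts = rho * d^2 * g * Cc / (18 * mu)."""
    d = d_um * 1e-6
    return rho * d**2 * g * cunningham_slip(d_um) / (18.0 * mu)
```

A 1 um unit-density sphere settles at roughly 3.5e-5 m/s in room-temperature air, which is why gravitational losses matter mainly in long horizontal tubing runs.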

  18. Calculation of Collective Variable-based PMF by Combining WHAM with Umbrella Sampling

    International Nuclear Information System (INIS)

    Xu Wei-Xin; Li Yang; Zhang, John Z. H.

    2012-01-01

    Potential of mean force (PMF) with respect to localized reaction coordinates (RCs), such as a distance, is often used to evaluate the free energy profile along the reaction pathway for complex molecular systems. However, calculating the PMF as a function of global RCs is still a challenging and important problem in computational biology. We examine the combined use of the weighted histogram analysis method (WHAM) and umbrella sampling for calculating the PMF as a function of a global RC from coarse-grained Langevin dynamics simulations of a model protein. The method yields the folding free energy profile projected onto a global RC, in accord with benchmark results. With this method, rare global events are sufficiently sampled because the biasing potential can be used to restrict the global conformation to specific regions during the free energy calculations. The strategy presented can also be utilized in calculating global intra- and intermolecular PMFs at more detailed levels. (cross-disciplinary physics and related areas of science and technology)
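The WHAM iteration itself is compact: alternate between estimating the unbiased distribution from all umbrella windows and updating each window's free-energy shift until self-consistency. A minimal generic sketch (not the authors' code; histograms and bias energies are assumed precomputed on a common bin grid):

```python
import math

def wham(hist, bias, n_samples, kT=1.0, tol=1e-10, max_iter=10000):
    """Minimal WHAM self-consistency loop. hist[i][k]: counts of window i
    in bin k; bias[i][k]: umbrella bias energy of window i at bin k;
    n_samples[i]: total samples in window i. Returns the normalized
    unbiased distribution over bins and the window free-energy shifts."""
    n_win, n_bin = len(hist), len(hist[0])
    f = [0.0] * n_win                       # free-energy shift per window
    for _ in range(max_iter):
        # unbiased probability estimate in each bin
        p = []
        for k in range(n_bin):
            num = sum(hist[i][k] for i in range(n_win))
            den = sum(n_samples[i] * math.exp((f[i] - bias[i][k]) / kT)
                      for i in range(n_win))
            p.append(num / den)
        # update the shifts from the new estimate, fixing the gauge f[0] = 0
        f_new = [-kT * math.log(sum(p[k] * math.exp(-bias[i][k] / kT)
                                    for k in range(n_bin)))
                 for i in range(n_win)]
        shift = f_new[0]
        f_new = [fi - shift for fi in f_new]
        converged = max(abs(a - b) for a, b in zip(f, f_new)) < tol
        f = f_new
        if converged:
            break
    z = sum(p)
    return [pk / z for pk in p], f
```

The PMF along the reaction coordinate then follows as -kT ln p(ξ) up to an additive constant.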

  19. ORBITALES. A program for the calculation of wave functions with an analytical central potential

    International Nuclear Information System (INIS)

    Yunta Carretero; Rodriguez Mayquez, E.

    1974-01-01

    This paper describes the objective, basis, FORTRAN implementation and use of the program ORBITALES. The program calculates atomic wave functions for an analytical central potential (Author) 8 refs

  20. The transition equation of the state intensities for exciton model and the calculation program

    International Nuclear Information System (INIS)

    Yu Xian; Zheng Jiwen; Liu Guoxing; Chen Keliang

    1995-01-01

    An equation set for the exciton model is given and a calculation program is developed. The process of approach to the equilibrium state has been investigated with the program for the 12 C + 64 Ni reaction at an energy of 72 MeV

  1. A Hartree-Fock program for atomic structure calculations

    International Nuclear Information System (INIS)

    Mitroy, J.

    1999-01-01

    The Hartree-Fock equations for a general open shell atom are described. The matrix equations that result when the single particle orbitals are written in terms of a linear combination of analytic basis functions are derived. Attention is paid to the complexities that occur when open shells are present. The specifics of a working FORTRAN program which is available for public use are described. The program has the flexibility to handle either Slater-type orbitals or Gaussian-type orbitals. It can be obtained over the internet at http://lacebark.ntu.edu.au/j_mitroy/research/atomic.htm Copyright (1999) CSIRO Australia

  2. A simple approach to power and sample size calculations in logistic regression and Cox regression models.

    Science.gov (United States)

    Vaeth, Michael; Skovlund, Eva

    2004-06-15

    For a given regression problem it is possible to identify a suitably defined equivalent two-sample problem such that the power or sample size obtained for the two-sample problem also applies to the regression problem. For a standard linear regression model the equivalent two-sample problem is easily identified, but for generalized linear models and for Cox regression models the situation is more complicated. An approximately equivalent two-sample problem may, however, also be identified here. In particular, we show that for logistic regression and Cox regression models the equivalent two-sample problem is obtained by selecting two equally sized samples for which the parameters differ by a value equal to the slope times twice the standard deviation of the independent variable and further requiring that the overall expected number of events is unchanged. In a simulation study we examine the validity of this approach to power calculations in logistic regression and Cox regression models. Several different covariate distributions are considered for selected values of the overall response probability and a range of alternatives. For the Cox regression model we consider both constant and non-constant hazard rates. The results show that in general the approach is remarkably accurate even in relatively small samples. Some discrepancies are, however, found in small samples with few events and a highly skewed covariate distribution. Comparison with results based on alternative methods for logistic regression models with a single continuous covariate indicates that the proposed method is at least as good as its competitors. The method is easy to implement and therefore provides a simple way to extend the range of problems that can be covered by the usual formulas for power and sample size determination. Copyright 2004 John Wiley & Sons, Ltd.
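The recipe in the abstract — an equivalent two-sample problem in which the parameters differ by the slope times twice the standard deviation of the covariate, with the overall expected number of events unchanged — can be sketched with a Schoenfeld-style events formula. This is an illustrative normal approximation consistent with that recipe, not the authors' exact derivation:

```python
from statistics import NormalDist

def required_events(beta, sd_x, alpha=0.05, power=0.8):
    """Events needed for the equivalent two-sample problem: two equally
    sized groups whose log-odds (or log-hazard) parameters differ by
    delta = 2 * beta * sd_x; d = 4 * (z_{1-alpha/2} + z_{power})^2 / delta^2."""
    delta = 2.0 * beta * sd_x
    z_a = NormalDist().inv_cdf(1.0 - alpha / 2.0)
    z_b = NormalDist().inv_cdf(power)
    return 4.0 * (z_a + z_b) ** 2 / delta ** 2

def required_sample_size(beta, sd_x, event_rate, alpha=0.05, power=0.8):
    """Translate the required events into subjects via the overall
    expected event rate, which the construction holds fixed."""
    return required_events(beta, sd_x, alpha, power) / event_rate
```

For example, detecting a log-odds ratio of ln 2 per standard deviation of x at 80% power requires on the order of 16 events, so with a 10% event rate about 165 subjects.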

  3. A FORTRAN program for an IBM PC compatible computer for calculating kinematical electron diffraction patterns

    International Nuclear Information System (INIS)

    Skjerpe, P.

    1989-01-01

    This report describes a computer program useful in transmission electron microscopy. The program is written in FORTRAN and calculates kinematical electron diffraction patterns in any zone axis from a given crystal structure. Quite large unit cells, containing up to 2250 atoms, can be handled by the program. The program runs with both the Hercules graphics card and the standard IBM CGA card

  4. 24 CFR 4001.203 - Calculation of upfront and annual mortgage insurance premiums for Program mortgages.

    Science.gov (United States)

    2010-04-01

    ... mortgage insurance premiums for Program mortgages. 4001.203 Section 4001.203 Housing and Urban Development... HOMEOWNERS PROGRAM HOPE FOR HOMEOWNERS PROGRAM Rights and Obligations Under the Contract of Insurance § 4001.203 Calculation of upfront and annual mortgage insurance premiums for Program mortgages. (a...

  5. PULSTRI-1 computer program for mixed core pulse calculation

    International Nuclear Information System (INIS)

    Ravnik, M.; Mele, I.; Dimic, V.

    1990-01-01

    PULSTRI-1 is a computer code designed for calculating the pulse parameters of a TRIGA Mark II reactor with a mixed core. The code is provided with data for four types of fuel elements: standard 8.5 and 12 w/o, LEU and FLIP. The pulse parameters, such as maximum power, prompt pulse energy and average fuel temperature, are calculated in the adiabatic point-kinetics approximation, modified by taking into account the temperature dependence of the fuel temperature reactivity coefficient and a thermal capacity factor averaged over all elements in the core. The maximal fuel temperature at the power peaking location is calculated from the total released energy using the total power peaking factor and the heat capacity of the element at that location. Results of the code were compared to data found in references (mainly General Atomics safety analysis reports), showing good agreement for all main pulse parameters. The most important parameters, the average and maximal fuel temperatures, are found to be slightly but systematically overpredicted (by about 20 C and 50 C, respectively). The other parameters (energy, peak power, width) agree to within ± 10 % of the reference values. The code is written in FORTRAN for IBM PC computers; the input is user friendly, and the running time on an IBM PC AT is a few seconds. It is designed for practical application in pulse experiments as an analytical tool for predicting pulse parameters. (orig.)
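The adiabatic point-kinetics pulse model underlying such codes is, in its simplest constant-coefficient form, the classic Fuchs-Nordheim model (PULSTRI-1 goes further by making the coefficients temperature dependent). A hedged sketch of the closed-form results, with illustrative TRIGA-like numbers:

```python
def fuchs_nordheim(rho_dollars, beta, Lambda, alpha, C):
    """Fuchs-Nordheim pulse model: step reactivity insertion above prompt
    critical, adiabatic heating, delayed neutrons frozen. alpha is the
    magnitude of the fuel temperature reactivity coefficient (delta-k
    per deg C), C the core heat capacity (J per deg C), Lambda the
    prompt neutron generation time (s). Constant coefficients assumed."""
    rho_p = (rho_dollars - 1.0) * beta              # prompt reactivity (delta-k)
    p_max = C * rho_p ** 2 / (2.0 * alpha * Lambda)  # peak power (W)
    energy = 2.0 * C * rho_p / alpha                 # total pulse energy (J)
    dT = energy / C                                  # adiabatic temperature rise
    fwhm = 3.52 * Lambda / rho_p                     # pulse width (s)
    return p_max, energy, dT, fwhm
```

Note the built-in identity p_max * fwhm = 0.88 * energy, a useful internal consistency check on any implementation of the model.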

  6. Sample size calculation to externally validate scoring systems based on logistic regression models.

    Directory of Open Access Journals (Sweden)

    Antonio Palazón-Bru

    Full Text Available A sample size containing at least 100 events and 100 non-events has been suggested to validate a predictive model, regardless of the model being validated and that certain factors can influence calibration of the predictive model (discrimination, parameterization and incidence. Scoring systems based on binary logistic regression models are a specific type of predictive model.The aim of this study was to develop an algorithm to determine the sample size for validating a scoring system based on a binary logistic regression model and to apply it to a case study.The algorithm was based on bootstrap samples in which the area under the ROC curve, the observed event probabilities through smooth curves, and a measure to determine the lack of calibration (estimated calibration index were calculated. To illustrate its use for interested researchers, the algorithm was applied to a scoring system, based on a binary logistic regression model, to determine mortality in intensive care units.In the case study provided, the algorithm obtained a sample size with 69 events, which is lower than the value suggested in the literature.An algorithm is provided for finding the appropriate sample size to validate scoring systems based on binary logistic regression models. This could be applied to determine the sample size in other similar cases.

  7. Calculation of Complexity Costs – An Approach for Rationalizing a Product Program

    DEFF Research Database (Denmark)

    Hansen, Christian Lindschou; Mortensen, Niels Henrik; Hvam, Lars

    2012-01-01

    This paper proposes an operational method for rationalizing a product program based on the calculation of complexity costs. The method takes its starting point in the calculation of complexity costs at the product program level, carried out throughout the value chain, from component inventories at the factory sites all the way to the distribution of finished goods from distribution centers to the customers. The method proposes a step-wise approach including the analysis, quantification and allocation of product program complexity costs. These findings represent an improved decision basis for planning reactive and proactive initiatives for rationalizing a product program.

  8. EELOSS: the program for calculation of electron energy loss data

    International Nuclear Information System (INIS)

    Tanaka, Shun-ichi

    1980-10-01

    A computer code, EELOSS, has been developed to obtain the electron energy loss data required for shielding and dosimetry of beta and gamma rays in nuclear plants. With this code, the following data can be obtained for any energy from 0.01 to 15 MeV in any medium (metal, insulator, gas, compound, or mixture) composed of any combination of 69 elements with atomic numbers 1 -- 94: a) collision stopping power, b) restricted collision stopping power, c) radiative stopping power, and d) bremsstrahlung production cross section. The usefulness of the bremsstrahlung production cross section data obtained with the EELOSS code is demonstrated by comparing the calculated gamma-ray spectrum with a measured one in a Pb layer, where the electron-photon cascade is included implicitly. It is concluded that the uncertainty in the bremsstrahlung production cross sections is negligible in practical shielding calculations for gamma rays of energy below 15 MeV, since the bremsstrahlung production cross sections increase with gamma-ray energy while their uncertainty decreases with increasing energy. Furthermore, the accuracy of the output data of the EELOSS code is evaluated by comparison with experimental data, and satisfactory agreement is observed for the stopping power. (J.P.N.)

  9. Hardware architecture for projective model calculation and false match refining using random sample consensus algorithm

    Science.gov (United States)

    Azimi, Ehsan; Behrad, Alireza; Ghaznavi-Ghoushchi, Mohammad Bagher; Shanbehzadeh, Jamshid

    2016-11-01

    The projective model is an important mapping function for calculating the global transformation between two images. However, its hardware implementation is challenging because of the large number of coefficients, each with different precision requirements for fixed-point representation. A VLSI hardware architecture is proposed for calculating a global projective model between input and reference images and for refining false matches using the random sample consensus (RANSAC) algorithm. To make the hardware implementation feasible, it is proved that the calculation of the projective model can be divided into four submodels comprising two translations, an affine model and a simpler projective mapping. This approach makes the hardware implementation feasible and considerably reduces the number of bits required for fixed-point representation of the model coefficients and intermediate variables. The proposed hardware architecture was implemented in the Verilog hardware description language, and the functionality of the design was validated through several experiments. The architecture was synthesized with an application-specific integrated circuit digital design flow using 180-nm CMOS technology, as well as on a Virtex-6 field programmable gate array. Experimental results confirm the efficiency of the proposed hardware architecture in comparison with a software implementation.
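The RANSAC loop the architecture implements is, in software, a short routine: hypothesize a model from a minimal sample, count inliers, and keep the best hypothesis. For brevity this sketch fits a 2-parameter line rather than the 8-coefficient projective model, but the loop structure is the same:

```python
import random

def fit_line(p, q):
    """Line through two points as (slope, intercept); vertical pairs rejected."""
    (x1, y1), (x2, y2) = p, q
    if x1 == x2:
        return None
    m = (y2 - y1) / (x2 - x1)
    return m, y1 - m * x1

def ransac_line(points, n_iter=500, thresh=0.1, seed=0):
    """RANSAC: repeatedly fit a model to a minimal random sample, score it
    by its inlier count, and keep the best model found."""
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(n_iter):
        model = fit_line(*rng.sample(points, 2))
        if model is None:
            continue
        m, b = model
        inliers = [(x, y) for x, y in points if abs(y - (m * x + b)) < thresh]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = model, inliers
    return best_model, best_inliers
```

The data-dependent, iteration-heavy structure of this loop is exactly what makes a fixed-point hardware realization non-trivial, and why the paper's submodel decomposition matters.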

  10. A model for steady-state and transient determination of subcooled boiling for calculations coupling a thermohydraulic and a neutron physics calculation program for reactor core calculation

    International Nuclear Information System (INIS)

    Mueller, R.G.

    1987-06-01

    Due to the strong influence of vapour bubbles on the nuclear chain reaction, an exact calculation of neutron physics and thermal hydraulics in light water reactors requires consideration of subcooled boiling. To this end, a dynamic model is derived in the present study from the time-dependent conservation equations. It contains new methods for the time-dependent determination of the evaporation and condensation heat flows and of the heat transfer coefficient in subcooled boiling, and it enables the complete two-phase flow region to be treated in a consistent manner. The calculation model was verified against measured data from experiments covering a wide range of thermodynamic boundary conditions; in all cases very good agreement was reached. The results of coupling the new calculation model with a neutron kinetics program proved its suitability for steady-state and transient calculations of reactor cores. (orig.) [de

  11. Measurement assurance program for LSC analyses of tritium samples

    International Nuclear Information System (INIS)

    Levi, G.D. Jr.; Clark, J.P.

    1997-01-01

    Liquid Scintillation Counting (LSC) for Tritium is done on 600 to 800 samples daily as part of a contamination control program at the Savannah River Site's Tritium Facilities. The tritium results from the LSCs are used: to release items as radiologically clean; to establish radiological control measures for workers; and to characterize waste. The following is a list of the sample matrices that are analyzed for tritium: filter paper smears, aqueous, oil, oily rags, ethylene glycol, ethyl alcohol, freon and mercury. Routine and special causes of variation in standards, counting equipment, environment, operators, counting times, samples, activity levels, etc. produce uncertainty in the LSC measurements. A comprehensive analytical process measurement assurance program such as JTIPMAP trademark has been implemented. The process measurement assurance program is being used to quantify and control many of the sources of variation and provide accurate estimates of the overall measurement uncertainty associated with the LSC measurements. The paper will describe LSC operations, process improvements, quality control and quality assurance programs along with future improvements associated with the implementation of the process measurement assurance program

  12. TRU Waste Sampling Program: Volume I. Waste characterization

    International Nuclear Information System (INIS)

    Clements, T.L. Jr.; Kudera, D.E.

    1985-09-01

    Volume I of the TRU Waste Sampling Program report presents the waste characterization information obtained from sampling and characterizing various aged transuranic waste retrieved from storage at the Idaho National Engineering Laboratory and the Los Alamos National Laboratory. The data contained in this report include the results of gas sampling and gas generation, radiographic examinations, waste visual examination results, and waste compliance with the Waste Isolation Pilot Plant-Waste Acceptance Criteria (WIPP-WAC). A separate report, Volume II, contains data from the gas generation studies

  13. Experience with a routine fecal sampling program for plutonium workers

    International Nuclear Information System (INIS)

    Bihl, D.E.; Buschbom, R.L.; Sula, M.J.

    1993-01-01

    A quarterly fecal sampling program was conducted at the U. S. Department of Energy's Hanford site for approximately 100 workers at risk for an intake of plutonium oxide and other forms of plutonium. To our surprise, we discovered that essentially all of the workers were excreting detectable activities of plutonium. Further investigation showed that the source was frequent, intermittent intakes at levels below detectability by normal workplace monitoring, indicating the extraordinary sensitivity of fecal sampling. However, the experience of this study also indicated that the increased sensitivity of routine fecal sampling relative to more common bioassay methods is offset by many problems. These include poor worker cooperation; difficulty in distinguishing low-level chronic intakes from a more significant, acute intake; difficulty in eliminating interference from ingested plutonium; and difficulty in interpreting what a single void means in terms of 24-h excretion. Recommendations for a routine fecal program include providing good communication to workers and management about the reasons and logistics of fecal sampling prior to starting, using annual (instead of quarterly) fecal sampling for class Y plutonium, collecting samples after workers have been away from plutonium exposure for at least 3 d, and giving serious consideration to improving urinalysis sensitivity rather than going to routine fecal sampling.

  14. Extending cluster lot quality assurance sampling designs for surveillance programs.

    Science.gov (United States)

    Hund, Lauren; Pagano, Marcello

    2014-07-20

    Lot quality assurance sampling (LQAS) has a long history of applications in industrial quality control. LQAS is frequently used for rapid surveillance in global health settings, with areas classified as poor or acceptable performance on the basis of the binary classification of an indicator. Historically, LQAS surveys have relied on simple random samples from the population; however, implementing two-stage cluster designs for surveillance sampling is often more cost-effective than simple random sampling. By applying survey sampling results to the binary classification procedure, we develop a simple and flexible nonparametric procedure to incorporate clustering effects into the LQAS sample design to appropriately inflate the sample size, accommodating finite numbers of clusters in the population when relevant. We use this framework to then discuss principled selection of survey design parameters in longitudinal surveillance programs. We apply this framework to design surveys to detect rises in malnutrition prevalence in nutrition surveillance programs in Kenya and South Sudan, accounting for clustering within villages. By combining historical information with data from previous surveys, we design surveys to detect spikes in the childhood malnutrition rate. Copyright © 2014 John Wiley & Sons, Ltd.
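The clustering adjustment behind such designs can be illustrated with the classical design-effect inflation of an SRS sample size. This is a generic sketch, not the authors' exact nonparametric procedure; the SRS size, cluster size, and intra-cluster correlation below are invented for illustration.

```python
import math

def design_effect(cluster_size: int, icc: float) -> float:
    """Classical design effect for equal-sized clusters:
    DEFF = 1 + (m - 1) * ICC, where m is the cluster size and ICC
    the intra-cluster correlation of the binary indicator."""
    return 1.0 + (cluster_size - 1) * icc

def inflated_sample_size(n_srs: int, cluster_size: int, icc: float) -> int:
    """Inflate a simple-random-sample size to account for clustering."""
    return math.ceil(n_srs * design_effect(cluster_size, icc))

# Hypothetical LQAS survey: 192 children under SRS, clusters of 10,
# ICC of 0.1 for the malnutrition indicator.
print(inflated_sample_size(192, 10, 0.1))  # 192 * 1.9 = 364.8 -> 365
```

With a modest ICC the required sample nearly doubles, which is why ignoring the clustering when setting LQAS decision rules misclassifies areas.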

  15. Calculation of absolute protein-ligand binding free energy using distributed replica sampling.

    Science.gov (United States)

    Rodinger, Tomas; Howell, P Lynne; Pomès, Régis

    2008-10-21

    Distributed replica sampling [T. Rodinger et al., J. Chem. Theory Comput. 2, 725 (2006)] is a simple and general scheme for Boltzmann sampling of conformational space by computer simulation in which multiple replicas of the system undergo a random walk in reaction coordinate or temperature space. Individual replicas are linked through a generalized Hamiltonian containing an extra potential energy term or bias which depends on the distribution of all replicas, thus enforcing the desired sampling distribution along the coordinate or parameter of interest regardless of free energy barriers. In contrast to replica exchange methods, efficient implementation of the algorithm does not require synchronicity of the individual simulations. The algorithm is inherently suited for large-scale simulations using shared or heterogeneous computing platforms such as a distributed network. In this work, we build on our original algorithm by introducing Boltzmann-weighted jumping, which allows moves of a larger magnitude and thus enhances sampling efficiency along the reaction coordinate. The approach is demonstrated using a realistic and biologically relevant application; we calculate the standard binding free energy of benzene to the L99A mutant of T4 lysozyme. Distributed replica sampling is used in conjunction with thermodynamic integration to compute the potential of mean force for extracting the ligand from protein and solvent along a nonphysical spatial coordinate. Dynamic treatment of the reaction coordinate leads to faster statistical convergence of the potential of mean force than a conventional static coordinate, which suffers from slow transitions on a rugged potential energy surface.
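The thermodynamic-integration step, integrating the window-averaged mean force along the coordinate to obtain the potential of mean force, can be sketched generically. The data below are invented for illustration and are not the benzene/T4-lysozyme results.

```python
import numpy as np

# Hypothetical window-averaged derivatives <dU/dxi> along a spatial
# reaction coordinate xi, as the replica simulations would produce.
xi = np.linspace(0.0, 10.0, 21)        # coordinate windows (Angstrom)
du_dxi = -np.sin(xi / 3.0)             # stand-in mean-force data

# Thermodynamic integration: W(xi) is the cumulative integral of
# <dU/dxi'> over xi' (trapezoidal rule), anchored at the first window.
segments = 0.5 * (du_dxi[1:] + du_dxi[:-1]) * np.diff(xi)
pmf = np.concatenate(([0.0], np.cumsum(segments)))

print(pmf.shape)  # (21,)
```

The binding free energy then follows from the PMF difference between the bound and fully extracted states, plus standard-state corrections.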

  16. Calculation of coincidence summing corrections for a specific small soil sample geometry

    Energy Technology Data Exchange (ETDEWEB)

    Helmer, R.G.; Gehrke, R.J.

    1996-10-01

    Previously, a system was developed at the INEL for measuring the γ-ray emitting nuclides in small soil samples for the purpose of environmental monitoring. These samples were counted close to a ≈20% Ge detector and, therefore, it was necessary to take into account the coincidence summing that occurs for some nuclides. In order to improve the technical basis for the coincidence summing corrections, the authors have carried out a study of the variation in the coincidence summing probability with position within the sample volume. A Monte Carlo electron and photon transport code (CYLTRAN) was used to compute peak and total efficiencies for various photon energies from 30 to 2,000 keV at 30 points throughout the sample volume. The geometry for these calculations included the various components of the detector and source along with the shielding. The associated coincidence summing corrections were computed at these 30 positions in the sample volume and then averaged for the whole source. The influence of the soil and the detector shielding on the efficiencies was investigated.
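The position-averaging step can be sketched for the simplest case, a two-step cascade where the γ1 peak is suppressed by summing-out. The correction factor at each point is 1/(1 − ε_t(γ2)), with ε_t the total efficiency for the coincident γ-ray; the efficiencies below are invented, not the CYLTRAN results.

```python
import numpy as np

# Hypothetical total efficiencies for the coincident gamma at 30
# points in the sample volume (close geometry, so a few percent each).
rng = np.random.default_rng(0)
eps_total_g2 = rng.uniform(0.02, 0.08, size=30)

# Summing-out for the gamma-1 peak of a simple two-step cascade:
# the observed peak area is reduced by (1 - eps_total(g2)), so the
# multiplicative correction is 1 / (1 - eps_total(g2)).
corrections = 1.0 / (1.0 - eps_total_g2)

# Average over the 30 positions, i.e. over a uniformly active source.
print(round(corrections.mean(), 4))
```

Real nuclides with multi-branch decay schemes need the full cascade bookkeeping, but the volume average over point corrections is the same idea.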

  17. Air sampling program at the Portsmouth Gaseous Diffusion Plant

    International Nuclear Information System (INIS)

    Hulett, S.H.

    1975-01-01

    An extensive air sampling program has been developed at the Portsmouth Gaseous Diffusion Plant for monitoring the concentrations of radioactive aerosols present in the atmosphere on the plant site as well as in the environs. The program is designed to minimize exposures of employees and the environment to airborne radioactive particulates. Five different air sampling systems, utilizing either filtration or impaction, are employed for measuring airborne alpha and beta-gamma activity produced from 235U and 234Th, respectively. Two of the systems have particle selection capabilities: a personal sampler with a 10-mm nylon cyclone eliminates most particles larger than about 10 microns in diameter, and an Annular Kinetic Impactor collects particulates greater than 0.4 microns in diameter which have a density greater than 12-15 g/cm³. A Hi-Volume Air Sampler and an Eberline Model AIM-3 Scintillation Air Monitor are used in collecting short-term samples for assessing compliance with "ceiling" standards or peak concentration limits. A film-sort aperture IBM card system is utilized for continuous 8-hour samples. This sampling program has proven to be both practical and effective for assuring accurate monitoring of the airborne activity associated with plant operations.

  18. Electric field gradient calculation at atomic site of In implanted ZnO samples

    International Nuclear Information System (INIS)

    Abreu, Y.; Cruz, C. M.; Leyva, A.; Pinnera; Van Espen, P.; Perez, C.

    2011-01-01

    The electric field gradient (EFG) calculated for 111In → 111Cd implanted ZnO samples is reported. The study was made for ideal hexagonal ZnO structures and super-cells considering the In implantation environment at the cation site, using the WIEN2k code within the GGA(+U) approximation. The obtained EFG values are in good agreement with the experimental reports for ideal ZnO and 111In → 111Cd implanted structures, measured by perturbed angular correlation (PAC) and Mössbauer spectroscopy. The attribution of substitutional incorporation of 111In at the ZnO cation site after annealing was confirmed. (Author)

  19. GENGTC-JB: a computer program to calculate temperature distribution for cylindrical geometry capsule

    International Nuclear Information System (INIS)

    Someya, Hiroyuki; Kobayashi, Toshiki; Niimi, Motoji; Hoshiya, Taiji; Harayama, Yasuo

    1987-09-01

    In the design of JMTR irradiation capsules containing specimens, a program named GENGTC has generally been used to evaluate temperature distributions in the capsules. The program was originally written at ORNL (U.S.A.) and consists of very simple calculation methods. Because of these simple methods the program is easy to use and has many applications in capsule design. However, when the program was reviewed against current computer capabilities, it was decided to replace the original computing methods with more advanced ones and to simplify the complicated data input. The program was therefore upgraded with the aim of improving both the calculations and the input method. The present report describes the revised calculation methods and gives an input/output guide for the upgraded program. (author)

  20. Development and benchmark verification of a parallelized Monte Carlo burnup calculation program MCBMPI

    International Nuclear Information System (INIS)

    Yang Wankui; Liu Yaoguang; Ma Jimin; Yang Xin; Wang Guanbo

    2014-01-01

    MCBMPI, a parallelized burnup calculation program, was developed. The program is modularized. The neutron transport calculation module employs the parallelized MCNP5 program MCNP5MPI, and the burnup calculation module employs ORIGEN2, with an MPI parallel zone-decomposition strategy. The program system consists only of MCNP5MPI and an interface subroutine. The interface subroutine achieves three main functions, i.e. zone decomposition, nuclide transfer and decay, and data exchange with MCNP5MPI. The program was verified with the Pressurized Water Reactor (PWR) cell burnup benchmark; the results showed that the program can be applied to burnup calculations of multiple zones and that the computational efficiency can be significantly improved with the development of computer hardware. (authors)

  1. Communication: importance sampling including path correlation in semiclassical initial value representation calculations for time correlation functions.

    Science.gov (United States)

    Pan, Feng; Tao, Guohua

    2013-03-07

    Full semiclassical (SC) initial value representation (IVR) for time correlation functions involves a double phase space average over a set of two phase points, each of which evolves along a classical path. Conventionally, the two initial phase points are sampled independently for all degrees of freedom (DOF) in the Monte Carlo procedure. Here, we present an efficient importance sampling scheme by including the path correlation between the two initial phase points for the bath DOF, which greatly improves the performance of the SC-IVR calculations for large molecular systems. Satisfactory convergence in the study of quantum coherence in vibrational relaxation has been achieved for a benchmark system-bath model with up to 21 DOF.

  2. A versatile program for the calculation of linear accelerator room shielding.

    Science.gov (United States)

    Hassan, Zeinab El-Taher; Farag, Nehad M; Elshemey, Wael M

    2018-03-22

    This work aims at designing a computer program to calculate the necessary amount of shielding for a given or proposed linear accelerator room design in radiotherapy. The program (Shield Calculation in Radiotherapy, SCR) has been developed using Microsoft Visual Basic. It applies the treatment room shielding calculations of NCRP report no. 151 to calculate proper shielding thicknesses for a given linear accelerator treatment room design. The program is composed of six main user-friendly interfaces. The first enables the user to upload their choice of treatment room design and to measure the distances required for shielding calculations. The second interface enables the user to calculate the primary barrier thickness in case of three-dimensional conventional radiotherapy (3D-CRT), intensity modulated radiotherapy (IMRT) and total body irradiation (TBI). The third interface calculates the required secondary barrier thickness due to both scattered and leakage radiation. The fourth and fifth interfaces provide a means to calculate the photon dose equivalent for low and high energy radiation, respectively, in door and maze areas. The sixth interface enables the user to calculate the skyshine radiation for photons and neutrons. The SCR program has been successfully validated, precisely reproducing all of the calculated examples presented in NCRP report no. 151 in a simple and fast manner. Moreover, it easily performed the same calculations for a test design that was also calculated manually, and produced the same results. The program includes a new and important feature that is the ability to calculate required treatment room thickness in case of IMRT and TBI. It is characterised by simplicity, precision, data saving, printing and retrieval, in addition to providing a means for uploading and testing any proposed treatment room shielding design. The SCR program provides comprehensive, simple, fast and accurate room shielding calculations in radiotherapy.
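The primary-barrier step of such a program follows the standard NCRP Report No. 151 relations: the required transmission is B = P·d²/(W·U·T), and the barrier thickness comes from the number of tenth-value layers, t = TVL1 + (n − 1)·TVLe. A minimal sketch, with all parameter values invented for illustration rather than taken from the paper:

```python
import math

def primary_barrier_thickness(P, d, W, U, T, tvl1, tvle):
    """NCRP Report No. 151 primary-barrier calculation.
    P          : shielding design goal at the protected point (Sv/week)
    d          : target-to-point distance (m)
    W          : workload at 1 m (Gy/week)
    U, T       : use factor and occupancy factor
    tvl1, tvle : first and equilibrium tenth-value layers (m)"""
    B = P * d**2 / (W * U * T)      # required barrier transmission
    n = math.log10(1.0 / B)         # number of tenth-value layers
    return tvl1 + (n - 1.0) * tvle

# Illustrative numbers: a 6 MV beam behind ordinary concrete.
t = primary_barrier_thickness(P=1.2e-4, d=6.3, W=450.0, U=0.25,
                              T=1.0, tvl1=0.37, tvle=0.33)
print(round(t, 2))  # barrier thickness in metres
```

The secondary-barrier, maze, and skyshine interfaces described above apply analogous report formulas with scatter fractions and leakage terms in place of the primary workload.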

  3. Development, application and current status of the Reactor Imitator calculation program

    International Nuclear Information System (INIS)

    Aver'yanova, S.P.; Kovel', A.I.; Mamichev, V.V.; Filimonov, P.E.

    2008-01-01

    Features of the Reactor Imitator (IR) calculation program for simulating WWER-1000 operation are discussed. It is noted that using IR at an NPP allows the design program (BIPR-7) to operate on-line. This offers a new means, on the one hand, for efficient prediction and information support of the operator and, on the other hand, for verification and development of the calculation scheme and neutron-physics model of the WWER-1000 design program [ru

  4. DEVELOPMENT OF A PROGRAM MODULE FOR CALCULATING THE DEPOSITION RATE OF TITANIUM PLASMA IN A TECHNOLOGICAL GAS ENVIRONMENT

    Directory of Open Access Journals (Sweden)

    S. A. Ivaschenko

    2006-01-01

    The program module has been developed on the basis of the MATLAB application package. It allows the deposition rate of the coating to be calculated over the cross-section of the plasma stream, taking into account the magnetic field of a stabilizing coil, and the obtained deposition rate to be corrected for the value of the negative accelerating potential, the arc current, and the technological gas pressure. The program provides visualization of the calculation results.

  5. Free Energy Calculations using a Swarm-Enhanced Sampling Molecular Dynamics Approach.

    Science.gov (United States)

    Burusco, Kepa K; Bruce, Neil J; Alibay, Irfan; Bryce, Richard A

    2015-10-26

    Free energy simulations are an established computational tool in modelling chemical change in the condensed phase. However, sampling of kinetically distinct substates remains a challenge to these approaches. As a route to addressing this, we link the methods of thermodynamic integration (TI) and swarm-enhanced sampling molecular dynamics (sesMD), where simulation replicas interact cooperatively to aid transitions over energy barriers. We illustrate the approach by using alchemical alkane transformations in solution, comparing them with the multiple independent trajectory TI (IT-TI) method. Free energy changes for transitions computed by using IT-TI grew increasingly inaccurate as the intramolecular barrier was heightened. By contrast, swarm-enhanced sampling TI (sesTI) calculations showed clear improvements in sampling efficiency, leading to more accurate computed free energy differences, even in the case of the highest barrier height. The sesTI approach, therefore, has potential in addressing chemical change in systems where conformations exist in slow exchange. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  6. A Sample Calculation of Tritium Production and Distribution at VHTR by using TRITGO Code

    International Nuclear Information System (INIS)

    Park, Ik Kyu; Kim, D. H.; Lee, W. J.

    2007-03-01

    TRITGO code was developed for estimating the tritium production and distribution of high temperature gas cooled reactors (HTGR), especially GTMHR350 by General Atomics. In this study, the tritium production and distribution of NHDD was analyzed using the TRITGO code. The TRITGO code was improved by a simple method to calculate the tritium amount in the IS loop. The improved TRITGO input for the sample calculation was prepared based on GTMHR600, because NHDD has been designed with reference to GTMHR600. The GTMHR350 input related to the tritium distribution was used directly. The calculated tritium activity in the hydrogen produced in the IS loop is 0.56 Bq/g-H2. This is a very satisfactory result considering that the tritium activity limit of the Japanese Regulation Guide is 5.6 Bq/g-H2. The basic system to analyze the tritium production and distribution using TRITGO was successfully constructed. However, some uncertainties remain in the tritium distribution models and in the suggested method for the IS loop, and the current input is not for NHDD but for GTMHR600. A qualitative analysis of the distribution model and the IS loop model and a quantitative analysis of the input should be done in the future

  7. A Sample Calculation of Tritium Production and Distribution at VHTR by using TRITGO Code

    Energy Technology Data Exchange (ETDEWEB)

    Park, Ik Kyu; Kim, D. H.; Lee, W. J

    2007-03-15

    TRITGO code was developed for estimating the tritium production and distribution of high temperature gas cooled reactors (HTGR), especially GTMHR350 by General Atomics. In this study, the tritium production and distribution of NHDD was analyzed using the TRITGO code. The TRITGO code was improved by a simple method to calculate the tritium amount in the IS loop. The improved TRITGO input for the sample calculation was prepared based on GTMHR600, because NHDD has been designed with reference to GTMHR600. The GTMHR350 input related to the tritium distribution was used directly. The calculated tritium activity in the hydrogen produced in the IS loop is 0.56 Bq/g-H2. This is a very satisfactory result considering that the tritium activity limit of the Japanese Regulation Guide is 5.6 Bq/g-H2. The basic system to analyze the tritium production and distribution using TRITGO was successfully constructed. However, some uncertainties remain in the tritium distribution models and in the suggested method for the IS loop, and the current input is not for NHDD but for GTMHR600. A qualitative analysis of the distribution model and the IS loop model and a quantitative analysis of the input should be done in the future.

  8. Cell verification of parallel burnup calculation program MCBMPI based on MPI

    International Nuclear Information System (INIS)

    Yang Wankui; Liu Yaoguang; Ma Jimin; Wang Guanbo; Yang Xin; She Ding

    2014-01-01

    The parallel burnup calculation program MCBMPI was developed. The program is modularized. The parallel MCNP5 program MCNP5MPI is employed as the neutron transport calculation module, and a composite of three solution methods is used to solve the burnup equations, i.e. the matrix exponential technique, the TTA analytical solution, and Gauss-Seidel iteration. An MPI parallel zone-decomposition strategy is adopted in the program. The program system consists only of MCNP5MPI and a burnup subroutine. The latter achieves three main functions, i.e. zone decomposition, nuclide transfer and decay, and data exchange with MCNP5MPI. The program was verified with the pressurized water reactor (PWR) cell burnup benchmark. The results show that the program can be applied to burnup calculations of multiple zones and that the computational efficiency can be significantly improved with the development of computer hardware. (authors)

  9. A systematic examination of a random sampling strategy for source apportionment calculations.

    Science.gov (United States)

    Andersson, August

    2011-12-15

    Estimating the relative contributions from multiple potential sources of a specific component in a mixed environmental matrix is a general challenge in diverse fields such as atmospheric, environmental and earth sciences. Perhaps the most common strategy for tackling such problems is by setting up a system of linear equations for the fractional influence of different sources. Even though an algebraic solution of this approach is possible for the common situation with N+1 sources and N source markers, such methodology introduces a bias, since it is implicitly assumed that the calculated fractions and the corresponding uncertainties are independent of the variability of the source distributions. Here, a random sampling (RS) strategy for accounting for such statistical bias is examined by investigating rationally designed synthetic data sets. This random sampling methodology is found to be robust and accurate with respect to reproducibility and predictability. This method is also compared to a numerical integration solution for a two-source situation where source variability also is included. A general observation from this examination is that the variability of the source profiles not only affects the calculated precision but also the mean/median source contributions. Copyright © 2011 Elsevier B.V. All rights reserved.
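The N+1 sources with N markers setup reduces to a linear mixing equation, and the random-sampling strategy repeatedly redraws the source end-members and re-solves it. A minimal two-source, one-marker sketch; all signature values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Two sources, one marker (N + 1 = 2 sources, N = 1 marker).
mu_a, sd_a = -27.0, 1.0     # source A marker signature (mean, spread)
mu_b, sd_b = -12.0, 1.5     # source B marker signature
mixed = -18.0               # measured value in the mixed sample

# Random-sampling strategy: draw end-members from their distributions
# and solve  f_a * a + (1 - f_a) * b = mixed  for each draw.
draws = 100_000
a = rng.normal(mu_a, sd_a, draws)
b = rng.normal(mu_b, sd_b, draws)
f_a = (mixed - b) / (a - b)
f_a = f_a[(f_a >= 0.0) & (f_a <= 1.0)]   # keep physical solutions

# The ensemble of solutions gives both the central estimate and the
# uncertainty induced by the source variability.
print(round(float(np.median(f_a)), 3), round(float(f_a.std()), 3))
```

As the abstract notes, the spread of the end-member distributions shifts not only the width but also the median of the resulting fraction, which an algebraic point solution would miss.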

  10. Calculation of depleted uranium concentration in dental fillings samples using the nuclear track detector CR-39

    International Nuclear Information System (INIS)

    Mahdi, K. H.; Subhi, A. T.; Tawfiq, N. F.

    2012-12-01

    The purpose of this study is to determine the concentration of depleted uranium in dental filling samples, which were obtained from hospitals and dental material suppliers in Iraq. Eight samples of two different filling types, a lead-based filling (amalgam) and a composite filling (plastic), were examined. The concentrations of depleted uranium were determined using the nuclear track detector CR-39 by recording the tracks left by fission fragments from the 238U(n,f) reaction. The samples were bombarded by neutrons emitted from a 241Am-Be source with a flux of 10⁵ n·cm⁻²·s⁻¹. Etching to reveal the fission-fragment tracks was carried out for 5 hours in 6.25 N NaOH solution at 60 °C. The concentrations of depleted uranium were calculated by comparison with standard samples. The results show that the weighted average uranium concentration in the filling samples is (5.54 ± 1.05) ppm for the lead-based filling (amalgam) and (5.33 ± 0.6) ppm for the composite filling (plastic). The hazard index, the absorbed dose and the effective dose for these concentrations were determined. The effective dose for both the bone surface and the skin (the areas most affected by these dental restorations) is 0.56 mSv/y for the lead-based filling (amalgam) and 0.54 mSv/y for the composite filling (plastic). The study found that the highest effective dose, 0.68 mSv/y for an amalgam filling specimen, is below the limit for exposure of the general public set by the World Health Organization (WHO), 1 mSv/y. (Author)
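The comparison with standard samples reduces to a proportionality: with sample and standard irradiated and etched identically, uranium concentration scales with the induced-fission track density. A minimal sketch with invented track densities (not the paper's raw data):

```python
def uranium_concentration(track_density_sample, track_density_std,
                          conc_std_ppm):
    """Fission-track comparator method: for identical neutron fluence
    and etching conditions, C_x = C_s * (rho_x / rho_s), where rho is
    the fission-fragment track density."""
    return conc_std_ppm * track_density_sample / track_density_std

# Illustrative numbers: a 10 ppm standard yielding 2000 tracks/cm^2
# and a filling sample yielding 1108 tracks/cm^2.
print(uranium_concentration(1108.0, 2000.0, 10.0))  # 5.54 ppm
```

The dose quantities quoted in the abstract then follow from the concentration via standard dose-conversion coefficients.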

  11. SCMAG series of programs for calculating superconducting dipole and quadrupole magnets

    International Nuclear Information System (INIS)

    Green, M.A.

    1974-10-01

    Programs SCMAG1, SCMAG2, SCMAG3, and SCMAG4 are a group of programs used to design and calculate the characteristics of conductor dominated superconducting dipole and quadrupole magnets. These magnets are used to bend and focus beams of high energy particles and are being used to design the superconducting magnets for the LBL ESCAR accelerator. The four programs are briefly described. (TFD)

  12. Easy-to-use application programs for decay heat and delayed neutron calculations on personal computers

    Energy Technology Data Exchange (ETDEWEB)

    Oyamatsu, Kazuhiro [Nagoya Univ. (Japan)

    1998-03-01

    Application programs for personal computers are developed to calculate the decay heat power and delayed neutron activity from fission products. The main programs can be used on any computer, from personal computers to mainframes, because their sources are written in Fortran. These programs have user-friendly interfaces so that they can be used easily, not only for research activities but also for educational purposes. (author)

  13. Program for calculating multi-component high-intensity ion beam transport

    International Nuclear Information System (INIS)

    Kazarinov, N.Yu.; Prejzendorf, V.A.

    1985-01-01

    The CANAL program for calculating the transport of high-intensity beams containing ions with different charges in a channel consisting of dipole magnets and quadrupole lenses is described. The calculation is based on equations, derived by the method of moments of the distribution function, that describe the variations along the channel of the local mass centres and the r.m.s. transverse sizes of the beams with different charges. The program is adapted for the CDC-6500 and SM-4 computers. The program operates in an interactive mode, permitting the user to vary the parameters of any channel element and quickly choose the optimum variant in the course of the calculation. The calculation time on the CDC-6500 computer for a 30-40 m channel with an integration step of 1 cm is about 1 min. The program is used for calculating the channel for uranium ion beam injection from the collective accelerator into the heavy-ion synchrotron

  14. TEMP-M program for thermal-hydraulic calculation of fast reactor fuel assemblies

    International Nuclear Information System (INIS)

    Bogoslovskaya, C.P.; Sorokin, A.P.; Tikhomirov, B.B.; Titov, P.A.; Ushakov, P.A.

    1983-01-01

    The TEMP-M program (Fortran, BESM-6 computer) for thermal-hydraulic calculation of fast reactor fuel assemblies is described. Results of the calculation of the temperature field in a 127-fuel-element assembly of the BN-600 reactor, performed with the TEMP-M program, are considered as an example. The algorithm realized in the program enables calculation of the distributions of coolant heating, fuel element temperature (over perimeter and length) and assembly shell temperature. The distribution of coolant heating in the assembly channels is determined from the solution of a system of balance equations which accounts for interchannel exchange and nonadiabatic conditions on the assembly shell. The TEMP-M program gives the information necessary for calculating the strength and serviceability of fast reactor core elements, and serves as an effective instrument for design calculations of reactor cores and for analyzing the thermal-hydraulic characteristics of operating reactor fuel assemblies
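The channel balance system with interchannel exchange can be sketched as a simple axial march. This is a generic single-phase illustration of the equation structure, not the TEMP-M model itself; all channel data are invented.

```python
import numpy as np

# Three parallel coolant channels; axial march of the enthalpy balance
#   dH_i/dz = q_i / G_i + sum_j beta_ij * (H_j - H_i),
# where beta_ij is an interchannel mixing coefficient (1/m).
nz, dz = 100, 0.01                           # 1 m heated length
q = np.array([30e3, 40e3, 35e3])             # linear heat rate, W/m
G = np.array([0.12, 0.12, 0.12])             # channel mass flow, kg/s
beta = 0.5 * (np.ones((3, 3)) - np.eye(3))   # mixing matrix, 1/m

H = np.full(3, 1.3e6)                        # inlet enthalpy, J/kg
for _ in range(nz):
    exchange = beta @ H - beta.sum(axis=1) * H
    H = H + dz * (q / G + exchange)

# Enthalpy rise per channel; mixing pulls the channels together
# while the total heat input is conserved.
print((H - 1.3e6).round(-3))
```

With symmetric mixing coefficients the exchange terms sum to zero over the bundle, so the total enthalpy rise matches the total heat input, which is a useful check on any such solver.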

  15. A procedure and program to calculate shuttle mask advantage

    Science.gov (United States)

    Balasinski, A.; Cetin, J.; Kahng, A.; Xu, X.

    2006-10-01

    A well-known recipe for reducing mask cost component in product development is to place non-redundant elements of layout databases related to multiple products on one reticle plate [1,2]. Such reticles are known as multi-product, multi-layer, or, in general, multi-IP masks. The composition of the mask set should minimize not only the layout placement cost, but also the cost of the manufacturing process, design flow setup, and product design and introduction to market. An important factor is the quality check which should be expeditious and enable thorough visual verification to avoid costly modifications once the data is transferred to the mask shop. In this work, in order to enable the layer placement and quality check procedure, we proposed an algorithm where mask layers are first lined up according to the price and field tone [3]. Then, depending on the product die size, expected fab throughput, and scribeline requirements, the subsequent product layers are placed on the masks with different grades. The actual reduction of this concept to practice allowed us to understand the tradeoffs between the automation of layer placement and setup related constraints. For example, the limited options of the numbers of layer per plate dictated by the die size and other design feedback, made us consider layer pairing based not only on the final price of the mask set, but also on the cost of mask design and fab-friendliness. We showed that it may be advantageous to introduce manual layer pairing to ensure that, e.g., all interconnect layers would be placed on the same plate, allowing for easy and simultaneous design fixes. Another enhancement was to allow some flexibility in mixing and matching of the layers such that non-critical ones requiring low mask grade would be placed in a less restrictive way, to reduce the count of orphan layers. In summary, we created a program to automatically propose and visualize shuttle mask architecture for design verification, with

  16. GoSam: A program for automated one-loop calculations

    International Nuclear Information System (INIS)

    Cullen, G; Greiner, N; Heinrich, G; Mastrolia, P; Reiter, T; Luisoni, G; Ossola, G; Tramontano, F

    2012-01-01

    The program package GoSam is presented which aims at the automated calculation of one-loop amplitudes for multi-particle processes. The amplitudes are generated in terms of Feynman diagrams and can be reduced using either D-dimensional integrand-level decomposition or tensor reduction, or a combination of both. GoSam can be used to calculate one-loop corrections to both QCD and electroweak theory, and model files for theories Beyond the Standard Model can be linked as well. A standard interface to programs calculating real radiation is also included. The flexibility of the program is demonstrated by various examples.

  17. GoSam. A program for automated one-loop calculations

    International Nuclear Information System (INIS)

    Cullen, G.; Greiner, N.; Heinrich, G.; Reiter, T.; Luisoni, G.

    2011-11-01

    The program package GoSam is presented which aims at the automated calculation of one-loop amplitudes for multi-particle processes. The amplitudes are generated in terms of Feynman diagrams and can be reduced using either D-dimensional integrand-level decomposition or tensor reduction, or a combination of both. GoSam can be used to calculate one-loop corrections to both QCD and electroweak theory, and model files for theories Beyond the Standard Model can be linked as well. A standard interface to programs calculating real radiation is also included. The flexibility of the program is demonstrated by various examples. (orig.)

  18. GoSam. A program for automated one-loop calculations

    Energy Technology Data Exchange (ETDEWEB)

    Cullen, G. [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany); Greiner, N.; Heinrich, G.; Reiter, T. [Max-Planck-Institut fuer Physik, Muenchen (Germany); Luisoni, G. [Durham Univ. (United Kingdom). Inst. for Particle Physics Phenomenology; Mastrolia, P. [Max-Planck-Institut fuer Physik, Muenchen (Germany); Padua Univ. (Italy). Dipt. di Fisica; Ossola, G. [City Univ. of New York, NY (United States). New York City College of Technology; Tramontano, F. [European Organization for Nuclear Research (CERN), Geneva (Switzerland)

    2011-11-15

    The program package GoSam is presented which aims at the automated calculation of one-loop amplitudes for multi-particle processes. The amplitudes are generated in terms of Feynman diagrams and can be reduced using either D-dimensional integrand-level decomposition or tensor reduction, or a combination of both. GoSam can be used to calculate one-loop corrections to both QCD and electroweak theory, and model files for theories Beyond the Standard Model can be linked as well. A standard interface to programs calculating real radiation is also included. The flexibility of the program is demonstrated by various examples. (orig.)

  19. ROBOT3: a computer program to calculate the in-pile three-dimensional bowing of cylindrical fuel rods (AWBA Development Program)

    International Nuclear Information System (INIS)

    Kovscek, S.E.; Martin, S.E.

    1982-10-01

    ROBOT3 is a FORTRAN computer program which is used in conjunction with the CYGRO5 computer program to calculate the time-dependent inelastic bowing of a fuel rod using an incremental finite element method. The fuel rod is modeled as a viscoelastic beam whose material properties are derived as perturbations of the CYGRO5 axisymmetric model. Fuel rod supports are modeled as displacement, force, or spring-type nodal boundary conditions. The program input is described and a sample problem is given

  20. Monte Carlo Calculation of Thermal Neutron Inelastic Scattering Cross Section Uncertainties by Sampling Perturbed Phonon Spectra

    Science.gov (United States)

    Holmes, Jesse Curtis

    established that depends on uncertainties in the physics models and methodology employed to produce the DOS. Through Monte Carlo sampling of perturbations from the reference phonon spectrum, an S(alpha, beta) covariance matrix may be generated. In this work, density functional theory and lattice dynamics in the harmonic approximation are used to calculate the phonon DOS for hexagonal crystalline graphite. This form of graphite is used as an example material for the purpose of demonstrating procedures for analyzing, calculating and processing thermal neutron inelastic scattering uncertainty information. Several sources of uncertainty in thermal neutron inelastic scattering calculations are examined, including sources which cannot be directly characterized through a description of the phonon DOS uncertainty, and their impacts are evaluated. Covariances for hexagonal crystalline graphite S(alpha, beta) data are quantified by coupling the standard methodology of LEAPR with a Monte Carlo sampling process. The mechanics of efficiently representing and processing this covariance information is also examined. Finally, with appropriate sensitivity information, it is shown that an S(alpha, beta) covariance matrix can be propagated to generate covariance data for integrated cross sections, secondary energy distributions, and coupled energy-angle distributions. This approach enables a complete description of thermal neutron inelastic scattering cross section uncertainties which may be employed to improve the simulation of nuclear systems.
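
    The sampling loop described in this abstract can be sketched in miniature: draw many perturbed spectra around a reference, evaluate a derived quantity for each draw, and form the empirical covariance matrix. The toy DOS shape, the 5% perturbation level, and the three-moment stand-in for the S(alpha, beta) evaluation below are illustrative assumptions, not the graphite data of the work.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a phonon density of states on a uniform energy grid
# (assumption: the real work derives the DOS for graphite from DFT).
energies = np.linspace(0.01, 0.2, 50)            # eV
de = energies[1] - energies[0]
dos_ref = energies**2 * np.exp(-energies / 0.05)
dos_ref /= dos_ref.sum() * de                    # normalize to unit area

def derived_quantity(dos):
    """Three spectral moments as a stand-in for the S(alpha, beta) data."""
    dos = dos / (dos.sum() * de)                 # renormalize each sample
    return np.array([(dos * energies**k).sum() * de for k in (1, 2, 3)])

# Monte Carlo sampling of perturbed spectra -> covariance of derived data.
samples = []
for _ in range(2000):
    perturbed = dos_ref * rng.normal(1.0, 0.05, size=dos_ref.size)  # 5% noise
    samples.append(derived_quantity(np.clip(perturbed, 0.0, None)))
cov = np.cov(np.asarray(samples), rowvar=False)  # empirical covariance matrix
print(cov.shape)
```

    The same machinery scales to a full S(alpha, beta) grid; only the size of the derived-quantity vector changes.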

  1. EML Surface Air Sampling Program, 1990--1993 data

    Energy Technology Data Exchange (ETDEWEB)

    Larsen, R.J.; Sanderson, C.G.; Kada, J.

    1995-11-01

    Measurements of the concentrations of specific atmospheric radionuclides in air filter samples collected for the Environmental Measurements Laboratory's Surface Air Sampling Program (SASP) during 1990--1993, with the exception of April 1993, indicate that anthropogenic radionuclides, in both hemispheres, were at or below the lower limits of detection for the sampling and analytical techniques that were used to collect and measure them. The occasional detection of {sup 137}Cs in some air filter samples may have resulted from resuspension of previously deposited debris. Following the April 6, 1993 accident and release of radionuclides into the atmosphere at a reprocessing plant in the Tomsk-7 military nuclear complex located 16 km north of the Siberian city of Tomsk, Russia, weekly air filter samples from Barrow, Alaska; Thule, Greenland and Moosonee, Canada were selected for special analyses. The naturally occurring radioisotopes that the authors measure, {sup 7}Be and {sup 210}Pb, continue to be detected in most air filter samples. Variations in the annual mean concentrations of {sup 7}Be at many of the sites appear to result primarily from changes in the atmospheric production rate of this cosmogenic radionuclide. Short-term variations in the concentrations of {sup 7}Be and {sup 210}Pb continued to be observed at many sites at which weekly air filter samples were analyzed. The monthly gross gamma-ray activity and the monthly mean surface air concentrations of {sup 7}Be, {sup 95}Zr, {sup 137}Cs, {sup 144}Ce, and {sup 210}Pb measured at sampling sites in SASP during 1990--1993 are presented. The weekly mean surface air concentrations of {sup 7}Be, {sup 95}Zr, {sup 137}Cs, {sup 144}Ce, and {sup 210}Pb for samples collected during 1990--1993 are given for 17 sites.

  2. EML Surface Air Sampling Program, 1990--1993 data

    International Nuclear Information System (INIS)

    Larsen, R.J.; Sanderson, C.G.; Kada, J.

    1995-11-01

    Measurements of the concentrations of specific atmospheric radionuclides in air filter samples collected for the Environmental Measurements Laboratory's Surface Air Sampling Program (SASP) during 1990--1993, with the exception of April 1993, indicate that anthropogenic radionuclides, in both hemispheres, were at or below the lower limits of detection for the sampling and analytical techniques that were used to collect and measure them. The occasional detection of 137 Cs in some air filter samples may have resulted from resuspension of previously deposited debris. Following the April 6, 1993 accident and release of radionuclides into the atmosphere at a reprocessing plant in the Tomsk-7 military nuclear complex located 16 km north of the Siberian city of Tomsk, Russia, weekly air filter samples from Barrow, Alaska; Thule, Greenland and Moosonee, Canada were selected for special analyses. The naturally occurring radioisotopes that the authors measure, 7 Be and 210 Pb, continue to be detected in most air filter samples. Variations in the annual mean concentrations of 7 Be at many of the sites appear to result primarily from changes in the atmospheric production rate of this cosmogenic radionuclide. Short-term variations in the concentrations of 7 Be and 210 Pb continued to be observed at many sites at which weekly air filter samples were analyzed. The monthly gross gamma-ray activity and the monthly mean surface air concentrations of 7 Be, 95 Zr, 137 Cs, 144 Ce, and 210 Pb measured at sampling sites in SASP during 1990--1993 are presented. The weekly mean surface air concentrations of 7 Be, 95 Zr, 137 Cs, 144 Ce, and 210 Pb for samples collected during 1990--1993 are given for 17 sites

  3. Experimental verification of photon: A program for use in x-ray shielding calculations

    International Nuclear Information System (INIS)

    Brauer, E.; Thomlinson, W.

    1987-01-01

    At the National Synchrotron Light Source, a computer program named PHOTON has been developed to calculate radiation dose values around a beam line. The output from the program must be an accurate guide to beam line shielding. To test the program, a series of measurements of radiation dose were carried out using existing beam lines; the results were compared to the theoretical calculations of PHOTON. Several different scattering geometries, scattering materials, and sets of walls and shielding materials were studied. Results of the measurements allowed many advances to be made in the program, ultimately resulting in good agreement between the theory and experiment. 3 refs., 6 figs
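
    A shielding-dose program of this kind must at minimum reproduce simple exponential attenuation, which is easy to check by hand. The sketch below is a narrow-beam calculation with illustrative round numbers (a buildup factor of 1 is assumed); the coefficient and intensities are not PHOTON inputs or outputs.

```python
import math

# Narrow-beam attenuation: transmitted intensity I = I0 * exp(-mu * t).
mu_lead = 0.77     # 1/cm, order of magnitude for ~1 MeV photons in lead
i0 = 1.0e6         # incident photon rate, 1/s

for t_cm in (0.5, 1.0, 2.0, 5.0):
    i = i0 * math.exp(-mu_lead * t_cm)
    print(f"{t_cm:4.1f} cm Pb -> {i:.3e} /s")
```

    Broad-beam geometries, as in a real beam line hutch, additionally need a buildup factor greater than 1.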

  4. KAPSIES: A program for the calculation of multi-step direct reaction cross sections

    International Nuclear Information System (INIS)

    Koning, A.J.; Akkermans, J.M.

    1994-09-01

    We present a program for the calculation of continuum cross sections, spectra, angular distributions and analyzing powers according to various quantum-mechanical theories for statistical multi-step direct nuclear reactions. (orig.)

  5. CRYOCOL a computer program to calculate the cryogenic distillation of hydrogen isotopes

    International Nuclear Information System (INIS)

    Douglas, S.R.

    1993-02-01

    This report describes the computer model and mathematical method coded into the AECL Research computer program CRYOCOL. The purpose of CRYOCOL is to calculate the separation of hydrogen isotopes by cryogenic distillation. (Author)

  6. Forward flux sampling calculation of homogeneous nucleation rates from aqueous NaCl solutions.

    Science.gov (United States)

    Jiang, Hao; Haji-Akbari, Amir; Debenedetti, Pablo G; Panagiotopoulos, Athanassios Z

    2018-01-28

    We used molecular dynamics simulations and the path sampling technique known as forward flux sampling to study homogeneous nucleation of NaCl crystals from supersaturated aqueous solutions at 298 K and 1 bar. Nucleation rates were obtained for a range of salt concentrations for the Joung-Cheatham NaCl force field combined with the Extended Simple Point Charge (SPC/E) water model. The calculated nucleation rates are significantly lower than the available experimental measurements. The estimates for the nucleation rates in this work do not rely on classical nucleation theory, but the pathways observed in the simulations suggest that the nucleation process is better described by classical nucleation theory than an alternative interpretation based on Ostwald's step rule, in contrast to some prior simulations of related models. In addition to the size of NaCl nucleus, we find that the crystallinity of a nascent cluster plays an important role in the nucleation process. Nuclei with high crystallinity were found to have higher growth probability and longer lifetimes, possibly because they are less exposed to hydration water.
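
    The rate estimate in forward flux sampling factorizes into the flux through the first interface times the conditional probabilities of reaching each subsequent interface before returning to the basin. A minimal sketch of that arithmetic with invented numbers (no molecular dynamics involved):

```python
# Forward flux sampling combines the flux through the first interface with
# conditional crossing probabilities between successive interfaces:
#   rate = flux0 * P(1|0) * P(2|1) * ... * P(n|n-1)
flux0 = 2.5e-4                              # crossings of lambda_0 per unit time (illustrative)
p_cross = [0.12, 0.30, 0.45, 0.60, 0.80]    # P(lambda_{i+1} | lambda_i), illustrative

rate = flux0
for p in p_cross:
    rate *= p
print(f"nucleation rate ~ {rate:.3e} (same units as flux0)")
```

    In practice each conditional probability is itself estimated by firing many trial trajectories from configurations stored at the previous interface.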

  7. Study on the output from programs in calculating lattice with transverse coupling

    International Nuclear Information System (INIS)

    Xu Jianming

    1994-01-01

    The SYNCH and MAD outputs for lattices with coordinate rotation have been studied. The results show that the four dispersion functions given by the SYNCH output are wrong in this case, and there are large discrepancies between the Twiss parameters given by the two programs. One has to be careful when using these programs to calculate or match lattices with coordinate rotations (coupling between the two transverse motions) in order to avoid wrong results

  8. FISPRO: a simplified computer program for general fission product formation and decay calculations

    International Nuclear Information System (INIS)

    Jiacoletti, R.J.; Bailey, P.G.

    1979-08-01

    This report describes a computer program that solves a general form of the fission product formation and decay equations over given time steps for arbitrary decay chains composed of up to three nuclides. All fission product data and operational history data are input through user-defined input files. The program is very useful in the calculation of fission product activities of specific nuclides for various reactor operational histories and accident consequence calculations
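
    The formation-and-decay equations such a program solves reduce, for a short chain with constant fission production, to a system that can be integrated directly. The sketch below uses hypothetical decay constants and a simple explicit time step (not FISPRO's method or data), and checks the parent nuclide against its analytic saturation curve.

```python
import numpy as np

# Two-step decay chain A -> B -> C with constant fission production of A,
# integrated with small explicit time steps (hypothetical rates for illustration).
lam_a, lam_b = 1e-4, 5e-5      # decay constants, 1/s
prod_a = 1e8                   # fission production rate of A, atoms/s
dt, t_end = 10.0, 1e5          # time step and end time, s
n_a = n_b = n_c = 0.0
for _ in range(int(t_end / dt)):
    da = prod_a - lam_a * n_a          # production minus decay of A
    db = lam_a * n_a - lam_b * n_b     # A feeds B, B decays
    dc = lam_b * n_b                   # B feeds stable C
    n_a += da * dt
    n_b += db * dt
    n_c += dc * dt

# Analytic saturation of A under constant production: (P/lambda)(1 - e^{-lambda t}).
analytic = prod_a / lam_a * (1 - np.exp(-lam_a * t_end))
print(n_a, analytic)
```

    The activity of any chain member is then just its decay constant times its inventory, which is the quantity needed for accident consequence calculations.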

  9. SAMPLE RESULTS FROM THE INTEGRATED SALT DISPOSITION PROGRAM MACROBATCH 4 TANK 21H QUALIFICATION SAMPLES

    Energy Technology Data Exchange (ETDEWEB)

    Peters, T.; Fink, S.

    2011-06-22

    Savannah River National Laboratory (SRNL) analyzed samples from Tank 21H to qualify them for use in the Integrated Salt Disposition Program (ISDP) Batch 4 processing. All sample results agree with expectations based on prior analyses where available. No issues with the projected Salt Batch 4 strategy are identified. This revision includes additional data points that were not available in the original issue of the document, such as additional plutonium results, the results of the monosodium titanate (MST) sorption test and the extraction, scrub, and strip (ESS) test. This report covers the revision to the Tank 21H qualification sample results for Macrobatch (Salt Batch) 4 of the Integrated Salt Disposition Program (ISDP). A previous document covers initial characterization, which includes results for a number of non-radiological analytes. These results were used to perform aluminum solubility modeling to determine the hydroxide needs for Salt Batch 4 to prevent the precipitation of solids. Sodium hydroxide was then added to Tank 21 and additional samples were pulled for the analyses discussed in this report. This work was specified by Task Technical Request and by Task Technical and Quality Assurance Plan (TTQAP).

  10. Aerosol sampling and Transport Efficiency Calculation (ASTEC) and application to surtsey/DCH aerosol sampling system: Code version 1.0: Code description and user's manual

    International Nuclear Information System (INIS)

    Yamano, N.; Brockmann, J.E.

    1989-05-01

    This report describes the features and use of the Aerosol Sampling and Transport Efficiency Calculation (ASTEC) code. The ASTEC code has been developed to assess aerosol transport efficiency in source term experiments at Sandia National Laboratories. The code also has broad application to aerosol sampling and transport efficiency calculations in general, as well as to aerosol transport considerations in nuclear reactor safety issues. 32 refs., 31 figs., 7 tabs

  11. Program system for calculating streaming neutron radiation field in reactor cavity

    International Nuclear Information System (INIS)

    He Zhongliang; Zhao Shu.

    1986-01-01

    The A23 neutron albedo data base, based on the Monte Carlo method, agrees well with the SAIL albedo data base. The RSCAM program system, using the Monte Carlo method with an albedo approach, is used to calculate the streaming neutron radiation field in the reactor cavity and containment operating hall. The dose rate distributions calculated with RSCAM in a square concrete duct agree well with experiments

  12. Computer program FPIP-REV calculates fission product inventory for U-235 fission

    Science.gov (United States)

    Brown, W. S.; Call, D. W.

    1967-01-01

    Computer program calculates fission product inventories and source strengths associated with the operation of U-235 fueled nuclear power reactor. It utilizes a fission-product nuclide library of 254 nuclides, and calculates the time dependent behavior of the fission product nuclides formed by fissioning of U-235.

  13. MP.EXE, a Calculation Program for Pressure Reciprocity Calibration of Microphones

    DEFF Research Database (Denmark)

    Rasmussen, Knud

    1998-01-01

    A computer program is described which calculates the pressure sensitivity of microphones based on measurements of the electrical transfer impedance in a reciprocity calibration set-up. The calculations are performed according to the International Standard IEC 61094-2. In addition a number of options...

  14. Procedure for obtaining neutron diffusion coefficients from neutron transport Monte Carlo calculations (AWBA Development Program)

    International Nuclear Information System (INIS)

    Gast, R.C.

    1981-08-01

    A procedure for defining diffusion coefficients from Monte Carlo calculations that yields coefficients suitable for use in neutron diffusion theory calculations is not readily obtained. This study provides a survey of the methods used to define diffusion coefficients from deterministic calculations and a discussion of why such traditional methods cannot be used in Monte Carlo. This study further provides the empirical procedure used for defining diffusion coefficients from the RCP01 Monte Carlo program

  15. COMPUTER PROGRAM FOR CALCULATION MICROCHANNEL HEAT EXCHANGERS FOR AIR CONDITIONING SYSTEMS

    Directory of Open Access Journals (Sweden)

    Olga V. Olshevska

    2016-08-01

    A computer program was created to calculate microchannel air condensers, reducing design time and enabling variant calculations. Software packages for the thermophysical properties of the working substance and the coolant, correlation equations for calculating heat transfer, aerodynamics and hydrodynamics, and the thermodynamic equations for irreversible losses and their minimization in the heat exchanger were used in its development. Borland Delphi 7 was used to create the software package.

  16. Thermal Hydraulic Fortran Program for Steady State Calculations of Plate Type Fuel Research Reactors

    International Nuclear Information System (INIS)

    Khedr, H.

    2008-01-01

    The safety assessment of research and power reactors is a continuous process over their life and requires verified and validated codes. Power reactor codes all over the world are well established and qualified against real measured data and qualified experimental facilities. These codes are usually sophisticated, require special skills, and consume much running time. On the other hand, most research reactor codes still require more data for validation and qualification. It is therefore beneficial for a regulatory body and for companies working in the area of research reactor assessment and design to have their own program that gives them a quick judgment. The present paper introduces a simple one-dimensional Fortran program called THDSN for steady-state best-estimate thermal hydraulic (TH) calculations of plate-type fuel RRs. Besides calculating the fuel and coolant temperature distributions and the pressure gradient in an average and a hot channel, the program calculates the safety limits and margins against the critical phenomena encountered in RRs, such as the burnout heat flux and the onset of flow instability. Well-known TH correlations for calculating the safety parameters are used. THDSN is verified by comparing its results for 2 and 10 MW benchmark reactors with those published in IAEA publications, and good agreement is found. The program results are also compared with those published for other programs such as PARET and TERMIC. An extension of the program to cover transient TH calculations is underway.
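
    The coolant temperature distribution such a steady-state code computes follows from a channel energy balance, m_dot cp dT = q'' P dz. A minimal sketch with assumed channel data and a uniform wall heat flux (real codes use axially shaped fluxes, plus correlations for the burnout and flow-instability margins):

```python
# Steady-state coolant temperature along a heated plate-type channel.
# All channel data below are assumptions for illustration.
m_dot = 0.25      # kg/s per channel
cp = 4180.0       # J/(kg K), water
q_flux = 4.0e5    # W/m^2, uniform wall heat flux (assumed)
width = 0.07      # m, heated width per plate face
length = 0.6      # m, channel length
t_in = 38.0       # deg C, inlet temperature

heated_perimeter = 2 * width          # both plate faces heat the channel
n = 100
dz = length / n
t = t_in
profile = []
for _ in range(n):
    t += q_flux * heated_perimeter * dz / (m_dot * cp)   # energy balance per step
    profile.append(t)
t_out = profile[-1]
print(f"outlet temperature = {t_out:.2f} C")
```

    The hot-channel calculation repeats the same balance with peaking factors applied to the heat flux.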

  17. Development of a program for calculating the cells of heavy water

    International Nuclear Information System (INIS)

    Calabrese, R.; Lerner, A.M.; Notari, C.

    1991-01-01

    We describe here a methodology to solve the transport equation in cluster-type fuel cells found in PHWRs. The general idea is inspired by the English lattice code WIMS-D4 and its associated library, although we have introduced innovations in both structure and contents. The different steps involved are the resonance calculation and the subsequent construction of the microscopic self-shielded cross sections for each isotope; the calculation of macroscopic cross sections per material and the condensation to a broader energy structure; and the solution of the two-dimensional discretized transport equation for the whole cell. The next step is the inclusion of a burnup routine. A program, ALFIN, was written in FORTRAN 77 with a modular structure. A sample problem was tested and the ALFIN results compared to those produced by WIMS-D4. The discrepancies observed are negligible, except for the resonance region, where the methods are different and in some aspects WIMS is clearly in error. (author)

  18. Temporally stratified sampling programs for estimation of fish impingement

    International Nuclear Information System (INIS)

    Kumar, K.D.; Griffith, J.S.

    1977-01-01

    Impingement monitoring programs often expend valuable and limited resources and fail to provide a dependable estimate of either total annual impingement or those biological and physicochemical factors affecting impingement. In situations where initial monitoring has identified "problem" fish species and the periodicity of their impingement, intensive sampling during periods of high impingement will maximize the information obtained. We use data gathered at two nuclear generating facilities in the southeastern United States to discuss techniques for designing such temporally stratified monitoring programs and their benefits and drawbacks. Of the possible temporal patterns in environmental factors within a calendar year, differences among seasons are most influential in the impingement of freshwater fishes in the Southeast. Data on the threadfin shad (Dorosoma petenense) and the role of seasonal temperature changes are utilized as an example to demonstrate ways of most efficiently and accurately estimating impingement of the species
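
    A temporally stratified estimate of annual impingement expands each stratum's sample mean by the number of time units the stratum represents, with sampling effort concentrated in the high-impingement stratum. The seasonal counts below are invented purely to show the arithmetic:

```python
# Temporally stratified estimator of an annual total: treat each season as a
# stratum, sample more nights where impingement is high, and expand each
# stratum mean by the nights it represents (all counts are illustrative).
strata = {
    # season: (nights in stratum, sampled nightly impingement counts)
    "winter": (90, [510, 640, 580, 720, 455, 605]),   # heavy sampling
    "spring": (92, [40, 25, 60]),
    "summer": (92, [5, 8]),
    "autumn": (91, [120, 90, 150]),
}

total = 0.0
for nights, counts in strata.values():
    mean = sum(counts) / len(counts)     # stratum sample mean per night
    total += nights * mean               # expand to the whole stratum
print(f"estimated annual impingement = {total:,.0f} fish")
```

    The variance of the estimate is likewise a stratum-weighted sum, so extra samples in the dominant winter stratum buy the most precision.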

  19. ALBEMO, a program for the calculation of the radiation transport in void volumes with reflecting walls

    International Nuclear Information System (INIS)

    Mueller, K.; Vossebrecker, H.

    The Monte Carlo program ALBEMO calculates the distribution of neutrons and gamma rays in void volumes bounded by reflecting walls in x, y, z coordinates. The program is based on the albedo method. The effect of significant simplifying assumptions is investigated. Comparisons with experiments show satisfactory agreement

  20. The Influence of Using TI-84 Calculators with Programs on Algebra I High Stakes Examinations

    Science.gov (United States)

    Spencer, Misty

    2013-01-01

    The purpose of this study was to determine if there was a significant difference in scores on the Mississippi Algebra I SATP2 when one group was allowed to use programs and the other group was not allowed to use programs on TI-84 calculators. An additional purpose of the study was also to determine if there was a significant difference in the…

  1. SCMAG series of programs for calculating superconducting dipole and quadrupole magnets

    International Nuclear Information System (INIS)

    Green, M.A.

    1974-01-01

    A general description is given of four computer programs for calculating the characteristics of superconducting magnets used in the bending and focusing of high-energy particle beams. The programs are being used in the design of magnets for the LBL ESCAR (Experimental Superconducting Accelerator Ring) accelerator. (U.S.)

  2. Computer program for calculation of complex chemical equilibrium compositions and applications. Supplement 1: Transport properties

    Science.gov (United States)

    Gordon, S.; Mcbride, B.; Zeleznik, F. J.

    1984-01-01

    An addition to the computer program of NASA SP-273 is given that permits transport property calculations for the gaseous phase. Approximate mixture formulas are used to obtain viscosity and frozen thermal conductivity. Reaction thermal conductivity is obtained by the same method as in NASA TN D-7056. Transport properties for 154 gaseous species were selected for use with the program.

  3. WAD, a program to calculate the heat produced by alpha decay

    International Nuclear Information System (INIS)

    Jarvis, R.G.; Bretzlaff, C.I.

    1982-09-01

    The FORTRAN program WAD (Watts from Alpha Decay) deals with the alpha and beta decay chains to be encountered in advanced fuel cycles for CANDU reactors. The data library covers all necessary alpha-emitting and beta-emitting nuclides and the program calculates the heat produced by alpha decay. Any permissible chain can be constructed very simply
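
    Per nuclide, the underlying calculation is simply P = lambda * N * Q_alpha, summed over the chain. As a single-nuclide sanity check, textbook constants for 238Pu (not taken from the WAD library) reproduce its familiar specific power of roughly 0.57 W/g:

```python
import math

# Decay heat from an alpha emitter: P = lambda * N * Q_alpha.
# Constants below are textbook values for Pu-238, used as an example.
AVOGADRO = 6.02214076e23
MEV_TO_J = 1.602176634e-13

half_life_s = 87.7 * 365.25 * 24 * 3600     # ~87.7 y half-life
lam = math.log(2) / half_life_s             # decay constant, 1/s
grams = 1.0
atoms = grams / 238.0 * AVOGADRO            # atoms in 1 g of Pu-238
q_alpha_mev = 5.59                          # energy released per decay

power_w = lam * atoms * q_alpha_mev * MEV_TO_J
print(f"specific power ~ {power_w:.3f} W/g")
```

    A chain calculation repeats this for every alpha emitter present, with the inventories N(t) coming from the decay-chain solution.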

  4. The Brine Sampling and Evaluation Program (BSEP) at WIPP

    International Nuclear Information System (INIS)

    Deal, D.E.; Roggenthen, W.M.

    1989-01-01

    The Permian salt beds of the WIPP facility are virtually dry. The amount of water present in the rocks exposed in the excavations that is free to migrate under pressure gradients was estimated by heating salt samples to 95 degrees C and measuring weight loss. Clear halite contains about 0.22 weight percent water and the more argillaceous units average about 0.75 percent. Measurements made since 1984 as part of the Brine Sampling and Evaluation Program (BSEP) indicate that small amounts of this brine can migrate into the excavations and accumulate in the underground environment. Brine seepage into drillholes monitored since they were drilled shows that seepage decreases with time and that many holes have dried up entirely. Weeping of brine from the walls of the repository excavations also decreases after two or more years. Chemical analyses of the brines show that they are sodium-chloride saturated and magnesium-rich

  5. A finite element computer program for the calculation of the resonant frequencies of anisotropic materials

    International Nuclear Information System (INIS)

    Fleury, W.H.; Rosinger, H.E.; Ritchie, I.G.

    1975-09-01

    A set of computer programs for the calculation of the flexural and torsional resonant frequencies of rectangular section bars of materials of orthotropic or higher symmetry are described. The calculations are used in the experimental determination and verification of the elastic constants of anisotropic materials. The simple finite element technique employed separates the inertial and elastic properties of the beam element into station and field transfer matrices respectively. It includes the Timoshenko beam corrections for flexure and Lekhnitskii's theory for torsion-flexure coupling. The programs also calculate the vibration shapes and surface nodal contours or Chladni figures of the vibration modes. (author)

  6. Development of a sampling strategy and sample size calculation to estimate the distribution of mammographic breast density in Korean women.

    Science.gov (United States)

    Jun, Jae Kwan; Kim, Mi Jin; Choi, Kui Son; Suh, Mina; Jung, Kyu-Won

    2012-01-01

    Mammographic breast density is a known risk factor for breast cancer. To conduct a survey to estimate the distribution of mammographic breast density in Korean women, appropriate sampling strategies for representative and efficient sampling design were evaluated through simulation. Using the target population from the National Cancer Screening Programme (NCSP) for breast cancer in 2009, we verified the distribution estimate by repeating the simulation 1,000 times using stratified random sampling to investigate the distribution of breast density of 1,340,362 women. According to the simulation results, using a sampling design stratifying the nation into three groups (metropolitan, urban, and rural), with a total sample size of 4,000, we estimated the distribution of breast density in Korean women at a level of 0.01% tolerance. Based on the results of our study, a nationwide survey for estimating the distribution of mammographic breast density among Korean women can be conducted efficiently.
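
    The simulation approach described, repeatedly drawing stratified samples from a known population to check how tightly a given design pins down the estimate, can be sketched as follows. The stratum shares and dense-breast prevalences are invented for illustration, not the NCSP figures:

```python
import random

random.seed(1)

# Repeat stratified random sampling from a synthetic population to see how
# tightly a fixed total sample size pins down the national estimate.
strata = {  # stratum: (population share, prevalence of dense breasts) - assumed
    "metropolitan": (0.45, 0.55),
    "urban": (0.35, 0.50),
    "rural": (0.20, 0.42),
}
n_total = 4000
n_reps = 500

estimates = []
for _ in range(n_reps):
    est = 0.0
    for share, prev in strata.values():
        n_h = round(n_total * share)                      # proportional allocation
        hits = sum(random.random() < prev for _ in range(n_h))
        est += share * hits / n_h                         # stratum-weighted estimate
    estimates.append(est)

mean = sum(estimates) / n_reps
spread = (sum((e - mean) ** 2 for e in estimates) / n_reps) ** 0.5
print(f"mean estimate {mean:.4f}, simulation SD {spread:.4f}")
```

    The spread across replications is the quantity compared against the tolerance target when choosing the sample size.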

  7. Preparation of a program for the independent verification of the brachytherapy planning systems calculations

    International Nuclear Information System (INIS)

    Carmona, V.; Perez-Calatayud, J.; Lliso, F.; Richart Sancho, J.; Ballester, F.; Pujades-Claumarchirant, M.C.; Munoz, M.

    2010-01-01

    In this work a program is presented that independently checks, for each patient, the treatment planning system calculations in low dose rate, high dose rate and pulsed dose rate brachytherapy. The treatment planning system output text files are automatically loaded into this program in order to get the source coordinates, the desired calculation point coordinates and, where applicable, the dwell times. The source strength and the reference dates are introduced by the user. The program allows the recommendations on independent verification of clinical brachytherapy dosimetry to be implemented in a simple and accurate way, in a few minutes. (Author).
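
    An independent check of this kind typically recomputes the dose at each chosen point from the source coordinates and dwell times. The sketch below uses a bare inverse-square point source, a crude stand-in for the full TG-43 formalism that clinical systems implement, and all plan data are hypothetical:

```python
import math

# Simplified independent HDR check: inverse-square point-source dose summed
# over dwell positions (illustrative numbers, not a clinical calculation).
sk = 40000.0              # air-kerma strength, U (= cGy cm^2 / h)
dose_rate_const = 1.109   # cGy / (h U) at 1 cm, order of Ir-192

dwells = [  # (x, y, z in cm, dwell time in s) - hypothetical plan data
    (0.0, 0.0, 0.0, 12.0),
    (0.5, 0.0, 0.0, 10.0),
    (1.0, 0.0, 0.0, 8.0),
]
point = (0.0, 2.0, 0.0)   # calculation point, cm

dose = 0.0
for x, y, z, t in dwells:
    r2 = (point[0] - x) ** 2 + (point[1] - y) ** 2 + (point[2] - z) ** 2
    dose += sk * dose_rate_const / r2 * (t / 3600.0)   # cGy
print(f"point dose ~ {dose:.1f} cGy")
```

    The full formalism adds the geometry function, radial dose function and anisotropy factor; agreement within a few percent is the usual acceptance criterion for such a check.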

  8. DWPI: a computer program to calculate the inelastic scattering of pions from nuclei

    Energy Technology Data Exchange (ETDEWEB)

    Eisenstein, R A; Miller, G A [Carnegie-Mellon Univ., Pittsburgh, Pa. (USA). Dept. of Physics

    1976-02-01

    Angular distributions for the inelastic scattering of pions are generated using the distorted wave impulse approximation (DWIA). The cross section for a given transition is calculated by summing a partial wave expansion. The T-matrix elements are calculated using distorted pion waves from the program PIRK, and therefore include elastic scattering to all orders. The excitation is treated in first order only. Several optical potentials and nuclear densities are available in the program. The transition form factor may be uncoupled from the ground-state density. Coulomb excitation, which interferes coherently with the strong interaction, is a program option.

  9. Comparison of the results of radiation transport calculation obtained by means of different programs

    International Nuclear Information System (INIS)

    Gorbatkov, D.V.; Kruchkov, V.P.

    1995-01-01

    Verification of radiation transport calculation results obtained with well-known programs and constant libraries (MCNP+ENDF/B, ANISN+HILO, FLUKA92) is carried out by comparing them with precision calculations performed with the ROZ-6N+Sadko program-constant complex and with experimental data. Satisfactory agreement is shown with the MCNP+ENDF/B package data for the energy range E<14 MeV. The deviations of the results obtained with the ANISN+HILO package for E<400 MeV and with the FLUKA92 program for E<200 GeV are analyzed. 25 refs., 12 figs., 3 tabs

  10. UNIDOSE - a computer program for the calculation of individual and collective doses from airborne radioactive pollutants

    International Nuclear Information System (INIS)

    Karlberg, O.; Schwartz, H.; Forssen, B.-H.; Marklund, J.-E.

    1979-01-01

    UNIDOSE is a program system for calculating the consequences of a radioactive release to the atmosphere. The program is applicable for computing dispersion in a range of 0 - 50 km from the release point. The Gaussian plume model is used for calculating the external dose from activity in the atmosphere and on the ground, and the internal dose via inhalation. Radioactive decay, as well as growth and decay of daughter products, is accounted for. The influence of dry deposition and wash-out is also considered. It is possible to treat time-dependent release rates of 1 - 24 hours duration and constant release rates for up to one year. The program system also contains routines for the calculation of collective dose and health effects. The system operates in a statistical manner: many weather situations, based on measured data, can be analysed and statistical properties, such as cumulative frequencies, can be calculated. (author)
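
    The ground-level centreline concentration in the Gaussian plume model is C(x) = Q / (pi u sigma_y sigma_z) exp(-H^2 / (2 sigma_z^2)). A sketch with assumed power-law dispersion parameters (not the parameterization UNIDOSE uses):

```python
import math

# Ground-level centreline concentration from a continuous point release.
Q = 1.0e9      # release rate, Bq/s (illustrative)
u = 3.0        # wind speed, m/s
H = 50.0       # effective release height, m

def concentration(x):
    """Gaussian plume with crude power-law sigmas (assumed, neutral-ish)."""
    sigma_y = 0.08 * x ** 0.9          # m
    sigma_z = 0.06 * x ** 0.9          # m
    return Q / (math.pi * u * sigma_y * sigma_z) * math.exp(-H * H / (2 * sigma_z ** 2))

for x in (500.0, 2000.0, 10000.0):
    print(f"x = {x:7.0f} m : C = {concentration(x):.3e} Bq/m^3")
```

    For an elevated release the centreline concentration rises to a maximum some distance downwind (where sigma_z becomes comparable to H) and then falls off, which the three sample distances illustrate.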

  11. The quality of the reported sample size calculations in randomized controlled trials indexed in PubMed.

    Science.gov (United States)

    Lee, Paul H; Tse, Andy C Y

    2017-05-01

    There are limited data on the quality of reporting of the information essential for replicating sample size calculations, as well as on the accuracy of those calculations. We examined the current quality of reporting of sample size calculations in randomized controlled trials (RCTs) published in PubMed, and the variation in reporting across study design, study characteristics, and journal impact factor. We also reviewed the targeted sample sizes reported in trial registries. We reviewed and analyzed all RCTs published in December 2014 in journals indexed in PubMed. The 2014 impact factors of the journals were used as proxies for their quality. Of the 451 analyzed papers, 58.1% reported an a priori sample size calculation. Nearly all papers provided the level of significance (97.7%) and desired power (96.6%), and most reported the minimum clinically important effect size (73.3%). The median percentage difference between the reported and recalculated sample sizes was 0.0% (inter-quartile range -4.6% to 3.0%). The accuracy of the reported sample size was better for studies published in journals that endorsed the CONSORT statement and in journals with an impact factor. A total of 98 papers provided a targeted sample size in trial registries, and about two-thirds of these (n=62) reported a sample size calculation, but only 25 (40.3%) had no discrepancy with the number reported in the trial registries. The reporting of sample size calculations in RCTs published in PubMed-indexed journals and in trial registries was poor. The CONSORT statement should be more widely endorsed. Copyright © 2016 European Federation of Internal Medicine. Published by Elsevier B.V. All rights reserved.
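
    For the common two-group comparison of a continuous outcome, the a priori calculation the review looks for is n per group = 2 (z_{1-alpha/2} + z_{1-beta})^2 sigma^2 / delta^2. A self-contained sketch (the normal quantile is obtained by Newton iteration on the CDF to avoid external dependencies):

```python
import math

# Two-group sample size for a continuous outcome:
#   n per group = 2 * (z_{1-alpha/2} + z_{1-beta})^2 * sigma^2 / delta^2
def sample_size_per_group(sigma, delta, alpha=0.05, power=0.80):
    def z(p):
        # Normal quantile via Newton iteration on the CDF (erf-based).
        x = 0.0
        for _ in range(60):
            cdf = 0.5 * (1 + math.erf(x / math.sqrt(2)))
            pdf = math.exp(-x * x / 2) / math.sqrt(2 * math.pi)
            x -= (cdf - p) / pdf
        return x
    za, zb = z(1 - alpha / 2), z(power)
    return math.ceil(2 * (za + zb) ** 2 * sigma ** 2 / delta ** 2)

# Detect a 5-point difference with SD 10, alpha 0.05, power 0.80:
print(sample_size_per_group(sigma=10, delta=5))  # 63 per group
```

    Replicating a reported calculation means recovering exactly these inputs (alpha, power, sigma, delta) from the paper, which is what the review found to be incompletely reported.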

  12. Calculation of upper confidence bounds on not-sampled vegetation types using a systematic grid sample: An application to map unit definition for existing vegetation maps

    Science.gov (United States)

    Paul L. Patterson; Mark Finco

    2009-01-01

    This paper explores the information FIA data can produce regarding forest types that were not sampled and develops the equations necessary to define the upper confidence bounds on not-sampled forest types. The problem is reduced to a Bernoulli variable. This simplification allows the upper confidence bounds to be calculated based on Cochran (1977). Examples are...
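
    The Bernoulli simplification can be made concrete: if a vegetation type occupies proportion p of the area and appears on zero of n systematic plots treated as independent trials, the exact upper bound solves (1 - p)^n = alpha. A minimal sketch (the function name is illustrative, not from the paper):

```python
def upper_bound_not_sampled(n, alpha=0.05):
    """Exact upper (1 - alpha) confidence bound on the proportion p of
    a class observed on zero of n plots: solve (1 - p)**n = alpha."""
    return 1.0 - alpha ** (1.0 / n)
```

    For n = 100 plots and alpha = 0.05 this gives about 0.0295, close to the familiar "rule of three" approximation 3/n.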

  13. Weighted curve-fitting program for the HP 67/97 calculator

    International Nuclear Information System (INIS)

    Stockli, M.P.

    1983-01-01

    The HP 67/97 calculator provides in its standard software a curve-fit program for linear, logarithmic, exponential and power functions that is quite useful and popular. In more sophisticated applications, however, proper weights for the data are often essential. For this purpose a program package was created that is very similar to the standard curve-fit program but includes the weights of the data for proper statistical analysis. This allows accurate calculation of the uncertainties of the fitted curve parameters as well as the uncertainties of interpolations or extrapolations; optionally, the uncertainties can be normalized with chi-square. The program is very versatile and allows quite difficult data analyses to be performed conveniently on the HP 67/97 pocket calculator
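
    The weighted straight-line fit with parameter uncertainties that such a package computes can be sketched in a few lines; these are the standard textbook formulas (e.g. Bevington), not the HP 67/97 program listing:

```python
import math

def weighted_linear_fit(x, y, sigma):
    """Weighted least-squares fit y = a + b*x, where sigma holds the
    1-sigma uncertainty of each y value. Returns (a, b, sigma_a, sigma_b)."""
    w = [1.0 / s ** 2 for s in sigma]
    S   = sum(w)
    Sx  = sum(wi * xi for wi, xi in zip(w, x))
    Sy  = sum(wi * yi for wi, yi in zip(w, y))
    Sxx = sum(wi * xi * xi for wi, xi in zip(w, x))
    Sxy = sum(wi * xi * yi for wi, xi, yi in zip(w, x, y))
    delta = S * Sxx - Sx ** 2
    a = (Sxx * Sy - Sx * Sxy) / delta      # intercept
    b = (S * Sxy - Sx * Sy) / delta        # slope
    return a, b, math.sqrt(Sxx / delta), math.sqrt(S / delta)
```

    Interpolation uncertainty at a new x then follows by propagating sigma_a, sigma_b, and their covariance, which is exactly the extra bookkeeping the weighted package adds over the built-in fit.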

  14. Replica Exchange Gaussian Accelerated Molecular Dynamics: Improved Enhanced Sampling and Free Energy Calculation.

    Science.gov (United States)

    Huang, Yu-Ming M; McCammon, J Andrew; Miao, Yinglong

    2018-04-10

    Through adding a harmonic boost potential to smooth the system potential energy surface, Gaussian accelerated molecular dynamics (GaMD) provides enhanced sampling and free energy calculation of biomolecules without the need for predefined reaction coordinates. This work further improves the acceleration power and energy reweighting of GaMD by combining it with replica exchange algorithms. Two versions of replica exchange GaMD (rex-GaMD) are presented: force constant rex-GaMD and threshold energy rex-GaMD. In force constant rex-GaMD simulations, the boost potential can be exchanged between replicas of different harmonic force constants with a fixed threshold energy. In contrast, the threshold energy rex-GaMD algorithm switches the threshold energy between lower and upper bounds to generate different levels of boost potential. Testing simulations on three model systems, including alanine dipeptide, chignolin, and HIV protease, demonstrate that through continuous exchanges of the boost potential, the rex-GaMD simulations not only enhance the conformational transitions of the systems but also narrow the distribution width of the applied boost potential, allowing accurate energetic reweighting to recover biomolecular free energy profiles.

  15. Calculating Program for Decommissioning Work Productivity based on Decommissioning Activity Experience Data

    Energy Technology Data Exchange (ETDEWEB)

    Song, Chan-Ho; Park, Seung-Kook; Park, Hee-Seong; Moon, Jei-kwon [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2014-10-15

    KAERI is performing research to calculate a coefficient for decommissioning work-unit productivity, in order to estimate the duration and cost of decommissioning work based on decommissioning activity experience data for KRR-2. KAERI calculates decommissioning costs and manages decommissioning activity experience data through systems such as the decommissioning information management system (DECOMMIS), the Decommissioning Facility Characterization DB System (DEFACS), and the decommissioning work-unit productivity calculation system (DEWOCS). In particular, KAERI bases its cost calculations on a coded work breakdown structure (WBS) built from the KRR-2 decommissioning activity experience data; the defined WBS codes are used in each system to calculate decommissioning costs. In this paper, we developed a program that can calculate the decommissioning cost using the decommissioning experience of KRR-2, UCP, and other countries, through the mapping of similar target facilities between an NPP and KRR-2. The paper is organized as follows: Chapter 2 discusses the decommissioning work productivity calculation method, and the mapping method for the decommissioning target facility is described within the calculating program for decommissioning work productivity. At KAERI, research on various decommissioning methodologies for domestic NPPs will be conducted in the near future. Determining the cost of decommissioning is difficult because NPP facilities involve a number of variables, such as the material, size, and radiological conditions of the target facility.

  16. PABLM: a computer program to calculate accumulated radiation doses from radionuclides in the environment

    International Nuclear Information System (INIS)

    Napier, B.A.; Kennedy, W.E. Jr.; Soldat, J.K.

    1980-03-01

    A computer program, PABLM, was written to facilitate the calculation of internal radiation doses to man from radionuclides in food products and external radiation doses from radionuclides in the environment. This report contains details of mathematical models used and calculational procedures required to run the computer program. Radiation doses from radionuclides in the environment may be calculated from deposition on the soil or plants during an atmospheric or liquid release, or from exposure to residual radionuclides in the environment after the releases have ended. Radioactive decay is considered during the release of radionuclides, after they are deposited on the plants or ground, and during holdup of food after harvest. The radiation dose models consider several exposure pathways. Doses may be calculated for either a maximum-exposed individual or for a population group. The doses calculated are accumulated doses from continuous chronic exposure. A first-year committed dose is calculated as well as an integrated dose for a selected number of years. The equations for calculating internal radiation doses are derived from those given by the International Commission on Radiological Protection (ICRP) for body burdens and MPC's of each radionuclide. The radiation doses from external exposure to contaminated water and soil are calculated using the basic assumption that the contaminated medium is large enough to be considered an infinite volume or plane relative to the range of the emitted radiations. The equations for calculations of the radiation dose from external exposure to shoreline sediments include a correction for the finite width of the contaminated beach

  17. Calculating Program for Decommissioning Work Productivity based on Decommissioning Activity Experience Data

    International Nuclear Information System (INIS)

    Song, Chan-Ho; Park, Seung-Kook; Park, Hee-Seong; Moon, Jei-kwon

    2014-01-01

    KAERI is performing research to calculate a coefficient for decommissioning work-unit productivity, in order to estimate the duration and cost of decommissioning work based on decommissioning activity experience data for KRR-2. KAERI calculates decommissioning costs and manages decommissioning activity experience data through systems such as the decommissioning information management system (DECOMMIS), the Decommissioning Facility Characterization DB System (DEFACS), and the decommissioning work-unit productivity calculation system (DEWOCS). In particular, KAERI bases its cost calculations on a coded work breakdown structure (WBS) built from the KRR-2 decommissioning activity experience data; the defined WBS codes are used in each system to calculate decommissioning costs. In this paper, we developed a program that can calculate the decommissioning cost using the decommissioning experience of KRR-2, UCP, and other countries, through the mapping of similar target facilities between an NPP and KRR-2. The paper is organized as follows: Chapter 2 discusses the decommissioning work productivity calculation method, and the mapping method for the decommissioning target facility is described within the calculating program for decommissioning work productivity. At KAERI, research on various decommissioning methodologies for domestic NPPs will be conducted in the near future. Determining the cost of decommissioning is difficult because NPP facilities involve a number of variables, such as the material, size, and radiological conditions of the target facility.

  18. Examinations on Applications of Manual Calculation Programs on Lung Cancer Radiation Therapy Using Analytical Anisotropic Algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jung Min; Kim, Dae Sup; Hong, Dong Ki; Back, Geum Mun; Kwak, Jung Won [Dept. of Radiation Oncology, , Seoul (Korea, Republic of)

    2012-03-15

    MU verification programs based on the Pencil Beam Convolution (PBC) algorithm produce monitor unit (MU) errors when applied to radiation treatment plans around the lung calculated with the Analytical Anisotropic Algorithm (AAA). In this study we examined methods for verifying treatment plans calculated with AAA. Using the Eclipse treatment planning system (version 8.9, Varian, USA), each of 57 fields from 7 cases of lung stereotactic body radiation therapy (SBRT) was calculated with both the PBC and the AAA dose calculation algorithms. The MUs of the established plans were compared with the MUs from manual calculation programs. We analyzed the relationship between the errors and four variables that can affect the errors produced by the PBC algorithm and AAA in commonly used programs: field size, lung path distance of the beam, tumor path distance of the beam, and effective depth. The errors of the PBC algorithm were 0.2±1.0% and the errors of AAA were 3.5±2.8%. Among the four variables analyzed, the error increased with the lung path distance (correlation coefficient 0.648, P=0.000), from which we derived an MU correction factor, A.E. = L.P. × 0.00903 + 0.02048; after applying it in the manual calculation program, the errors decreased from 3.5±2.8% to within 0.4±2.0%. This study shows that the errors of manual calculation programs increase as the lung path distance of the beam increases, and that the MUs of AAA plans can be verified by a simple method using this MU correction factor.

  19. Examinations on Applications of Manual Calculation Programs on Lung Cancer Radiation Therapy Using Analytical Anisotropic Algorithm

    International Nuclear Information System (INIS)

    Kim, Jung Min; Kim, Dae Sup; Hong, Dong Ki; Back, Geum Mun; Kwak, Jung Won

    2012-01-01

    MU verification programs based on the Pencil Beam Convolution (PBC) algorithm produce monitor unit (MU) errors when applied to radiation treatment plans around the lung calculated with the Analytical Anisotropic Algorithm (AAA). In this study we examined methods for verifying treatment plans calculated with AAA. Using the Eclipse treatment planning system (version 8.9, Varian, USA), each of 57 fields from 7 cases of lung stereotactic body radiation therapy (SBRT) was calculated with both the PBC and the AAA dose calculation algorithms. The MUs of the established plans were compared with the MUs from manual calculation programs. We analyzed the relationship between the errors and four variables that can affect the errors produced by the PBC algorithm and AAA in commonly used programs: field size, lung path distance of the beam, tumor path distance of the beam, and effective depth. The errors of the PBC algorithm were 0.2±1.0% and the errors of AAA were 3.5±2.8%. Among the four variables analyzed, the error increased with the lung path distance (correlation coefficient 0.648, P=0.000), from which we derived an MU correction factor, A.E. = L.P. × 0.00903 + 0.02048; after applying it in the manual calculation program, the errors decreased from 3.5±2.8% to within 0.4±2.0%. This study shows that the errors of manual calculation programs increase as the lung path distance of the beam increases, and that the MUs of AAA plans can be verified by a simple method using this MU correction factor.

  20. Monteray Mark-I: Computer program (PC-version) for shielding calculation with Monte Carlo method

    International Nuclear Information System (INIS)

    Pudjijanto, M.S.; Akhmad, Y.R.

    1998-01-01

    A computer program for gamma-ray shielding calculations using the Monte Carlo method has been developed. The program is written in the WATFOR77 language. MONTERAY MARK-I was originally developed by James Wood; the program was modified by the authors so that the modified version is easily executed. Applying the Monte Carlo method, the program follows gamma-photon transport in an infinite planar shield of various thicknesses. A gamma photon is followed until it escapes from the shield or until its energy falls below the cut-off energy. The pair-production process is treated as pure absorption; the annihilation photons generated in the process are neglected in the calculation. The output data calculated by the program are the total albedo, build-up factor, and photon spectra. The calculated build-up factors for slab lead and water media with a 6 MeV parallel-beam gamma source are in agreement with published data. Hence the program is adequate as a shielding design tool for studying gamma radiation transport in various media
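
    The core Monte Carlo idea, sampling exponential free paths and tallying photons that cross the slab, can be illustrated with a toy uncollided-transmission estimator; this is a sketch of the method, not the MONTERAY code (which also tracks scattering and fluorescence):

```python
import math
import random

def transmission_mc(mu, thickness, n=200_000, seed=1):
    """Monte Carlo estimate of the uncollided transmission exp(-mu * d)
    through a slab: sample each photon's first free path from the
    exponential distribution and count those whose first collision
    would lie beyond the slab.

    mu         linear attenuation coefficient (1/cm)
    thickness  slab thickness d (cm)
    """
    rng = random.Random(seed)
    escaped = sum(1 for _ in range(n)
                  if -math.log(1.0 - rng.random()) / mu > thickness)
    return escaped / n
```

    A full transport code would instead sample an interaction type at each collision site and continue the history, which is how the build-up factor (total-to-uncollided dose ratio) emerges.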

  1. FLOWNET: A Computer Program for Calculating Secondary Flow Conditions in a Network of Turbomachinery

    Science.gov (United States)

    Rose, J. R.

    1978-01-01

    The program requires the network parameters, the flow component parameters, the reservoir conditions, and the gas properties as input. It will then calculate all unknown pressures and the mass flow rate in each flow component in the network. The program can treat networks containing up to fifty flow components and twenty-five unknown network pressures. The types of flow components that can be treated are face seals, narrow slots, and pipes. The program is written in both structured FORTRAN (SFTRAN) and FORTRAN 4. The program must be run in an interactive (conversational) mode.

  2. BUCKL: a program for rapid calculation of x-ray deposition

    International Nuclear Information System (INIS)

    Cole, R.K. Jr.

    1970-07-01

    A computer program is described which has the fast execution time of exponential codes but also evaluates the effects of fluorescence and scattering. The program makes use of diffusion calculations with a buckling correction included to approximate the effects of finite transverse geometry. Theory and derivations necessary for the BUCKL code are presented, and the code results are compared with those of earlier codes for a variety of problems. Inputs and outputs of the program are described, and a FORTRAN listing is provided. Shortcomings of the program are discussed and suggestions are provided for possible future improvement. (U.S.)

  3. Reliable calculation in probabilistic logic: Accounting for small sample size and model uncertainty

    Energy Technology Data Exchange (ETDEWEB)

    Ferson, S. [Applied Biomathematics, Setauket, NY (United States)

    1996-12-31

    A variety of practical computational problems arise in risk and safety assessments, forensic statistics and decision analyses in which the probability of some event or proposition E is to be estimated from the probabilities of a finite list of related subevents or propositions F, G, H, .... In practice, the analyst's knowledge may be incomplete in two ways. First, the probabilities of the subevents may be imprecisely known from statistical estimations, perhaps based on very small sample sizes. Second, relationships among the subevents may be known imprecisely. For instance, there may be only limited information about their stochastic dependencies. Representing probability estimates as interval ranges has been suggested as a way to address the first source of imprecision. A suite of AND, OR and NOT operators defined with reference to the classical Fréchet inequalities permits these probability intervals to be used in calculations that address the second source of imprecision, in many cases in a best possible way. Using statistical confidence intervals as inputs unravels the closure properties of this approach, however, requiring that probability estimates be characterized by a nested stack of intervals for all possible levels of statistical confidence, from a point estimate (0% confidence) to the entire unit interval (100% confidence). The corresponding logical operations implied by convolutive application of the logical operators for every possible pair of confidence intervals reduce by symmetry to a manageably simple level-wise iteration. The resulting calculus can be implemented in software that allows users to compute comprehensive and often level-wise best possible bounds on probabilities for logical functions of events.
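
    The Fréchet bounds behind such AND/OR operators are simple to state. A minimal sketch for interval-valued probabilities with no assumption about dependence (function names are illustrative, not from Ferson's software):

```python
def and_interval(a, b):
    """Fréchet bounds on P(A and B) from interval probabilities
    a = (lo, hi) and b = (lo, hi), with no dependence assumption."""
    return (max(0.0, a[0] + b[0] - 1.0), min(a[1], b[1]))

def or_interval(a, b):
    """Fréchet bounds on P(A or B) under the same conditions."""
    return (max(a[0], b[0]), min(1.0, a[1] + b[1]))

def not_interval(a):
    """Complement: P(not A) = 1 - P(A), with endpoints swapped."""
    return (1.0 - a[1], 1.0 - a[0])
```

    The level-wise iteration the abstract describes would apply these operators at each confidence level of the nested stack of intervals.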

  4. Mechanistic Insights on Human Phosphoglucomutase Revealed by Transition Path Sampling and Molecular Dynamics Calculations.

    Science.gov (United States)

    Brás, Natércia F; Fernandes, Pedro A; Ramos, Maria J; Schwartz, Steven D

    2018-02-06

    Human α-phosphoglucomutase 1 (α-PGM) catalyzes the isomerization of glucose-1-phosphate into glucose-6-phosphate (G6P) through two sequential phosphoryl transfer steps with a glucose-1,6-bisphosphate (G16P) intermediate. Given that the release of G6P in the gluconeogenesis raises the glucose output levels, α-PGM represents a tempting pharmacological target for type 2 diabetes. Here, we provide the first theoretical study of the catalytic mechanism of human α-PGM. We performed transition-path sampling simulations to unveil the atomic details of the two catalytic chemical steps, which could be key for developing transition state (TS) analogue molecules with inhibitory properties. Our calculations revealed that both steps proceed through a concerted SN2-like mechanism, with a loose metaphosphate-like TS. Even though experimental data suggests that the two steps are identical, we observed noticeable differences: 1) the transition state ensemble has a well-defined TS region and a late TS for the second step, and 2) larger coordinated protein motions are required to reach the TS of the second step. We have identified key residues (Arg23, Ser117, His118, Lys389), and the Mg2+ ion that contribute in different ways to the reaction coordinate. Accelerated molecular dynamics simulations suggest that the G16P intermediate may reorient without leaving the enzymatic binding pocket, through significant conformational rearrangements of the G16P and of specific loop regions of the human α-PGM. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  5. Use of the 'DRAGON' program for the calculation of reactivity devices

    International Nuclear Information System (INIS)

    Mollerach, Ricardo; Fink, Jose

    2003-01-01

    DRAGON is a computer program developed at the École Polytechnique of the University of Montreal and adopted by AECL for the transport calculations associated with reactivity devices. This report presents aspects of the implementation of the DRAGON program at NASA. Some cases of interest were evaluated, with comparisons against results of well-known programs such as WIMS D5 and against experiments. a) Embalse (CANDU 6) cell without burnup and leakage: calculations of macroscopic cross sections with WIMS and DRAGON show very good agreement, with smaller differences in the thermal constants. b) Embalse fresh cell with different leakage options. c) Embalse cell with leakage and burnup: a comparison of k-infinity and k-effective from WIMS and DRAGON as a function of burnup shows that the differences ((D-W)/D) for fresh fuel are -0.17%, roughly constant up to about 2500 MWd/tU, and then decrease to -0.06% for 8500 MWd/tU. Experiments made in 1977 in the ZED-2 critical facility, reported in [3], were used as a benchmark for the cell and supercell DRAGON calculations; calculated fluxes were compared with experimental values and the agreement is very good. d) ZED-2 cell calculation: the measured buckling was used as the geometric buckling, so this case can be considered an experimental verification. The reactivity calculated with DRAGON is about 2 mk, which can be considered satisfactory; the WIMS k-effective value is about one mk higher. e) Supercell calculations for the ZED-2 vertical and horizontal tube and rod adjusters using 2D and 3D models, with comparisons between measured and calculated fluxes in the vicinity of the adjuster rods; incremental cross sections for these adjusters were calculated using different options. f) ZED-2 reactor calculations with PUMA show good agreement with the critical heights measured in the experiments. The report also describes particular features of the code and recommendations regarding its use that may be useful for new users. (author)

  6. Brine Sampling and Evaluation Program: Phase 1 report

    International Nuclear Information System (INIS)

    Deal, D.E.; Case, J.B.

    1987-01-01

    This interim report presents preliminary data obtained in the course of the WIPP Brine Sampling and Evaluation Program. The investigations focus on the brine present in the near-field environment around the WIPP underground workings. Although the WIPP underground workings are considered dry, small amounts of brine are present. This amount of brine is not unexpected in rocks of marine sedimentary origin. Part of that brine can and does migrate into the repository in response to pressure gradients, at essentially isothermal conditions. These small volumes of brine have little effect on the day-to-day operations, but are pervasive throughout the repository and may contribute enough moisture over a period of years to affect resaturation and repressurization after sealing and closure. Gas bubbles are observed in many of the brine occurrences. Gas is also known to exsolve from solution as the brine is poured from container to container. 68 refs., 9 figs., 2 tabs

  7. DCHAIN: A user-friendly computer program for radioactive decay and reaction chain calculations

    International Nuclear Information System (INIS)

    East, L.V.

    1994-05-01

    A computer program for calculating the time-dependent daughter populations in radioactive decay and nuclear reaction chains is described. Chain members can have non-zero initial populations and be produced from the preceding chain member as the result of radioactive decay, a nuclear reaction, or both. As presently implemented, chains can contain up to 15 members. Program input can be supplied interactively or read from ASCII data files. Time units for half-lives, etc., can be specified during data entry. Input values are verified and can be modified, if necessary, before being used in calculations. Output results can be saved in ASCII files in a format suitable for inclusion in reports or other documents. The calculational method, described in some detail, utilizes a generalized form of the Bateman equations. The program is written in the C language in conformance with current ANSI standards and can be used on multiple hardware platforms
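
    The classical Bateman solution can be sketched for the special case of a chain that starts with parent atoms only and has distinct decay constants; this is the textbook form, not DCHAIN's generalized implementation (which also handles non-zero initial populations and reaction production terms):

```python
import math

def bateman_last(N0, lam, t):
    """Number of atoms of the last member of a pure decay chain at time t,
    given N0 atoms of the parent (chain member 1) at t = 0:

        N_n(t) = N0 * (lam_1 * ... * lam_{n-1})
                    * sum_i exp(-lam_i * t) / prod_{j != i} (lam_j - lam_i)

    lam: list of decay constants, one per chain member, all distinct.
    """
    n = len(lam)
    coeff = N0
    for j in range(n - 1):
        coeff *= lam[j]
    total = 0.0
    for i in range(n):
        denom = 1.0
        for j in range(n):
            if j != i:
                denom *= lam[j] - lam[i]
        total += math.exp(-lam[i] * t) / denom
    return coeff * total
```

    For a single member this reduces to simple exponential decay, and for two members to the familiar parent-daughter growth-and-decay curve.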

  8. A program for the Calculation of the Correlated Colour Temperature. Application for Characterising Colour Changes in Glasses

    International Nuclear Information System (INIS)

    Garcia Rosillo, F.; Balenzategui, J. L.

    2000-01-01

    The purpose of this work is to present a program for the calculation of the Correlated Colour Temperature (CCT) of any source of radiation. The methodology for calculating the colour coordinates and the corresponding CCT value of any light source is briefly reviewed. Sample program codes, including one to obtain the colour coordinates of blackbody radiators at different temperatures, have also been listed. This will allow engineers and researchers to calculate and obtain adequate solutions for their own illumination problems. As an application example, the change in CCT values and colour coordinates of a reference spectrum when passing through semitransparent solar photovoltaic modules designed for building-integration applications has been studied; this is used to evaluate the influence on the visual comfort of the building's inner rooms. Several samples of different glass models used as covers in photovoltaic modules have been tested. Results show that none of the samples tested substantially modifies the initial characteristics of the sunlight, as expected. (Author) 5 refs
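
    As an illustration of a CCT calculation, McCamy's cubic approximation maps CIE 1931 (x, y) chromaticity coordinates to a correlated colour temperature; the abstract does not specify the program's own method, so this well-known approximation is a stand-in:

```python
def cct_mccamy(x, y):
    """Correlated colour temperature (K) from CIE 1931 chromaticity
    (x, y) via McCamy's approximation, valid roughly 2000-12500 K for
    points near the Planckian locus."""
    n = (x - 0.3320) / (0.1858 - y)
    return 449.0 * n ** 3 + 3525.0 * n ** 2 + 6823.3 * n + 5520.33
```

    For example, the chromaticity of CIE illuminant D65 (x = 0.31271, y = 0.32902) yields about 6504 K, matching its nominal CCT.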

  9. SUBDOSA: a computer program for calculating external doses from accidental atmospheric releases of radionuclides

    International Nuclear Information System (INIS)

    Strenge, D.L.; Watson, E.C.; Houston, J.R.

    1975-06-01

    A computer program, SUBDOSA, was developed for calculating external γ and β doses to individuals from the accidental release of radionuclides to the atmosphere. Characteristics of SUBDOSA are: doses from both γ and β radiation are calculated as a function of depth in tissue, summed and reported as skin, eye, gonadal, and total body dose; doses are calculated for releases within each of several release time intervals and nuclide inventories and atmospheric dispersion conditions are considered for each time interval; radioactive decay is considered during the release and/or transit using a chain decay scheme with branching to account for transitions to and from isomeric states; the dose from gamma radiation is calculated using a numerical integration technique to account for the finite size of the plume; and the program computes and lists the normalized air concentrations at ground level as a function of distance from the point of release. (auth)

  10. A PC-program for the calculation of neutron flux and element contents using the k0-method of neutron activation analysis

    International Nuclear Information System (INIS)

    Boulyga, E.G.; Boulyga, S.F.

    2000-01-01

    A computer program is described which calculates the induced activities of isotopes after irradiation in a known neutron field, the thermal and epithermal neutron fluxes from the measured induced activities and nuclear data of 2-4 monitor nuclides, and the element concentrations in samples irradiated together with the monitors. The program was developed for operation under Windows 3.1 (or higher). The application of the program to neutron activation analysis allows the experimental procedure to be simplified and the analysis time to be reduced. The program was tested by measuring different types of standard reference materials at the FRJ-2 (Research Centre Juelich, Germany) and TRIGA (University of Mainz, Germany) reactors. Neutron flux parameters calculated by this program were compared with those calculated by a VAX program developed at the Research Centre Juelich. The results of the tests appear satisfactory. (author)
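
    The forward activation relation that such a program inverts can be sketched as follows; the symbols are generic, and a real analysis additionally separates thermal and epithermal contributions and corrects for decay during and after irradiation:

```python
import math

def induced_activity(n_atoms, sigma, flux, lam, t_irr):
    """Induced activity (Bq) of a single activation product at the end
    of irradiation:  A = N * sigma * phi * (1 - exp(-lambda * t_irr)).

    n_atoms  number of target atoms
    sigma    activation cross section (cm^2)
    flux     neutron flux (cm^-2 s^-1)
    lam      decay constant of the product (s^-1)
    t_irr    irradiation time (s)
    """
    return n_atoms * sigma * flux * (1.0 - math.exp(-lam * t_irr))

def flux_from_activity(activity, n_atoms, sigma, lam, t_irr):
    """Invert the relation above to estimate the flux from a measured
    monitor activity, as flux-monitor methods do."""
    return activity / (n_atoms * sigma * (1.0 - math.exp(-lam * t_irr)))
```

    Element concentrations then follow by comparing the sample's measured activity with that predicted from the derived flux and the nuclide's nuclear data.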

  11. TRAFIC, a computer program for calculating the release of metallic fission products from an HTGR core

    International Nuclear Information System (INIS)

    Smith, P.D.

    1978-02-01

    A special purpose computer program, TRAFIC, is presented for calculating the release of metallic fission products from an HTGR core. The program is based upon Fick's law of diffusion for radioactive species. One-dimensional transient diffusion calculations are performed for the coated fuel particles and for the structural graphite web. A quasi steady-state calculation is performed for the fuel rod matrix material. The model accounts for nonlinear adsorption behavior in the fuel rod gap and on the coolant hole boundary. The TRAFIC program is designed to operate in a core survey mode; that is, it performs many repetitive calculations for a large number of spatial locations in the core. This is necessary in order to obtain an accurate volume integrated release. For this reason the program has been designed with calculational efficiency as one of its main objectives. A highly efficient numerical method is used in the solution. The method makes use of the Duhamel superposition principle to eliminate interior spatial solutions from consideration. Linear response functions relating the concentrations and mass fluxes on the boundaries of a homogeneous region are derived. Multiple regions are numerically coupled through interface conditions. Algebraic elimination is used to reduce the equations as far as possible. The problem reduces to two nonlinear equations in two unknowns, which are solved using a Newton Raphson technique

  12. Computer program for calculation of complex chemical equilibrium compositions and applications. Part 1: Analysis

    Science.gov (United States)

    Gordon, Sanford; Mcbride, Bonnie J.

    1994-01-01

    This report presents the latest in a number of versions of chemical equilibrium and applications programs developed at the NASA Lewis Research Center over more than 40 years. These programs have changed over the years to include additional features and improved calculation techniques and to take advantage of constantly improving computer capabilities. The minimization-of-free-energy approach to chemical equilibrium calculations has been used in all versions of the program since 1967. The two principal purposes of this report are presented in two parts. The first purpose, which is accomplished here in part 1, is to present in detail a number of topics of general interest in complex equilibrium calculations. These topics include mathematical analyses and techniques for obtaining chemical equilibrium; formulas for obtaining thermodynamic and transport mixture properties and thermodynamic derivatives; criteria for inclusion of condensed phases; calculations at a triple point; inclusion of ionized species; and various applications, such as constant-pressure or constant-volume combustion, rocket performance based on either a finite- or infinite-chamber-area model, shock wave calculations, and Chapman-Jouguet detonations. The second purpose of this report, to facilitate the use of the computer code, is accomplished in part 2, entitled 'Users Manual and Program Description'. Various aspects of the computer code are discussed, and a number of examples are given to illustrate its versatility.

  13. FUP1--an unified program for calculating all fast neutron data of fissile nucleus

    International Nuclear Information System (INIS)

    Cai Chonghai; Zuo Yixin

    1990-01-01

    FUP1 is the first edition of a unified program for calculating all the fast neutron data in ENDF/B-4 format for a fissile nucleus. The following data are calculated with the FUP1 code: the total cross section, elastic scattering cross section, nonelastic cross section, and total inelastic cross section including up to 40 isolated levels and the continuum state. In FUP1 the energy of the incident neutron is restricted to the region from 10 keV to 20 MeV. The advantages of this program are its complete functionality, its convenience for users, and its fast execution

  14. Super Phenix. Monitoring of structures subject to irradiation. Neutron dosimetry measurement and calculation program

    International Nuclear Information System (INIS)

    Cabrillat, J.C.; Arnaud, G.; Calamand, D.; Manent, G.; Tavassoli, A.A.

    1984-09-01

    For the Super Phenix reactor, the evolution of the mechanical properties of the core diagrid steel under irradiation is the object of studies and is closely monitored. Specimens are irradiated now in PHENIX and will later be irradiated in SUPER PHENIX from the first operating cycles onward. In parallel, an important dosimetry program coupling calculation and measurement is being carried out. This paper presents the rationale, the definition of the structure, the development and the materials used in this dosimetry program, as well as the first results of a calculation-measurement comparison [fr

  15. A computer program to calculate the committed dose equivalent after the inhalation of radioactivity

    International Nuclear Information System (INIS)

    Van der Woude, S.

    1989-03-01

    A growing number of people are, as part of their occupation, at risk of being exposed to radiation originating from sources inside their bodies. The quantification of this exposure is an important part of health physics. The International Commission on Radiological Protection (ICRP) developed a first-order kinetics compartmental model to determine the transport of radioactive material through the human body. The model and the parameters involved in its use, are discussed. A versatile computer program was developed to do the following after the in vivo measurement of either the organ- or whole-body activity: calculate the original amount of radioactive material which was inhaled (intake) by employing the ICRP compartmental model of the human body; compare this intake to calculated reference levels and state any action to be taken for the case under consideration; calculate the committed dose equivalent resulting from this intake. In the execution of the above-mentioned calculations, the computer program makes provision for different aerosol particle sizes and the effect of previous intakes. Model parameters can easily be changed to take the effects of, for instance, medical intervention into account. The computer program and the organization of the data in the input files are such that the computer program can be applied to any first-order kinetics compartmental model. The computer program can also conveniently be used for research on problems related to the application of the ICRP model. 18 refs., 25 figs., 5 tabs
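The transport model described above can be sketched as a generic first-order compartmental integrator. This is a minimal illustration of the idea; the rate constants used in the test are hypothetical, not the ICRP parameter values:

```python
import math

def simulate_retention(q0, transfer, lam_r, t_end, dt=0.01):
    """Forward-Euler integration of a first-order compartmental model.
    q0: initial atoms per compartment; transfer[i][j]: biological rate
    constant from compartment i to j (per day, hypothetical values, not
    ICRP defaults); lam_r: radioactive decay constant (per day).
    Returns (final compartment contents, cumulative decays per
    compartment); the decay tally is proportional to committed dose."""
    n = len(q0)
    q = list(q0)
    decays = [0.0] * n
    for _ in range(int(t_end / dt)):
        dq = [0.0] * n
        for i in range(n):
            out = sum(transfer[i]) + lam_r      # total loss rate of i
            dq[i] -= out * q[i]
            decays[i] += lam_r * q[i] * dt      # decays accumulate dose
            for j in range(n):
                dq[j] += transfer[i][j] * q[i]
        q = [qi + dqi * dt for qi, dqi in zip(q, dq)]
    return q, decays
```

The committed dose equivalent then follows from the decay tallies, the energy deposited per decay, and the organ masses; changing a model parameter (e.g. after medical intervention) only changes the `transfer` matrix, mirroring the flexibility described in the abstract.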

  16. Development of a program for calculation of second dose and securities in brachytherapy high dose rate

    International Nuclear Information System (INIS)

    Esteve Sanchez, S.; Martinez Albaladejo, M.; Garcia Fuentes, J. D.; Bejar Navarro, M. J.; Capuz Suarez, B.; Moris de Pablos, R.; Colmenares Fernandez, R.

    2015-01-01

    We assessed the reliability of the program with 80 patients in the usual points of prescription of each pathology. The average error of the calculation points is less than 0.3% in 95% of cases, finding the major differences in the axes of the applicators (maximum error -0.798%). The program has proved effective previously testing him with erroneous dosimetry. Thanks to the implementation of this program is achieved by the calculation of the dose and part of the process of quality assurance program in a few minutes, highlighting the case of HDR prostate due to having a limited time. Having separate data sheet allows each institution to its protocols modify parameters. (Author)

  17. Calculation of pressure distribution in vacuum systems using a commercial finite element program

    International Nuclear Information System (INIS)

    Howell, J.; Wehrle, B.; Jostlein, H.

    1991-01-01

    The finite element method has proven to be a very useful tool for calculating pressure distributions in complex vacuum systems. A number of finite element programs have been developed for this specific task. For those who do not have access to one of these specialized programs and do not wish to develop their own program, another option is available. Any commercial finite element program with heat transfer analysis capabilities can be used to calculate pressure distributions. The approach uses an analogy between thermal conduction and gas conduction with the quantity temperature substituted for pressure. The thermal analogies for pumps, gas loads and tube conductances are described in detail. The method is illustrated for an example vacuum system. A listing of the ANSYS data input file for this example is included. 2 refs., 4 figs., 1 tab
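The thermal analogy described above can be sketched for a simple one-dimensional tube network, with pressure playing the role of temperature, tube conductance that of thermal conductance, gas load that of heat input, and the pump that of a link to zero. This is a minimal pure-Python illustration (not the ANSYS input of the paper), and all conductances and gas loads are hypothetical:

```python
def solve_pressures(cond, gasload, pump_speed, pump_node=0):
    """Steady-state pressures in a 1-D tube modelled as a conductance
    network via the heat-conduction analogy.
    cond[i]: conductance [L/s] between node i and i+1; gasload[i]: gas
    load [mbar L/s] at node i; pump_speed [L/s] ties pump_node to P = 0.
    All values are illustrative.  Returns node pressures [mbar]."""
    n = len(gasload)
    a = [[0.0] * n for _ in range(n)]       # conductance matrix
    for i in range(n - 1):
        c = cond[i]
        a[i][i] += c;      a[i][i + 1] -= c
        a[i + 1][i + 1] += c;  a[i + 1][i] -= c
    a[pump_node][pump_node] += pump_speed
    b = list(gasload)
    for col in range(n):                    # Gaussian elimination
        for row in range(col + 1, n):
            f = a[row][col] / a[col][col]
            for k in range(col, n):
                a[row][k] -= f * a[col][k]
            b[row] -= f * b[col]
    p = [0.0] * n
    for row in range(n - 1, -1, -1):        # back substitution
        s = b[row] - sum(a[row][k] * p[k] for k in range(row + 1, n))
        p[row] = s / a[row][row]
    return p
```

A quick consistency check is mass balance: the pump throughput S*P(pump node) must equal the total gas load, exactly as total heat input must leave through the heat sink in the thermal picture.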

  18. Programs for data processing in radioimmunoassay using the HP-41C programmable calculator

    International Nuclear Information System (INIS)

    1981-09-01

    The programs described provide for analysis, with the Hewlett Packard HP-41C calculator, of counting data collected in radioimmunoassays or other related in-vitro assays. The immediate reason for their development was to assist laboratories having limited financial resources and serious problems of quality control. The programs are structured both for ''off-line'' use, with manual entry of counting data into the calculator through the keyboard, and, in a slightly altered version, for ''on-line'' use, with automatic data entry from an automatic well scintillation counter originally designed at the IAEA. Only the off-line variant of the programs is described. The programs determine from appropriate counting data the concentration of analyte in unknown specimens, and provide supplementary information about the reliability of these results and the consistency of current and past assay performance

  19. Neutronic calculations for JET. Performed with the FURNACE2 program. (Final report JET contract JEO/9004)

    International Nuclear Information System (INIS)

    Verschuur, K.A.

    1996-10-01

    Neutron-transport calculations with the FURNACE(2) program system, in support of the Neutron Diagnostic Group at JET, have been performed since 1980, i.e. since the construction phase of JET. FURNACE(2) is a ray-tracing/multiple-reflection transport program system for toroidal geometries, that was originally developed for blanket neutronics studies and which then was improved and extended for application to the neutron-diagnostics at JET. (orig./WL)

  20. Efigie: a computer program for calculating end-isotope accumulation by neutron irradiation and radioactive decay

    International Nuclear Information System (INIS)

    Ropero, M.

    1978-01-01

    Efigie is a program written in Fortran V which can calculate the concentration of radionuclides produced by neutron irradiation of a target made of either a single isotope or several isotopes. The program includes optimization criteria that can be applied when the goal is the production of a single nuclide. The effect of a cooling time before chemical processing of the target is also accounted for.(author) [es
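The accumulation-and-decay calculation the program performs can be sketched for the simplest case of a single product nuclide, assuming negligible target burn-up (Efigie itself handles multi-isotope targets, chains and optimization criteria):

```python
import math

def activity_after_irradiation(phi, sigma_cm2, n_target, lam,
                               t_irr, t_cool=0.0):
    """Activity [decays/s] of a product nuclide after irradiating a
    target at flux phi [n/cm^2/s] with cross section sigma_cm2 [cm^2],
    followed by a cooling time t_cool [s] before chemical processing.
    Simplest activation equation (negligible target burn-up):
        A = phi*sigma*N_T * (1 - exp(-lam*t_irr)) * exp(-lam*t_cool)"""
    saturation = phi * sigma_cm2 * n_target
    return (saturation * (1.0 - math.exp(-lam * t_irr))
            * math.exp(-lam * t_cool))
```

Long irradiations approach the saturation activity phi*sigma*N_T, and each half-life of cooling before processing halves the activity, which is the effect of the cooling time the abstract mentions.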

  1. GRUCAL: a program system for the calculation of macroscopic group constants

    International Nuclear Information System (INIS)

    Woll, D.

    1984-01-01

    Nuclear reactor calculations require material- and composition-dependent, energy-averaged neutron physical data in order to describe the interaction between neutrons and isotopes. The multigroup cross section code GRUCAL calculates these macroscopic group constants for given material compositions from the material-dependent data of the group constant library GRUBA. The instructions for calculating group constants are not fixed in the program, but are read in from an instruction file. This makes it possible to adapt GRUCAL to various problems or different group constant concepts
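The basic operation such a code performs, combining material-dependent microscopic data into macroscopic constants, reduces for a single group and mixture to Sigma = sum_i N_i * sigma_i. A sketch with illustrative nuclide data (not GRUBA library values):

```python
def macroscopic_xs(density_g_cm3, composition):
    """Macroscopic cross section Sigma [1/cm] of a mixture:
        Sigma = sum_i N_i * sigma_i,  N_i = rho * w_i * N_A / M_i.
    composition: list of (weight fraction, molar mass [g/mol],
    microscopic cross section [barn]).  The values used in the example
    are illustrative textbook numbers, not GRUBA data."""
    N_A = 6.02214076e23      # Avogadro constant [1/mol]
    BARN = 1e-24             # 1 barn in cm^2
    return sum(density_g_cm3 * w * N_A / M * sigma * BARN
               for w, M, sigma in composition)

# Example: graphite, rho = 1.6 g/cm^3, sigma_total ~ 4.8 b
sigma_c = macroscopic_xs(1.6, [(1.0, 12.011, 4.8)])
```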

  2. PTOLEMY, a program for heavy-ion direct-reaction calculations

    International Nuclear Information System (INIS)

    Gloeckner, D.H.; Macfarlane, M.H.; Pieper, S.C.

    1976-03-01

    Ptolemy is an IBM/360 program for the computation of nuclear elastic and direct-reaction cross sections. It carries out both optical-model fits to elastic-scattering data at one or more energies, and DWBA calculations for nucleon-transfer reactions. Ptolemy has been specifically designed for heavy-ion calculations. It is fast and does not require large amounts of core. The input is exceptionally flexible and easy to use. This report outlines the types of calculation that Ptolemy can carry out, summarizes the formulas used, and gives a detailed description of its input

  3. Ptolemy: a program for heavy-ion direct-reaction calculations

    International Nuclear Information System (INIS)

    Macfarlane, M.H.; Pieper, S.C.

    1978-04-01

    Ptolemy is an IBM/360 program for the computation of nuclear elastic and direct-reaction cross sections. It carries out optical-model fits to elastic-scattering data at one or more energies and for one or more combinations of projectile and target, collective model DWBA calculations of excitation processes, and finite-range DWBA calculations of nucleon-transfer reactions. It is fast and does not require large amounts of core. The input is exceptionally flexible and easy to use. The types of calculations that Ptolemy can carry out are outlined, the formulas used are summarized, and a detailed description of its input is given

  4. The Navy/NASA Engine Program (NNEP89): Interfacing the program for the calculation of complex Chemical Equilibrium Compositions (CEC)

    Science.gov (United States)

    Gordon, Sanford

    1991-01-01

    The NNEP is a general computer program for calculating aircraft engine performance. NNEP has been used extensively to calculate the design and off-design (matched) performance of a broad range of turbine engines, ranging from subsonic turboprops to variable cycle engines for supersonic transports. Recently, however, there has been increased interest in applications that NNEP is not capable of simulating, such as the use of alternate fuels including cryogenic fuels and the inclusion of chemical dissociation effects at high temperatures. To overcome these limitations, NNEP was extended by including a general chemical equilibrium method. This permits consideration of any propellant system and the calculation of performance with dissociation effects. The new extended program is referred to as NNEP89.

  5. Hyperfine electric parameters calculation in Si samples irradiated with 57Mn

    International Nuclear Information System (INIS)

    Abreu, Y.; Cruz, C. M.; Pinnera, I.; Leyva, A.; Van Espen, P.; Perez, C.

    2011-01-01

    The radiation damage created in silicon crystalline material by 57 Mn→ 57 Fe ion implantation was characterized by Moessbauer spectroscopy showing three main lines, assigned to substitutional, interstitial and damage-configuration sites of the implanted ions. The hyperfine electric parameters, Quadrupole Splitting and Isomer Shift, were calculated for various implantation environments. In the calculations the full potential linearized-augmented plane-wave plus local orbitals ((L)APW+lo) method as embodied in the WIEN2k code was used. Good agreement was found between the experimental and the calculated values for some implantation configurations, suggesting that the implantation environments could be similar to the ones proposed by the authors. (Author)

  6. Fast patient-specific Monte Carlo brachytherapy dose calculations via the correlated sampling variance reduction technique

    Energy Technology Data Exchange (ETDEWEB)

    Sampson, Andrew; Le Yi; Williamson, Jeffrey F. [Department of Radiation Oncology, Virginia Commonwealth University, Richmond, Virginia 23298 (United States)

    2012-02-15

    Purpose: To demonstrate potential of correlated sampling Monte Carlo (CMC) simulation to improve the calculation efficiency for permanent seed brachytherapy (PSB) implants without loss of accuracy. Methods: CMC was implemented within an in-house MC code family (PTRAN) and used to compute 3D dose distributions for two patient cases: a clinical PSB postimplant prostate CT imaging study and a simulated post lumpectomy breast PSB implant planned on a screening dedicated breast cone-beam CT patient exam. CMC tallies the dose difference, {Delta}D, between highly correlated histories in homogeneous and heterogeneous geometries. The heterogeneous geometry histories were derived from photon collisions sampled in a geometrically identical but purely homogeneous medium geometry, by altering their particle weights to correct for bias. The prostate case consisted of 78 Model-6711 {sup 125}I seeds. The breast case consisted of 87 Model-200 {sup 103}Pd seeds embedded around a simulated lumpectomy cavity. Systematic and random errors in CMC were unfolded using low-uncertainty uncorrelated MC (UMC) as the benchmark. CMC efficiency gains, relative to UMC, were computed for all voxels, and the mean was classified in regions that received minimum doses greater than 20%, 50%, and 90% of D{sub 90}, as well as for various anatomical regions. Results: Systematic errors in CMC relative to UMC were less than 0.6% for 99% of the voxels and 0.04% for 100% of the voxels for the prostate and breast cases, respectively. For a 1 x 1 x 1 mm{sup 3} dose grid, efficiency gains were realized in all structures with 38.1- and 59.8-fold average gains within the prostate and breast clinical target volumes (CTVs), respectively. Greater than 99% of the voxels within the prostate and breast CTVs experienced an efficiency gain. Additionally, it was shown that efficiency losses were confined to low dose regions while the largest gains were located where little difference exists between the homogeneous and
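The correlated-sampling idea, tallying the difference between highly correlated histories rather than differencing two independent runs, can be illustrated with a toy one-dimensional example. This is not the PTRAN implementation; the two exponential "media" and their attenuation coefficients are hypothetical stand-ins for the homogeneous and heterogeneous geometries:

```python
import math, random

def delta_correlated(mu_hom, mu_het, n, seed=1):
    """Correlated-sampling estimate of E_het[x] - E_hom[x] for
    exponential path lengths p(x) = mu*exp(-mu*x).  Histories are
    sampled once in the 'homogeneous' medium; the 'heterogeneous' tally
    reuses the same histories with weight w = p_het(x)/p_hom(x), so the
    two tallies are highly correlated and their difference has low
    variance.  Returns (estimate, variance of the estimate)."""
    rng = random.Random(seed)
    diffs = []
    for _ in range(n):
        x = rng.expovariate(mu_hom)
        w = (mu_het * math.exp(-mu_het * x)) / (mu_hom * math.exp(-mu_hom * x))
        diffs.append(w * x - x)        # per-history difference tally
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)
    return mean, var / n

def delta_uncorrelated(mu_hom, mu_het, n, seed=2):
    """Same difference from two independent runs (the UMC benchmark)."""
    rng = random.Random(seed)
    het = [rng.expovariate(mu_het) for _ in range(n)]
    hom = [rng.expovariate(mu_hom) for _ in range(n)]
    mh, mo = sum(het) / n, sum(hom) / n
    vh = sum((x - mh) ** 2 for x in het) / (n - 1)
    vo = sum((x - mo) ** 2 for x in hom) / (n - 1)
    return mh - mo, (vh + vo) / n
```

Because the two media differ only slightly, the weights stay close to one and the correlated estimator's variance is orders of magnitude below the uncorrelated one for the same number of histories, which is the efficiency gain the abstract quantifies.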

  7. DNAStat, version 2.1--a computer program for processing genetic profile databases and biostatistical calculations.

    Science.gov (United States)

    Berent, Jarosław

    2010-01-01

    This paper presents the new DNAStat version 2.1 for processing genetic profile databases and biostatistical calculations. The popularization of DNA studies employed in the judicial system has led to the necessity of developing appropriate computer programs. Such programs must, above all, address two critical problems, i.e. the broadly understood data processing and data storage, and biostatistical calculations. Moreover, in case of terrorist attacks and mass natural disasters, the ability to identify victims by searching related individuals is very important. DNAStat version 2.1 is an adequate program for such purposes. The DNAStat version 1.0 was launched in 2005. In 2006, the program was updated to 1.1 and 1.2 versions. There were, however, slight differences between those versions and the original one. The DNAStat version 2.0 was launched in 2007 and the major program improvement was an introduction of the group calculation options with the potential application to personal identification of mass disasters and terrorism victims. The last 2.1 version has the option of language selection--Polish or English, which will enhance the usage and application of the program also in other countries.

  8. Sample size calculation while controlling false discovery rate for differential expression analysis with RNA-sequencing experiments.

    Science.gov (United States)

    Bi, Ran; Liu, Peng

    2016-03-31

    RNA-Sequencing (RNA-seq) experiments have been popularly applied to transcriptome studies in recent years. Such experiments are still relatively costly. As a result, RNA-seq experiments often employ a small number of replicates. Power analysis and sample size calculation are challenging in the context of differential expression analysis with RNA-seq data. One challenge is that there are no closed-form formulae to calculate power for the popularly applied tests for differential expression analysis. In addition, false discovery rate (FDR), instead of family-wise type I error rate, is controlled for the multiple testing error in RNA-seq data analysis. So far, there are very few proposals on sample size calculation for RNA-seq experiments. In this paper, we propose a procedure for sample size calculation while controlling FDR for RNA-seq experimental design. Our procedure is based on the weighted linear model analysis facilitated by the voom method which has been shown to have competitive performance in terms of power and FDR control for RNA-seq differential expression analysis. We derive a method that approximates the average power across the differentially expressed genes, and then calculate the sample size to achieve a desired average power while controlling FDR. Simulation results demonstrate that the actual power of several popularly applied tests for differential expression is achieved and is close to the desired power for RNA-seq data with sample size calculated based on our method. Our proposed method provides an efficient algorithm to calculate sample size while controlling FDR for RNA-seq experimental design. We also provide an R package ssizeRNA that implements our proposed method and can be downloaded from the Comprehensive R Archive Network ( http://cran.r-project.org ).
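The interplay between FDR control and average power described above can be sketched with a normal-approximation calculation: given a guess of the average power, FDR control implies a per-test significance threshold, which in turn determines the power, so the two are iterated to a fixed point and the sample size increased until the target is met. This is only an illustration of the idea; the paper's actual procedure is built on the voom weighted linear model, and all numbers below are hypothetical:

```python
import math
from statistics import NormalDist

def avg_power(n, delta, m, m1, fdr):
    """Approximate average power for m1 differentially expressed genes
    out of m, two-group comparison with standardized effect delta and n
    samples per group, at the per-test threshold alpha* implied by
    controlling FDR at level fdr (normal-approximation sketch; the
    paper's method is voom-based, not this z-test)."""
    z = NormalDist()
    m0 = m - m1
    power = 0.5                      # fixed-point iteration on power
    for _ in range(100):
        # FDR ~ m0*alpha / (m0*alpha + m1*power)  =>  solve for alpha
        alpha = fdr * m1 * power / (m0 * (1.0 - fdr))
        alpha = min(max(alpha, 1e-12), 0.5)
        z_crit = z.inv_cdf(1.0 - alpha / 2.0)
        power = 1.0 - z.cdf(z_crit - delta * math.sqrt(n / 2.0))
    return power

def sample_size(delta, m, m1, fdr, target=0.8):
    """Smallest per-group n whose approximate average power meets the
    target while controlling FDR."""
    n = 2
    while avg_power(n, delta, m, m1, fdr) < target:
        n += 1
    return n
```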

  9. Guidance for establishment and implementation of a national sample management program in support of EM environmental sampling and analysis activities

    International Nuclear Information System (INIS)

    1994-01-01

    The role of the National Sample Management Program (NSMP) proposed by the Department of Energy's Office of Environmental Management (EM) is to be a resource for EM programs and for local Field Sample Management Programs (FSMPs). It will be a source of information on sample analysis and data collection within the DOE complex. Therefore the NSMP's primary role is to coordinate and function as a central repository for information collected from the FSMPs. An additional role of the NSMP is to monitor trends in data collected from the FSMPs over time and across sites and laboratories. Tracking these trends will allow identification of potential problems in the sampling and analysis process

  10. Sample Results from the Interim Salt Disposition Program Macrobatch 8 Tank 21H Qualification Samples

    Energy Technology Data Exchange (ETDEWEB)

    Peters, T. B. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Washington, A. L. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2015-01-01

    Savannah River National Laboratory (SRNL) analyzed samples from Tank 21H in support of qualification of Macrobatch (Salt Batch) 8 for the Interim Salt Disposition Program (ISDP). An Actinide Removal Process (ARP) and several Extraction-Scrub- Strip (ESS) tests were also performed. This document reports characterization data on the samples of Tank 21H as well as simulated performance of ARP and the Modular Caustic Side Solvent Extraction (CSSX) Unit (MCU). No issues with the projected Salt Batch 8 strategy are identified. A demonstration of the monosodium titanate (MST) (0.2 g/L) removal of strontium and actinides provided acceptable average decontamination factors for plutonium of 2.62 (4 hour) and 2.90 (8 hour); and average strontium decontamination factors of 21.7 (4 hour) and 21.3 (8 hour). These values are consistent with results from previous salt batch ARP tests. The two ESS tests also showed acceptable performance with extraction distribution ratios (D(Cs)) values of 52.5 and 50.4 for the Next Generation Solvent (NGS) blend (from MCU) and NGS (lab prepared), respectively. These values are consistent with results from previous salt batch ESS tests. Even though the performance is acceptable, SRNL recommends that a model for predicting extraction behavior for cesium removal for the blended solvent and NGS be developed in order to improve our predictive capabilities for the ESS tests.

  11. Sample results from the Interim Salt Disposition Program Macrobatch 8 Tank 21H qualification samples

    Energy Technology Data Exchange (ETDEWEB)

    Peters, T. B. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Washington, II, A. L. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2015-01-13

    Savannah River National Laboratory (SRNL) analyzed samples from Tank 21H in support of qualification of Macrobatch (Salt Batch) 8 for the Interim Salt Disposition Program (ISDP). An Actinide Removal Process (ARP) and several Extraction-Scrub-Strip (ESS) tests were also performed. This document reports characterization data on the samples of Tank 21H as well as simulated performance of ARP and the Modular Caustic Side Solvent Extraction (CSSX) Unit (MCU). No issues with the projected Salt Batch 8 strategy are identified. A demonstration of the monosodium titanate (MST) (0.2 g/L) removal of strontium and actinides provided acceptable average decontamination factors for plutonium of 2.62 (4 hour) and 2.90 (8 hour); and average strontium decontamination factors of 21.7 (4 hour) and 21.3 (8 hour). These values are consistent with results from previous salt batch ARP tests. The two ESS tests also showed acceptable performance with extraction distribution ratios (D(Cs)) values of 52.5 and 50.4 for the Next Generation Solvent (NGS) blend (from MCU) and NGS (lab prepared), respectively. These values are consistent with results from previous salt batch ESS tests. Even though the performance is acceptable, SRNL recommends that a model for predicting extraction behavior for cesium removal for the blended solvent and NGS be developed in order to improve our predictive capabilities for the ESS tests.

  12. Guidance for establishment and implementation of field sample management programs in support of EM environmental sampling and analysis activities

    International Nuclear Information System (INIS)

    1994-01-01

    The role of the National Sample Management Program (NSMP) proposed by the Department of Energy's Office of Environmental Management (EM) is to be a resource for EM programs and for local Field Sample Management Programs (FSMPs). It will be a source of information on sample analysis and data collection within the DOE complex. The purpose of this document is to establish the suggested scope of the FSMP activities to be performed under each Operations Office, list the drivers under which the program will operate, define terms and list references. This guidance will apply only to EM sampling and analysis activities associated with project planning, contracting, laboratory selection, sample collection, sample transportation, laboratory analysis and data management

  13. Sample Results From The Interim Salt Disposition Program Macrobatch 7 Tank 21H Qualification Samples

    Energy Technology Data Exchange (ETDEWEB)

    Peters, T. B.; Washington, A. L. II

    2013-08-08

    Savannah River National Laboratory (SRNL) analyzed samples from Tank 21H in support of qualification of Macrobatch (Salt Batch) 7 for the Interim Salt Disposition Program (ISDP). An ARP and several ESS tests were also performed. This document reports characterization data on the samples of Tank 21H as well as simulated performance of ARP/MCU. No issues with the projected Salt Batch 7 strategy are identified, other than the presence of visible quantities of dark colored solids. A demonstration of the monosodium titanate (0.2 g/L) removal of strontium and actinides provided acceptable 4 hour average decontamination factors for Pu and Sr of 3.22 and 18.4, respectively. The four ESS tests also showed acceptable behavior with distribution ratios (D(Cs)) values of 15.96, 57.1, 58.6, and 65.6 for the MCU, cold blend, hot blend, and Next Generation Solvent (NGS), respectively. The predicted value for the MCU solvent was 13.2. Currently, there are no models that would allow a prediction of extraction behavior for the other three solvents. SRNL recommends that a model for predicting extraction behavior for cesium removal for the blended solvent and NGS be developed. While no outstanding issues were noted, the presence of solids in the samples should be investigated in future work. It is possible that the solids may represent a potential reservoir of material (such as potassium) that could have an impact on MCU performance if they were to dissolve back into the feed solution. This salt batch is intended to be the first batch to be processed through MCU entirely using the new NGS-MCU solvent.

  14. Are the program packages for molecular structure calculations really black boxes?

    Directory of Open Access Journals (Sweden)

    ANA MRAKOVIC

    2007-12-01

    In this communication it is shown that the widely held opinion that compact program packages for quantum–mechanical calculations of molecular structure can safely be used as black boxes is completely wrong. In order to illustrate this, the results of computations of equilibrium bond lengths, vibrational frequencies and dissociation energies for all homonuclear diatomic molecules involving the atoms from the first two rows of the Periodic Table, performed using the Gaussian program package are presented. It is demonstrated that the sensible use of the program requires a solid knowledge of quantum chemistry.

  15. 'PRIZE': A program for calculating collision probabilities in R-Z geometry

    International Nuclear Information System (INIS)

    Pitcher, H.H.W.

    1964-10-01

    PRIZE is an IBM7090 program which computes collision probabilities for systems with axial symmetry and outputs them on cards in suitable format for the PIP1 program. Its method of working, data requirements, output, running time and accuracy are described. The program has been used to compute non-escape (self-collision) probabilities of finite circular cylinders, and a table is given by which non-escape probabilities of slabs, finite and infinite circular cylinders, infinite square cylinders, cubes, spheres and hemispheres may quickly be calculated to 1/2% or better. (author)
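The self-collision (non-escape) probabilities tabulated by PRIZE can be checked by brute force for a finite cylinder with a uniform isotropic source. The sketch below is a Monte Carlo estimate, not PRIZE's deterministic integration; the cross section and dimensions are illustrative:

```python
import math, random

def non_escape_probability(sigma, radius, height, n=20000, seed=7):
    """Monte Carlo estimate of the first-flight non-escape
    (self-collision) probability of a finite circular cylinder with a
    uniform isotropic source and total cross section sigma [1/cm]:
        P = 1 - E[exp(-sigma * s)],  s = distance to the surface."""
    rng = random.Random(seed)
    esc = 0.0
    for _ in range(n):
        # uniform point inside the cylinder
        r = radius * math.sqrt(rng.random())
        phi = 2.0 * math.pi * rng.random()
        x, y = r * math.cos(phi), r * math.sin(phi)
        z = height * rng.random()
        # isotropic direction
        mu = 2.0 * rng.random() - 1.0
        w = 2.0 * math.pi * rng.random()
        st = math.sqrt(1.0 - mu * mu)
        ux, uy, uz = st * math.cos(w), st * math.sin(w), mu
        # distance to the curved wall: |(x,y) + t*(ux,uy)| = radius
        a = ux * ux + uy * uy
        if a > 1e-12:
            b = x * ux + y * uy
            t_side = (-b + math.sqrt(b * b + a * (radius * radius
                                                  - x * x - y * y))) / a
        else:
            t_side = float('inf')
        # distance to the flat ends
        if uz > 1e-12:
            t_end = (height - z) / uz
        elif uz < -1e-12:
            t_end = -z / uz
        else:
            t_end = float('inf')
        esc += math.exp(-sigma * min(t_side, t_end))
    return 1.0 - esc / n
```

As expected, the probability grows monotonically with the optical thickness and approaches one for a strongly absorbing body, the regime in which the short tables described in the abstract are most useful.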

  16. 'PRIZE': A program for calculating collision probabilities in R-Z geometry

    Energy Technology Data Exchange (ETDEWEB)

    Pitcher, H.H.W. [General Reactor Physics Division, Atomic Energy Establishment, Winfrith, Dorchester, Dorset (United Kingdom)

    1964-10-15

    PRIZE is an IBM7090 program which computes collision probabilities for systems with axial symmetry and outputs them on cards in suitable format for the PIP1 program. Its method of working, data requirements, output, running time and accuracy are described. The program has been used to compute non-escape (self-collision) probabilities of finite circular cylinders, and a table is given by which non-escape probabilities of slabs, finite and infinite circular cylinders, infinite square cylinders, cubes, spheres and hemispheres may quickly be calculated to 1/2% or better. (author)

  17. REITP3-Hazard evaluation program for heat release based on thermochemical calculation

    Energy Technology Data Exchange (ETDEWEB)

    Akutsu, Yoshiaki.; Tamura, Masamitsu. [The University of Tokyo, Tokyo (Japan). School of Engineering; Kawakatsu, Yuichi. [Oji Paper Corp., Tokyo (Japan); Wada, Yuji. [National Institute for Resources and Environment, Tsukuba (Japan); Yoshida, Tadao. [Hosei University, Tokyo (Japan). College of Engineering

    1999-06-30

    REITP3, a hazard evaluation program for heat release based on thermochemical calculation, has been developed by modifying REITP2 (Revised Estimation of Incompatibility from Thermochemical Properties{sup 2)}. The main modifications are as follows. (1) Reactants are retrieved from the database by chemical formula. (2) As products are listed in an external file, products can easily be added and their order of production changed. (3) Part of the program has been changed to allow its use on a personal computer or workstation. These modifications will promote the usefulness of the program for energy hazard evaluation. (author)
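The core thermochemical step of such a hazard evaluation, the heat released by a candidate reaction computed from tabulated heats of formation, can be sketched as follows. The example uses standard textbook dHf values for methane combustion, not REITP3's database:

```python
def heat_of_reaction(reactants, products):
    """Standard heat of reaction [kJ/mol] from heats of formation:
        dH = sum(nu * dHf, products) - sum(nu * dHf, reactants).
    Entries are (stoichiometric coefficient, dHf [kJ/mol])."""
    return (sum(nu * h for nu, h in products)
            - sum(nu * h for nu, h in reactants))

# Methane combustion CH4 + 2 O2 -> CO2 + 2 H2O(g),
# standard dHf: CH4 -74.8, O2 0.0, CO2 -393.5, H2O(g) -241.8 kJ/mol
dh = heat_of_reaction([(1, -74.8), (2, 0.0)],
                      [(1, -393.5), (2, -241.8)])
# dh is about -802.3 kJ/mol (strongly exothermic)
```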

  18. INDRA: a program system for calculating the neutronics and photonics characteristics of a fusion reactor blanket

    International Nuclear Information System (INIS)

    Perry, R.T.; Gorenflo, H.; Daenner, W.

    1976-01-01

    INDRA is a program system for calculating the neutronics and photonics characteristics of fusion reactor blankets. It incorporates a total of 19 different codes and 5 large data libraries. 10 of the codes are available from the code distribution organizations. Some of them, however, have been slightly modified in order to permit a convenient transfer of information from one program module to the next. The remaining 9 programs have been prepared by the authors to complete the system with respect to flexibility and to facilitate the handling of the results. (orig./WBU) [de

  19. Sample Size Calculation: Inaccurate A Priori Assumptions for Nuisance Parameters Can Greatly Affect the Power of a Randomized Controlled Trial.

    Directory of Open Access Journals (Sweden)

    Elsa Tavernier

    We aimed to examine the extent to which inaccurate assumptions for nuisance parameters used to calculate sample size can affect the power of a randomized controlled trial (RCT). In a simulation study, we separately considered an RCT with continuous, dichotomous or time-to-event outcomes, with associated nuisance parameters of standard deviation, success rate in the control group and survival rate in the control group at some time point, respectively. For each type of outcome, we calculated a required sample size N for a hypothesized treatment effect, an assumed nuisance parameter and a nominal power of 80%. We then assumed a nuisance parameter associated with a relative error at the design stage. For each type of outcome, we randomly drew 10,000 relative errors of the associated nuisance parameter (from empirical distributions derived from a previously published review). Then, retro-fitting the sample size formula, we derived, for the pre-calculated sample size N, the real power of the RCT, taking into account the relative error for the nuisance parameter. In total, 23%, 0% and 18% of RCTs with continuous, binary and time-to-event outcomes, respectively, were underpowered (i.e., the real power fell short of the nominal 80%). Even with proper calculation of sample size, a substantial number of trials are underpowered or overpowered because of imprecise knowledge of nuisance parameters. Such findings raise questions about how sample size for RCTs should be determined.
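The effect of a misspecified nuisance parameter can be reproduced for the continuous-outcome case with a normal-approximation sketch (illustrative only; the study itself used simulation over empirical error distributions):

```python
import math
from statistics import NormalDist

def required_n(delta, sd, alpha=0.05, power=0.80):
    """Per-group sample size for a two-sample z-test (normal
    approximation) to detect mean difference delta with the standard
    deviation assumed at the design stage."""
    z = NormalDist()
    za, zb = z.inv_cdf(1.0 - alpha / 2.0), z.inv_cdf(power)
    return math.ceil(2.0 * ((za + zb) * sd / delta) ** 2)

def real_power(n, delta, true_sd, alpha=0.05):
    """Power actually achieved when the true standard deviation differs
    from the value assumed when n was calculated."""
    z = NormalDist()
    za = z.inv_cdf(1.0 - alpha / 2.0)
    return 1.0 - z.cdf(za - delta / (true_sd * math.sqrt(2.0 / n)))
```

For example, designing for delta = 5 with an assumed sd of 10 gives n = 63 per group at 80% nominal power, but if the true sd is 12 (a 20% underestimate at the design stage) the real power drops well below 70%, the phenomenon the abstract quantifies.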

  20. A general framework for implementing NLO calculations in shower Monte Carlo programs. The POWHEG BOX

    Energy Technology Data Exchange (ETDEWEB)

    Alioli, Simone [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany); Nason, Paolo [INFN, Milano-Bicocca (Italy); Oleari, Carlo [INFN, Milano-Bicocca (Italy); Milano-Bicocca Univ. (Italy); Re, Emanuele [Durham Univ. (United Kingdom). Inst. for Particle Physics Phenomenology

    2010-02-15

    In this work we illustrate the POWHEG BOX, a general computer code framework for implementing NLO calculations in shower Monte Carlo programs according to the POWHEG method. The aim of this work is to provide an illustration of the needed theoretical ingredients, a view of how the code is organized and a description of what a user should provide in order to use it. (orig.)

  1. d'plus: A program to calculate accuracy and bias measures from detection and discrimination data.

    Science.gov (United States)

    Macmillan, N A; Creelman, C D

    1997-01-01

    The program d'plus calculates accuracy (sensitivity) and response-bias parameters using Signal Detection Theory, Choice Theory, and 'nonparametric' models. It is appropriate for data from one-interval, two- and three-interval forced-choice, same-different, ABX, and oddity experimental paradigms.
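For the one-interval (yes/no) paradigm, the Signal Detection Theory measures such a program reports reduce to the standard formulas, sketched below:

```python
from statistics import NormalDist

def dprime_and_bias(hit_rate, fa_rate):
    """Yes/no (one-interval) sensitivity and bias from Signal Detection
    Theory:  d' = z(H) - z(F),  criterion c = -(z(H) + z(F)) / 2.
    Rates of exactly 0 or 1 should be corrected (e.g. by 1/(2N))
    before calling, since z is undefined there."""
    z = NormalDist().inv_cdf
    d = z(hit_rate) - z(fa_rate)
    c = -0.5 * (z(hit_rate) + z(fa_rate))
    return d, c

# Symmetric performance, H = 0.84 and F = 0.16, gives d' near 2
# with zero bias.
```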

  2. BLOW.MOD2: program for a vessel depressurization calculation with the contribution of structures

    International Nuclear Information System (INIS)

    Doval, A.

    1990-01-01

    The BLOW.MOD2 program, developed to calculate the depressurization of pressure vessels taking into account the heat contribution of the structures, is presented. The results are compared with those obtained from other, more complex numerical models, and the agreement is extremely satisfactory. BLOW.MOD2 is a software of the 'Systems Sub-Branch', INVAP S.E. (Author) [es

  3. A general framework for implementing NLO calculations in shower Monte Carlo programs. The POWHEG BOX

    International Nuclear Information System (INIS)

    Alioli, Simone; Nason, Paolo; Oleari, Carlo; Re, Emanuele

    2010-02-01

    In this work we illustrate the POWHEG BOX, a general computer code framework for implementing NLO calculations in shower Monte Carlo programs according to the POWHEG method. The aim of this work is to provide an illustration of the needed theoretical ingredients, a view of how the code is organized and a description of what a user should provide in order to use it. (orig.)

  4. Temperature programmed retention indices : calculation from isothermal data Part 2: Results with nonpolar columns

    NARCIS (Netherlands)

    Curvers, J.M.P.M.; Rijks, J.A.; Cramers, C.A.M.G.; Knauss, K.; Larson, P.

    1985-01-01

    The procedure for calculating linear temperature-programmed indices described in Part 1 has been evaluated using five different nonpolar columns, with OV-1 as the stationary phase, for forty-three different solutes covering five different classes of components, including n-alkanes and
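
Under a linear temperature program, retention indices are conventionally obtained by linear (rather than logarithmic) interpolation between the bracketing n-alkanes, the van den Dool and Kratz form. A minimal sketch of that formula with hypothetical retention times (the paper's own isothermal-to-programmed conversion is more elaborate):

```python
def linear_ri(t_x, t_n, t_n1, n):
    """Linear temperature-programmed retention index.

    t_x   retention time of the solute
    t_n   retention time of the n-alkane with n carbons
    t_n1  retention time of the n-alkane with n + 1 carbons
    """
    return 100.0 * (n + (t_x - t_n) / (t_n1 - t_n))

# hypothetical times: a solute eluting midway between C10 and C11
ri = linear_ri(t_x=12.5, t_n=12.0, t_n1=13.0, n=10)  # 1050.0
```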

  5. SERKON program for compiling a multigroup library to be used in BETTY calculation

    International Nuclear Information System (INIS)

    Nguyen Phuoc Lan.

    1982-11-01

    A SERKON-type program was written to compile data sets generated by FEDGROUP-3 into a multigroup library for BETTY calculation. A multigroup library was generated from the ENDF/B-IV data file and tested against the TRX-1 and TRX-2 lattices with good results. (author)

  6. LALAGE - a computer program to calculate the TM01 modes of cylindrically symmetrical multicell resonant structures

    International Nuclear Information System (INIS)

    Fernandes, P.

    1982-01-01

    An improvement has been made to the LALA program to compute resonant frequencies and fields for all the modes of the lowest TM01 pass-band of multicell structures. The results are compared with those calculated by another popular rf cavity code and with experimentally measured quantities. (author)

  7. Computer program TMOC for calculating pressure transients in fluid-filled piping networks

    International Nuclear Information System (INIS)

    Siikonen, T.

    1978-01-01

    The propagation of a pressure wave in fluid-filled tubes is significantly affected by the pipe wall motion and vice versa. A computer code TMOC (Transients by the Method of Characteristics) is being developed for the analysis of the coupled fluid and pipe wall transients. Because of the structural feedback, the pressure can be calculated more accurately than with the programs commonly used. (author)
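
The method of characteristics underlying codes of this kind reduces the water-hammer equations to two compatibility relations marched on a fixed grid. Below is a minimal frictionless, rigid-pipe sketch with all parameters hypothetical (and without the structural feedback that TMOC adds); it recovers the classical Joukowsky surge a·dV/g at an instantaneously closed valve.

```python
# Frictionless method-of-characteristics (MOC) sketch for a single pipe:
# reservoir at the upstream end, valve closed instantly at the downstream end.
g, a = 9.81, 1200.0      # gravity (m/s^2) and pressure-wave speed (m/s)
L, A = 600.0, 0.05       # pipe length (m) and flow area (m^2)
N = 20                   # number of reaches; time step dt = (L/N) / a
H0, Q0 = 50.0, 0.01      # initial head (m) and steady discharge (m^3/s)
B = a / (g * A)          # characteristic impedance

H = [H0] * (N + 1)       # piezometric head at each node
Q = [Q0] * (N + 1)       # discharge at each node
peak = H0
for _ in range(2 * N):   # march one full reflection period 2L/a
    Cp = [H[i - 1] + B * Q[i - 1] for i in range(1, N + 1)]  # C+ characteristics
    Cm = [H[i + 1] - B * Q[i + 1] for i in range(N)]         # C- characteristics
    for i in range(1, N):                                    # interior nodes
        H[i] = 0.5 * (Cp[i - 1] + Cm[i])
        Q[i] = (Cp[i - 1] - Cm[i]) / (2 * B)
    H[0], Q[0] = H0, (H0 - Cm[0]) / B                        # upstream reservoir
    Q[N], H[N] = 0.0, Cp[N - 1]                              # closed valve
    peak = max(peak, H[N])

joukowsky = a * (Q0 / A) / g   # expected surge above H0: about 24.5 m here
```

With a Courant number of one and no friction, the scheme is exact, so the peak head at the valve equals H0 plus the Joukowsky rise; real codes add friction terms and, as in TMOC, coupling to the pipe wall equations.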

  8. The Weak Link HP-41C hand-held calculator program

    Science.gov (United States)

    Ross A. Phillips; Penn A. Peters; Gary D. Falk

    1982-01-01

    The Weak Link hand-held calculator program (HP-41C) quickly analyzes a logging system for production and costs. The production equations model conventional chain saw, skidder, loader, and tandem-axle truck operations in eastern mountain areas. Production of each function of the logging system may be determined so that the system may be balanced for minimum cost. The...

  9. Use of CITATION code for flux calculation in neutron activation analysis with voluminous sample using an Am-Be source

    International Nuclear Information System (INIS)

    Khelifi, R.; Idiri, Z.; Bode, P.

    2002-01-01

    The CITATION code, based on neutron diffusion theory, was used for flux calculations inside voluminous samples in prompt gamma activation analysis with an isotopic neutron source (Am-Be). The code uses specific parameters related to the energy spectrum of the source and the irradiation system materials (shielding, reflector). The flux distribution (thermal and fast) was calculated in three-dimensional geometry for the system: air, polyethylene and a cuboidal water sample (50×50×50 cm). The thermal flux was calculated at a series of points inside the sample, and the results agreed reasonably well with observed values. The maximum thermal flux was observed at a depth of 3.2 cm, while CITATION gave 3.7 cm. Beyond a depth of 7.2 cm, the thermal-to-fast flux ratio increases by up to a factor of two, which allows us to optimise the position of the detection system in the scope of in-situ PGAA

  10. Discrepancies in sample size calculations and data analyses reported in randomised trials: comparison of publications with protocols

    DEFF Research Database (Denmark)

    Chan, A.W.; Hrobjartsson, A.; Jorgensen, K.J.

    2008-01-01

    OBJECTIVE: To evaluate how often sample size calculations and methods of statistical analysis are pre-specified or changed in randomised trials. DESIGN: Retrospective cohort study. DATA SOURCE: Protocols and journal publications of published randomised parallel group trials initially approved...... in 1994-5 by the scientific-ethics committees for Copenhagen and Frederiksberg, Denmark (n=70). MAIN OUTCOME MEASURE: Proportion of protocols and publications that did not provide key information about sample size calculations and statistical methods; proportion of trials with discrepancies between...... of handling missing data was described in 16 protocols and 49 publications. 39/49 protocols and 42/43 publications reported the statistical test used to analyse primary outcome measures. Unacknowledged discrepancies between protocols and publications were found for sample size calculations (18/34 trials...

  11. ptchg: A FORTRAN program for point-charge calculations of electric field gradients (EFGs)

    Science.gov (United States)

    Spearing, Dane R.

    1994-05-01

    ptchg, a FORTRAN program, has been developed to calculate electric field gradients (EFG) around an atomic site in crystalline solids using the point-charge direct-lattice summation method. It uses output from the crystal structure generation program Atoms as its input. As an application of ptchg, a point-charge calculation of the EFG quadrupolar parameters around the oxygen site in SiO2 cristobalite is demonstrated. Although point-charge calculations of electric field gradients generally are limited to ionic compounds, the computed quadrupolar parameters around the oxygen site in SiO2 cristobalite, a highly covalent material, are in good agreement with the experimentally determined values from nuclear magnetic resonance (NMR) spectroscopy.
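
The point-charge sum at the heart of such a program is the traceless second-derivative tensor of the Coulomb potential, V_ij = Σ_k q_k (3 x_i x_j − r² δ_ij) / r⁵, evaluated at the probe site over neighbouring charges. A minimal sketch with hypothetical charges (no lattice generation, convergence handling, or Sternheimer corrections, unlike the real ptchg):

```python
from itertools import product

def efg_tensor(charges):
    """Point-charge EFG at the origin.

    charges: iterable of (q, (x, y, z)) pairs in arbitrary consistent units.
    Returns the 3x3 tensor V_ij = sum_k q_k (3 x_i x_j - r^2 d_ij) / r^5.
    """
    V = [[0.0] * 3 for _ in range(3)]
    for q, pos in charges:
        r2 = sum(x * x for x in pos)
        r5 = r2 ** 2.5
        for i, j in product(range(3), repeat=2):
            V[i][j] += q * (3 * pos[i] * pos[j] - (r2 if i == j else 0.0)) / r5
    return V

# two unit charges on the z axis: an axially symmetric EFG
V = efg_tensor([(1.0, (0.0, 0.0, 1.0)), (1.0, (0.0, 0.0, -1.0))])
eta = (V[0][0] - V[1][1]) / V[2][2]   # conventional asymmetry parameter, 0 here
```

The tensor is traceless by construction, and after diagonalization its largest principal component Vzz and the asymmetry parameter η give the quadrupolar parameters compared against NMR.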

  12. PABLM: a computer program to calculate accumulated radiation doses from radionuclides in the environment

    Energy Technology Data Exchange (ETDEWEB)

    Napier, B.A.; Kennedy, W.E. Jr.; Soldat, J.K.

    1980-03-01

    A computer program, PABLM, was written to facilitate the calculation of internal radiation doses to man from radionuclides in food products and external radiation doses from radionuclides in the environment. This report contains details of mathematical models used and calculational procedures required to run the computer program. Radiation doses from radionuclides in the environment may be calculated from deposition on the soil or plants during an atmospheric or liquid release, or from exposure to residual radionuclides in the environment after the releases have ended. Radioactive decay is considered during the release of radionuclides, after they are deposited on the plants or ground, and during holdup of food after harvest. The radiation dose models consider several exposure pathways. Doses may be calculated for either a maximum-exposed individual or for a population group. The doses calculated are accumulated doses from continuous chronic exposure. A first-year committed dose is calculated as well as an integrated dose for a selected number of years. The equations for calculating internal radiation doses are derived from those given by the International Commission on Radiological Protection (ICRP) for body burdens and MPC's of each radionuclide. The radiation doses from external exposure to contaminated water and soil are calculated using the basic assumption that the contaminated medium is large enough to be considered an infinite volume or plane relative to the range of the emitted radiations. The equations for calculations of the radiation dose from external exposure to shoreline sediments include a correction for the finite width of the contaminated beach.
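
One ingredient noted above, radioactive decay during holdup of food between harvest and consumption, is a simple exponential correction to the ingested activity. An illustrative calculation with hypothetical numbers (a single-factor sketch, not PABLM's full pathway and dose model):

```python
from math import exp, log

def intake_after_holdup(conc_at_harvest, half_life_d, holdup_d, consumption_kg):
    """Activity ingested (Bq) after decay over the holdup period.

    conc_at_harvest  concentration in the food at harvest (Bq/kg)
    half_life_d      radionuclide half-life (days)
    holdup_d         delay between harvest and consumption (days)
    consumption_kg   mass of food consumed (kg)
    """
    lam = log(2) / half_life_d                    # decay constant (1/day)
    return conc_at_harvest * exp(-lam * holdup_d) * consumption_kg

# I-131 (half-life about 8.02 d): a 14-day holdup removes most of the activity
bq = intake_after_holdup(conc_at_harvest=100.0, half_life_d=8.02,
                         holdup_d=14.0, consumption_kg=2.0)
```

In the full code this intake would then be multiplied by radionuclide-specific dose factors and accumulated over the exposure period.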

  13. A program for calculating group constants on the basis of libraries of evaluated neutron data

    International Nuclear Information System (INIS)

    Sinitsa, V.V.

    1987-01-01

    The GRUKON program is designed for processing libraries of evaluated neutron data into group and fine-group (having some 300 groups) microscopic constants. In structure it is a package of applications programs with three basic components: a monitor, a command language and a library of functional modules. The first operative version of the package was restricted to obtaining mid-group non-block cross-sections from evaluated neutron data libraries in the ENDF/B format. This was then used to process other libraries. In the next two versions, cross-section table conversion modules and self-shielding factor calculation modules, respectively, were added to the functions already in the package. Currently, a fourth version of the GRUKON applications program package, for calculation of sub-group parameters, is under preparation. (author)

  14. Assessment model validity document. NAMMU: A program for calculating groundwater flow and transport through porous media

    International Nuclear Information System (INIS)

    Cliffe, K.A.; Morris, S.T.; Porter, J.D.

    1998-05-01

    NAMMU is a computer program for modelling groundwater flow and transport through porous media. This document provides an overview of the use of the program for geosphere modelling in performance assessment calculations and gives a detailed description of the program itself. The aim of the document is to give an indication of the grounds for having confidence in NAMMU as a performance assessment tool. In order to achieve this the following topics are discussed. The basic premises of the assessment approach and the purpose of and nature of the calculations that can be undertaken using NAMMU are outlined. The concepts of the validation of models and the considerations that can lead to increased confidence in models are described. The physical processes that can be modelled using NAMMU and the mathematical models and numerical techniques that are used to represent them are discussed in some detail. Finally, the grounds that would lead one to have confidence that NAMMU is fit for purpose are summarised

  15. Using Symbolic TI Calculators in Engineering Mathematics: Sample Tasks and Reflections from a Decade of Practice

    Science.gov (United States)

    Beaudin, Michel; Picard, Gilles

    2010-01-01

    Starting in September 1999, new students at ETS were required to own the TI-92 Plus or TI-89 symbolic calculator and since September 2002, the Voyage 200. Looking back at these ten years of working with a computer algebra system on every student's desk, one could ask whether the introduction of this hand-held technology has really forced teachers…

  16. Importance sampling and histogrammic representations of reactivity functions and product distributions in Monte Carlo quasiclassical trajectory calculations

    International Nuclear Information System (INIS)

    Faist, M.B.; Muckerman, J.T.; Schubert, F.E.

    1978-01-01

    The application of importance sampling as a variance reduction technique in Monte Carlo quasiclassical trajectory calculations is discussed. Two measures are proposed which quantify the quality of the importance sampling used, and indicate whether further improvements may be obtained by some other choice of importance sampling function. A general procedure for constructing standardized histogrammic representations of differential functions which integrate to the appropriate integral value obtained from a trajectory calculation is presented. Two criteria for "optimum" binning of these histogrammic representations of differential functions are suggested. These are (1) that each bin makes an equal contribution to the integral value, and (2) that each bin has the same relative error. Numerical examples illustrating these sampling and binning concepts are provided
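
The variance-reduction idea is easiest to see on a scalar expectation: draw from a biased density concentrated where the integrand matters, and weight each sample by the density ratio. A toy sketch (unrelated to the trajectory code) estimating a normal tail probability, with an effective-sample-size diagnostic in the spirit of the quality measures proposed above:

```python
import random
from math import exp
from statistics import NormalDist

random.seed(1)
TRUE = 1.0 - NormalDist().cdf(3.0)      # P(X > 3) for a standard normal

# Sample from N(3, 1) instead of N(0, 1); the importance weight is the
# density ratio phi(y) / phi(y - 3) = exp(-3*y + 4.5).
n = 200_000
w = []
for _ in range(n):
    y = random.gauss(3.0, 1.0)
    w.append(exp(-3.0 * y + 4.5) if y > 3.0 else 0.0)

estimate = sum(w) / n
ess = sum(w) ** 2 / sum(x * x for x in w)   # effective sample size diagnostic
```

Naive sampling from N(0, 1) would see the rare event in only about 0.13% of draws; the shifted density hits it roughly half the time, and a low ESS relative to n would signal that a better importance density should be chosen.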

  17. Sample problem calculations related to two-phase flow transients in a PWR relief-piping network

    International Nuclear Information System (INIS)

    Shin, Y.W.; Wiedermann, A.H.

    1981-03-01

    Two sample problems related with the fast transients of water/steam flow in the relief line of a PWR pressurizer were calculated with a network-flow analysis computer code STAC (System Transient-Flow Analysis Code). The sample problems were supplied by EPRI and are designed to test computer codes or computational methods to determine whether they have the basic capability to handle the important flow features present in a typical relief line of a PWR pressurizer. It was found necessary to implement into the STAC code a number of additional boundary conditions in order to calculate the sample problems. This includes the dynamics of the fluid interface that is treated as a moving boundary. This report describes the methodologies adopted for handling the newly implemented boundary conditions and the computational results of the two sample problems. In order to demonstrate the accuracies achieved in the STAC code results, analytical solutions are also obtained and used as a basis for comparison

  18. Efficient free energy calculations by combining two complementary tempering sampling methods.

    Science.gov (United States)

    Xie, Liangxu; Shen, Lin; Chen, Zhe-Ning; Yang, Mingjun

    2017-01-14

    Although energy barriers can be efficiently crossed in reaction coordinate (RC) guided sampling, this type of method suffers from the identification of the correct RCs or requires high dimensionality of the defined RCs for a given system. If only approximate RCs with significant barriers are used in the simulations, hidden energy barriers of small to medium height will exist in other degrees of freedom (DOFs) relevant to the target process and consequently cause insufficient sampling. To address sampling in this so-called hidden barrier situation, here we propose an effective approach that combines temperature accelerated molecular dynamics (TAMD), an efficient RC-guided sampling method, with integrated tempering sampling (ITS), a generalized ensemble sampling method. In this combined ITS-TAMD method, the sampling along the major RCs with high energy barriers is guided by TAMD and the sampling of the rest of the DOFs with lower but not negligible barriers is enhanced by ITS. The performance of ITS-TAMD on three systems with hidden-barrier processes has been examined. In comparison to the standalone TAMD or ITS approach, the present hybrid method shows three main improvements. (1) Sampling efficiency can be improved at least five times, even in the presence of hidden energy barriers. (2) The canonical distribution can be more accurately recovered, from which the thermodynamic properties along other collective variables can be computed correctly. (3) The robustness of the selection of major RCs suggests that the dimensionality of necessary RCs can be reduced. Our work shows more potential applications of the ITS-TAMD method as an efficient and powerful tool for the investigation of a broad range of interesting cases.

  19. Poker-camp: a program for calculating detector responses and phantom organ doses in environmental gamma fields

    International Nuclear Information System (INIS)

    Koblinger, L.

    1981-09-01

    A general description, user's manual and a sample problem for the POKER-CAMP adjoint Monte Carlo photon transport program are given in this report. Gamma fields of different environmental sources (uniformly or exponentially distributed sources, or plane sources) located in the air, in the soil or in an intermediate layer between them are simulated in the code. Calculations can be made of the flux, kerma and spectra of photons at any point; of the responses of point-like, cylindrical or spherical detectors; and of the doses absorbed in anthropomorphic phantoms. (author)

  20. Program realization of mathematical model of kinetostatical calculation of flat lever mechanisms

    Directory of Open Access Journals (Sweden)

    M. A. Vasechkin

    2016-01-01

    Full Text Available Global computerization has made analytical methods dominant in the study of mechanisms. As a result, kinetostatic analysis of mechanisms using software packages is an important part of the scientific and practical work of engineers and designers, so a software implementation of mathematical models for the kinetostatic calculation of mechanisms is of practical interest. The mathematical model was obtained in [1]. A computer procedure, developed in the Turbo Pascal language, calculates the forces in the kinematic pairs of Assur groups (AGs) and the balancing force at the primary link. Before the computational procedures can be used, all external forces and moments acting on the AG must be known, and the inertial forces and moments of inertial forces must be determined. The process of calculating and constructing the positions of the mechanism can be summarized as follows. A cycle is organized in which the position of the initial link of the mechanism is calculated. The positions of the remaining links are calculated by calling the relevant procedures of the module DIADA for the AGs [2,3]. Using the graphics mode of the computer, the position of the mechanism is displayed on the screen. The inertial forces and moments of inertial forces are computed. By calling the corresponding procedures of the module, all the forces in the kinematic pairs and the balancing force at the primary link are calculated. In each kinematic pair, the forces and their directions are constructed with the help of simple graphical procedures; their magnitudes and directions are displayed in a special text-mode window. This work contains listings of the test program MyTest, an example of the computing capabilities of the developed module. As a check on the calculation procedures of the module, the program reproduces an example of calculating the balancing force by the method of Zhukovsky (the Zhukovsky lever).

  1. Measurement assurance program for FTIR analyses of deuterium oxide samples

    International Nuclear Information System (INIS)

    Johnson, S.R.; Clark, J.P.

    1997-01-01

    Analytical chemistry measurements require an installed, criterion-based assessment program to identify and control sources of error. This program should also gauge the uncertainty of the data. A self-assessment of long-established quality control practices was performed against the characteristics of a comprehensive measurement assurance program, and opportunities for improvement were identified. This paper discusses the efforts to transform quality control practices into a complete measurement assurance program. The resulting program heightened the laboratory's confidence in the data it generated by providing real-time statistical information to control and determine measurement quality

  2. Magnetic particle movement program to calculate particle paths in flow and magnetic fields

    International Nuclear Information System (INIS)

    Inaba, Toru; Sakazume, Taku; Yamashita, Yoshihiro; Matsuoka, Shinya

    2014-01-01

    We developed an analysis program for predicting the movement of magnetic particles in flow and magnetic fields. This magnetic particle movement simulation was applied to a capturing process in a flow cell and a magnetic separation process in a small vessel of an in-vitro diagnostic system. The distributions of captured magnetic particles on a wall were calculated and compared with experimentally obtained distributions. The calculations involved evaluating not only the drag, pressure gradient, gravity, and magnetic force in a flow field but also the friction force between the particle and the wall, and the calculated particle distributions were in good agreement with the experimental distributions. The friction force was simply modeled as static and kinetic friction forces, with the coefficients of friction determined by comparing the calculated and measured results. This simulation method for solving multiphysics problems is very effective at predicting the movements of magnetic particles and is an excellent tool for studying the design and application of devices. - Highlights: ●We developed a magnetic particle movement program for flow and magnetic fields. ●The friction force on the wall is simply modeled as static and kinetic friction forces. ●The program was applied to capturing and separation processes of an in-vitro diagnostic system. ●Predicted particle distributions on the wall agreed with experimental ones. ●The method is very effective at predicting the movements of magnetic particles

  3. SHIELD 1.0: development of a shielding calculator program in diagnostic radiology

    International Nuclear Information System (INIS)

    Santos, Romulo R.; Real, Jessica V.; Luz, Renata M. da; Friedrich, Barbara Q.; Silva, Ana Maria Marques da

    2013-01-01

    In the shielding calculation of radiological facilities, several parameters are required, such as occupancy, use factor, number of patients, source-barrier distance, area type (controlled or uncontrolled), radiation (primary or secondary) and the material used in the barrier. Shielding design optimization requires a review of several options for the physical facility design and, mainly, the achievement of the best cost-benefit relationship for the shielding material. To facilitate this kind of design, a program to calculate shielding in diagnostic radiology was implemented, based on the data and limits established by National Council on Radiation Protection and Measurements (NCRP) Report 147 and SVS-MS 453/98. The program was developed in the C# language and presents a graphical interface for user data input and reporting capabilities. The module initially implemented, called SHIELD 1.0, calculates barriers for conventional X-ray rooms. The program was validated by comparison with the results of the shielding calculation examples presented in NCRP 147.
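
The quantity at the core of such calculators is the barrier transmission factor needed to bring the weekly dose behind the wall down to the design limit P. A simplified illustration using the classic broad-beam relation B = P d² / (W U T) (NCRP Report 147 itself works per patient rather than per workload, and all numbers below are purely hypothetical):

```python
def required_transmission(P, d, W, U, T):
    """Broad-beam transmission factor B = P d^2 / (W U T) for a primary barrier.

    P  design dose limit behind the barrier (e.g. mGy/week)
    d  source-to-barrier distance (m)
    W  workload (dose output per week at 1 m), U  use factor, T  occupancy factor
    """
    return P * d ** 2 / (W * U * T)

# fully occupied uncontrolled area (T = 1) at 3 m, hypothetical workload
B = required_transmission(P=0.02, d=3.0, W=1000.0, U=1.0, T=1.0)
# the barrier thickness is then read from published transmission curves
# for the chosen material (lead, concrete) at the operating kVp
```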

  4. Sample size calculations based on a difference in medians for positively skewed outcomes in health care studies

    Directory of Open Access Journals (Sweden)

    Aidan G. O’Keeffe

    2017-12-01

    Full Text Available Abstract Background: In healthcare research, outcomes with skewed probability distributions are common. Sample size calculations for such outcomes are typically based on estimates on a transformed scale (e.g. log), which may sometimes be difficult to obtain. In contrast, estimates of the median and variance on the untransformed scale are generally easier to pre-specify. The aim of this paper is to describe how to calculate a sample size for a two-group comparison of interest based on median and untransformed variance estimates for log-normal outcome data. Methods: A log-normal distribution for the outcome data is assumed, and a sample size calculation approach for a two-sample t-test that compares log-transformed outcome data is demonstrated, where the change of interest is specified as a difference in median values on the untransformed scale. A simulation study is used to compare the method with a non-parametric alternative (Mann-Whitney U test) in a variety of scenarios, and the method is applied to a real example in neurosurgery. Results: The method attained the nominal power value in simulation studies and compared favourably with a Mann-Whitney U test and a two-sample t-test of untransformed outcomes. In addition, the method can be adjusted and used in some situations where the outcome distribution is not strictly log-normal. Conclusions: We recommend the use of this sample size calculation approach for outcome data that are expected to be positively skewed and where a two-group comparison on a log-transformed scale is planned. An advantage of this method over the usual calculations based on estimates on the log-transformed scale is that it allows clinical efficacy to be specified as a difference in medians and requires a variance estimate on the untransformed scale. Such estimates are often easier to obtain and more interpretable than those for log-transformed outcomes.
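
The calculation described can be sketched directly: for log-normal data the median m and untransformed variance v determine the log-scale parameters (mu = ln m, and sigma^2 solved from v = (e^{s2} - 1) e^{s2} m^2), after which the usual two-sample formula applies to the difference in log medians. This is a sketch of the approach under a common log-scale variance, not the authors' code; the medians and variance below are hypothetical.

```python
from math import ceil, log, sqrt
from statistics import NormalDist

Z = NormalDist().inv_cdf

def n_per_group(median1, median2, variance, alpha=0.05, power=0.80):
    """Per-group n to detect a difference in medians of log-normal outcomes.

    For X ~ logN(mu, s2): median = exp(mu) and
    variance = (exp(s2) - 1) * exp(s2) * median**2, solved here for s2.
    """
    ratio = variance / median1 ** 2
    s2 = log((1.0 + sqrt(1.0 + 4.0 * ratio)) / 2.0)   # common log-scale variance
    delta = log(median2) - log(median1)               # difference in log medians
    z = Z(1 - alpha / 2) + Z(power)
    return ceil(2 * z ** 2 * s2 / delta ** 2)

n = n_per_group(median1=10.0, median2=15.0, variance=100.0)   # 46 per group
```

Note how the clinician-facing inputs are a pair of medians and a variance on the original measurement scale, which is exactly the convenience the paper argues for.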

  5. ELIPGRID-PC: A PC program for calculating hot spot probabilities

    International Nuclear Information System (INIS)

    Davidson, J.R.

    1994-10-01

    ELIPGRID-PC, a new personal computer program, has been developed to provide easy access to Singer's 1972 ELIPGRID algorithm for hot-spot detection probabilities. Three features of the program are the ability to determine: (1) the grid size required for specified conditions, (2) the smallest hot spot that can be sampled with a given probability, and (3) the approximate grid size resulting from specified conditions and sampling cost. ELIPGRID-PC also provides probability-of-hit versus cost data for graphing with spreadsheets or graphics software. The program has been successfully tested using Singer's published ELIPGRID results. An apparent error in the original ELIPGRID code has been uncovered and an appropriate modification incorporated into the new program
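
For the simplest special case (a circular hot spot small enough to fit in one cell of a square grid of nodes), the hit probability has the closed form pi r^2 / G^2, which a quick Monte Carlo check confirms. Singer's ELIPGRID algorithm generalizes this to elliptical targets and other grid geometries; the toy sketch below is not the ELIPGRID code.

```python
import random
from math import pi, hypot

def hit_probability(r, G):
    """P(a circular hot spot of radius r <= G/2 covers a node of a square grid)."""
    return pi * r ** 2 / G ** 2

def mc_check(r, G, trials=100_000, seed=7):
    """Monte Carlo check: drop the hot-spot centre uniformly in one grid cell."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        x, y = rng.uniform(0, G), rng.uniform(0, G)
        # the nearest grid node is one of the four corners of the cell
        if min(hypot(x - cx, y - cy) for cx in (0, G) for cy in (0, G)) <= r:
            hits += 1
    return hits / trials

p_exact, p_mc = hit_probability(1.0, 4.0), mc_check(1.0, 4.0)
```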

  6. Hyperfine electric parameters calculation in Si samples implanted with {sup 57}Mn→{sup 57}Fe

    Energy Technology Data Exchange (ETDEWEB)

    Abreu, Y., E-mail: yabreu@ceaden.edu.cu [Centro de Aplicaciones Tecnológicas y Desarrollo Nuclear (CEADEN), Calle 30 No. 502 e/5ta y 7ma Ave., 11300 Miramar, Playa, La Habana (Cuba); Cruz, C.M.; Piñera, I.; Leyva, A.; Cabal, A.E. [Centro de Aplicaciones Tecnológicas y Desarrollo Nuclear (CEADEN), Calle 30 No. 502 e/5ta y 7ma Ave., 11300 Miramar, Playa, La Habana (Cuba); Van Espen, P. [Departement Chemie, Universiteit Antwerpen, Middelheimcampus, G.V.130, Groenenborgerlaan 171, 2020 Antwerpen (Belgium); Van Remortel, N. [Departement Fysica, Universiteit Antwerpen, Middelheimcampus, G.U.236, Groenenborgerlaan 171, 2020 Antwerpen (Belgium)

    2014-07-15

    Nowadays, electronic structure calculations allow the study of complex systems by determining the hyperfine parameters measured at a probe atom, including in the presence of crystalline defects. The hyperfine electric parameters have been measured by Mössbauer spectroscopy in silicon materials implanted with {sup 57}Mn→{sup 57}Fe ions, with four main contributions to the spectra being observed. Nevertheless, some ambiguities remain in the interpretation of the {sup 57}Fe Mössbauer spectra in this case, regarding the damage configurations and their evolution with annealing. In the present work several implantation environments are evaluated and the {sup 57}Fe hyperfine parameters are calculated. The correlation observed between the studied local environments and the experimental observations is presented, and a tentative microscopic description is proposed of the behavior and thermal evolution of the characteristic defect local environments of the probe atoms, concerning the location of vacancies and interstitial Si in the neighborhood of {sup 57}Fe ions at substitutional and interstitial sites.

  7. Slicken 1.0: Program for calculating the orientation of shear on reactivated faults

    Science.gov (United States)

    Xu, Hong; Xu, Shunshan; Nieto-Samaniego, Ángel F.; Alaniz-Álvarez, Susana A.

    2017-07-01

    The slip vector on a fault is an important parameter in the study of the movement history of a fault and its faulting mechanism. Although many graphical programs exist to represent shear stress (or slickenline) orientations on faults, programs that quantitatively calculate the orientation of fault slip based on a given stress field are scarce. We therefore developed Slicken 1.0, a software program that rapidly calculates the orientation of maximum shear stress on any fault plane. For this direct method of calculating the resolved shear stress on a planar surface, the input data are the unit vector normal to the plane, the unit vectors of the three principal stress axes, and the stress ratio. An advantage of this program is that the vertical or horizontal principal stresses are not necessarily required. Owing to its nimble design in Java SE 8.0, it runs on most operating systems with the corresponding Java VM. The program will be practical for geoscience students, geologists and engineers, and will help address a deficiency in field, structural and engineering geology.
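
The direct method amounts to projecting the traction vector onto the fault plane: with stress tensor S and unit normal n, the traction is t = S·n and the maximum resolved shear stress lies along the component of t perpendicular to n. A minimal sketch in the principal-stress frame with hypothetical stress values (the published program additionally handles arbitrary principal-axis orientations and the stress ratio):

```python
def max_shear_direction(S, n):
    """Direction and magnitude of maximum shear stress on a plane with unit normal n.

    S is the 3x3 stress tensor; the shear component of the traction
    t = S.n is t - (t.n)n, returned here normalized, with its magnitude.
    """
    t = [sum(S[i][j] * n[j] for j in range(3)) for i in range(3)]   # traction
    tn = sum(t[i] * n[i] for i in range(3))                         # normal part
    s = [t[i] - tn * n[i] for i in range(3)]                        # shear part
    mag = sum(x * x for x in s) ** 0.5
    return [x / mag for x in s], mag

# principal stresses 3 > 2 > 1 along x, y, z; plane normal in the x-z plane
S = [[3.0, 0.0, 0.0], [0.0, 2.0, 0.0], [0.0, 0.0, 1.0]]
n = [2 ** -0.5, 0.0, 2 ** -0.5]
direction, tau = max_shear_direction(S, n)   # predicted slickenline direction
```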

  8. Effective Dose Calculation Program (EDCP) for the usage of NORM-added consumer product.

    Science.gov (United States)

    Yoo, Do Hyeon; Lee, Jaekook; Min, Chul Hee

    2018-04-09

    The aim of this study is to develop the Effective Dose Calculation Program (EDCP) for the usage of Naturally Occurring Radioactive Material (NORM) added consumer products. The EDCP was developed based on a database of effective dose conversion coefficients and the Matrix Laboratory (MATLAB) program, and incorporates a Graphic User Interface (GUI) for ease of use. To validate EDCP, the effective dose calculated with EDCP, by manually determining the source region using the GUI, was compared with that calculated using the reference mathematical algorithm for a pillow, waist supporter, eye-patch and sleeping mattress. The results show that the annual effective dose calculated with EDCP was almost identical to that calculated using the reference mathematical algorithm in most of the assessment cases. With the assumption of a gamma energy of 1 MeV and an activity of 1 MBq, the annual effective doses of the pillow, waist supporter, sleeping mattress, and eye-patch determined using the reference algorithm were 3.444, 2.770, 4.629, and 3.567 mSv/year, respectively, while those calculated using EDCP were 3.561, 2.630, 4.740, and 3.780 mSv/year, respectively. The differences in the annual effective doses were less than 5%, despite the different calculation methods employed. The EDCP can therefore be effectively used for radiation protection management in the context of the usage of NORM-added consumer products. Additionally, EDCP can be used by members of the public through the GUI for various studies in the field of radiation protection, thus facilitating easy access to the program.

  9. SUBGR: A Program to Generate Subgroup Data for the Subgroup Resonance Self-Shielding Calculation

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Kang Seog [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2016-06-06

    The Subgroup Data Generation (SUBGR) program generates subgroup data, including levels and weights, from the resonance self-shielded cross section table as a function of background cross section. Depending on the nuclide and the energy range, these subgroup data can be generated by (a) the narrow resonance approximation, (b) pointwise flux calculations for homogeneous media, and (c) pointwise flux calculations for heterogeneous lattice cells. The latter two options are performed by the AMPX module IRFFACTOR. These subgroup data are to be used in the Consortium for Advanced Simulation of Light Water Reactors (CASL) neutronic simulator MPACT, for which the primary resonance self-shielding method is the subgroup method.

  10. SHARDA - a program for sample heat, activity, reactivity and dose analysis

    International Nuclear Information System (INIS)

    Shukla, V.K.; Bajpai, Anil

    1985-01-01

    A computer program SHARDA (Sample Heat, Activity, Reactivity and Dose Analysis) has been developed for the safety evaluation of Pile Irradiation Requests (PIRs) for various nonfissile materials in the research reactor CIRUS. The code can also be used, with minor modifications, for PIR safety evaluations for the research reactor DHRUVA, now being commissioned. Most of the data needed for such analysis, such as isotopic abundances, the various nuclear cross-sections, and gamma radiation and shielding data, have been built into the code for all nonfissile naturally occurring elements. PIR safety evaluations can be readily carried out using this code for any sample in elemental, compound or mixture form irradiated in any location of the reactor. This report describes the calculational model and the input/output details of the code. Some earlier irradiations carried out in CIRUS have been analysed using this code and the results have been compared with available operational measurements. (author)

  11. A program for performing exact quantum dynamics calculations using cylindrical polar coordinates: A nanotube application

    Science.gov (United States)

    Skouteris, Dimitris; Gervasi, Osvaldo; Laganà, Antonio

    2009-03-01

A program that uses the time-dependent wavepacket method to study the motion of structureless particles in a force field of quasi-cylindrical symmetry is presented here. The program utilises cylindrical polar coordinates to express the wavepacket, which is subsequently propagated using a Chebyshev expansion of the Schrödinger propagator. Time-dependent exit flux as well as energy-dependent S matrix elements can be obtained for all states of the particle (describing its angular momentum component along the nanotube axis and the excitation of the radial degree of freedom in the cylinder). The program has been used to study the motion of an H atom across a carbon nanotube.
Program summary
Program title: CYLWAVE
Catalogue identifier: AECL_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AECL_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 3673
No. of bytes in distributed program, including test data, etc.: 35 237
Distribution format: tar.gz
Programming language: Fortran 77
Computer: RISC workstations
Operating system: UNIX
RAM: 120 MBytes
Classification: 16.7, 16.10
External routines: SUNSOFT performance library (not essential), TFFT2D.F (Temperton Fast Fourier Transform), BESSJ.F (from Numerical Recipes, for the calculation of Bessel functions) (included in the distribution file)
Nature of problem: Time evolution of the state of a structureless particle in a quasi-cylindrical potential.
Solution method: Time-dependent wavepacket propagation.
Running time: 50000 secs. The test run supplied with the distribution takes about 10 minutes to complete.
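The Chebyshev expansion of the propagator exp(−iHt) can be sketched in one dimension for a free Gaussian wavepacket (this is not the CYLWAVE code, which works in cylindrical polars; units are ħ = m = 1 and the grid/packet parameters are invented):

```python
import numpy as np
from scipy.special import jv

# 1-D grid and a Gaussian wavepacket moving with momentum k0 = 2.
N, L = 256, 40.0
x = np.linspace(-L/2, L/2, N, endpoint=False)
dx = x[1] - x[0]
k = 2*np.pi*np.fft.fftfreq(N, d=dx)
V = np.zeros(N)                               # free particle
psi = np.exp(-(x + 5.0)**2 + 2j*x)
psi /= np.sqrt(np.sum(np.abs(psi)**2)*dx)

def H(p):                                     # kinetic term via FFT plus potential
    return np.fft.ifft(0.5*k**2*np.fft.fft(p)) + V*p

Emin, Emax = V.min(), 0.5*np.max(k**2) + V.max()
dE, Ebar = (Emax - Emin)/2, (Emax + Emin)/2
Hn = lambda p: (H(p) - Ebar*p)/dE             # spectrum scaled into [-1, 1]

t = 1.0
alpha = dE*t
M = int(alpha) + 50                           # Chebyshev terms; converges once n > alpha
phi0, phi1 = psi, -1j*Hn(psi)
acc = jv(0, alpha)*phi0 + 2*jv(1, alpha)*phi1
for n in range(2, M):                         # recurrence phi_n = -2i*Hn(phi_{n-1}) + phi_{n-2}
    phi0, phi1 = phi1, -2j*Hn(phi1) + phi0
    acc = acc + 2*jv(n, alpha)*phi1
psi_t = np.exp(-1j*Ebar*t)*acc

norm = np.sum(np.abs(psi_t)**2)*dx            # ~1: the propagation is unitary
xc = np.sum(x*np.abs(psi_t)**2)*dx            # centre drifts from -5 to about -3
```

The Bessel-function coefficients J_n(α) decay super-exponentially for n > α = ΔE·t, which is why a modest number of terms beyond α suffices.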

  12. DITTY - a computer program for calculating population dose integrated over ten thousand years

    International Nuclear Information System (INIS)

    Napier, B.A.; Peloquin, R.A.; Strenge, D.L.

    1986-03-01

    The computer program DITTY (Dose Integrated Over Ten Thousand Years) was developed to determine the collective dose from long term nuclear waste disposal sites resulting from the ground-water pathways. DITTY estimates the time integral of collective dose over a ten-thousand-year period for time-variant radionuclide releases to surface waters, wells, or the atmosphere. This document includes the following information on DITTY: a description of the mathematical models, program designs, data file requirements, input preparation, output interpretations, sample problems, and program-generated diagnostic messages
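The time integral of collective dose that DITTY computes can be illustrated with a toy single-pathway case (all numbers invented for illustration):

```python
import numpy as np

# Integrate a collective dose rate over ten thousand years for a hypothetical
# radionuclide release (30-year half-life) to surface water.
years = np.linspace(0.0, 10_000.0, 100_001)
lam = np.log(2) / 30.0                        # decay constant, 1/yr
dose_rate = 5.0 * np.exp(-lam * years)        # collective dose rate, person-Sv/yr

# Trapezoidal time integral -> collective dose in person-Sv
collective_dose = np.sum(0.5 * (dose_rate[1:] + dose_rate[:-1]) * np.diff(years))
print(collective_dose)                        # -> ~216 person-Sv (about 5/lam)
```

For a short-lived source the 10,000-year integral is effectively the integral to infinity, 5/λ here; time-variant releases just replace the analytic dose-rate curve with tabulated values.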

  13. Convergence of Sample Path Optimal Policies for Stochastic Dynamic Programming

    National Research Council Canada - National Science Library

    Fu, Michael C; Jin, Xing

    2005-01-01

    .... These results have practical implications for Monte Carlo simulation-based solution approaches to stochastic dynamic programming problems where it is impractical to extract the explicit transition...

  14. Development of HyPEP, A Hydrogen Production Plant Efficiency Calculation Program

    International Nuclear Information System (INIS)

    Lee, Young Jin; Park, Ji Won; Lee, Won Jae; Shin, Young Joon; Kim, Jong Ho; Hong, Sung Deok; Lee, Seung Wook; Hwang, Moon Kyu

    2007-12-01

Development of the HyPEP program for assessing the steady-state hydrogen production efficiency of nuclear hydrogen production facilities was carried out. The main development aims of the HyPEP program were extensive use of a GUI for enhanced user friendliness and a fast numerical solution scheme. These features make it suitable for calculations such as optimisation studies. HyPEP was developed with object-oriented programming techniques. The components of the facility were modelled as objects in a hierarchical structure in which the inheritance property of object-oriented programming was extensively applied. The Delphi programming language, which is based on Object Pascal, was used for the HyPEP development. The conservation equations for the thermal-hydraulic flow network were set up, and the numerical solution scheme was developed and implemented into the HyPEP beta version. The HyPEP beta version has been developed with a working GUI and the numerical solution scheme implemented. Due to the premature end of this project, a fully working version of HyPEP was not produced

  15. Radioimmunoassay evaluation and quality control by use of a simple computer program for a low cost desk top calculator

    International Nuclear Information System (INIS)

    Schwarz, S.

    1980-01-01

A simple computer program for the data processing and quality control of radioimmunoassays is presented. It is written for a low-cost programmable desktop calculator (Hewlett Packard 97), which can be afforded by smaller laboratories. The untreated counts from the scintillation spectrometer are entered manually; the printout gives the following results: initial data, logit-log transformed calibration points, parameters of goodness of fit and of the position of the standard curve, and control and unknown sample dose estimates (mean value from single-dose interpolations and scatter of replicates), together with the automatic calculation of within-assay variance and, by use of magnetic cards holding the control parameters of all previous assays, between-assay variance. (orig.) [de
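The logit-log transformation mentioned above linearises a sigmoidal standard curve so that unknowns can be interpolated with a simple straight-line fit. A sketch with invented calibration data (not the paper's):

```python
import numpy as np

# Hypothetical RIA calibration: fraction bound B/B0 versus standard dose.
dose = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0])       # e.g. ng/mL
b_b0 = np.array([0.85, 0.75, 0.62, 0.42, 0.28, 0.16])   # measured fraction bound

logit = lambda y: np.log(y / (1.0 - y))
slope, intercept = np.polyfit(np.log(dose), logit(b_b0), 1)   # logit-log straight line

def dose_of(y):
    """Interpolate an unknown sample's dose from its bound fraction B/B0."""
    return np.exp((logit(y) - intercept) / slope)

print(dose_of(0.5))   # dose giving 50% binding
```

Within- and between-assay variance then come from the scatter of replicate dose estimates and from the stored parameters of previous standard curves, respectively.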

  16. Efficient Sample Delay Calculation for 2-D and 3-D Ultrasound Imaging.

    Science.gov (United States)

    Ibrahim, Aya; Hager, Pascal A; Bartolini, Andrea; Angiolini, Federico; Arditi, Marcel; Thiran, Jean-Philippe; Benini, Luca; De Micheli, Giovanni

    2017-08-01

    Ultrasound imaging is a reference medical diagnostic technique, thanks to its blend of versatility, effectiveness, and moderate cost. The core computation of all ultrasound imaging methods is based on simple formulae, except for those required to calculate acoustic propagation delays with high precision and throughput. Unfortunately, advanced three-dimensional (3-D) systems require the calculation or storage of billions of such delay values per frame, which is a challenge. In 2-D systems, this requirement can be four orders of magnitude lower, but efficient computation is still crucial in view of low-power implementations that can be battery-operated, enabling usage in numerous additional scenarios. In this paper, we explore two smart designs of the delay generation function. To quantify their hardware cost, we implement them on FPGA and study their footprint and performance. We evaluate how these architectures scale to different ultrasound applications, from a low-power 2-D system to a next-generation 3-D machine. When using numerical approximations, we demonstrate the ability to generate delay values with sufficient throughput to support 10 000-channel 3-D imaging at up to 30 fps while using 63% of a Virtex 7 FPGA, requiring 24 MB of external memory accessed at about 32 GB/s bandwidth. Alternatively, with similar FPGA occupation, we show an exact calculation method that reaches 24 fps on 1225-channel 3-D imaging and does not require external memory at all. Both designs can be scaled to use a negligible amount of resources for 2-D imaging in low-power applications and for ultrafast 2-D imaging at hundreds of frames per second.
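The "simple formulae" behind delay generation are two-way times of flight quantized to ADC sample indices. A minimal receive-beamforming sketch for a linear array with assumed (hypothetical) geometry:

```python
import numpy as np

# Hypothetical 128-element linear array: per-channel delays for one focal point.
c = 1540.0                                   # speed of sound in tissue, m/s
fs = 40e6                                    # ADC sampling rate, Hz
pitch = 300e-6                               # element spacing, m
elems_x = (np.arange(128) - 63.5) * pitch    # element x-positions, m
focus = np.array([0.0, 0.03])                # focal point: on-axis, 30 mm deep

# Two-way path: transmit distance to the focus + return distance to each element.
tx = np.linalg.norm(focus)
rx = np.sqrt((focus[0] - elems_x)**2 + focus[1]**2)
delay_samples = np.rint((tx + rx) / c * fs).astype(int)
```

A 3-D system repeats this for every element and every focal point in the volume, which is where the billions of delay values per frame come from, and why approximate or incremental schemes pay off.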

  17. Application of the REMIX thermal mixing calculation program for the Loviisa reactor

    International Nuclear Information System (INIS)

    Kokkonen, I.; Tuomisto, H.

    1987-08-01

The REMIX computer program has been validated for use in the pressurized thermal shock study of the Loviisa reactor pressure vessel. The program has been verified against data from thermal and fluid mixing experiments. These experiments were carried out at Imatran Voima Oy to study thermal mixing of the high-pressure safety injection water in the Loviisa VVER-440 type pressurized water reactor. The verified REMIX versions were applied to reactor calculations in the probabilistic pressurized thermal shock study of the Loviisa plant

  18. A calculation program for harvesting and transportation costs of energy wood; Energiapuun korjuun ja kuljetuksen kustannuslaskentaohjelmisto

    Energy Technology Data Exchange (ETDEWEB)

    Kuitto, P.J.

    1996-12-31

VTT Energy is compiling a large and versatile program for calculating the harvesting and transportation costs of energy wood. The work has been designed and will be carried out in cooperation with Metsaeteho and Finntech Ltd. The program has been implemented in a Windows environment using the SQLWindows graphical database application development system together with the SQLBase relational database management system. The objective of the research is to streamline, and create new possibilities for, comparing the utilization costs and profitability of integrated energy wood production chains with each other

  19. A calculation program for harvesting and transportation costs of energy wood; Energiapuun korjuun ja kuljetuksen kustannuslaskentaohjelmisto

    Energy Technology Data Exchange (ETDEWEB)

    Kuitto, P J

    1997-12-31

VTT Energy is compiling a large and versatile program for calculating the harvesting and transportation costs of energy wood. The work has been designed and will be carried out in cooperation with Metsaeteho and Finntech Ltd. The program has been implemented in a Windows environment using the SQLWindows graphical database application development system together with the SQLBase relational database management system. The objective of the research is to streamline, and create new possibilities for, comparing the utilization costs and profitability of integrated energy wood production chains with each other

  20. AFG-MONSU. A program for calculating axial heterogeneities in cylindrical pin cells

    International Nuclear Information System (INIS)

    Neltrup, H.; Kirkegaard, P.

    1978-08-01

The AFG-MONSU program complex is designed to calculate the flux in cylindrical fuel pin cells into which heterogeneities are introduced in a regular array. The theory - integral transport theory combined with Monte Carlo by means of a superposition principle - is described in some detail. A detailed derivation of the superposition principle, as well as the formulas used in the DIT (Discrete Integral Transport) method, is given in the appendices along with a description of the input structure of the AFG-MONSU program complex. (author)

  1. Respiratory tract dose calculation considering physiological parameters from samples of Brazilian population

    International Nuclear Information System (INIS)

    Reis, A.; Lopes, R.; Lourenco, M.; Cardoso, J.

    2006-01-01

The Human Respiratory Tract Model proposed in ICRP Publication 66 accounts for the morphology and physiology of the respiratory tract. ICRP 66 presents deposition fractions in the respiratory tract regions based on reference values for Caucasian man. However, in order to obtain a more accurate assessment of intake and dose, the ICRP recommends the use of population-specific information when available. The application of parameters from the Brazilian population in the deposition and clearance models shows significant variations in the deposition fractions and in the fraction of inhaled activity transferred to blood. The main objective of this study is to evaluate the influence on the dose calculated for each region of the respiratory tract when physiological parameters from the Brazilian population are applied in the model. The purpose of the dosimetric model is to evaluate the dose to each tissue of the respiratory tract that is potentially at risk from inhaled radioactive materials. The committed equivalent dose, H(T), is calculated as the product of the total number of transformations of the radionuclide in source tissue S over a period of fifty years after incorporation and the energy absorbed per unit mass in target tissue T, for each radiation emitted per transformation in source tissue S. The dosimetric model of the Human Respiratory Tract was implemented in the software Excel for Windows (version 2000) and H(T) was determined in two stages. First, the total number of transformations, U(S), was calculated considering the fractional deposition of activity in each source tissue; then the total energy absorbed per unit mass, SEE, in the target tissue was calculated. It was assumed that the radionuclide emits an alpha particle with average energy of 5.15 MeV.
The variation in the fractional deposition in the compartments of the respiratory tract in changing the physiological parameters from Caucasian to Brazilian adult man causes variation in the number of
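The two-stage calculation described above (transformations times energy absorbed per unit mass) can be sketched numerically. All values below except the 5.15 MeV alpha energy are invented placeholders, not the study's Brazilian-parameter results:

```python
# Committed equivalent dose for one target tissue: H_T = U_S * SEE.
MEV_TO_J = 1.602e-13      # joules per MeV
E_alpha = 5.15            # MeV emitted per transformation (as in the abstract)
w_R = 20.0                # ICRP radiation weighting factor for alpha particles
mass_T = 1.1e-3           # target tissue mass, kg (hypothetical)
AF = 1.0e-3               # fraction of emitted energy absorbed in the target (hypothetical)
U_S = 1.0e9               # transformations in the source region over 50 years (hypothetical)

SEE = E_alpha * MEV_TO_J * AF * w_R / mass_T   # Sv per transformation
H_T = U_S * SEE                                # committed equivalent dose, Sv
print(H_T)
```

Swapping Caucasian for Brazilian physiological parameters changes U_S (via deposition and clearance) and the tissue masses in SEE, which is exactly where the dose differences the study reports arise.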

  2. Simple and efficient way of speeding up transmission calculations with k-point sampling

    Directory of Open Access Journals (Sweden)

    Jesper Toft Falkenberg

    2015-07-01

The transmissions as functions of energy are central for electron or phonon transport in the Landauer transport picture. We suggest a simple and computationally “cheap” post-processing scheme to interpolate transmission functions over k-points to get smooth, well-converged average transmission functions. This is relevant for data obtained using typical “expensive” first-principles calculations where the leads/electrodes are described by periodic boundary conditions. We show examples of transport in graphene structures where a speed-up of an order of magnitude is easily obtained.
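The post-processing idea can be sketched with a toy transmission model: T(E, k) computed on a coarse k-grid is interpolated onto a dense k-grid before averaging over k (the functional form below is invented, not from the paper):

```python
import numpy as np

# Toy transmissions T(E, k) on a coarse k-grid.
E = np.linspace(-1.0, 1.0, 200)
k_coarse = np.linspace(0.0, np.pi, 8)
T_coarse = 1.0 / (1.0 + (E[:, None] - 0.3*np.cos(k_coarse[None, :]))**2)

# Interpolate each energy slice onto a dense k-grid, then average over k.
k_dense = np.linspace(0.0, np.pi, 256)
T_dense = np.empty((E.size, k_dense.size))
for i in range(E.size):
    T_dense[i] = np.interp(k_dense, k_coarse, T_coarse[i])   # simple linear interp

T_avg = T_dense.mean(axis=1)     # smooth k-averaged transmission vs energy
```

The saving comes from needing the expensive first-principles transmission at only a handful of k-points; the dense sampling is done entirely in cheap post-processing.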

  3. Sampling returns for realized variance calculations: tick time or transaction time?

    NARCIS (Netherlands)

    Griffin, J.E.; Oomen, R.C.A.

    2008-01-01

    This article introduces a new model for transaction prices in the presence of market microstructure noise in order to study the properties of the price process on two different time scales, namely, transaction time where prices are sampled with every transaction and tick time where prices are

  4. Design a computational program to calculate the composition variations of nuclear materials in the reactor operations

    International Nuclear Information System (INIS)

    Mohmmadnia, Meysam; Pazirandeh, Ali; Sedighi, Mostafa; Bahabadi, Mohammad Hassan Jalili; Tayefi, Shima

    2013-01-01

Highlights: ► The atomic densities of light and heavy materials are calculated. ► The solution is obtained using the Runge–Kutta–Fehlberg method. ► The material depletion is calculated for constant-flux and constant-power conditions. - Abstract: The present work investigates an appropriate way to calculate the variations of nuclide composition in the reactor core during operation. Specific software has been designed for this purpose using C#. The mathematical approach is based on the solution of the Bateman differential equations using a Runge–Kutta–Fehlberg method. Material depletion at constant flux and constant power can be calculated with this software. The inputs include reactor power, time step, initial and final times, order of the Taylor series used to calculate the time-dependent flux, time unit, core material composition at the initial condition (consisting of light and heavy radioactive materials), acceptable error criterion, decay constant library, cross-section database and calculation type (constant flux or constant power). The atomic densities of light and heavy fission products during reactor operation are obtained with high accuracy as the program outputs. The results from this method compared with the analytical solution show good agreement
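The Bateman-equation approach can be demonstrated on the smallest nontrivial case, a two-member decay chain, using SciPy's adaptive Runge-Kutta integrator (RK45, a close relative of Runge-Kutta-Fehlberg) and checking against the analytic Bateman solution. The decay constants are illustrative:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Two-member decay chain N1 -> N2 -> (stable); data are illustrative.
lam1, lam2 = 0.05, 0.01          # decay constants, 1/h
N0 = [1.0e6, 0.0]                # initial atom counts

def bateman_rhs(t, N):
    """dN1/dt = -lam1*N1 ; dN2/dt = lam1*N1 - lam2*N2"""
    return [-lam1*N[0], lam1*N[0] - lam2*N[1]]

t_end = 100.0
sol = solve_ivp(bateman_rhs, (0.0, t_end), N0, rtol=1e-10, atol=1e-6)
N2_numeric = sol.y[1, -1]

# Analytic Bateman solution for the daughter.
N2_exact = N0[0]*lam1/(lam2 - lam1)*(np.exp(-lam1*t_end) - np.exp(-lam2*t_end))
print(N2_numeric, N2_exact)
```

A full depletion solver is the same structure with hundreds of coupled nuclides and flux-dependent transmutation terms added to the decay terms.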

  5. Probabilistic Requirements (Partial) Verification Methods Best Practices Improvement. Variables Acceptance Sampling Calculators: Derivations and Verification of Plans. Volume 1

    Science.gov (United States)

    Johnson, Kenneth L.; White, K, Preston, Jr.

    2012-01-01

    The NASA Engineering and Safety Center was requested to improve on the Best Practices document produced for the NESC assessment, Verification of Probabilistic Requirements for the Constellation Program, by giving a recommended procedure for using acceptance sampling by variables techniques. This recommended procedure would be used as an alternative to the potentially resource-intensive acceptance sampling by attributes method given in the document. This document contains the outcome of the assessment.

  6. A computer program (COSTUM) to calculate confidence intervals for in situ stress measurements. V. 1

    International Nuclear Information System (INIS)

    Dzik, E.J.; Walker, J.R.; Martin, C.D.

    1989-03-01

The state of in situ stress is one of the parameters required both for the design and analysis of underground excavations and for the evaluation of numerical models used to simulate underground conditions. To account for the variability and uncertainty of in situ stress measurements, it is desirable to apply confidence limits to measured stresses. Several measurements of the state of stress along a borehole are often made to estimate the average state of stress at a point. Since stress is a tensor, calculating the mean stress and confidence limits using scalar techniques is inappropriate as well as incorrect. A computer program has been written to calculate and present the mean principal stresses and the confidence limits for the magnitudes and directions of the mean principal stresses. This report describes the computer program, COSTUM
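The tensor-averaging point can be made concrete: the measurements are averaged component-wise as tensors, and the mean principal stresses are then the eigenvalues of the mean tensor, not averages of the individual principal values. The measurements below are invented:

```python
import numpy as np

# Three hypothetical symmetric stress tensors from one borehole, MPa.
measurements = np.array([
    [[10.0, 2.0, 0.0], [2.0, 6.0, 1.0], [0.0, 1.0, 4.0]],
    [[11.0, 1.5, 0.5], [1.5, 5.0, 0.8], [0.5, 0.8, 4.5]],
    [[ 9.5, 2.2, 0.2], [2.2, 6.5, 1.1], [0.2, 1.1, 3.8]],
])

# Component-wise (tensor) mean, then principal values and directions.
mean_tensor = measurements.mean(axis=0)
principal, directions = np.linalg.eigh(mean_tensor)   # eigenvalues ascending
```

Confidence limits on magnitudes and directions then follow from the scatter of the component-wise tensors about this mean, which is the statistics COSTUM implements.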

  7. Fast neutron fluence calculations as support for a BWR pressure vessel and internals surveillance program

    International Nuclear Information System (INIS)

    Lucatero, Marco A.; Palacios-Hernandez, Javier C.; Ortiz-Villafuerte, Javier; Xolocostli-Munguia, J. Vicente; Gomez-Torres, Armando M.

    2010-01-01

Materials surveillance programs are required to detect and prevent degradation of safety-related structures and components of a nuclear power reactor. In this work, following the directions in Regulatory Guide 1.190, a calculational methodology is implemented as additional support for a reactor pressure vessel and internals surveillance program for a BWR. The choice of the neutronic methods employed was based on the premise of being able to perform all the expected future survey calculations in relatively short times, but without compromising accuracy. First, a geometrical model of a typical BWR was developed, from the core to the primary containment, including jet pumps and all other structures. The methodology uses the synthesis method to compute the three-dimensional neutron flux distribution. In the methodology, the code CORE-MASTER-PRESTO is used as the three-dimensional core simulator; SCALE is used to generate the fine-group flux spectra of the components of the model and also to generate a 47-energy-group cross-section library, collapsed from the 199-fine-group master library VITAMIN-B6; ORIGEN2 was used to compute the isotopic densities of uranium and plutonium; and, finally, DORT was used to calculate the two-dimensional and one-dimensional neutron flux distributions required to compute the synthesized three-dimensional neutron flux. Then, the calculation of fast neutron fluence was performed using the effective full-power time periods through six operational fuel cycles of two BWR units and until the 13th cycle for Unit 1. The results showed a maximum relative difference of less than 7% between the calculated-by-synthesis fast neutron fluxes and fluences and those measured by Fe, Cu and Ni dosimeters. The dosimeters were originally located adjacent to the pressure vessel wall, as part of the surveillance program. Results from the computations of peak fast fluence on the pressure vessel wall and specific weld locations on the core shroud are
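The flux synthesis step combines two 2-D transport solutions and one 1-D solution as φ(r,θ,z) = φ(r,θ)·φ(r,z)/φ(r). A numeric sketch with analytic stand-ins for the transport solutions (shapes invented, not DORT output):

```python
import numpy as np

# Meshes for a cylindrical synthesis: radial, azimuthal, axial.
r = np.linspace(1.0, 2.0, 20)
theta = np.linspace(0.0, 2*np.pi, 36)
z = np.linspace(-1.0, 1.0, 30)

phi_r = np.exp(-r)                                           # 1-D radial solution
phi_rt = phi_r[:, None] * (1 + 0.1*np.cos(theta))[None, :]   # 2-D (r, theta) solution
phi_rz = phi_r[:, None] * np.cos(0.5*np.pi*z)[None, :]       # 2-D (r, z) solution

# Synthesized 3-D flux: phi(r,theta,z) = phi_rt * phi_rz / phi_r
phi_3d = phi_rt[:, :, None] * phi_rz[:, None, :] / phi_r[:, None, None]
```

The appeal for routine surveillance work is that two 2-D and one 1-D transport runs replace a full 3-D transport calculation, which is what keeps future survey calculations fast.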

  8. Model for incorporating fuel swelling and clad shrinkage effects in diffusion theory calculations (LWBR Development Program)

    International Nuclear Information System (INIS)

    Schick, W.C. Jr.; Milani, S.; Duncombe, E.

    1980-03-01

    A model has been devised for incorporating into the thermal feedback procedure of the PDQ few-group diffusion theory computer program the explicit calculation of depletion and temperature dependent fuel-rod shrinkage and swelling at each mesh point. The model determines the effect on reactivity of the change in hydrogen concentration caused by the variation in coolant channel area as the rods contract and expand. The calculation of fuel temperature, and hence of Doppler-broadened cross sections, is improved by correcting the heat transfer coefficient of the fuel-clad gap for the effects of clad creep, fuel densification and swelling, and release of fission-product gases into the gap. An approximate calculation of clad stress is also included in the model

  9. Abstract of programs for nuclear reactor calculation and kinetic equations solution

    International Nuclear Information System (INIS)

    Marakazov, A.A.

    1977-01-01

The collection includes about 50 annotations of programmes developed in the Kurchatov Atomic Energy Institute in 1971-1976. The programmes are intended for calculating the neutron flux, for solving systems of multigroup equations in the P3 approximation, for calculating the reactor cell, for analysing system stability, the breeding ratio, etc. The programme annotations are compiled according to the following scheme: 1. Programme title. 2. Computer type. 3. Physical problem. 4. Solution method. 5. Calculation limitations. 6. Characteristic computer time. 7. Programme characteristic features. 8. Bound programmes. 9. Programme state. 10. Literature references for the programme. 11. Required memory resources. 12. Programming language. 13. Operating system. 14. Names of authors and place where the programme was debugged

  10. Calculator: A Hardware Design, Math and Software Programming Project Base Learning

    Directory of Open Access Journals (Sweden)

    F. Criado

    2015-03-01

This paper presents the implementation by the students of a complex calculator in hardware. This project meets hardware design goals and also highly motivates them to use competences learned in other subjects. The learning process associated with system design is hard enough, because the students have to deal with parallel execution, signal delay, synchronization … Therefore, to strengthen the knowledge of hardware design, a methodology such as project-based learning (PBL) is proposed. Moreover, it is also used to reinforce cross subjects like math and software programming. This methodology creates a course dynamic that is closer to a professional environment, where the students will work with software and mathematics to resolve the hardware design problems. The students design the functionality of the calculator from scratch. They make the decisions about the math operations it is able to perform, the operand format, and how to introduce a complex equation into the calculator. This increases the students' intrinsic motivation. In addition, since these choices may have consequences for the reliability of the calculator, students are encouraged to program in software the decisions about how to implement the selected mathematical algorithm. Although math and hardware design are two tough subjects for students, the perception that they get at the end of the course is quite positive.

  11. A computer program for unilateral renal clearance calculation by a modified Oberhausen method

    International Nuclear Information System (INIS)

    Brueggemann, G.

    1980-01-01

A FORTRAN program is presented which, on the basis of data obtained with the NUKLEOPAN M, calculates the glomerular filtration rate with 99mTc-DTPA, the unilateral effective renal plasma flow with 131I-hippuran, and the parameters describing the isotope nephrogram (ING) with 131I-hippuran. The results are calculated fully automatically upon entry of the data, and the results are processed and printed out. The theoretical fundamentals of the ING and of whole-body clearance calculation are presented, as well as the methods available for unilateral clearance calculation, and the FORTRAN program is described in detail. The standard values of the method are documented, as well as a comparative gamma camera study of 48 patients to determine the accuracy of unilateral imaging with the NUKLEOPAN M instrument, a comparison of unilateral clearances by the Oberhausen and Taplin methods, and a comparison between 7/17' plasma clearance and whole-body clearance. Problems and findings of the method are discussed. (orig./MG) [de

  12. A program for calculating and plotting soft x-ray optical interaction coefficients for molecules

    International Nuclear Information System (INIS)

    Thomas, M.M.; Davis, J.C.; Jacobsen, C.J.; Perera, R.C.C.

    1989-08-01

Comprehensive tables of the atomic scattering factor components f1 and f2 were compiled by Henke et al. for the extended photon region 50 - 10000 eV. Accurate calculations of optical interaction coefficients for absorption, reflection and scattering by material systems (e.g. filters, multi-layers, etc.), which have widespread application, can be based simply upon the atomic scattering factors of the elements comprising the material, except near the absorption threshold energies. These calculations, based upon the weighted sum of f1 and f2 for each atomic species present, can be very tedious if done by hand. This led us to develop a user-friendly program to perform these calculations on an IBM PC or compatible computer. By entering the chemical formula, density and thickness of up to six molecules, values of f1, f2, mass absorption, transmission efficiencies, attenuation lengths, mirror reflectivities and complex indices of refraction can be calculated and plotted as a function of energy or wavelength. This program will be available for distribution. 7 refs., 1 fig
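The absorption part of such a calculation follows from f2 alone: the per-atom photoabsorption cross section is 2·r_e·λ·f2, which scales to a mass absorption coefficient and a filter transmission. The f2 value below is a placeholder, not taken from the actual Henke tables:

```python
import math

# Transmission of a thin carbon filter at 500 eV from an assumed f2 value.
r_e = 2.818e-13           # classical electron radius, cm
N_A = 6.022e23            # Avogadro's number, 1/mol
E_eV = 500.0
lam_cm = 1.23984e-4 / E_eV    # wavelength in cm (hc = 1239.84 eV*nm)
f2 = 1.5                      # assumed imaginary scattering factor (placeholder)
A = 12.011                    # carbon atomic weight, g/mol
rho = 2.2                     # density, g/cm^3
t = 100e-7                    # 100 nm thickness, in cm

mu_rho = 2.0 * r_e * lam_cm * f2 * N_A / A    # mass absorption coefficient, cm^2/g
T = math.exp(-mu_rho * rho * t)               # filter transmission
print(T)
```

For a compound, mu_rho is the composition-weighted sum over the constituent elements, which is exactly the "weighted sum of f1 and f2" the abstract describes.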

  13. Program for photon shielding calculations. Examination of approximations on irradiation geometries

    International Nuclear Information System (INIS)

    Isozumi, Yasuhito; Ishizuka, Fumihiko; Miyatake, Hideo; Kato, Takahisa; Tosaki, Mitsuo

    2004-01-01

Penetration factors and related numerical data in the 'Manual of Practical Shield Calculation of Radiation Facilities (2000)', which correspond to the irradiation geometries of a point isotropic source in an infinitely thick material (PI), a point isotropic source in a finitely thick material (PF), and vertical incidence on a finitely thick material (VF), have been carefully examined. The shield calculation based on the PI geometry is usually performed with the effective dose penetration factors of radioisotopes given in the 'manual'. The present work clearly shows that such a calculation may lead to an overestimate of more than a factor of two, especially for thick shields of concrete and water. Employing the numerical data in the 'manual', we have developed a simple computer program for the estimation of penetration factors and effective doses of radioisotopes in the different irradiation geometries, i.e., PI, PF and VF. The program can also be used to calculate the effective dose from a set of radioisotopes in different positions, which is necessary for the γ-ray shielding of radioisotope facilities. (author)
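The role of a penetration factor can be sketched with the textbook point-source shielding estimate: exponential attenuation times a buildup factor. All numbers below are illustrative placeholders, not values from the cited manual:

```python
import math

# Point isotropic (PI-style) shield estimate behind a concrete slab.
S = 3.7e10            # source activity, Bq (1 Ci)
Gamma = 1.8e-13       # dose-rate constant, Sv*m^2/(h*Bq) -- assumed value
d = 2.0               # source-to-point distance, m
mu = 15.0             # linear attenuation coefficient in concrete, 1/m -- assumed
t = 0.30              # shield thickness, m
B = 3.5               # buildup factor at mu*t = 4.5 -- assumed

penetration = B * math.exp(-mu * t)            # the tabulated "penetration factor"
dose_rate = S * Gamma / d**2 * penetration     # shielded dose rate, Sv/h
```

The geometry dependence the abstract examines enters through this tabulated factor: a PI table bakes in infinite-medium buildup, which overestimates the dose behind a finite slab (PF or VF geometry).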

  14. Method Evaluations for Adsorption Free Energy Calculations at the Solid/Water Interface through Metadynamics, Umbrella Sampling, and Jarzynski's Equality.

    Science.gov (United States)

    Wei, Qichao; Zhao, Weilong; Yang, Yang; Cui, Beiliang; Xu, Zhijun; Yang, Xiaoning

    2018-03-19

Considerable interest in characterizing protein/peptide-surface interactions has prompted extensive computational studies on calculations of adsorption free energy. However, in many cases, each individual study has focused on the application of free energy calculations to a specific system; therefore, it is difficult to combine the results into a general picture for choosing an appropriate strategy for the system of interest. Herein, three well-established computational algorithms are systematically compared and evaluated to compute the adsorption free energy of small molecules on two representative surfaces. The results clearly demonstrate that the characteristics of the studied interfacial systems have crucial effects on the accuracy and efficiency of the adsorption free energy calculations. For the hydrophobic surface, steered molecular dynamics exhibits the highest efficiency, which appears to make it a favorable method of choice for enhanced sampling simulations. However, for the charged surface, only the umbrella sampling method has the ability to accurately explore the adsorption free energy surface. The affinity of the water layer to the surface significantly affects the performance of free energy calculation methods, especially in the region close to the surface. Therefore, a general principle of how to discriminate between methodological and sampling issues based on the interfacial characteristics of the system under investigation is proposed. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
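The Jarzynski-equality route (the basis of the steered-MD estimates) can be illustrated on synthetic work data, where the Gaussian case has a known answer, ΔF = ⟨W⟩ − var(W)/(2kT). This is a sketch of the estimator only, not the paper's simulation protocol:

```python
import numpy as np

# Synthetic nonequilibrium work values in units of kT = 1.
rng = np.random.default_rng(7)
kT = 1.0
W = rng.normal(loc=5.0, scale=1.0, size=200_000)

# Jarzynski estimator: dF = -kT * ln <exp(-W/kT)>
dF_jarzynski = -kT * np.log(np.mean(np.exp(-W / kT)))
dF_exact = 5.0 - 1.0**2 / (2.0 * kT)      # Gaussian case: <W> - var(W)/(2kT) = 4.5
print(dF_jarzynski)                        # -> ~4.5
```

The estimator is dominated by rare low-work trajectories, which is why it degrades when the work distribution is broad, one of the sampling issues the comparison above probes.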

  15. CDFMC: a program that calculates the fixed neutron source distribution for a BWR using Monte Carlo

    International Nuclear Information System (INIS)

    Gomez T, A.M.; Xolocostli M, J.V.; Palacios H, J.C.

    2006-01-01

The three-dimensional neutron flux calculation using the synthesis method requires the determination of the neutron flux in two two-dimensional configurations as well as in a one-dimensional one. Most of the standard guides for the calculation of neutron flux or fluence in the vessel of a nuclear reactor place special emphasis on the appropriate calculation of the fixed neutron source that should be provided to the transport code used, with the purpose of finding sufficiently accurate flux values. The reactor core assembly configuration is based on X-Y geometry; however, the problem considered is solved in R-θ geometry, so an appropriate mapping is necessary to find the source term associated with the R-θ intervals starting from a source distribution in rectangular coordinates. To develop the CDFMC computer program (Source Distribution Calculation using Monte Carlo), it was necessary to develop a mapping approach independent of those found in the literature. The mesh-overlap method used here is based on a technique of random point generation, commonly known as the Monte Carlo technique. Although the 'randomness' of this technique implies statistical errors in the calculations, it is well known that increasing the number of randomly generated points used to measure an area or some other quantity of interest increases the precision of the method. In the particular case of the CDFMC computer program, the developed technique reaches a good general behavior when a considerably large number of points (greater than or equal to one hundred thousand) is used, which ensures calculation errors of the order of 1%. (Author)
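The mesh-overlap idea can be sketched directly: random points are sampled uniformly in an X-Y cell and binned into the R-θ mesh, and the bin counts give the fraction of that cell's source assigned to each R-θ interval. The geometry below is invented for illustration:

```python
import numpy as np

# One rectangular (X-Y) source cell and an overlapping R-theta mesh.
rng = np.random.default_rng(0)
x0, y0, w, h = 1.0, 0.5, 0.4, 0.4          # cell corner and size
r_edges = np.linspace(0.0, 2.0, 5)
t_edges = np.linspace(0.0, np.pi/2, 5)

# Uniform random points inside the cell, converted to polar coordinates.
n = 100_000
x = rng.uniform(x0, x0 + w, n)
y = rng.uniform(y0, y0 + h, n)
r = np.hypot(x, y)
t = np.arctan2(y, x)

# Fraction of the cell's source apportioned to each (r, theta) bin.
counts, _, _ = np.histogram2d(r, t, bins=[r_edges, t_edges])
fractions = counts / n
```

With 10^5 points per cell, the relative error of the larger overlap fractions is at the percent level, consistent with the ~1% figure quoted in the abstract.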

  16. Fast neutron and gamma-ray transmission technique in mixed samples. MCNP calculations

    International Nuclear Information System (INIS)

    Perez, N.; Padron, I.

    2001-01-01

In this paper the moisture content of sand and the sulfur content of toluene are determined using the simultaneous fast neutron/gamma transmission technique (FNGT). Monte Carlo calculations show that it is possible to apply this technique with accelerator-based and isotopic neutron sources in on-line analysis for product quality control, specifically in the building materials and petroleum industries. Particles from a 14 MeV neutron generator and from an Am-Be neutron source were used. The estimation of optimal system parameters such as efficiency, detection time, hazards and costs was performed in order to compare both neutron sources

  17. KOP program for calculating cross sections of neutron and charged particle interactions with atomic nuclei using the optical model

    International Nuclear Information System (INIS)

    Grudzevich, O.D.; Zelenetskij, A.V.; Pashchenko, A.B.

    1986-01-01

    The latest version of the KOP program for calculating cross sections of neutron and charged-particle interactions with atomic nuclei within the scope of the optical model is described. The structure and organization of the program, the library of optical-potential parameters, the program identifiers, the peculiarities of its operation, and the input of source data and output of calculated results for printing are described in detail. The KOP program is written in Fortran and adapted for the EC-1033 computer

  18. Implementation of a Thermodynamic Solver within a Computer Program for Calculating Fission-Product Release Fractions

    Science.gov (United States)

    Barber, Duncan Henry

    During some postulated accidents at nuclear power stations, fuel cooling may be impaired. In such cases, the fuel heats up, and the resulting increased fission-gas release from the fuel to the gap may cause fuel sheath failure. After fuel sheath failure, the barrier between the coolant and the fuel pellets is lost or impaired: gases and vapours from the fuel-to-sheath gap and other open voids in the fuel pellets can be vented, and gases and steam from the coolant can enter the broken fuel sheath and interact with the fuel pellet surfaces and the fission-product inclusions on the fuel surface (including material at the surface of the fuel matrix). The chemistry of this interaction is an important mechanism to model in order to assess fission-product releases from fuel. Starting in 1995, the computer program SOURCE 2.0 was developed by the Canadian nuclear industry to model fission-product release from fuel during such accidents. SOURCE 2.0 employed an early thermochemical model of irradiated uranium dioxide fuel developed at the Royal Military College of Canada. To overcome the limitations of computers of that time, the implementation of the RMC model used lookup tables of pre-calculated equilibrium conditions. In the intervening years, the RMC model has been improved, the power of computers has increased significantly, and thermodynamic subroutine libraries have become available. This thesis is the result of extensive work based on these three factors. A prototype computer program (referred to as SC11) has been developed that uses a thermodynamic subroutine library to calculate thermodynamic equilibria by Gibbs energy minimization. The Gibbs energy minimization requires the system temperature (T) and pressure (P), and the inventory of chemical elements (n) in the system. In order to calculate the inventory of chemical elements in the fuel, the list of nuclides and nuclear isomers modelled in SC11 had to be expanded from the list used by SOURCE 2.0.

  19. Extending cluster Lot Quality Assurance Sampling designs for surveillance programs

    OpenAIRE

    Hund, Lauren; Pagano, Marcello

    2014-01-01

    Lot quality assurance sampling (LQAS) has a long history of applications in industrial quality control. LQAS is frequently used for rapid surveillance in global health settings, with areas classified as poor or acceptable performance based on the binary classification of an indicator. Historically, LQAS surveys have relied on simple random samples from the population; however, implementing two-stage cluster designs for surveillance sampling is often more cost-effective than ...

  20. SKATE: a docking program that decouples systematic sampling from scoring.

    Science.gov (United States)

    Feng, Jianwen A; Marshall, Garland R

    2010-11-15

    SKATE is a docking prototype that decouples systematic sampling from scoring. This novel approach removes any interdependence between sampling and scoring functions to achieve better sampling and, thus, improves docking accuracy. SKATE systematically samples a ligand's conformational, rotational and translational degrees of freedom, as constrained by a receptor pocket, to find sterically allowed poses. Efficient systematic sampling is achieved by pruning the combinatorial tree using aggregate assembly, discriminant analysis, adaptive sampling, radial sampling, and clustering. Because systematic sampling is decoupled from scoring, the poses generated by SKATE can be ranked by any published, or in-house, scoring function. To test the performance of SKATE, ligands from the Astex/CCDC set, the Surflex set, and the Vertex set, a total of 266 complexes, were redocked to their respective receptors. The results show that SKATE was able to sample poses within 2 Å RMSD of the native structure for 98, 95, and 98% of the cases in the Astex/CCDC, Surflex, and Vertex sets, respectively. Cross-docking accuracy of SKATE was also assessed by docking 10 ligands to thymidine kinase and 73 ligands to cyclin-dependent kinase. 2010 Wiley Periodicals, Inc.

  1. Study on the Application of the Combination of TMD Simulation and Umbrella Sampling in PMF Calculation for Molecular Conformational Transitions

    Directory of Open Access Journals (Sweden)

    Qing Wang

    2016-05-01

    Free energy calculations of the potential of mean force (PMF) based on the combination of targeted molecular dynamics (TMD) simulations and umbrella sampling as a function of physical coordinates have been applied to explore the detailed pathways and the corresponding free energy profiles of the conformational transition processes of the butane molecule and the 35-residue villin headpiece subdomain (HP35). Accurate PMF profiles describing the dihedral rotation of butane under both the dihedral-rotation and root-mean-square-deviation (RMSD) coordinates were obtained from different umbrella samplings based on the same TMD simulations. The initial structures for the umbrella-sampling windows can be conveniently selected from the TMD trajectories. Applied to the unfolding of the HP35 protein, the PMF calculation along the radius-of-gyration (Rg) coordinate shows a gradual increase in free energy of about 1 kcal/mol, with fluctuations. The conformational transition during HP35 unfolding shows the spherical structure extending, with the middle α-helix unfolding first, followed by the other α-helices. The computational method for PMF calculation based on the combination of TMD simulations and umbrella sampling provides a valuable strategy for investigating detailed conformational transition pathways of other allosteric processes.
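The single-window unbiasing step behind such umbrella-sampling PMFs can be sketched in a few lines. This is a hedged illustration, not the authors' code: the harmonic bias constant `k_umb`, the kT value of 0.596 kcal/mol (about 300 K), and the histogram range are all assumptions, and stitching several windows together would require WHAM or a similar method.

```python
import math
import random

def pmf_from_window(samples, k_umb, x0, kT=0.596, nbins=20, lo=-1.0, hi=1.0):
    """Unbias a single umbrella window along coordinate x:
    F(x) = -kT * ln(rho_biased(x)) - 0.5 * k_umb * (x - x0)**2,
    up to an additive constant. Returns {bin_center: F}."""
    width = (hi - lo) / nbins
    hist = [0] * nbins
    for x in samples:
        if lo <= x < hi:
            i = int((x - lo) / width)
            if i >= nbins:          # guard against float rounding at hi
                i = nbins - 1
            hist[i] += 1
    pmf = {}
    for i, c in enumerate(hist):
        if c:                        # skip empty bins (ln 0 undefined)
            xc = lo + (i + 0.5) * width
            pmf[xc] = -kT * math.log(c) - 0.5 * k_umb * (xc - x0) ** 2
    return pmf
```

For a flat underlying potential the recovered PMF is constant up to statistical noise, which gives a quick self-check of the unbiasing formula.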

  2. MCFT: a program for calculating fast and thermal neutron multigroup constants

    International Nuclear Information System (INIS)

    Yang Shunhai; Sang Xinzeng

    1993-01-01

    MCFT is a program for calculating fast and thermal neutron multigroup constants, redesigned from earlier codes for the generation of thermal and fast neutron multigroup constants and adapted to the CYBER 825 computer. It accepts as basic input evaluated nuclear data from any of the ENDF/B (US), KEDAK (Germany) and UK (United Kingdom) libraries. The code includes a section devoted to the generation of resonant Doppler-broadened cross sections in the framework of the single- or multi-level Breit-Wigner formalism. The program can compute the thermal neutron scattering law S(α, β, T) from input data in tabular, free-gas or diffusion-motion form. It can treat up to 200 energy groups and Legendre moments up to P5. The output consists of multigroup constants for the various reactions over the whole neutron energy range needed in nuclear reactor design and calculation. Three input-file options are available to the user. The output format is arbitrary and can be defined by the user with a minimum of program modification. The program comprises about 15,000 cards and 184 subroutines, is written in FORTRAN 5, and runs under the NOS 2 operating system on the CYBER 825 computer
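The core operation of any multigroup-constant generator is flux-weighted group averaging, σ_g = ∫σ(E)φ(E)dE / ∫φ(E)dE taken over each energy group. A generic sketch of this averaging on a pointwise grid (an illustration of the principle, not MCFT's actual algorithm; trapezoidal weighting is an assumption):

```python
def group_constant(energies, sigma, flux):
    """Flux-weighted group cross section on a pointwise energy grid:
    sigma_g = integral(sigma * phi dE) / integral(phi dE),
    with both integrals evaluated by the trapezoidal rule."""
    num = den = 0.0
    for i in range(len(energies) - 1):
        dE = energies[i + 1] - energies[i]
        s = 0.5 * (sigma[i] + sigma[i + 1])   # mean cross section on panel
        p = 0.5 * (flux[i] + flux[i + 1])     # mean flux on panel
        num += s * p * dE
        den += p * dE
    return num / den
```

A constant cross section is returned unchanged regardless of the flux shape, which is the defining property of the weighting.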

  3. A new program for calculating matrix elements of one-particle operators in jj-coupling

    International Nuclear Information System (INIS)

    Pyper, N.C.; Grant, I.P.; Beatham, N.

    1978-01-01

    The aim of this paper is to calculate the matrix elements of one-particle tensor operators occurring in atomic and nuclear theory between configuration state functions representing states containing any number of open shells in jj-coupling. The program calculates the angular part of these matrix elements. The program is essentially a new version of RDMEJJ, written by J.J. Chang. The aims of this version are to eliminate inconsistencies from RDMEJJ, to modify its input requirements for consistency with MCP75, and to modify its output so that it can be stored in a disc file for access by other compatible programs. The program assumes that the configurational states are built from a common orthonormal set of basis orbitals. The number of electrons in a shell having j ≥ 9/2 is restricted to at most 2 by the available CFP routines. The present version allows up to 40 orbitals and 50 configurational states with up to 10 open shells; these numbers can be changed by recompiling with modified COMMON/DIMENSION statements. The user should ensure that the CPC library subprograms AAGD, ACRI incorporate all current updates and have been converted to use double-precision floating-point arithmetic. (Auth.)

  4. Northeast Cooperative Research Study Fleet (SF) Program Biological Sampling Data

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Northeast Cooperative Research Study Fleet (SF) Program partners with a subset of commercial fishermen to collect high quality, high resolution, haul by haul...

  5. EBRPOCO - a program to calculate detailed contributions of power reactivity components of EBR-II

    International Nuclear Information System (INIS)

    Meneghetti, D.; Kucera, D.A.

    1981-01-01

    The EBRPOCO program has been developed to facilitate the calculation of the power coefficients of reactivity of EBR-II loadings. The program enables contributions of various components of the power coefficient to be delineated axially for every subassembly. The program computes the reactivity contributions of the power coefficients resulting from: density reduction of sodium coolant due to temperature; displacement of sodium coolant by thermal expansion of cladding, structural rods, subassembly cans, and lower and upper axial reflectors; density reductions of these steel components due to temperature; displacement of bond-sodium (if present) in gaps by differential thermal expansion of fuel and cladding; density reduction of bond-sodium (if present) in gaps due to temperature; and free axial expansion of fuel if unrestricted by cladding, or restricted axial expansion of fuel determined by axial expansion of cladding. Isotopic spatial contributions to the Doppler component may also be obtained. (orig.) [de]

  6. Program support of the automated system of planned calculations of the Oil and Gas Extracting Administration

    Energy Technology Data Exchange (ETDEWEB)

    Ashkinuze, V G; Reznikovskiy, P T

    1978-01-01

    An examination is made of the program support of the Automated System of Planned Calculations (ASPC) of the Oil and Gas Extracting Administration (OGEA). Specific requirements for the ASPC of the OGEA are indicated, along with features of its program realization. In developing the program support of the system, a parametric-programming approach was used. A formal model of the ASPC OGEA, formulated in set-theoretic language, is described in detail. Sets with a tree structure are examined; they represent the production and administrative hierarchy of the planning objects in the oil region. The top of the tree corresponds to the OGEA as a whole. In the simplest realization, the tree has two levels of hierarchy: association and field. A procedure for possible use of the system by the planning workers is described in general terms. A plan for the program support of the ASPC OGEA is presented; in view of its specific nature, a large part of the programs realizing the system are written in ASSEMBLER.

  7. Design and relevant sample calculations for a neutral particle energy diagnostic based on time of flight

    Energy Technology Data Exchange (ETDEWEB)

    Cecconello, M

    1999-05-01

    Extrap T2 will be equipped with a neutral particle energy diagnostic based on the time-of-flight technique. In this report, the expected neutral fluxes for Extrap T2 are estimated and discussed in order to determine the feasibility and the limits of such a diagnostic. The estimates are based on a 1D model of the plasma, whose input parameters are the radial density and temperature profiles of electrons and ions and the density of neutrals at the edge and in the centre of the plasma. The atomic processes included in the model are charge exchange and electron-impact ionization. The results indicate that the plasma attenuation length varies from a/5 to a, a being the minor radius. Differential neutral fluxes, as well as the estimated power losses due to CX processes (2% of the input power), are in agreement with experimental results obtained in similar devices. The expected impurity influxes vary from 10^14 to 10^11 cm^-2 s^-1. The neutral particle detection and acquisition systems are discussed. The maximum detectable energy varies from 1 to 3 keV depending on the flight distance d. The time resolution is 0.5 ms. Output signals from the waveform recorder are foreseen in the range 0-200 mV. An 8-bit waveform recorder with 2 MHz sampling frequency and 100K samples of memory is the minimum requirement for the acquisition system. 20 refs, 19 figs.
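The link between flight distance, time resolution and maximum detectable energy follows from the non-relativistic time-of-flight relation E = ½m(d/t)². A small sketch (illustrative helper, not the diagnostic's actual code; CODATA values are used for the constants):

```python
def tof_energy_keV(distance_m, time_us, mass_amu=1.0078):
    """Kinetic energy (keV) of a neutral particle with flight time t over
    distance d: E = 0.5 * m * (d / t)**2, non-relativistic."""
    AMU_KG = 1.66053906660e-27     # atomic mass unit in kg (CODATA)
    J_PER_KEV = 1.602176634e-16    # joules per keV
    v = distance_m / (time_us * 1e-6)
    return 0.5 * mass_amu * AMU_KG * v * v / J_PER_KEV
```

For a hydrogen atom, a 1 µs flight over 1 m corresponds to roughly 5 keV, so the maximum detectable energy rises quadratically with shorter flight paths or finer time resolution, consistent with the 1-3 keV range quoted above for the distances considered.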

  8. Regression dilution bias: tools for correction methods and sample size calculation.

    Science.gov (United States)

    Berglund, Lars

    2012-08-01

    Random errors in measurement of a risk factor will introduce downward bias of an estimated association to a disease or a disease marker. This phenomenon is called regression dilution bias. A bias correction may be made with data from a validity study or a reliability study. In this article we give a non-technical description of designs of reliability studies with emphasis on selection of individuals for a repeated measurement, assumptions of measurement error models, and correction methods for the slope in a simple linear regression model where the dependent variable is a continuous variable. Also, we describe situations where correction for regression dilution bias is not appropriate. The methods are illustrated with the association between insulin sensitivity measured with the euglycaemic insulin clamp technique and fasting insulin, where measurement of the latter variable carries noticeable random error. We provide software tools for estimation of a corrected slope in a simple linear regression model assuming data for a continuous dependent variable and a continuous risk factor from a main study and an additional measurement of the risk factor in a reliability study. Also, we supply programs for estimation of the number of individuals needed in the reliability study and for choice of its design. Our conclusion is that correction for regression dilution bias is seldom applied in epidemiological studies. This may cause important effects of risk factors with large measurement errors to be neglected.
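The correction the article describes can be demonstrated with a short simulation (illustrative only, not the author's software; the variances, sample size and seed are invented): the naive slope is attenuated by the reliability ratio λ = var(x) / (var(x) + var(error)), a repeated measurement from the reliability study estimates λ, and dividing the naive slope by that estimate removes the bias.

```python
import random

def attenuation_demo(n=5000, true_slope=1.0, err_sd=1.0, seed=7):
    """Simulate regression dilution and its correction via a repeat
    measurement of the error-prone risk factor."""
    rng = random.Random(seed)
    xs = [rng.gauss(0, 1) for _ in range(n)]               # true exposure
    ys = [true_slope * x + rng.gauss(0, 0.5) for x in xs]  # outcome
    x1 = [x + rng.gauss(0, err_sd) for x in xs]            # main measurement
    x2 = [x + rng.gauss(0, err_sd) for x in xs]            # reliability repeat

    def slope(u, v):
        mu, mv = sum(u) / n, sum(v) / n
        sxy = sum((a - mu) * (b - mv) for a, b in zip(u, v))
        sxx = sum((a - mu) ** 2 for a in u)
        return sxy / sxx

    naive = slope(x1, ys)       # attenuated toward 0 by factor lambda
    lam = slope(x1, x2)         # estimates the reliability ratio lambda
    return naive, naive / lam   # corrected slope
```

With equal true and error variances, λ ≈ 0.5, so the naive slope is roughly halved while the corrected slope recovers the true value.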

  9. Complex of programs for calculating radiation fields outside plane protecting shields, bombarded by high-energy nucleons

    International Nuclear Information System (INIS)

    Gel'fand, E.K.; Man'ko, B.V.; Serov, A.Ya.; Sychev, B.S.

    1979-01-01

    A complex of programs for modelling various radiation situations at high-energy proton accelerators is considered. The programs are divided into three main groups according to their purposes. The first group includes programs for preparing constants describing the interaction processes of different particles with a substance. The second group of programs calculates the complete particle distribution functions arising in shields irradiated by high-energy nucleons. Concrete radiation situations arising at high-energy proton accelerators are calculated by means of the programs of the third group. A list of the programs and their brief characteristics is given

  10. Military construction program economic analysis manual: Sample economic analyses: Hazardous Waste Remedial Actions Program

    International Nuclear Information System (INIS)

    1987-12-01

    This manual enables the US Air Force to comprehensively and systematically analyze alternative approaches to meeting its military construction requirements. The manual includes step-by-step procedures for completing economic analyses for military construction projects, beginning with determining if an analysis is necessary. Instructions and a checklist of the tasks involved for each step are provided; and examples of calculations and illustrations of completed forms are included. The manual explains the major tasks of an economic analysis, including identifying the problem, selecting realistic alternatives for solving it, formulating appropriate assumptions, determining the costs and benefits of the alternatives, comparing the alternatives, testing the sensitivity of major uncertainties, and ranking the alternatives. Appendixes are included that contain data, indexes, and worksheets to aid in performing the economic analyses. For reference, Volume 2 contains sample economic analyses that illustrate how each form is filled out and that include a complete example of the documentation required

  11. Retained Gas Sampling Results for the Flammable Gas Program

    International Nuclear Information System (INIS)

    Bates, J.M.; Mahoney, L.A.; Dahl, M.E.; Antoniak, Z.I.

    1999-01-01

    The key phenomena of the Flammable Gas Safety Issue are generation of the gas mixture, the modes of gas retention, and the mechanisms causing release of the gas. An understanding of the mechanisms of these processes is required for final resolution of the safety issue. Central to understanding is gathering information from such sources as historical records, tank sampling data, tank process data (temperatures, ventilation rates, etc.), and laboratory evaluations conducted on tank waste samples

  12. Retained Gas Sampling Results for the Flammable Gas Program

    Energy Technology Data Exchange (ETDEWEB)

    J.M. Bates; L.A. Mahoney; M.E. Dahl; Z.I. Antoniak

    1999-11-18

    The key phenomena of the Flammable Gas Safety Issue are generation of the gas mixture, the modes of gas retention, and the mechanisms causing release of the gas. An understanding of the mechanisms of these processes is required for final resolution of the safety issue. Central to understanding is gathering information from such sources as historical records, tank sampling data, tank process data (temperatures, ventilation rates, etc.), and laboratory evaluations conducted on tank waste samples.

  13. Statistical Sampling Handbook for Student Aid Programs: A Reference for Non-Statisticians. Winter 1984.

    Science.gov (United States)

    Office of Student Financial Assistance (ED), Washington, DC.

    A manual on sampling is presented to assist audit and program reviewers, project officers, managers, and program specialists of the U.S. Office of Student Financial Assistance (OSFA). For each of the following types of samples, definitions and examples are provided, along with information on advantages and disadvantages: simple random sampling,…

  14. Scinfi, a program to calculate the standardization curve in liquid scintillation counting

    International Nuclear Information System (INIS)

    Grau Carles, A.; Grau Malonda, A.

    1984-01-01

    A code, Scinfi, was developed, written in Basic, to compute the efficiency-quench standardization curve for any radionuclide. The program requires the standardization curve for 3 H and the polynomial relations between counting efficiency and figure of merit for both 3 H and the problem (e.g. 14 C). The program is applied to the computation of the efficiency-quench standardization curve for 14 C. Five different liquid scintillation spectrometers and two scintillator solutions have been checked. The computation results are compared with the experimental values obtained with a set of 14 C standardized samples. (author)
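The two polynomial relations the program relies on can be sketched generically (a hedged illustration of the tracer-transfer idea; the coefficients below are placeholders, not SCINFI's fitted values): the 3H efficiency at a given quench level is mapped to a figure of merit, which is then mapped to the efficiency of the problem nuclide.

```python
def polyval(coeffs, x):
    """Evaluate a polynomial with coefficients in ascending order."""
    return sum(c * x ** i for i, c in enumerate(coeffs))

def eff_problem_from_eff_3h(eff_3h, m_of_eff3h, eff_of_m):
    """Tracer transfer: 3H efficiency -> figure of merit -> efficiency of
    the problem nuclide (e.g. 14C), each step a fitted polynomial whose
    coefficients would come from the standardization measurements."""
    m = polyval(m_of_eff3h, eff_3h)
    return polyval(eff_of_m, m)

# Placeholder polynomials: identity map to M, then a linear efficiency fit
eff_14c = eff_problem_from_eff_3h(0.5, [0.0, 1.0], [0.5, 0.4])
```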

  15. SCINFI, a program to calculate the standardization curve in liquid scintillation counting

    International Nuclear Information System (INIS)

    Grau Carles, A.; Grau Malonda, A.

    1984-01-01

    A code, SCINFI, was developed, written in BASIC, to compute the efficiency-quench standardization curve for any radionuclide. The program requires the standardization curve for 3H and the polynomial relations between counting efficiency and figure of merit for both 3H and the problem nuclide (e.g. 14C). The program is applied to the computation of the efficiency-quench standardization curve for 14C. Five different liquid scintillation spectrometers and two scintillator solutions have been checked. The computation results are compared with the experimental values obtained with a set of 14C standardized samples. (Author)

  16. SAMPLE RESULTS FROM THE INTEGRATED SALT DISPOSITION PROGRAM MACROBATCH 5 TANK 21H QUALIFICATION MST, ESS AND PODD SAMPLES

    Energy Technology Data Exchange (ETDEWEB)

    Peters, T.; Fink, S.

    2012-04-24

    Savannah River National Laboratory (SRNL) performed experiments on qualification material for use in the Integrated Salt Disposition Program (ISDP) Batch 5 processing. This qualification material was a composite created from recent samples from Tank 21H and archived samples from Tank 49H to match the projected blend from these two tanks. Additionally, samples of the composite were used in the Actinide Removal Process (ARP) and extraction-scrub-strip (ESS) tests. ARP and ESS test results met expectations. A sample from Tank 21H was also analyzed for the Performance Objectives Demonstration Document (PODD) requirements. SRNL was able to meet all of the requirements, including the desired detection limits for all the PODD analytes. This report details the results of the Actinide Removal Process (ARP), Extraction-Scrub-Strip (ESS) and Performance Objectives Demonstration Document (PODD) samples of Macrobatch (Salt Batch) 5 of the Integrated Salt Disposition Program (ISDP).

  17. Development and application of the PCRELAP5 - Data Calculation Program for RELAP 5 Code

    International Nuclear Information System (INIS)

    Silvestre, Larissa J.B.; Sabundjian, Gaianê

    2017-01-01

    Nuclear accidents in the world have led to the establishment of rigorous criteria and requirements for nuclear power plant operation by the international regulatory bodies. Simulations, using specific computer programs, of the various accidents and transients likely to occur at a nuclear power plant are required for certifying and licensing it. Sophisticated computational tools are used for this purpose, such as the Reactor Excursion and Leak Analysis Program (RELAP5), the code most widely used in Brazil and worldwide for the thermal-hydraulic analysis of accidents and transients in nuclear reactors. A major difficulty in simulation with the RELAP5 code is the amount of information required. Thus, to ease the preparation of RELAP5 input data, a friendly mathematical preprocessor, the RELAP5 Calculation Program (Programa de Cálculo do RELAP5 – PCRELAP5), was designed; Visual Basic for Applications (VBA) for Microsoft Excel proved to be an effective tool for its development. All input cards, including the optional cards of each component, were programmed. An English version of PCRELAP5 was provided, and a friendly design was developed in order to minimize the preparation time of input data and the errors committed by users. The final version of this preprocessor was successfully applied to the Safety Injection System (SIS) of Angra-2. (author)

  18. Development and application of the PCRELAP5 - Data Calculation Program for RELAP 5 Code

    Energy Technology Data Exchange (ETDEWEB)

    Silvestre, Larissa J.B.; Sabundjian, Gaianê, E-mail: larissajbs@usp.br, E-mail: gdjian@ipen.br [Instituto de Pesquisas Energéticas e Nucleares (IPEN/CNEN-SP), São Paulo, SP (Brazil)

    2017-07-01

    Nuclear accidents in the world have led to the establishment of rigorous criteria and requirements for nuclear power plant operation by the international regulatory bodies. Simulations, using specific computer programs, of the various accidents and transients likely to occur at a nuclear power plant are required for certifying and licensing it. Sophisticated computational tools are used for this purpose, such as the Reactor Excursion and Leak Analysis Program (RELAP5), the code most widely used in Brazil and worldwide for the thermal-hydraulic analysis of accidents and transients in nuclear reactors. A major difficulty in simulation with the RELAP5 code is the amount of information required. Thus, to ease the preparation of RELAP5 input data, a friendly mathematical preprocessor, the RELAP5 Calculation Program (Programa de Cálculo do RELAP5 – PCRELAP5), was designed; Visual Basic for Applications (VBA) for Microsoft Excel proved to be an effective tool for its development. All input cards, including the optional cards of each component, were programmed. An English version of PCRELAP5 was provided, and a friendly design was developed in order to minimize the preparation time of input data and the errors committed by users. The final version of this preprocessor was successfully applied to the Safety Injection System (SIS) of Angra-2. (author)

  19. GenLocDip: A Generalized Program to Calculate and Visualize Local Electric Dipole Moments.

    Science.gov (United States)

    Groß, Lynn; Herrmann, Carmen

    2016-09-30

    Local dipole moments (i.e., dipole moments of atomic or molecular subsystems) are essential for understanding various phenomena in nanoscience, such as solvent effects on the conductance of single molecules in break junctions or the interaction between the tip and the adsorbate in atomic force microscopy. We introduce GenLocDip, a program for calculating and visualizing local dipole moments of molecular subsystems. GenLocDip currently uses the Atoms-In-Molecules (AIM) partitioning scheme and is interfaced to various AIM programs. This enables postprocessing of a variety of electronic structure output formats including cube and wavefunction files, and, in general, output from any other code capable of writing the electron density on a three-dimensional grid. It uses a modified version of Bader's and Laidig's approach for achieving origin-independence of local dipoles by referring to internal reference points which can (but do not need to be) bond critical points (BCPs). Furthermore, the code allows the export of critical points and local dipole moments into a POVray readable input format. It is particularly designed for fragments of large systems, for which no BCPs have been calculated for computational efficiency reasons, because large interfragment distances prevent their identification, or because a local partitioning scheme different from AIM was used. The program requires only minimal user input and is written in the Fortran90 programming language. To demonstrate the capabilities of the program, examples are given for covalently and non-covalently bound systems, in particular molecular adsorbates. © 2016 Wiley Periodicals, Inc.
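The origin-dependence issue the program addresses is easy to see with point charges: the dipole μ = Σᵢ qᵢ(rᵢ − O) depends on the reference point O unless the fragment's net charge is zero, which is why local dipoles must be referred to well-chosen internal reference points. A minimal sketch (point charges only; GenLocDip's AIM-based algorithm integrates the electron density instead):

```python
def dipole(charges, positions, origin=(0.0, 0.0, 0.0)):
    """Dipole moment of point charges about a chosen origin:
    mu = sum_i q_i * (r_i - origin). Independent of the origin
    only when the net charge is zero."""
    return tuple(
        sum(q * (p[k] - origin[k]) for q, p in zip(charges, positions))
        for k in range(3)
    )

# A neutral +q/-q pair separated by 1 along z gives the same dipole
# for any choice of origin.
d = dipole([1.0, -1.0], [(0.0, 0.0, 0.0), (0.0, 0.0, 1.0)])
```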

  20. Calculation of the secondary gamma radiation by the Monte Carlo method at displaced sampling from distributed sources

    International Nuclear Information System (INIS)

    Petrov, Eh.E.; Fadeev, I.A.

    1979-01-01

    A possibility to use displaced sampling from a bulk gamma source in calculating secondary gamma fields by the Monte Carlo method is discussed. The proposed algorithm is based on the concept of conjugate functions together with the variance minimization technique. For the sake of simplicity a plane source is considered. The algorithm has been put into practice on the M-220 computer. The differential gamma current and flux spectra in 21 cm-thick lead have been calculated. The source of secondary gamma quanta was assumed to be distributed, constant and isotropic, emitting 4 MeV gamma quanta at a rate of 10^9 quanta/(cm^3 s). The calculations have demonstrated that the last 7 cm of lead are responsible for the whole gamma spectral pattern. The spectra practically coincide with the ones calculated by the ROZ computer code. Thus the proposed algorithm can be effectively used in calculations of secondary gamma radiation transport, reducing the computation time by a factor of 2-4
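Displaced (biased) sampling of a distributed source can be illustrated with a toy slab problem (an assumed geometry, bias and cross section, not the paper's model): emission depths are drawn from a biased exponential density favouring the weakly attenuated far side of the slab, and statistical weights restore the uniform source.

```python
import math
import random

def attenuated_flux(mu=1.0, depth=21.0, n=200_000, bias=0.3, seed=3):
    """Estimate integral_0^L exp(-mu*(L - x)) dx for a uniform slab source
    by sampling the emission depth x from a biased density
    p(x) ~ exp(-bias*(L - x)), with weights w = 1/p(x)."""
    rng = random.Random(seed)
    est = 0.0
    norm = (1 - math.exp(-bias * depth)) / bias   # normalizer of biased pdf
    for _ in range(n):
        # Inverse-CDF sampling of the biased emission depth on [0, L)
        u = rng.random()
        x = depth + math.log(u * (1 - math.exp(-bias * depth))
                             + math.exp(-bias * depth)) / bias
        w = norm * math.exp(bias * (depth - x))   # weight restoring uniformity
        est += w * math.exp(-mu * (depth - x))    # attenuation to the surface
    return est / n
```

Biasing emission toward the exit face concentrates samples where the contribution is largest, in the same spirit as the abstract's finding that the last few centimetres of lead dominate the emerging spectrum.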

  1. A computer program for calculation of the fuel cycle in pressurized water reactors

    International Nuclear Information System (INIS)

    Solanilla, R.

    1976-01-01

    The purpose of the FUCEFURE program is two-fold. First, it is designed to solve the nuclear fuel cycle cost problem for a single pressurized light-water reactor calculation. The code was developed primarily for comparative and sensitivity studies. The program contains simple correlations between exposure and available depletion data, used to predict the uranium and plutonium content of the fuel as a function of the initial fuel enrichment. Second, it has been devised to evaluate the nuclear fuel demand associated with an expanding nuclear power system. The evaluation can be carried out at any time and stage in the fuel cycle. The program can calculate the natural uranium and separative work requirements for any final and tails enrichment. It can also determine the nuclear power share of each reactor in the system once a decision has been made about the long-term nuclear power installations to be used and the types of PWR and fast breeder reactor to be involved in them. (author)
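The natural-uranium and separative-work requirements evaluated by such codes follow from standard enrichment-cascade mass balances and the value function V(x) = (2x − 1)·ln(x/(1 − x)). A generic sketch (the default feed and tails assays are common illustrative values, not FUCEFURE's inputs):

```python
import math

def value(x):
    """Separative potential V(x) = (2x - 1) * ln(x / (1 - x))."""
    return (2 * x - 1) * math.log(x / (1 - x))

def enrichment_requirements(product_kg, xp, xf=0.00711, xt=0.0025):
    """Feed mass and separative work (SWU) for product at assay xp,
    given feed assay xf and tails assay xt (mass balance on U-235)."""
    feed = product_kg * (xp - xt) / (xf - xt)
    tails = feed - product_kg
    swu = product_kg * value(xp) + tails * value(xt) - feed * value(xf)
    return feed, swu

# 1 kg of 4.0% product from natural uranium at 0.25% tails:
feed, swu = enrichment_requirements(1.0, 0.04)
```

About 8.1 kg of natural uranium and 5.8 SWU per kilogram of 4% product, in line with commonly quoted figures.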

  2. SpekCalc: a program to calculate photon spectra from tungsten anode x-ray tubes

    International Nuclear Information System (INIS)

    Poludniowski, G; Evans, P M; Landry, G; DeBlois, F; Verhaegen, F

    2009-01-01

    A software program, SpekCalc, is presented for the calculation of x-ray spectra from tungsten anode x-ray tubes. SpekCalc was designed primarily for use in a medical physics context, for both research and education purposes, but may also be of interest to those working with x-ray tubes in industry. Noteworthy is the particularly wide range of tube potentials (40-300 kVp) and anode angles (recommended: 6-30 deg.) that can be modelled: the program is therefore potentially of use to those working in superficial/orthovoltage radiotherapy, as well as diagnostic radiology. The utility is free to download and is based on a deterministic model of x-ray spectrum generation (Poludniowski 2007 Med. Phys. 34 2175). Filtration can be applied for seven materials (air, water, Be, Al, Cu, Sn and W). In this note SpekCalc is described and illustrative examples are shown. Predictions are compared to those of a state-of-the-art Monte Carlo code (BEAMnrc) and, where possible, to an alternative, widely-used, spectrum calculation program (IPEM78). (note)
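The filtration step applied to a computed spectrum can be illustrated with narrow-beam Beer-Lambert attenuation (a generic sketch; SpekCalc's internal spectrum model is more detailed, and the attenuation coefficients below are placeholders, not tabulated values):

```python
import math

def filter_spectrum(fluence, mu, thickness_cm):
    """Attenuate a pointwise photon spectrum through one filter:
    N_out(E) = N_in(E) * exp(-mu(E) * t), narrow-beam geometry.
    fluence and mu are parallel lists over the energy grid."""
    return [n * math.exp(-m * thickness_cm) for n, m in zip(fluence, mu)]

# Two energy bins through 2 cm of a filter with made-up mu values:
out = filter_spectrum([100.0, 80.0], [0.5, 0.2], 2.0)
```

Successive filters (air, Al, Cu, ...) simply chain this operation, each with its own μ(E) table.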

  3. 78 FR 23896 - Notice of Funds Availability: Inviting Applications for the Quality Samples Program

    Science.gov (United States)

    2013-04-23

    ... proposals for the 2014 Quality Samples Program (QSP). The intended effect of this notice is to solicit... Strategy (UES) application Internet Web site. The UES allows applicants to submit a single consolidated and... of the FAS marketing programs, financial assistance programs, and market access programs. The...

  4. ANA - a program for evaluation of gamma spectra from environmental samples

    International Nuclear Information System (INIS)

    Mishev, P.

    1993-01-01

    The program is aimed at the evaluation of gamma spectra collected with different multichannel analyzers. It provides file-format conversion from the most popular spectrum file formats. The program includes: spectra visualization; energy and shape calibration; efficiency calibration; automatic peak search; resolving of multiplets and peak calculation, based on the program KATOK; an isotope library; and isotope identification and activity calculations. Three types of efficiency calibration are possible: spline approximation; two-branch logarithmic approximation; and polynomial approximation based on orthonormal polynomials. The recommendations of the International Atomic Energy Agency were taken into account in the development of the algorithms. The program allows batch spectrum processing, appropriate for routine tasks, as well as user-controlled evaluations. Calculation of lower detection limits for user-defined isotopes is also possible. The program calculates precisely the statistical uncertainties of the final results. The error sources taken into account are: standard-source activity errors, efficiency-approximation errors and current measurement errors. (author)
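One of the efficiency-calibration options mentioned, a logarithmic approximation, can be sketched as a least-squares straight line in log-log space, i.e. a power law eff ≈ a·E^b over one energy branch. This is a generic illustration, not ANA's algorithm, and the calibration points below are hypothetical.

```python
import math

def fit_loglog(energies, efficiencies):
    """Least-squares line in log-log space: ln(eff) = a + b*ln(E),
    equivalent to the power law eff = exp(a) * E**b."""
    xs = [math.log(e) for e in energies]
    ys = [math.log(eff) for eff in efficiencies]
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    return a, b

def efficiency(E, a, b):
    return math.exp(a) * E ** b

# Hypothetical calibration points (energy in keV, absolute efficiency)
cal_E = [300.0, 600.0, 900.0, 1200.0, 1500.0]
cal_eff = [0.012, 0.0065, 0.0046, 0.0036, 0.0030]
a, b = fit_loglog(cal_E, cal_eff)
```

For a typical HPGe detector the fitted exponent b is negative (efficiency falls with energy above the knee), which the assertion below checks.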

  5. Preliminary Calculation of the Indicators of Sustainable Development for National Radioactive Waste Management Programs

    International Nuclear Information System (INIS)

    Cheong, Jae Hak; Park, Won Jae

    2003-01-01

    As a follow-up to Agenda 21's policy statement on the safe management of radioactive waste, adopted at the Rio Conference in 1992, the UN invited the IAEA to develop and implement indicators of sustainable development for the management of radioactive waste. The IAEA finalized the indicators in 2002, and is planning to calculate the member states' values of the indicators in connection with the operation of its Net-Enabled Waste Management Database system. In this paper, the basis for introducing the indicators into radioactive waste management is analyzed, and the calculation methodology and standard assessment procedure are briefly described. In addition, a series of innate limitations in the calculation and comparison of the indicators is analyzed. Following the proposed standard procedure, the indicators for a few major countries, including Korea, were calculated and compared, using each country's radioactive waste management framework and practices. Finally, a series of measures for increasing the values of the indicators was derived, so as to enhance the sustainability of the domestic radioactive waste management program.

  6. Program TOTELA calculating basic cross sections in intermediate energy region by using systematics

    International Nuclear Information System (INIS)

    Fukahori, Tokio; Niita, Koji

    2000-01-01

    Program TOTELA calculates neutron- and proton-induced total, elastic-scattering and reaction cross sections, as well as angular distributions of elastic scattering, in the intermediate energy region from 20 MeV to 3 GeV. TOTELA adopts systematics modified from those of Pearlstein to better reproduce the experimental data and the LA150 evaluation. The calculated results are compared with experimental data and the LA150 evaluation in figures; TOTELA reproduces those data reasonably well. TOTELA was developed to compensate for the lack of experimental data for the above quantities in the intermediate energy region, and for use in the production of the JENDL High Energy File. Where no experimental data exist for these quantities, optical-model parameters can be fitted using TOTELA results; from this point of view, it is also useful to compare optical-model calculations using RIPL with TOTELA results, in order to verify the parameter quality. The only input data TOTELA requires are the atomic and mass numbers of the incident particle and target nuclide, and the input/output file names. The output of a TOTELA calculation is in the ENDF-6 format used in the intermediate-energy nuclear data files. It is easy for users to modify the main routine; details are documented in each subroutine and in the main routine.

  7. Robust automated mass spectra interpretation and chemical formula calculation using mixed integer linear programming.

    Science.gov (United States)

    Baran, Richard; Northen, Trent R

    2013-10-15

    Untargeted metabolite profiling using liquid chromatography and mass spectrometry coupled via electrospray ionization is a powerful tool for the discovery of novel natural products, metabolic capabilities, and biomarkers. However, the elucidation of the identities of uncharacterized metabolites from spectral features remains challenging. A critical step in the metabolite identification workflow is the assignment of redundant spectral features (adducts, fragments, multimers) and calculation of the underlying chemical formula. Inspection of the data by experts using computational tools solving partial problems (e.g., chemical formula calculation for individual ions) can be performed to disambiguate alternative solutions and provide reliable results. However, manual curation is tedious and not readily scalable or standardized. Here we describe an automated procedure for the robust automated mass spectra interpretation and chemical formula calculation using mixed integer linear programming optimization (RAMSI). Chemical rules among related ions are expressed as linear constraints and both the spectra interpretation and chemical formula calculation are performed in a single optimization step. This approach is unbiased in that it does not require predefined sets of neutral losses and positive and negative polarity spectra can be combined in a single optimization. The procedure was evaluated with 30 experimental mass spectra and was found to effectively identify the protonated or deprotonated molecule ([M + H](+) or [M - H](-)) while being robust to the presence of background ions. RAMSI provides a much-needed standardized tool for interpreting ions for subsequent identification in untargeted metabolomics workflows.
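The chemical-formula subproblem that RAMSI solves as part of its optimization can be illustrated in miniature: enumerate elemental compositions whose monoisotopic mass matches a measured neutral mass within a tolerance. The brute-force CHNO search below is a toy stand-in for the paper's mixed-integer formulation; it applies no valence or heuristic constraints.

```python
def formula_candidates(target, tol=0.005, max_atoms=40):
    """Enumerate (C, H, N, O) compositions whose monoisotopic mass lies
    within tol (Da) of target. Brute force with early exits -- a toy
    stand-in for a mixed integer linear programming formulation."""
    mC, mH, mN, mO = 12.0, 1.007825, 14.003074, 15.994915
    hits = []
    for c in range(max_atoms + 1):
        if c * mC > target + tol:
            break
        for h in range(max_atoms + 1):
            mch = c * mC + h * mH
            if mch > target + tol:
                break
            for n in range(max_atoms + 1):
                mchn = mch + n * mN
                if mchn > target + tol:
                    break
                for o in range(max_atoms + 1):
                    m = mchn + o * mO
                    if m > target + tol:
                        break
                    if abs(m - target) <= tol:
                        hits.append((c, h, n, o))
    return hits

# Neutral monoisotopic mass of glucose, C6H12O6
hits = formula_candidates(180.0634)
```

Real tools prune such candidate lists further (ring-plus-double-bond counts, isotope patterns); the MILP approach additionally couples all related ions of one compound into a single optimization.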

  8. ASYMPT - a program to calculate asymptotics of hyperspherical potential curves and adiabatic potentials

    International Nuclear Information System (INIS)

    Abrashkevich, A.G.; Puzynin, I.V.; Vinitskij, S.I.

    1997-01-01

    A FORTRAN 77 program is presented which calculates asymptotics of potential curves and adiabatic potentials with an accuracy of O(ρ -2 ) in the framework of the hyperspherical adiabatic (HSA) approach. It is shown that matrix elements of the equivalent operator corresponding to the perturbation ρ -2 have a simple form in the basis of the Coulomb parabolic functions in the body-fixed frame and can be easily computed for high values of total orbital momentum and threshold number. The second-order corrections to the adiabatic curves are obtained as the solutions of the corresponding secular equation. The asymptotic potentials obtained can be used for the calculation of the energy levels and radial wave functions of two-electron systems in the adiabatic and coupled-channel approximations of the HSA approach

  9. Calculation and evaluation methodology for flawed pipes and computer program development

    International Nuclear Information System (INIS)

    Liu Chang; Qian Hao; Yao Weida; Liang Xingyun

    2013-01-01

    Background: In a pressurized pipe, a crack will grow gradually under alternating load even when the load is below the fatigue strength limit. Purpose: Both the calculation and the evaluation methodology for a flawed pipe detected during in-service inspection are elaborated here, based on Elastic-Plastic Fracture Mechanics (EPFM) criteria. Methods: The computation considers the interaction of flaw depth and length, and a computer program was developed in Visual C++. Results: The fluctuating loads of the Reactor Coolant System transients, the initial flaw shape, and the initial flaw orientation are all accounted for. Conclusions: The calculation and evaluation methodology presented here is an important basis for deciding whether continued operation is acceptable. (authors)
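Fatigue crack growth under cyclic load is commonly modeled by a Paris-type law, da/dN = C·(ΔK)^m. The sketch below is a generic LEFM illustration with made-up constants, not the EPFM procedure or the surface-flaw treatment of the paper.

```python
import math

def grow_crack(a0_mm, cycles, C=1.0e-11, m=3.0,
               delta_sigma_MPa=120.0, Y=1.12):
    """Euler integration of the Paris law da/dN = C * (dK)**m with
    dK = Y * dSigma * sqrt(pi * a). Units: a in metres internally,
    dK in MPa*sqrt(m); C, m, dSigma and Y are illustrative, not
    calibrated to any material."""
    a = a0_mm / 1000.0                      # mm -> m
    for _ in range(cycles):
        dK = Y * delta_sigma_MPa * math.sqrt(math.pi * a)
        a += C * dK ** m                    # growth per cycle
    return a * 1000.0                       # m -> mm

a_final = grow_crack(2.0, 1000)
```

With these assumed constants a 2 mm crack grows by roughly 0.01 mm over 1000 cycles; a real flaw evaluation would compare the computed ΔK and crack size against code acceptance limits at each step.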

  10. Users Handbook for the Argonne Premium Coal Sample Program

    Energy Technology Data Exchange (ETDEWEB)

    Vorres, K.S.

    1993-10-01

    This Users Handbook for the Argonne Premium Coal Samples provides the recipients of those samples with information that will enhance the value of the samples, to permit greater opportunities to compare their work with that of others, and aid in correlations that can improve the value to all users. It is hoped that this document will foster a spirit of cooperation and collaboration such that the field of basic coal chemistry may be a more efficient and rewarding endeavor for all who participate. The different sections are intended to stand alone. For this reason some of the information may be found in several places. The handbook is also intended to be a dynamic document, constantly subject to change through additions and improvements. Please feel free to write to the editor with your comments and suggestions.

  11. MCNPX calculations of dose rate distribution inside samples treated in the research gamma irradiating facility at CTEx

    Energy Technology Data Exchange (ETDEWEB)

    Rusin, Tiago; Rebello, Wilson F.; Vellozo, Sergio O.; Gomes, Renato G., E-mail: tiagorusin@ime.eb.b, E-mail: rebello@ime.eb.b, E-mail: vellozo@cbpf.b, E-mail: renatoguedes@ime.eb.b [Instituto Militar de Engenharia (IME), Rio de Janeiro, RJ (Brazil). Dept. de Engenharia Nuclear; Vital, Helio C., E-mail: vital@ctex.eb.b [Centro Tecnologico do Exercito (CTEx), Rio de Janeiro, RJ (Brazil); Silva, Ademir X., E-mail: ademir@con.ufrj.b [Universidade Federal do Rio de Janeiro (PEN/COPPE/UFRJ), RJ (Brazil). Coordenacao dos Programas de Pos-Graduacao de Engenharia. Programa de Engenharia Nuclear

    2011-07-01

    A cavity-type cesium-137 research irradiating facility at CTEx has been modeled by using the Monte Carlo code MCNPX. The irradiator has been daily used in experiments to optimize the use of ionizing radiation for conservation of many kinds of food and to improve materials properties. In order to correlate the effects of the treatment, average doses have been calculated for each irradiated sample, accounting for the measured dose rate distribution in the irradiating chambers. However that approach is only approximate, being subject to significant systematic errors due to the heterogeneous internal structure of most samples that can lead to large anisotropy in attenuation and Compton scattering properties across the media. Thus this work is aimed at further investigating such uncertainties by calculating the dose rate distribution inside the items treated such that a more accurate and representative estimate of the total absorbed dose can be determined for later use in the effects-versus-dose correlation curves. Samples of different simplified geometries and densities (spheres, cylinders, and parallelepipeds), have been modeled to evaluate internal dose rate distributions within the volume of the samples and the overall effect on the average dose. (author)

  13. Dependent samples in empirical estimation of stochastic programming problems

    Czech Academy of Sciences Publication Activity Database

    Kaňková, Vlasta; Houda, Michal

    2006-01-01

    Roč. 35, 2/3 (2006), s. 271-279 ISSN 1026-597X R&D Projects: GA ČR GA402/04/1294; GA ČR GD402/03/H057; GA ČR GA402/05/0115 Institutional research plan: CEZ:AV0Z10750506 Keywords: stochastic programming * stability * probability metrics * Wasserstein metric * Kolmogorov metric * simulations Subject RIV: BB - Applied Statistics, Operational Research

  14. DIFMIG - A computer program for calculation of diffusive migration through multi-barrier systems

    International Nuclear Information System (INIS)

    Bo, P.; Carlsen, L.

    1981-11-01

    The FORTRAN IV program DIFMIG calculates, one-dimensionally (i.e. for a column), the diffusive migration of single substances through arbitrary multi-barrier systems. Time-dependent changes in concentration other than dispersion/diffusion (e.g. slow dissolution of a compound from a repository, radioactive decay, and/or build-up of daughter products), as well as possible time-dependent variations in the effective dispersion, are taken into account. The diffusion equation is solved by a finite-difference implicit method, the resulting tridiagonal matrix equation being solved by standard methods. (author)
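The numerical core described above, an implicit finite-difference step whose tridiagonal system is solved by a standard method, can be sketched as follows. This is a generic backward-Euler diffusion step with the Thomas algorithm and zero-concentration boundaries, not DIFMIG's FORTRAN.

```python
def thomas(lower, diag, upper, rhs):
    """Solve a tridiagonal system in O(n) (Thomas algorithm).
    lower[0] and upper[-1] are unused."""
    n = len(diag)
    c, d = upper[:], rhs[:]
    c[0] /= diag[0]
    d[0] /= diag[0]
    for i in range(1, n):
        m = diag[i] - lower[i] * c[i - 1]
        c[i] /= m
        d[i] = (d[i] - lower[i] * d[i - 1]) / m
    for i in range(n - 2, -1, -1):
        d[i] -= c[i] * d[i + 1]
    return d

def implicit_diffusion_step(u, D, dx, dt, decay=0.0):
    """One backward-Euler step of du/dt = D * d2u/dx2 - decay*u on the
    interior nodes, with u = 0 held at both boundaries."""
    n = len(u)
    r = D * dt / dx ** 2
    lower = [-r] * n
    upper = [-r] * n
    diag = [1.0 + 2.0 * r + decay * dt] * n
    return thomas(lower, diag, upper, u)

u0 = [0.0, 0.0, 1.0, 0.0, 0.0]
u1 = implicit_diffusion_step(u0, D=1.0, dx=1.0, dt=0.1)
```

The implicit (backward-Euler) form is unconditionally stable, so the time step is not limited by the usual explicit criterion r ≤ 1/2; a decay term or a time-dependent source can be folded into the diagonal and right-hand side, as the abstract describes.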

  15. Programs and subroutines for calculating cadmium body burdens based on a one-compartment model

    International Nuclear Information System (INIS)

    Robinson, C.V.; Novak, K.M.

    1980-08-01

    A pair of FORTRAN programs for calculating the body burden of cadmium as a function of age is presented, together with a discussion of the assumptions which serve to specify the underlying one-compartment model. Account is taken of the contributions to the body burden from food, from ambient air, from smoking, and from occupational inhalation. The output is a set of values for ages from birth to 90 years which is either longitudinal (for a given year of birth) or cross-sectional (for a given calendar year), depending on the choice of input parameters
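A one-compartment model with constant intake has the closed form B(t) = (U/λ)(1 − e^(−λt)), where U is the absorbed intake rate and λ the elimination constant. The sketch below shows that arithmetic for a single intake source; the parameter values are illustrative, not those of the cited report.

```python
import math

def body_burden(ages_years, daily_intake_ug, absorbed_fraction=0.05,
                half_life_years=20.0):
    """Body burden B(t) under the one-compartment model
    dB/dt = f*I - lambda*B with constant intake I.
    absorbed_fraction and half_life_years are assumed values."""
    lam = math.log(2.0) / half_life_years          # elimination constant
    uptake = absorbed_fraction * daily_intake_ug * 365.25  # ug/year
    return [uptake / lam * (1.0 - math.exp(-lam * t)) for t in ages_years]

# Longitudinal profile, birth to age 90, for an assumed 20 ug/day intake
burdens = body_burden(range(0, 91, 10), daily_intake_ug=20.0)
```

The burden rises monotonically toward the steady-state value U/λ; with a 20-year half-life an adult reaches most of that plateau by old age, which is why cadmium is treated as a cumulative toxicant.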

  16. Validation of photon-heating calculations in irradiation reactor with the experimental AMMON program and the CARMEN device

    International Nuclear Information System (INIS)

    Lemaire, Matthieu

    2015-01-01

    The temperature in the different core structures of Material-Testing Reactors (MTR) is a key physical parameter for MTRs' performance and safety. In nuclear reactors, where neutron and photon flux are sustained by fission chain reactions, neutrons and photons steadily deposit energy in the structures they cross and lead to a temperature rise in these structures. In non-fissile core structures (such as material samples, experimental devices, control rods, fuel claddings, and so on), the main part of nuclear heating is induced by photon interactions. This photon heating must therefore be well calculated as it is a key input parameter for MTR thermal studies, whose purpose is for instance to help determine the proper sizing of cooling power, electrical heaters and insulation gaps in MTR irradiation devices. The Jules Horowitz Reactor (JHR) is the next international MTR under construction in the south of France at CEA Cadarache research center (French Alternative Energies and Atomic Energy Commission). The JHR will be a major research infrastructure for the test of structural material and fuel behavior under irradiation. It will also produce from 25% to 50% of the European demand of medical radioisotopes for diagnostic purposes. High levels of nuclear heating are expected in the JHR core, with an absorbed-dose rate up to 20 watts per hafnium gram at nominal power (100 MW). Compared to a Pressurized-Water Reactor (PWR), the JHR is made of a specific array of materials (aluminum rack, beryllium reflector, hafnium control rods) and the feedback on photon-heating calculations with these features is limited. It is therefore necessary to validate photon-heating calculation tools (calculation codes and the European nuclear-data JEFF3.1.1 library) for use in the JHR, that is, it is necessary to determine the biases and uncertainties that are relevant for the photon-heating values calculated with these tools in the JHR. This topic constitutes the core of the present

  17. Absolute binding free energy calculations of CBClip host–guest systems in the SAMPL5 blind challenge

    Science.gov (United States)

    Tofoleanu, Florentina; Pickard, Frank C.; König, Gerhard; Huang, Jing; Damjanović, Ana; Baek, Minkyung; Seok, Chaok; Brooks, Bernard R.

    2016-01-01

    Herein, we report the absolute binding free energy calculations of CBClip complexes in the SAMPL5 blind challenge. Initial conformations of CBClip complexes were obtained using docking and molecular dynamics simulations. Free energy calculations were performed using thermodynamic integration (TI) with soft-core potentials and Bennett's acceptance ratio (BAR) method based on a serial insertion scheme. We compared the results obtained with TI simulations with soft-core potentials and Hamiltonian replica exchange simulations with the serial insertion method combined with the BAR method. The results show that the difference between the two methods can be mainly attributed to the van der Waals free energies, suggesting that either the simulations used for TI or the simulations used for BAR, or both, are not fully converged, and that the two sets of simulations may have sampled different phase-space regions. The penalty scores of the force field parameters of the 10 guest molecules, provided by the CHARMM Generalized Force Field, can be an indicator of the accuracy of the binding free energy calculations. Among our submissions, the combination of docking and TI performed best, yielding a root mean square deviation of 2.94 kcal/mol and an average unsigned error of 3.41 kcal/mol for the ten guest molecules. These values were the best overall among all participants. However, our submissions had little correlation with experiments. PMID:27677749
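The TI estimate referred to above is, at bottom, a quadrature: the free-energy difference is the integral of ⟨∂U/∂λ⟩ over the coupling parameter λ. The sketch below shows only that final integration step on analytic toy data, not the SAMPL5 simulation workflow.

```python
def thermodynamic_integration(lambdas, du_dlambda_means):
    """Trapezoidal integration of <dU/dlambda> over the coupling
    parameter lambda -- the TI estimate of the free-energy difference."""
    dG = 0.0
    for i in range(len(lambdas) - 1):
        width = lambdas[i + 1] - lambdas[i]
        dG += 0.5 * (du_dlambda_means[i] + du_dlambda_means[i + 1]) * width
    return dG

# Toy data: if <dU/dlambda> = 2 + 3*lambda exactly, the integral over
# [0, 1] is 2 + 3/2 = 3.5, and the trapezoid rule is exact for a line.
lams = [0.0, 0.25, 0.5, 0.75, 1.0]
means = [2.0 + 3.0 * lam for lam in lams]
dG = thermodynamic_integration(lams, means)
```

In practice each ⟨∂U/∂λ⟩ comes from a separate equilibrium simulation at fixed λ, and soft-core potentials keep the integrand finite at the endpoints where the guest is being decoupled.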

  18. Release Storage and Disposal Program Product Sampling Support

    International Nuclear Information System (INIS)

    CALMUS, R.B.

    2000-01-01

    This document includes recommended capabilities and/or services to support transport, analysis, and disposition of Immobilized High-Level and Low-Activity Waste samples as requested by the US DOE-Office of River Protection (DOE-ORP) as specified in the Privatization Contract between DOE-ORP and BNFL Inc. In addition, an approved implementation path forward is presented which includes use of existing Hanford Site services to provide the required support capabilities

  19. Structure-based sampling and self-correcting machine learning for accurate calculations of potential energy surfaces and vibrational levels

    Science.gov (United States)

    Dral, Pavlo O.; Owens, Alec; Yurchenko, Sergei N.; Thiel, Walter

    2017-06-01

    We present an efficient approach for generating highly accurate molecular potential energy surfaces (PESs) using self-correcting, kernel ridge regression (KRR) based machine learning (ML). We introduce structure-based sampling to automatically assign nuclear configurations from a pre-defined grid to the training and prediction sets, respectively. Accurate high-level ab initio energies are required only for the points in the training set, while the energies for the remaining points are provided by the ML model with negligible computational cost. The proposed sampling procedure is shown to be superior to random sampling and also eliminates the need for training several ML models. Self-correcting machine learning has been implemented such that each additional layer corrects errors from the previous layer. The performance of our approach is demonstrated in a case study on a published high-level ab initio PES of methyl chloride with 44 819 points. The ML model is trained on sets of different sizes and then used to predict the energies for tens of thousands of nuclear configurations within seconds. The resulting datasets are utilized in variational calculations of the vibrational energy levels of CH3Cl. By using both structure-based sampling and self-correction, the size of the training set can be kept small (e.g., 10% of the points) without any significant loss of accuracy. In ab initio rovibrational spectroscopy, it is thus possible to reduce the number of computationally costly electronic structure calculations through structure-based sampling and self-correcting KRR-based machine learning by up to 90%.
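The KRR machinery underlying the ML model is compact enough to sketch: fit coefficients by solving (K + αI)c = y for a Gaussian kernel matrix K, then predict as a kernel-weighted sum over training points. Below is a generic one-dimensional toy "PES" (sin x), not the authors' implementation or their structure-based sampling.

```python
import math

def rbf(x1, x2, sigma):
    """Gaussian (RBF) kernel."""
    return math.exp(-(x1 - x2) ** 2 / (2.0 * sigma ** 2))

def solve(A, b):
    """Gaussian elimination with partial pivoting (small dense systems)."""
    n = len(A)
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c]
                              for c in range(r + 1, n))) / M[r][r]
    return x

def krr_fit(xs, ys, sigma=0.5, alpha=1e-8):
    """Solve (K + alpha*I) c = y for the KRR coefficients."""
    n = len(xs)
    K = [[rbf(xs[i], xs[j], sigma) + (alpha if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    return solve(K, ys)

def krr_predict(x, xs, coeffs, sigma=0.5):
    return sum(c * rbf(x, xi, sigma) for c, xi in zip(coeffs, xs))

# Toy "PES": learn sin(x) from a sparse grid, predict off-grid
train_x = [i * 0.5 for i in range(13)]          # 0.0 .. 6.0
coeffs = krr_fit(train_x, [math.sin(x) for x in train_x])
pred = krr_predict(3.25, train_x, coeffs)
```

Prediction is just a weighted sum of kernels, which is why, as the abstract notes, evaluating tens of thousands of geometries costs almost nothing once the (small) training set has been fitted.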

  20. Self-consistent RPA calculations with Skyrme-type interactions: The skyrme_rpa program

    Science.gov (United States)

    Colò, Gianluca; Cao, Ligang; Van Giai, Nguyen; Capelli, Luigi

    2013-01-01

    Random Phase Approximation (RPA) calculations are nowadays an indispensable tool in nuclear physics studies. We present here a complete version implemented with Skyrme-type interactions, with the spherical symmetry assumption, that can be used in cases where the effects of pairing correlations and of deformation can be ignored. The full self-consistency between the Hartree-Fock mean field and the RPA excitations is enforced, and it is numerically controlled by comparison with energy-weighted sum rules. The main limitations are that charge-exchange excitations and transitions involving spin operators are not included in this version. Program summary: Program title: skyrme_rpa (v 1.00) Catalogue identifier: AENF_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AENF_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 5531 No. of bytes in distributed program, including test data, etc.: 39435 Distribution format: tar.gz Programming language: FORTRAN-90/95; easily downgradable to FORTRAN-77. Computer: PC with Intel Celeron, Intel Pentium, AMD Athlon and Intel Core Duo processors. Operating system: Linux, Windows. RAM: From 4 MBytes to 150 MBytes, depending on the size of the nucleus and of the model space for RPA. Word size: The code is written with a prevalent use of double precision or REAL(8) variables; this assures 15 significant digits. Classification: 17.24. Nature of problem: Systematic observations of excitation properties in finite nuclear systems can lead to improved knowledge of the nuclear matter equation of state as well as a better understanding of the effective interaction in the medium. 
This is the case of the nuclear giant resonances and low-lying collective excitations, which can be described as small amplitude collective motions in the framework of

  1. A computer program incorporating Pitzer's equations for calculation of geochemical reactions in brines

    Science.gov (United States)

    Plummer, Niel; Parkhurst, D.L.; Fleming, G.W.; Dunkle, S.A.

    1988-01-01

    The program named PHRQPITZ is a computer code capable of making geochemical calculations in brines and other electrolyte solutions at high concentrations using the Pitzer virial-coefficient approach for activity-coefficient corrections. Reaction-modeling capabilities include calculation of (1) aqueous speciation and mineral-saturation indices, (2) mineral solubility, (3) mixing and titration of aqueous solutions, (4) irreversible reactions and mineral-water mass transfer, and (5) reaction paths. The computed results for each aqueous solution include the osmotic coefficient, water activity, mineral saturation indices, mean activity coefficients, total activity coefficients, and scale-dependent values of pH, individual-ion activities and individual-ion activity coefficients. A database of Pitzer interaction parameters is provided at 25 °C for the system Na-K-Mg-Ca-H-Cl-SO4-OH-HCO3-CO3-CO2-H2O, and extended to include largely untested literature data for Fe(II), Mn(II), Sr, Ba, Li, and Br, with provision for calculations at temperatures other than 25 °C. An extensive review of published Pitzer interaction parameters for many inorganic salts is given. Also described is an interactive input code for PHRQPITZ called PITZINPT. (USGS)
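Of the quantities listed, the mineral saturation index is the simplest to illustrate: SI = log₁₀(IAP) − log₁₀(Ksp), where the ion-activity product IAP is built from the (activity-corrected) ion activities. The sketch below shows only that bookkeeping step, with illustrative activities; the hard part that PHRQPITZ actually does, computing the activities themselves via the Pitzer equations, is not reproduced here.

```python
import math

def saturation_index(ion_activities, stoich, log_ksp):
    """SI = log10(IAP) - log10(Ksp). SI > 0 means supersaturated,
    SI < 0 undersaturated, SI = 0 equilibrium."""
    log_iap = sum(n * math.log10(ion_activities[ion])
                  for ion, n in stoich.items())
    return log_iap - log_ksp

# Gypsum, CaSO4.2H2O: IAP = a(Ca+2) * a(SO4-2) * a(H2O)^2.
# Activities below are assumed; log Ksp = -4.58 is a commonly
# quoted 25 C value for gypsum.
si = saturation_index(
    {"Ca+2": 1.0e-2, "SO4-2": 1.0e-2, "H2O": 1.0},
    {"Ca+2": 1, "SO4-2": 1, "H2O": 2},
    log_ksp=-4.58)
```

With both ion activities at 10⁻² the solution is supersaturated with respect to gypsum (SI ≈ +0.58), so a reaction-path model would precipitate gypsum until SI returns to zero.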

  2. A program system for ab initio MO calculations on vector and parallel processing machines. Pt. 1

    International Nuclear Information System (INIS)

    Ernenwein, R.; Rohmer, M.M.; Benard, M.

    1990-01-01

    We present a program system for ab initio molecular orbital calculations on vector and parallel computers. The present article is devoted to the computation of one- and two-electron integrals over contracted Gaussian basis sets involving s-, p-, d- and f-type functions. The McMurchie and Davidson (MMD) algorithm has been implemented and parallelized by distributing over a limited number of logical tasks the calculation of the 55 relevant classes of integrals. All sections of the MMD algorithm have been efficiently vectorized, leading to a scalar/vector ratio of 5.8. Different algorithms are proposed and compared for an optimal vectorization of the contraction of the 'intermediate integrals' generated by the MMD formalism. Advantage is taken of the dynamic storage allocation for tuning the length of the vector loops (i.e. the size of the vectorization buffer) as a function of (i) the total memory available for the job, (ii) the number of logical tasks defined by the user (≤13), and (iii) the storage requested by each specific class of integrals. Test calculations carried out on a CRAY-2 computer show that the average number of finite integrals computed over a (s, p, d, f) CGTO basis set is about 1180000 per second and per processor. The combination of vectorization and parallelism on this 4-processor machine reduces the CPU time by a factor larger than 20 with respect to the scalar and sequential performance. (orig.)

  3. Effectiveness of a Clinical Skills Workshop for drug-dosage calculation in a nursing program.

    Science.gov (United States)

    Grugnetti, Anna Maria; Bagnasco, Annamaria; Rosa, Francesca; Sasso, Loredana

    2014-04-01

    Mathematical and calculation skills are widely acknowledged as key nursing competences if patients are to receive care that is both effective and safe. Indeed, weaknesses in mathematical competence may lead to the administration of miscalculated drug doses, which in turn may harm or endanger patients' lives. However, little attention has been given to identifying appropriate teaching and learning strategies that will effectively facilitate the development of these skills in nurses. One such approach may be simulation. The aim was to evaluate the effectiveness of a Clinical Skills Workshop on drug administration that focused on improving the drug-dosage calculation skills of second-year nursing students, with a view to promoting safety in drug administration. A descriptive pre-post-test design was used, in an educational simulation center. The sample included 77 nursing students from a Northern Italian university who attended a 30-hour Clinical Skills Workshop over a period of two weeks. The workshop covered integrated teaching strategies and innovative drug-calculation methodologies which have been described to improve psychomotor skills and build cognitive abilities through a greater understanding of the mathematics linked to clinical practice. Study results showed a significant improvement between the pre- and post-test phases, after the intervention. Pre-test scores ranged between 0 and 25 out of a maximum of 30 points, with a mean score of 15.96 (SD 4.85) and a median score of 17. Post-test scores ranged between 15 and 30 out of 30, with a mean score of 25.2 (SD 3.63) and a median score of 26. The study shows that Clinical Skills Workshops may be tailored to include teaching techniques that encourage the development of drug-dosage calculation skills, and that training strategies implemented during a Clinical Skills Workshop can enhance students' comprehension of mathematical calculations. Copyright © 2013 Elsevier Ltd. All rights reserved.
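The drug-dosage arithmetic such workshops drill can be illustrated with the standard "desired over available" formula and a simple infusion-rate calculation. This is a generic example; the paper does not publish its actual exercises, and the drug quantities below are made up.

```python
def dose_volume_ml(prescribed_mg, stock_mg, stock_ml):
    """Volume to draw up = (desired dose / available dose) * stock volume."""
    return prescribed_mg / stock_mg * stock_ml

def infusion_rate_ml_per_h(total_ml, hours):
    """Pump setting for a volume infused over a given time."""
    return total_ml / hours

vol = dose_volume_ml(250.0, 1000.0, 10.0)   # 250 mg from a 1 g / 10 mL vial
rate = infusion_rate_ml_per_h(500.0, 4.0)   # 500 mL bag over 4 h
```

Here 250 mg from a 1 g/10 mL vial requires 2.5 mL, and a 500 mL bag over 4 h runs at 125 mL/h; errors in exactly this kind of ratio reasoning are what the cited pre/post-test measured.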

  4. Vectorization and parallelization of Monte-Carlo programs for calculation of radiation transport

    International Nuclear Information System (INIS)

    Seidel, R.

    1995-01-01

    The versatile MCNP-3B Monte Carlo code, written in FORTRAN 77 for the simulation of the radiation transport of neutral particles, has been subjected to vectorization and parallelization of essential parts, without touching its versatility. The vectorization is not dependent on a specific computer. Several sample tasks have been selected in order to test the vectorized MCNP-3B code against the scalar MCNP-3B code. The samples are representative examples of the 3-D calculations to be performed for the simulation of radiation transport in neutron and reactor physics: (1) a 4π neutron detector; (2) a high-energy calorimeter; (3) the PROTEUS benchmark (conversion rates and neutron multiplication factors for the HCLWR (High Conversion Light Water Reactor)). (orig./HP) [de

  5. ParShield: A computer program for calculating attenuation parameters of the gamma rays and the fast neutrons

    International Nuclear Information System (INIS)

    Elmahroug, Y.; Tellili, B.; Souga, C.; Manai, K.

    2015-01-01

    Highlights: • Description of the theoretical method used by the ParShield program. • Description of the ParShield program. • Test and validation of the ParShield program. - Abstract: This study presents a new computer program called ParShield which determines neutron and gamma-ray shielding parameters. The program can calculate the total mass attenuation coefficients (μ_t), the effective atomic numbers (Z_eff) and the effective electron densities (N_eff) for gamma rays, and it can also calculate the effective removal cross-sections (Σ_R) for fast neutrons, for mixtures and compounds. The results obtained for gamma rays using ParShield were compared with the results calculated by the WinXcom program and with measured results. The obtained values of Σ_R were tested by comparing them with measured results, manually calculated results, and results obtained with the MERCSFN program, and excellent agreement was found between them. ParShield can be used as a fast and effective tool for choosing and comparing shielding materials, especially for the determination of Z_eff and N_eff, for which no other program in the literature provides calculations.
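The total mass attenuation coefficient of a compound or mixture is conventionally obtained from the elemental coefficients by the mixture (Bragg additivity) rule, (μ/ρ)_mix = Σᵢ wᵢ (μ/ρ)ᵢ. The sketch below applies that rule to water at 1 MeV; the elemental coefficients are approximate literature-style values, not output of ParShield or WinXcom.

```python
def mixture_mass_attenuation(weight_fractions, elemental_mu_rho):
    """Mixture rule: (mu/rho)_mix = sum_i w_i * (mu/rho)_i, in cm2/g.
    weight_fractions must sum to 1."""
    return sum(w * elemental_mu_rho[el]
               for el, w in weight_fractions.items())

# Approximate mass attenuation coefficients at 1 MeV, cm2/g
mu_rho_1MeV = {"H": 0.1263, "O": 0.0636}
# Weight fractions of H and O in water
water = {"H": 0.1119, "O": 0.8881}
mu_rho_water = mixture_mass_attenuation(water, mu_rho_1MeV)
```

The same weighted-sum pattern, with removal cross-sections per unit mass in place of μ/ρ, gives the effective fast-neutron removal cross-section Σ_R of a mixture.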

  6. PERL-2 and LAVR-2 programs for Monte Carlo calculation of reactivity disturbances with trajectory correlation using random numbers

    International Nuclear Information System (INIS)

    Kamaeva, O.B.; Polevoj, V.B.

    1983-01-01

    The realization on a BESM-6 computer of a technique for calculating a wide class of reactivity disturbances by plotting trajectories in undisturbed and disturbed systems using one sequence of random numbers is described. The technique was implemented on the basis of earlier programs for calculating widespread (PERL) and local (LAVR) reactivity disturbances. The efficiency of the technique and programs is demonstrated by calculating the change in the effective neutron multiplication factor when an absorber is substituted for a fuel element in a BFS-40 critical assembly, and by calculating control drum characteristics

  7. Hanford Environmental Monitoring Program schedule for samples, analyses, and measurements for calendar year 1985

    International Nuclear Information System (INIS)

    Blumer, P.J.; Price, K.R.; Eddy, P.A.; Carlile, J.M.V.

    1984-12-01

    This report provides the CY 1985 schedule of data collection for the routine Hanford Surface Environmental Monitoring and Ground-Water Monitoring Programs at the Hanford Site. The purpose is to evaluate and report the levels of radioactive and nonradioactive pollutants in the Hanford environs, as required in DOE Order 5484.1. The routine sampling schedule provided herein does not include samples scheduled to be collected during FY 1985 in support of special studies, special contractor support programs, or for quality control purposes. In addition, the routine program outlined in this schedule is subject to modification during the year in response to changes in site operations, program requirements, or unusual sample results

  8. Influence of sampling frequency and load calculation methods on quantification of annual river nutrient and suspended solids loads.

    Science.gov (United States)

    Elwan, Ahmed; Singh, Ranvir; Patterson, Maree; Roygard, Jon; Horne, Dave; Clothier, Brent; Jones, Geoffrey

    2018-01-11

    Better management of water quality in streams, rivers and lakes requires precise and accurate estimates of contaminant loads. We assessed four sampling frequencies (2 days, weekly, fortnightly and monthly) and five load calculation methods (global mean (GM), rating curve (RC), ratio estimator (RE), flow-stratified (FS) and flow-weighted (FW)) to quantify loads of nitrate-nitrogen (NO3−-N), soluble inorganic nitrogen (SIN), total nitrogen (TN), dissolved reactive phosphorus (DRP), total phosphorus (TP) and total suspended solids (TSS) in the Manawatu River, New Zealand. The estimated annual river loads were compared to reference 'true' loads, calculated using daily measurements of flow and water quality from May 2010 to April 2011, to quantify bias (i.e. accuracy) and root mean square error, RMSE (i.e. accuracy and precision). The GM method resulted in relatively higher RMSE values and a consistent negative bias (i.e. underestimation) in estimates of annual river loads across all sampling frequencies. The RC method resulted in the lowest RMSE for TN, TP and TSS at monthly sampling frequency, yet RC highly overestimated the loads for parameters that showed a dilution effect, such as NO3−-N and SIN. The FW and RE methods gave similar results, and there was no essential improvement in using RE over FW. In general, FW and RE performed better than FS in terms of bias, but FS performed slightly better than FW and RE in terms of RMSE for most of the water quality parameters (DRP, TP, TN and TSS) using a monthly sampling frequency. We found no significant decrease in RMSE values for estimates of NO3−-N, SIN, TN and DRP loads when the sampling frequency was increased from monthly to fortnightly. The bias and RMSE values in estimates of TP and TSS loads (estimated by FW, RE and FS), however, showed a significant decrease in the case of weekly or 2-day sampling. This suggests potential for a higher sampling frequency during flow peaks for more precise
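
Of the five estimators compared, the flow-weighted (FW) method is perhaps the simplest to state: scale the flow-weighted mean concentration of the grab samples by the mean of the continuous flow record. A minimal sketch under assumed units (concentration in g/m³ = mg/L, flow in m³/s), not the authors' code:

```python
def flow_weighted_load(conc_samples, flow_at_samples, daily_flows):
    """Annual load by the flow-weighted (FW) method: flow-weighted
    mean concentration of the grab samples, scaled by the mean of
    the continuous daily-flow record. Returns tonnes per year."""
    cfw = (sum(c * q for c, q in zip(conc_samples, flow_at_samples))
           / sum(flow_at_samples))          # flow-weighted mean concentration
    mean_q = sum(daily_flows) / len(daily_flows)
    grams_per_year = cfw * mean_q * 365 * 86400
    return grams_per_year / 1e6             # g -> tonnes
```

With constant concentration and flow the estimate reduces to concentration × flow × time, as expected.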

  9. Time-dependent importance sampling in semiclassical initial value representation calculations for time correlation functions. II. A simplified implementation.

    Science.gov (United States)

    Tao, Guohua; Miller, William H

    2012-09-28

    An efficient time-dependent (TD) Monte Carlo (MC) importance sampling method has recently been developed [G. Tao and W. H. Miller, J. Chem. Phys. 135, 024104 (2011)] for the evaluation of time correlation functions using the semiclassical (SC) initial value representation (IVR) methodology. In this TD-SC-IVR method, the MC sampling uses information from both time-evolved phase points as well as their initial values, and only the "important" trajectories are sampled frequently. Even though the TD-SC-IVR was shown in some benchmark examples to be much more efficient than the traditional time-independent sampling method (which uses only initial conditions), the calculation of the SC prefactor-which is computationally expensive, especially for large systems-is still required for accepted trajectories. In the present work, we present an approximate implementation of the TD-SC-IVR method that is completely prefactor-free; it gives the time correlation function as a classical-like magnitude function multiplied by a phase function. Application of this approach to flux-flux correlation functions (which yield reaction rate constants) for the benchmark H + H2 system shows very good agreement with exact quantum results. Limitations of the approximate approach are also discussed.

  10. Development of selective photoionization spectroscopy technology - Development of a computer program to calculate selective ionization of atoms with multistep processes

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Young Soon; Nam, Baek Il [Myongji University, Seoul (Korea, Republic of)

    1995-08-01

    We have developed computer programs to calculate 2- and 3-step selective resonant multiphoton ionization of atoms. Autoionization resonances in the final continuum can be taken into account via the B-spline basis set method. 8 refs., 5 figs. (author)

  11. MOST-7 program for calculation of nonstationary operation modes of the nuclear steam generating plant with WWER

    International Nuclear Information System (INIS)

    Mysenkov, A.I.

    1979-01-01

    The MOST-7 program, intended for calculating nonstationary emergency modes of a nuclear steam generating plant (NSGP) with a WWER reactor, is considered in detail. The program consists of the main MOST-7 subprogram, two main subprograms and 98 subprogram functions. The MOST-7 program is written in FORTRAN and runs on the BESM-6 computer. Program storage on the BESM-6 amounts to 73400 words. Primary information is input into the program by means of an input operator reading punched cards and the DATA operator. Parameter lists, introduced both from punched cards and by means of the DATA operator, are tabulated. The procedure for outputting calculational results to printing and plotting devices is considered. An example is given of calculating the nonstationary process related to the loss of power in six main circulating pumps for an NSGP with the WWER-440 reactor

  12. A program for calculating load coefficient matrices utilizing the force summation method, L218 (LOADS). Volume 1: Engineering and usage

    Science.gov (United States)

    Miller, R. D.; Anderson, L. R.

    1979-01-01

    The LOADS program L218, a digital computer program that calculates dynamic load coefficient matrices utilizing the force summation method, is described. The load equations are derived for a flight vehicle in straight and level flight and excited by gusts and/or control motions. In addition, sensor equations are calculated for use with an active control system. The load coefficient matrices are calculated for the following types of loads: translational and rotational accelerations, velocities, and displacements; panel aerodynamic forces; net panel forces; shears and moments. Program usage and a brief description of the analysis used are presented. A description of the design and structure of the program to aid those who will maintain and/or modify the program in the future is included.

  13. Guide for licensing evaluations using CRAC2: A computer program for calculating reactor accident consequences

    International Nuclear Information System (INIS)

    White, J.E.; Roussin, R.W.; Gilpin, H.

    1988-12-01

    A version of the CRAC2 computer code applicable for use in analyses of consequences and risks of reactor accidents in case work for environmental statements has been implemented for use on the Nuclear Regulatory Commission Data General MV/8000 computer system. Input preparation is facilitated through the use of an interactive computer program which operates on an IBM personal computer. The resulting CRAC2 input deck is transmitted to the MV/8000 by using an error-free file transfer mechanism. To facilitate the use of CRAC2 at NRC, relevant background material on input requirements and model descriptions has been extracted from four reports: "Calculations of Reactor Accident Consequences," Version 2, NUREG/CR-2326 (SAND81-1994); "CRAC2 Model Descriptions," NUREG/CR-2552 (SAND82-0342); "CRAC Calculations for Accident Sections of Environmental Statements," NUREG/CR-2901 (SAND82-1693); and "Sensitivity and Uncertainty Studies of the CRAC2 Computer Code," NUREG/CR-4038 (ORNL-6114). When this background information is combined with instructions on the input processor, this report provides a self-contained guide for preparing CRAC2 input data with a specific orientation toward applications on the MV/8000. 8 refs., 11 figs., 10 tabs

  14. Dose calculation algorithm for the Department of Energy Laboratory Accreditation Program

    International Nuclear Information System (INIS)

    Moscovitch, M.; Tawil, R.A.; Thompson, D.; Rhea, T.A.

    1991-01-01

    The dose calculation algorithm for a symmetric four-element LiF:Mg,Ti based thermoluminescent dosimeter is presented. The algorithm is based on the parameterization of the response of the dosimeter when exposed to both pure and mixed fields of various types and compositions. The experimental results were then used to develop the algorithm as a series of empirical response functions. Experiments to determine the response of the dosimeter and to test the dose calculation algorithm were performed according to the standard established by the Department of Energy Laboratory Accreditation Program (DOELAP). The test radiation fields include 137Cs gamma rays, 90Sr/90Y and 204Tl beta particles, low-energy photons of 20-120 keV and moderated 252Cf neutron fields. The accuracy of the system has been demonstrated in an official DOELAP blind test conducted at Sandia National Laboratory. The test results were well within DOELAP tolerance limits. The results of this test are presented and discussed

  15. Hauser-Feshbach cross-section calculations for elastic and inelastic scattering of alpha particles-program CORA

    International Nuclear Information System (INIS)

    Hartman, A.; Siemaszko, M.; Zipper, W.

    1975-01-01

    The program CORA was prepared on the basis of the Hauser-Feshbach compound-reaction formalism. It allows the differential cross-section distributions for the elastic and inelastic scattering of alpha particles (via compound nucleus states) to be calculated. The transmission coefficients are calculated on the basis of a four-parameter optical model. A search procedure is also included. (author)

  16. Accounting for Uncertainty and Time Lags in Equivalency Calculations for Offsetting in Aquatic Resources Management Programs

    Science.gov (United States)

    Bradford, Michael J.

    2017-10-01

    Biodiversity offset programs attempt to minimize unavoidable environmental impacts of anthropogenic activities by requiring offsetting measures in sufficient quantity to counterbalance losses due to the activity. Multipliers, or offsetting ratios, have been used to increase the amount of offsets to account for uncertainty but those ratios have generally been derived from theoretical or ad-hoc considerations. I analyzed uncertainty in the offsetting process in the context of offsetting for impacts to freshwater fisheries productivity. For aquatic habitats I demonstrate that an empirical risk-based approach for evaluating prediction uncertainty is feasible, and if data are available appropriate adjustments to offset requirements can be estimated. For two data-rich examples I estimate multipliers in the range of 1.5:1 - 2.5:1 are sufficient to account for the uncertainty in the prediction of gains and losses. For aquatic habitats adjustments for time delays in the delivery of offset benefits can also be calculated and are likely smaller than those for prediction uncertainty. However, the success of a biodiversity offsetting program will also depend on the management of the other components of risk not addressed by these adjustments.

  17. Accounting for Uncertainty and Time Lags in Equivalency Calculations for Offsetting in Aquatic Resources Management Programs.

    Science.gov (United States)

    Bradford, Michael J

    2017-10-01

    Biodiversity offset programs attempt to minimize unavoidable environmental impacts of anthropogenic activities by requiring offsetting measures in sufficient quantity to counterbalance losses due to the activity. Multipliers, or offsetting ratios, have been used to increase the amount of offsets to account for uncertainty but those ratios have generally been derived from theoretical or ad-hoc considerations. I analyzed uncertainty in the offsetting process in the context of offsetting for impacts to freshwater fisheries productivity. For aquatic habitats I demonstrate that an empirical risk-based approach for evaluating prediction uncertainty is feasible, and if data are available appropriate adjustments to offset requirements can be estimated. For two data-rich examples I estimate multipliers in the range of 1.5:1 - 2.5:1 are sufficient to account for the uncertainty in the prediction of gains and losses. For aquatic habitats adjustments for time delays in the delivery of offset benefits can also be calculated and are likely smaller than those for prediction uncertainty. However, the success of a biodiversity offsetting program will also depend on the management of the other components of risk not addressed by these adjustments.

  18. Calculation of gamma-rays and fast neutrons fluxes with the program Mercure-4

    International Nuclear Information System (INIS)

    Baur, A.; Dupont, C.; Totth, B.

    1978-01-01

    The program MERCURE-4 evaluates gamma-ray or fast-neutron attenuation through laminated or bulky three-dimensional shields. The method used is the line-of-sight point attenuation kernel, scattered rays being taken into account by means of build-up factors for gamma rays and removal cross-sections for fast neutrons. The integration of the point kernel over the range of sources distributed in space and energy is performed by the Monte Carlo method, with automatic adjustment of the importance functions. Since becoming operational, the program MERCURE-4 has been used intensively for many varied problems, for example: - the calculation of gamma heating in reactor cores, control rods and shielding screens, as well as in experimental devices and irradiation loops; - the evaluation of fast neutron fluxes and corresponding damage in structural materials of reactors (vessel steels...); - the estimation of gamma dose rates on nuclear instrumentation in reactors, around reactor circuits and around spent fuel shipping casks
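
The line-of-sight point-kernel method evaluates, for each source point, exponential attenuation along the straight path to the detector, corrected by a build-up factor for scattered radiation. A minimal sketch for a single isotropic point source, using an illustrative linear build-up form B = 1 + a·μr (MERCURE-4 itself integrates this kernel over distributed sources by Monte Carlo):

```python
import math

def point_kernel_flux(S, mu, r, a=1.0):
    """Point-kernel estimate for an isotropic point source of
    strength S (photons/s) at distance r (cm) behind a shield with
    attenuation coefficient mu (1/cm): uncollided term
    S*exp(-mu*r)/(4*pi*r^2), multiplied by a build-up factor.
    The linear form B = 1 + a*mu*r is an illustrative choice."""
    mur = mu * r
    buildup = 1.0 + a * mur
    return S * buildup * math.exp(-mur) / (4.0 * math.pi * r * r)
```

Setting a = 0 recovers the uncollided inverse-square flux; a positive build-up coefficient increases the flux to account for in-scattered photons.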

  19. Interactive software tool to comprehend the calculation of optimal sequence alignments with dynamic programming.

    Science.gov (United States)

    Ibarra, Ignacio L; Melo, Francisco

    2010-07-01

    Dynamic programming (DP) is a general optimization strategy that is successfully used across various disciplines of science. In bioinformatics, it is widely applied in calculating the optimal alignment between pairs of protein or DNA sequences. These alignments form the basis of new, verifiable biological hypotheses. Despite its importance, there are no interactive tools available for training and education on understanding the DP algorithm. Here, we introduce an interactive computer application with a graphical interface for the purpose of educating students about DP. The program displays the DP scoring matrix and the resulting optimal alignment(s), while allowing the user to modify key parameters such as the values in the similarity matrix, the version of the sequence alignment algorithm and the gap opening/extension penalties. We hope that this software will be useful to teachers and students of bioinformatics courses, as well as researchers who implement the DP algorithm for diverse applications. The software is freely available at: http://melolab.org/sat. The software is written in the Java computer language, thus it runs on all major platforms and operating systems including Windows, Mac OS X and LINUX. All inquiries or comments about this software should be directed to Francisco Melo at fmelo@bio.puc.cl.
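
The DP scoring matrix that such a tool displays can be filled in a few lines. A minimal Needleman-Wunsch sketch with a fixed match/mismatch scheme and a linear gap penalty (the tool itself supports configurable similarity matrices and separate gap opening/extension penalties):

```python
def nw_score(a, b, match=1, mismatch=-1, gap=-2):
    """Fill the Needleman-Wunsch DP scoring matrix and return the
    optimal global alignment score of sequences a and b."""
    n, m = len(a), len(b)
    F = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):           # first column: all gaps
        F[i][0] = i * gap
    for j in range(1, m + 1):           # first row: all gaps
        F[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            F[i][j] = max(F[i - 1][j - 1] + s,  # align a[i-1] with b[j-1]
                          F[i - 1][j] + gap,    # gap in b
                          F[i][j - 1] + gap)    # gap in a
    return F[n][m]
```

Tracing back through the matrix from F[n][m] recovers the optimal alignment(s) themselves; only the score is computed here.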

  20. ACRO - a computer program for calculating organ doses from acute or chronic inhalation and ingestion of radionuclides

    International Nuclear Information System (INIS)

    Hirayama, Akio; Kishimoto, Yoichiro; Shinohara, Kunihiko.

    1978-01-01

    The computer program ACRO has been developed to calculate organ doses from acute or chronic inhalation and ingestion of radionuclides. The ICRP Task Group Lung Model (TGLM) was used for the inhalation model, and a simple one-compartment model for ingestion. This program is written in FORTRAN IV and can be executed with storage requirements of about 260 K bytes. (auth.)

  1. Programs OPTMAN and SHEMMAN Version 6 (1999) - Coupled-Channels optical model and collective nuclear structure calculation -

    Energy Technology Data Exchange (ETDEWEB)

    Chang, Jong Hwa; Lee, Jeong Yeon; Lee, Young Ouk; Sukhovitski, Efrem Sh [Korea Atomic Energy Research Institute, Taejeon (Korea)

    2000-01-01

    Programs SHEMMAN and OPTMAN (Version 6) have been developed for the determination of nuclear Hamiltonian parameters and for optical model calculations, respectively. The optical model calculations by OPTMAN, with coupling schemes built on wave functions of a non-axial soft rotator, are self-consistent, since the parameters of the nuclear Hamiltonian are determined by adjusting the energies of collective levels to experimental values with SHEMMAN prior to the optical model calculation. The programs have been installed at the Nuclear Data Evaluation Laboratory of KAERI. This report is intended as a brief manual for these codes. 43 refs., 9 figs., 1 tab. (Author)

  2. Test calculations of physical parameters of the TRX,BETTIS and MIT critical assemblies according to the TRIFON program

    International Nuclear Information System (INIS)

    Kochurov, B.P.

    1980-01-01

    Results are presented of calculations, performed with the program TRIFON, of physical parameters characterizing the TRX, MIT and BETTIS critical assemblies. The program TRIFON calculates the space-energy neutron distribution in the multigroup approximation in a multizone cylindrical cell. Comparisons of the TRX, BETTIS and MIT critical assembly parameters with experimental data and with results calculated by the Monte Carlo method are presented as well. Deviations of the parameters are in the range of 1.5-2 times the experimental errors. Data on the interference of uranium-238 levels in the resonant neutron absorption in the cell are given [ru

  3. Power and sample size calculations in the presence of phenotype errors for case/control genetic association studies

    Directory of Open Access Journals (Sweden)

    Finch Stephen J

    2005-04-01

    Full Text Available Abstract Background Phenotype error causes reduction in power to detect genetic association. We present a quantification of the effect of phenotype error, also known as diagnostic error, on power and sample size calculations for case-control genetic association studies between a marker locus and a disease phenotype. We consider the classic Pearson chi-square test for independence as our test of genetic association. To determine asymptotic power analytically, we compute the distribution's non-centrality parameter, which is a function of the case and control sample sizes, genotype frequencies, disease prevalence, and phenotype misclassification probabilities. We derive the non-centrality parameter in the presence of phenotype errors and equivalent formulas for misclassification cost (the percentage increase in minimum sample size needed to maintain constant asymptotic power at a fixed significance level for each percentage increase in a given misclassification parameter). We use a linear Taylor series approximation for the cost of phenotype misclassification to determine lower bounds for the relative costs of misclassifying a true affected (respectively, unaffected) as a control (respectively, case). Power is verified by computer simulation. Results Our major findings are that: (i) the median absolute difference between analytic power with our method and simulation power was 0.001 and the absolute difference was no larger than 0.011; (ii) as the disease prevalence approaches 0, the cost of misclassifying an unaffected as a case becomes infinitely large while the cost of misclassifying an affected as a control approaches 0. Conclusion Our work enables researchers to specifically quantify power loss and minimum sample size requirements in the presence of phenotype errors, thereby allowing for more realistic study design. For most diseases of current interest, verifying that cases are correctly classified is of paramount importance.
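
Given the non-centrality parameter, asymptotic power follows from the tail of the noncentral chi-square distribution. A minimal sketch (the lam values used in the test are arbitrary illustrations, not the paper's worked cases):

```python
from scipy.stats import chi2, ncx2

def chi2_power(lam, df=2, alpha=0.05):
    """Asymptotic power of the Pearson chi-square test of
    independence, given the non-centrality parameter lam (which,
    in the paper's setting, is a function of sample sizes, genotype
    frequencies, prevalence and misclassification probabilities)."""
    crit = chi2.ppf(1 - alpha, df)   # critical value under the null
    return ncx2.sf(crit, df, lam)    # P(noncentral chi-square > crit)
```

Power increases monotonically with lam, so phenotype errors, which shrink lam, directly translate into power loss at a fixed sample size.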

  4. How Many Conformations of Enzymes Should Be Sampled for DFT/MM Calculations? A Case Study of Fluoroacetate Dehalogenase

    Directory of Open Access Journals (Sweden)

    Yanwei Li

    2016-08-01

    Full Text Available The quantum mechanics/molecular mechanics (QM/MM) method (e.g., density functional theory (DFT)/MM) is important in elucidating enzymatic mechanisms. It is indispensable to study "multiple" conformations of enzymes to get unbiased energetic and structural results. One challenging problem, however, is to determine the minimum number of conformations for DFT/MM calculations. Here, we propose two convergence criteria, namely the Boltzmann-weighted average barrier and the disproportionate effect, to tentatively address this issue. The criteria were tested on the defluorination reaction catalyzed by fluoroacetate dehalogenase. The results suggest that at least 20 conformations of enzymatic residues are required for convergence in DFT/MM calculations. We also tested the correlation of energy barriers between small QM regions and big QM regions. A roughly positive correlation was found. This kind of correlation has not been reported in the literature. The correlation inspires us to propose a protocol for more efficient sampling, which saves 50% of the computational cost in our current case.
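
A Boltzmann-weighted average barrier is commonly computed as an exponential average over the per-conformation barriers, so that low-barrier (most reactive) conformations dominate. A minimal sketch under that assumption (not the authors' scripts):

```python
import math

R_KCAL = 0.0019872041  # gas constant, kcal/(mol*K)

def boltzmann_weighted_barrier(barriers, T=298.15):
    """Exponential (Boltzmann-weighted) average of per-conformation
    energy barriers (kcal/mol): -RT * ln(<exp(-E_i/RT)>)."""
    rt = R_KCAL * T
    avg = sum(math.exp(-e / rt) for e in barriers) / len(barriers)
    return -rt * math.log(avg)
```

With barriers of 10 and 20 kcal/mol the weighted value lies just above 10, reflecting the dominance of the most reactive conformation; monitoring how this quantity changes as conformations are added gives a natural convergence check.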

  5. Estimation of Finite Population Mean in Multivariate Stratified Sampling under Cost Function Using Goal Programming

    Directory of Open Access Journals (Sweden)

    Atta Ullah

    2014-01-01

    Full Text Available In practical utilization of a stratified random sampling scheme, the investigator faces the problem of selecting a sample that maximizes the precision of a finite population mean under a cost constraint. The allocation of sample size becomes complicated when more than one characteristic is observed from each selected unit in a sample. In many real-life situations, a linear cost function of the sample size nh is not a good approximation to the actual cost of a sample survey when the traveling cost between selected units in a stratum is significant. In this paper, the sample allocation problem in multivariate stratified random sampling with the proposed cost function is formulated as an integer nonlinear multiobjective mathematical programming problem. A solution procedure is proposed using an extended lexicographic goal programming approach. A numerical example is presented to illustrate the computational details and to compare the efficiency of the proposed compromise allocation.
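
The classical single-characteristic building block behind such formulations is optimum allocation under a linear cost function, where n_h is proportional to N_h·S_h/√c_h (Cochran); the paper generalizes this to several characteristics and a nonlinear travel-cost term via goal programming. A minimal sketch of the classical rule only:

```python
import math

def cost_optimal_allocation(N, S, c, n_total):
    """Classical optimum allocation of a total sample size n_total
    under a linear cost function: n_h proportional to
    N_h * S_h / sqrt(c_h), where N_h is the stratum size, S_h the
    stratum standard deviation and c_h the per-unit sampling cost."""
    w = [Nh * Sh / math.sqrt(ch) for Nh, Sh, ch in zip(N, S, c)]
    total = sum(w)
    return [round(n_total * wh / total) for wh in w]
```

Strata that are larger, more variable, or cheaper to sample receive more of the budget; with equal sizes, variances and costs the rule reduces to proportional allocation.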

  6. Package of programs for calculating accidents involving melting of the materials in a fast-reactor vessel

    International Nuclear Information System (INIS)

    Vlasichev, G.N.

    1994-01-01

    Methods for calculating the one-dimensional nonstationary temperature distribution in a system of physically coupled materials are described. Six computer programs developed for calculating accident processes during fast-reactor core melt are described in the article. The methods and computer programs take into account melting, solidification and, in some cases, vaporization of materials. The programs perform calculations for heterogeneous systems consisting of materials with arbitrary but constant composition and with heat transfer conditions at material boundaries. Additional modules provide calculations of specific conditions of heat transfer between materials, changes in these conditions and in the configuration of the materials as a result of coolant boiling, melting and movement of the fuel and structural materials, temperature dependences of the thermophysical properties of the materials, and heat release in the fuel. 11 refs., 3 figs
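
The one-dimensional nonstationary temperature calculation at the heart of such programs can be illustrated by a single explicit finite-difference step of the heat equation (phase change, variable properties and inter-material boundary conditions, which the described programs handle, are omitted here):

```python
def heat_step(T, alpha, dx, dt):
    """One explicit (FTCS) finite-difference step of the 1-D heat
    equation dT/dt = alpha * d2T/dx2, with fixed end temperatures.
    The scheme is stable only for r = alpha*dt/dx^2 <= 1/2."""
    r = alpha * dt / (dx * dx)
    assert r <= 0.5, "time step too large for explicit scheme"
    inner = [T[i] + r * (T[i - 1] - 2.0 * T[i] + T[i + 1])
             for i in range(1, len(T) - 1)]
    return [T[0]] + inner + [T[-1]]
```

Repeated application diffuses an initial hot spot toward the boundary temperatures; a uniform profile is a fixed point of the step.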

  7. TRIGLAV-W a Windows computer program package with graphical users interface for TRIGA reactor core management calculations

    International Nuclear Information System (INIS)

    Zagar, T.; Zefran, B.; Slavic, S.; Snoj, L.; Ravnik, M.

    2006-01-01

    TRIGLAV-W is a program package for reactor calculations of TRIGA Mark II research reactor cores. The package runs under the Microsoft Windows operating system and has a new, user-friendly graphical user interface (GUI). The main part of the package is the TRIGLAV code, based on a two-dimensional diffusion approximation for flux distribution calculation. The new GUI helps the user to prepare the input files, runs the main code and displays the output files. TRIGLAV-W also has a user-friendly GUI for the visualisation of the calculation results, which can be displayed as 2D and 3D coloured graphs for easy presentation and analysis. In the paper the many options of the new GUI are presented, along with the results of extensive testing of the program. The results of the TRIGLAV-W program package were compared with the results of the WIMS-D and MCNP codes for TRIGA benchmark calculations. The TRIGLAV-W program was also tested using several libraries developed under the IAEA WIMS-D Library Update Project. Additional literature and an application form for TRIGLAV-W beta testing can be found at http://www.rcp.ijs.si/triglav/. (author)

  8. A user's guide to LUGSAN II. A computer program to calculate and archive lug and sway brace loads for aircraft-carried stores

    Energy Technology Data Exchange (ETDEWEB)

    Dunn, W.N. [Sandia National Labs., Albuquerque, NM (United States). Mechanical and Thermal Environments Dept.

    1998-03-01

    LUG and Sway brace ANalysis (LUGSAN) II is an analysis and database computer program that is designed to calculate store lug and sway brace loads for aircraft captive carriage. LUGSAN II combines the rigid body dynamics code, SWAY85, with a Macintosh Hypercard database to function both as an analysis and archival system. This report describes the LUGSAN II application program, which operates on the Macintosh System (Hypercard 2.2 or later) and includes function descriptions, layout examples, and sample sessions. Although this report is primarily a user's manual, a brief overview of the LUGSAN II computer code is included with suggested resources for programmers.

  9. Use of methods for specifying the target difference in randomised controlled trial sample size calculations: Two surveys of trialists' practice.

    Science.gov (United States)

    Cook, Jonathan A; Hislop, Jennifer M; Altman, Doug G; Briggs, Andrew H; Fayers, Peter M; Norrie, John D; Ramsay, Craig R; Harvey, Ian M; Vale, Luke D

    2014-06-01

    In the most recent trial, the target difference was usually one viewed as important by a stakeholder group, mostly also viewed as a realistic difference given the interventions under evaluation, and sometimes one that led to an achievable sample size. The response rates achieved were relatively low despite the surveys being short and well presented and despite the use of reminders. Substantial variations in practice exist, with awareness, use, and willingness to recommend methods varying substantially. The findings support the view that sample size calculation is a more complex process than would appear to be the case from trial reports and protocols. Guidance on approaches to sample size estimation may increase both awareness and use of appropriate formal methods. © The Author(s), 2014.

  10. A program system for ab initio MO calculations on vector and parallel processing machines. Pt. 3

    International Nuclear Information System (INIS)

    Wiest, R.; Demuynck, J.; Benard, M.; Rohmer, M.M.; Ernenwein, R.

    1991-01-01

    This series of three papers presents a program system for ab initio molecular orbital calculations on vector and parallel computers. Part III is devoted to the four-index transformation, onto a molecular orbital basis of size NMO, of the file of two-electron integrals (pq‖rs) generated by a contracted Gaussian set of size NATO (number of atomic orbitals). A fast Yoshimine algorithm first sorts the (pq‖rs) integrals with respect to the index pq only. This file of half-sorted integrals, labelled by their rs-index, can be processed without further modification to generate either the transformed integrals or the supermatrix elements. The large memory available on the CRAY-2 has made it possible to implement the transformation algorithm proposed by Bender in 1972, which requires a core-storage allocation varying as (NATO)³. Two versions of Bender's algorithm are included in the present program. The first is an in-core version, in which the complete file of accumulated contributions to transformed integrals is stored and updated in central memory. This version has been parallelized by distributing over a limited number of logical tasks the NATO steps corresponding to the scanning of the outermost loop. The second is an out-of-core version, in which twin files are used alternately as input and output for the accumulated contributions to transformed integrals. This version is not parallel. The choice of one version or the other and (for version 1) the determination of the number of tasks depend upon the balance between the available and the requested amounts of storage. The storage management and the choice of the proper version are carried out automatically using dynamic storage allocation. Both versions are vectorized and take advantage of the molecular symmetry. (orig.)
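
Stripped of the sorting, symmetry and out-of-core machinery, the four-index transformation is four sequential quarter-transformations with O(N⁵) cost. A minimal dense-array sketch in NumPy (illustrative only; the paper's code works on sorted integral files, not in-memory arrays):

```python
import numpy as np

def ao2mo(eri_ao, C):
    """Four-index transformation of two-electron integrals from the
    atomic-orbital to the molecular-orbital basis, performed as four
    sequential quarter-transformations (naive O(N^5) algorithm)."""
    t = np.einsum('pi,pqrs->iqrs', C, eri_ao)   # transform index p
    t = np.einsum('qj,iqrs->ijrs', C, t)        # transform index q
    t = np.einsum('rk,ijrs->ijks', C, t)        # transform index r
    return np.einsum('sl,ijks->ijkl', C, t)     # transform index s
```

Doing the four quarter-transformations sequentially, rather than one quadruple contraction, is exactly what reduces the naive O(N⁸) cost to O(N⁵).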

  11. Software documentation and user's manual for fish-impingement sampling design and estimation method computer programs

    International Nuclear Information System (INIS)

    Murarka, I.P.; Bodeau, D.J.

    1977-11-01

    This report contains a description of three computer programs that implement the theory of sampling designs and the methods for estimating fish-impingement at the cooling-water intakes of nuclear power plants as described in companion report ANL/ES-60. Complete FORTRAN listings of these programs, named SAMPLE, ESTIMA, and SIZECO, are given and augmented with examples of how they are used

  12. Master schedule for CY-1984 Hanford environmental surveillance routine sampling program

    International Nuclear Information System (INIS)

    Blumer, P.J.; Price, K.R.; Eddy, P.A.; Carlile, J.M.V.

    1983-12-01

    This report provides the current schedule of data collection for the routine Hanford environmental surveillance and ground-water Monitoring Programs at the Hanford Site. The purpose is to evaluate and report the levels of radioactive and nonradioactive pollutants in the Hanford environs. The routine sampling schedule provided herein does not include samples that are planned to be collected during FY-1984 in support of special studies, special contractor support programs, or for quality control purposes

  13. 76 FR 41186 - Salmonella Verification Sampling Program: Response to Comments on New Agency Policies and...

    Science.gov (United States)

    2011-07-13

    ... Service [Docket No. FSIS-2008-0008] Salmonella Verification Sampling Program: Response to Comments on New Agency Policies and Clarification of Timeline for the Salmonella Initiative Program (SIP) AGENCY: Food... Federal Register notice (73 FR 4767- 4774), which described upcoming policy changes in the FSIS Salmonella...

  14. A computer program for accident calculations of a standard pressurized water reactor

    International Nuclear Information System (INIS)

    Keutner, H.

    1979-01-01

    This computer program models the dynamics of a standard pressurized water reactor, representing both circulation loops with all important components. All phenomena relevant to the calculation of disturbances are taken into account, so that a realistic process can be described for some minutes after a disturbance or a desired change of condition. To optimize computing time, simplifications are introduced in the formulation of the differential-algebraic equation system in such a way that all important effects are retained. The model analysis follows the heat production from the fuel rod via the cladding material to the cooling water and considers the delay time from the core to the steam generator. Changes of the coolant pressure, as well as the different temperatures in the primary loop, influence the pressurizing system - the pressurizer - which is modelled as a water zone and a steam zone with saturated and superheated steam, or saturated and subcooled water, together with injection, heating and blow-down devices. The balance of the steam generator towards the secondary loop represents the process-engineering equipment; thereby the control of the steam pressure and of the reactor power is modelled. (orig.) [de

  15. [Development and effectiveness of a drug dosage calculation training program using cognitive loading theory based on smartphone application].

    Science.gov (United States)

    Kim, Myoung Soo; Park, Jung Ha; Park, Kyung Yeon

    2012-10-01

    This study was done to develop and evaluate a drug dosage calculation training program using cognitive load theory, based on a smartphone application. Calculation ability, dosage-calculation-related self-efficacy, and anxiety were measured. A nonequivalent control group design was used. A smartphone application and a handout for self-study were developed and administered to the experimental group, while only a handout was provided to the control group. The intervention period was 4 weeks. Data were analyzed using descriptive analysis, χ²-test, t-test, and ANCOVA with SPSS 18.0. The experimental group showed greater 'self-efficacy for drug dosage calculation' than the control group (t=3.82). Results indicate that the smartphone application is effective in improving dosage-calculation-related self-efficacy and calculation ability. Further study should be done to develop additional interventions for reducing anxiety.

  16. Integration of auto analysis program of gamma spectrum and software and determination of element content in sample by k-zero method

    International Nuclear Information System (INIS)

    Trinh Quang Vinh; Truong Thi Hong Loan; Mai Van Nhon; Huynh Truc Phuong

    2014-01-01

    Integrating a gamma-spectrum auto-analysis program with elemental analysis software based on the k-zero method is an objective for many researchers. This work is the first step in building an auto-analysis program for gamma spectra, which includes modules for reading and displaying spectra, peak energy calibration, spectrum smoothing, peak-area calculation, and determination of the element content of a sample. The results from measurements of standard samples on a low-level spectrometer with an HPGe detector are then compared with those of other gamma-spectrum auto-analysis programs. (author)
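    As a toy illustration of the peak-area module mentioned above, the snippet below computes a net peak area as the gross counts in a peak window minus a background estimated from channels flanking the window. The synthetic spectrum (flat background plus one Gaussian peak) and all window widths are invented for the example.

```python
import numpy as np

# Synthetic spectrum: flat background of 50 counts/channel plus a
# Gaussian peak of amplitude 200 centred on channel 50 (sigma = 2).
chans = np.arange(100)
spectrum = 50.0 + 0.0 * chans
spectrum[45:56] += 200.0 * np.exp(-((chans[45:56] - 50) / 2.0) ** 2)

def net_area(y, lo, hi, bg_width=5):
    """Gross counts in [lo, hi] minus a flanking-channel background."""
    left = y[lo - bg_width:lo].mean()
    right = y[hi + 1:hi + 1 + bg_width].mean()
    gross = y[lo:hi + 1].sum()
    background = (left + right) / 2.0 * (hi - lo + 1)
    return gross - background

area = net_area(spectrum, 45, 55)   # close to the Gaussian's summed counts
```

Real programs refine this with peak fitting and uncertainty propagation; the subtraction above is the minimal version of the idea.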

  17. EGS-Ray, a program for the visualization of Monte-Carlo calculations in the radiation physics

    International Nuclear Information System (INIS)

    Kleinschmidt, C.

    2001-01-01

    A Windows program is introduced which allows relatively easy, interactive access to Monte Carlo techniques in clinical radiation physics and serves as a visualization tool for the methodology and results of Monte Carlo simulations. The program requires little effort to formulate and calculate a Monte Carlo problem. Its Monte Carlo module is based on the well-known EGS4/PRESTA code. The didactic features of the program are presented using several examples common in the routine of the clinical radiation physicist. (orig.) [de

  18. TMI-2 accident evaluation program sample acquisition and examination plan. Executive summary

    International Nuclear Information System (INIS)

    Russell, M.L.; McCardell, R.K.; Broughton, J.M.

    1985-12-01

    The purpose of the TMI-2 Accident Evaluation Program Sample Acquisition and Examination (TMI-2 AEP SA and E) program is to develop and implement a test and inspection plan that completes the current-condition characterization of (a) the TMI-2 equipment that may have been damaged by the core damage events and (b) the TMI-2 core fission product inventory. The characterization program includes both sample acquisitions and examinations and in-situ measurements. Fission product characterization involves locating the fission products as well as determining their chemical form and determining material association

  19. Results from the Interim Salt Disposition Program Macrobatch 11 Tank 21H Acceptance Samples

    Energy Technology Data Exchange (ETDEWEB)

    Peters, T. B. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Bannochie, C. J. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2017-11-13

    Savannah River National Laboratory (SRNL) analyzed samples from Tank 21H in support of verification of Macrobatch (Salt Batch) 11 for the Interim Salt Disposition Program (ISDP) for processing. This document reports characterization data on the samples of Tank 21H and fulfills the requirements of Deliverable 3 of the Technical Task Request (TTR).

  20. Quanty4RIXS: a program for crystal field multiplet calculations of RIXS and RIXS-MCD spectra using Quanty.

    Science.gov (United States)

    Zimmermann, Patric; Green, Robert J; Haverkort, Maurits W; de Groot, Frank M F

    2018-05-01

    Some initial instructions for the Quanty4RIXS program written in MATLAB® are provided. The program assists in the calculation of 1s 2p RIXS and 1s 2p RIXS-MCD spectra using Quanty. Furthermore, 1s XAS and 2p 3d RIXS calculations in different symmetries can also be performed. It includes the Hartree-Fock values for the Slater integrals and spin-orbit interactions for several 3d transition metal ions that are required to create the .lua scripts containing all necessary parameters and quantum mechanical definitions for the calculations. The program can be used free of charge and is designed to allow for further adjustments of the scripts.

  1. Model calculations as one means of satisfying the neutron cross-section requirements of the CTR program

    International Nuclear Information System (INIS)

    Gardner, D.G.

    1975-01-01

    A large amount of cross-section and spectral information for neutron-induced reactions will be required for the CTR design program. Providing the required data through a purely experimental measurement program may not be the most efficient way of attacking the problem. It is suggested that a preliminary theoretical calculation be made of all relevant reactions on the dozen or so elements that now seem to comprise the inventory of possible construction materials, to find out which are actually important and over what energy ranges. A number of computer codes for calculating cross sections for neutron-induced reactions have been evaluated and extended. These are described, and examples are given of various types of calculations of interest to the CTR program. (U.S.)

  2. A Monte Carlo program to calculate the exposure rate from airborne radioactive gases inside a nuclear reactor containment building.

    Science.gov (United States)

    Sherbini, S; Tamasanis, D; Sykes, J; Porter, S W

    1986-12-01

    A program was developed to calculate the exposure rate resulting from airborne gases inside a reactor containment building. The calculations were performed at the location of a wall-mounted area radiation monitor. The program uses Monte Carlo techniques and accounts for both the direct and scattered components of the radiation field at the detector. The scattered component was found to contribute about 30% of the total exposure rate at 50 keV and dropped to about 7% at 2000 keV. The results of the calculations were normalized to unit activity per unit volume of air in the containment. This allows the exposure rate readings of the area monitor to be used to estimate the airborne activity in containment in the early phases of an accident. Such estimates, coupled with containment leak rates, provide a method to obtain a release rate for use in offsite dose projection calculations.
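    The direct (unscattered) component of such a calculation can be sketched in a few lines of Monte Carlo: sample emission points uniformly over a source region and average the 1/(4πr²) contributions at the detector. The spherical-shell geometry and all numbers below are illustrative, not taken from the paper, and attenuation and scatter are ignored; the shell is chosen because the analytic answer (r_out − r_in per unit source density) makes the estimate easy to check.

```python
import numpy as np

# Direct-component Monte Carlo for an invented geometry: detector at
# the centre of a uniform spherical-shell gas source, no attenuation.
def direct_fluence(r_in, r_out, n=200_000, seed=1):
    rng = np.random.default_rng(seed)
    # Sample emission radii uniformly over the shell volume.
    u = rng.random(n)
    r = (r_in**3 + u * (r_out**3 - r_in**3)) ** (1.0 / 3.0)
    vol = 4.0 / 3.0 * np.pi * (r_out**3 - r_in**3)
    # Each source point contributes 1/(4*pi*r^2) per unit source density.
    return vol * np.mean(1.0 / (4.0 * np.pi * r**2))

est = direct_fluence(0.5, 2.0)   # analytic value for this shell: 1.5
```

Adding the scattered component is what requires full photon transport, as the program in the abstract does.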

  3. Empirical equations of the solvent extraction of the energetic inputs, uranium and plutonium, calculated by using the program Microsoft Excel

    International Nuclear Information System (INIS)

    Bento, Dercio Lopes

    2006-01-01

    PUREX is one of the purification processes for irradiated nuclear fuel. In its flowchart, various uranium and plutonium extraction stages are used, in which an organic solvent extracts U and Pu from the aqueous phase obtained by dissolving the fuel element; in a subsequent stripping step, U and Pu are transferred back to an aqueous phase. It is therefore fundamental to know the distribution coefficient (dS) of the substances between the two immiscible phases at the temperature (tc), in order to better calculate a suitable flowchart. A mathematical model based on experimental data was elaborated for the calculation of dS and applied to a reference range of substance concentrations in the aqueous (xS) and organic (yS) phases. Using the Excel program, the empirical equations were fitted by the root-mean-square method. The relative deviations between the calculated values and the experimental ones serve as the quality standard
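    The kind of spreadsheet-style empirical fit described can be sketched as follows: tabulate the distribution coefficient dS against aqueous concentration xS and fit a polynomial by least squares. The data points and the quadratic model below are invented for illustration; real dS correlations also depend on temperature and on the other species present.

```python
import numpy as np

# Hypothetical dS(xS) data, generated from a known quadratic so the
# least-squares fit can be checked exactly.
xS = np.array([0.1, 0.2, 0.4, 0.6, 0.8, 1.0])   # aqueous conc., arb. units
dS = 1.5 + 0.8 * xS - 0.3 * xS**2                # synthetic "measurements"

coef = np.polyfit(xS, dS, 2)                     # quadratic least squares
dS_fit = np.polyval(coef, xS)
rel_dev = np.abs(dS_fit - dS) / dS               # relative deviations
```

With real, noisy data the relative deviations play the role of the quality standard mentioned in the abstract.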

  4. Environmental sampling program for a solar evaporation pond for liquid radioactive wastes

    International Nuclear Information System (INIS)

    Romero, R.; Gunderson, T.C.; Talley, A.D.

    1980-04-01

    Los Alamos Scientific Laboratory (LASL) is evaluating solar evaporation as a method for disposal of liquid radioactive wastes. This report describes a sampling program designed to monitor possible escape of radioactivity to the environment from a solar evaporation pond prototype constructed at LASL. Background radioactivity levels at the pond site were determined from soil and vegetation analyses before construction. When the pond is operative, the sampling program will qualitatively and quantitatively detect the transport of radioactivity to the soil, air, and vegetation in the vicinity. Possible correlation of meteorological data with sampling results is being investigated and measures to control export of radioactivity by biological vectors are being assessed

  5. Safety analysis report for packaging (onsite) transuranic performance demonstration program sample packaging

    International Nuclear Information System (INIS)

    McCoy, J.C.

    1997-01-01

    The Transuranic Performance Demonstration Program (TPDP) sample packaging is used to transport highway route controlled quantities of weapons grade (WG) plutonium samples from the Plutonium Finishing Plant (PFP) to the Waste Receiving and Processing (WRAP) facility and back. The purpose of these shipments is to test the nondestructive assay equipment in the WRAP facility as part of the Nondestructive Waste Assay PDP. The PDP is part of the U. S. Department of Energy (DOE) National TRU Program managed by the U. S. Department of Energy, Carlsbad Area Office, Carlsbad, New Mexico. Details of this program are found in CAO-94-1045, Performance Demonstration Program Plan for Nondestructive Assay for the TRU Waste Characterization Program (CAO 1994); INEL-96/0129, Design of Benign Matrix Drums for the Non-Destructive Assay Performance Demonstration Program for the National TRU Program (INEL 1996a); and INEL-96/0245, Design of Phase 1 Radioactive Working Reference Materials for the Nondestructive Assay Performance Demonstration Program for the National TRU Program (INEL 1996b). Other program documentation is maintained by the national TRU program and each DOE site participating in the program. This safety analysis report for packaging (SARP) provides the analyses and evaluations necessary to demonstrate that the TRU PDP sample packaging meets the onsite transportation safety requirements of WHC-CM-2-14, Hazardous Material Packaging and Shipping, for an onsite Transportation Hazard Indicator (THI) 2 packaging. This SARP, however, does not include evaluation of any operations within the PFP or WRAP facilities, including handling, maintenance, storage, or operating requirements, except as they apply directly to transportation between the gate of PFP and the gate of the WRAP facility. All other activities are subject to the requirements of the facility safety analysis reports (FSAR) of the PFP or WRAP facility and requirements of the PDP

  6. Rio Blanco, Colorado, Long-Term Hydrologic Monitoring Program Sampling and Analysis Results for 2009

    International Nuclear Information System (INIS)

    2009-01-01

    The U.S. Department of Energy (DOE) Office of Legacy Management conducted annual sampling at the Rio Blanco, Colorado, Site, for the Long-Term Hydrologic Monitoring Program (LTHMP) on May 13 and 14, 2009. Samples were analyzed by the U.S. Environmental Protection Agency (EPA) Radiation & Indoor Environments National Laboratory in Las Vegas, Nevada. Samples were analyzed for gamma-emitting radionuclides by high-resolution gamma spectroscopy and tritium using the conventional and enriched methods.

  7. DEPDOSE: An interactive, microcomputer based program to calculate doses from exposure to radionuclides deposited on the ground

    International Nuclear Information System (INIS)

    Beres, D.A.; Hull, A.P.

    1991-12-01

    DEPDOSE is an interactive, menu-driven, microcomputer-based program designed to rapidly calculate committed dose from radionuclides deposited on the ground. The program is designed to require little or no computer expertise on the part of the user. It consists of a dose calculation section and a library maintenance section, both available from the main menu. The dose calculation section provides the user with the ability to calculate committed doses, determine the decay time needed to reach a particular dose, cross-compare deposition data from separate locations, and approximate a committed dose based on a measured exposure rate. The library maintenance section allows the user to review and update dose modifier data as well as to build and maintain libraries of radionuclide data, dose conversion factors, and default deposition data. The program is structured to give the user easy access for reviewing data prior to running the calculation. Deposition data can either be entered by the user or imported from other databases. Results can either be displayed on the screen or sent to the printer
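    Two of the listed capabilities, committed dose from a deposition and the decay time needed to reach a given dose rate, follow from simple exponential-decay integrals. The sketch below derives them from first principles; the half-life, dose conversion factor and rates are placeholders, not values from the program's libraries.

```python
import math

# Committed dose: integrate deposition * DCF * exp(-lambda*t) over
# [0, t_int]; decay time: solve rate0 * exp(-lambda*t) = target.
def committed_dose(deposition, dcf, half_life, t_int):
    """Dose accumulated over an integration period t_int."""
    lam = math.log(2.0) / half_life
    return deposition * dcf * (1.0 - math.exp(-lam * t_int)) / lam

def decay_time_to(rate0, target, half_life):
    """Time needed for a dose rate to decay to a target value."""
    lam = math.log(2.0) / half_life
    return math.log(rate0 / target) / lam

t = decay_time_to(rate0=2.0, target=1.0, half_life=8.0)  # one half-life
```

A halving of the rate takes exactly one half-life, which gives a quick sanity check on both functions.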

  8. Application of the opportunities of tool system 'CUDA' for graphic processors programming in scientific and technical calculation tasks

    International Nuclear Information System (INIS)

    Dudnik, V.A.; Kudryavtsev, V.I.; Sereda, T.M.; Us, S.A.; Shestakov, M.V.

    2009-01-01

    The capabilities of the CUDA technology (Compute Unified Device Architecture, NVIDIA's unified hardware-software solution for parallel computations on GPUs) are described. The basic differences between the 'C' programming language for the GPU and 'usual' 'C' are outlined. Examples are given of using CUDA to accelerate application development and to implement scientific and technical calculation algorithms on graphics processors (GPGPU) of eighth-generation GeForce accelerators. Recommendations on the optimization of programs using the GPU are presented.

  9. Sampling

    CERN Document Server

    Thompson, Steven K

    2012-01-01

    Praise for the Second Edition "This book has never had a competitor. It is the only book that takes a broad approach to sampling . . . any good personal statistics library should include a copy of this book." —Technometrics "Well-written . . . an excellent book on an important subject. Highly recommended." —Choice "An ideal reference for scientific researchers and other professionals who use sampling." —Zentralblatt Math Features new developments in the field combined with all aspects of obtaining, interpreting, and using sample data Sampling provides an up-to-date treat

  10. A computer program for calculation of reliable pair distribution functions of non-crystalline materials from limited diffraction data. III

    International Nuclear Information System (INIS)

    Hansen, F.Y.

    1978-01-01

    This program calculates the final pair distribution functions of non-crystalline materials on the basis of the experimental structure factor as calculated in part I and the parameters of the small-distance part of the pair distribution function as calculated in part II. In this way, truncation error may be eliminated from the final pair distribution function. The calculations with this program depend on the results of calculations with the programs described in parts I and II. The final pair distribution function is calculated by a Fourier transform of a combination of an experimental structure factor and a model structure factor. The storage requirement depends on the number of data points in the structure factor, the number of data points in the final pair distribution function, and the number of peaks necessary to resolve the small-distance part of the pair distribution function. In the present set-up the storage requirement is set to 8860 words, which is estimated to be satisfactory for a large number of cases. (Auth.)
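    The central operation here is a sine Fourier transform of Q·[S(Q)−1] to obtain the reduced pair distribution function G(r). The toy version below uses a synthetic Gaussian input, chosen because its transform is known in closed form; the paper's actual treatment of truncation via a model structure factor is not reproduced.

```python
import numpy as np

# Synthetic stand-in for Q*[S(Q)-1]; its sine transform has the
# closed form (sqrt(pi)*r/4) * exp(-r^2/4).
Q = np.linspace(0.0, 10.0, 4001)
F = Q * np.exp(-Q**2)

def sine_transform(r, Q, F):
    """(2/pi) * integral of F(Q)*sin(Q*r) dQ, trapezoidal rule."""
    integrand = F * np.sin(Q * r)
    return (2.0 / np.pi) * np.sum((integrand[1:] + integrand[:-1]) / 2.0
                                  * np.diff(Q))

r = 1.0
G_num = sine_transform(r, Q, F)
G_exact = (2.0 / np.pi) * (np.sqrt(np.pi) * r / 4.0) * np.exp(-r**2 / 4.0)
```

With experimental data, F(Q) is only known up to a finite Q_max, which is exactly where the truncation-error handling described in the abstract comes in.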

  11. Thermal-hydraulic Fortran program for steady-state calculations of plate-type fuel research reactors

    Directory of Open Access Journals (Sweden)

    Khedr Ahmed

    2008-01-01

    The safety assessment of research and power reactors is a continuous process covering their lifespan and requiring verified and validated codes. Power reactor codes all over the world are well established and qualified against real measuring data and qualified experimental facilities. These codes are usually sophisticated, require special skills and consume a lot of running time. On the other hand, most research reactor codes still require much more data for validation and qualification. It is, therefore, of benefit to any regulatory body to develop its own codes for the review and assessment of research reactors. The present paper introduces a simple, one-dimensional Fortran program called THDSN for steady-state thermal-hydraulic calculations of plate-type fuel research reactors. Besides calculating the fuel and coolant temperature distributions and pressure gradients in an average and hot channel, the program calculates the safety limits and margins against the critical phenomena encountered in research reactors, such as the onset of nucleate boiling, critical heat flux and flow instability. Well-known thermal-hydraulic correlations for calculating the safety parameters and several formulas for the heat transfer coefficient have been used. The THDSN program was verified by comparing its results for 2 and 10 MW benchmark reactors with those published in IAEA publications, and a good agreement was found. Also, the results of the program are compared with those published for other programs, such as PARET and TERMIC.
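    The average-channel part of such a calculation reduces to an energy balance plus Newton's law of cooling. The back-of-envelope sketch below shows that skeleton; every number and the constant heat transfer coefficient are invented placeholders, not THDSN's correlations.

```python
# Coolant temperature rise from an energy balance, then a wall
# temperature from Newton's law of cooling.  Values are hypothetical.
q = 20_000.0        # heat input to the channel, W
m_dot = 0.25        # coolant mass flow rate, kg/s
cp = 4180.0         # specific heat of water, J/(kg.K)
t_in = 40.0         # inlet temperature, deg C

dt_coolant = q / (m_dot * cp)          # energy balance: Q = m*cp*dT
t_out = t_in + dt_coolant

h = 12_000.0        # heat transfer coefficient, W/(m2.K), hypothetical
q_flux = 250_000.0  # local heat flux, W/m2
t_wall = t_out + q_flux / h            # Newton's law of cooling
```

Safety margins such as onset of nucleate boiling are then obtained by comparing t_wall against a correlation-based limit, which is where the real code's sophistication lies.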

  12. EPCARD (European Program Package for the Calculation of Aviation Route Doses). User's manual for version 3.2

    International Nuclear Information System (INIS)

    Schraube, H.; Leuthold, G.P.; Schraube, G.; Heinrich, W.; Roesler, S.; Mares, V.

    2002-01-01

    The GSF-National Research Center has developed the computer program EPCARD (European program package for the calculation of aviation route doses) jointly with scientists from Siegen University. With the program it is possible to calculate the radiation dose received by individuals along any aviation route at flight altitudes between 5000 m and 25000 m, both in terms of ''ambient dose equivalent'' and ''effective dose''. Dose rates at any point in the atmosphere may be calculated for comparison with verification experiments, as may simulated instrument readings, if the response characteristics of the instruments are known. The program fulfills the requirements of the European Council Directive 96/29/EURATOM and of the subsequent European national regulations. This report contains essentially all the information necessary to run EPCARDv3.2 on a standard PC. The program structure is depicted and the file structure described in detail, which permits calculation of the large number of data sets needed for the daily record keeping of airline crews and other frequently flying persons. Additionally, some information is given on the basic physical data, which are available from the referenced publications. (orig.)
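    At its outermost level, a route-dose calculation is bookkeeping: sum dose rate times duration over the segments of a flight profile. The sketch below shows only that shell; the dose rates are invented, whereas EPCARD's actual rates depend on altitude, geomagnetic position and solar activity, none of which is modelled here.

```python
# Route dose as a sum over flight segments.  Numbers are hypothetical.
segments = [          # (dose rate in uSv/h, duration in h)
    (3.0, 0.5),       # climb
    (5.5, 7.0),       # cruise near 11 km
    (3.0, 0.5),       # descent
]
route_dose = sum(rate * hours for rate, hours in segments)  # in uSv
```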

  13. A novel quantitative approach for eliminating sample-to-sample variation using a hue saturation value analysis program.

    Science.gov (United States)

    Yabusaki, Katsumi; Faits, Tyler; McMullen, Eri; Figueiredo, Jose Luiz; Aikawa, Masanori; Aikawa, Elena

    2014-01-01

    As computing technology and image analysis techniques have advanced, the practice of histology has grown from a purely qualitative method to one that is highly quantified. Current image analysis software is imprecise and prone to wide variation due to common artifacts and histological limitations. In order to minimize the impact of these artifacts, a more robust method for quantitative image analysis is required. Here we present a novel image analysis software, based on the hue saturation value color space, to be applied to a wide variety of histological stains and tissue types. By using hue, saturation, and value variables instead of the more common red, green, and blue variables, our software offers some distinct advantages over other commercially available programs. We tested the program by analyzing several common histological stains, performed on tissue sections that ranged from 4 µm to 10 µm in thickness, using both a red green blue color space and a hue saturation value color space. We demonstrated that our new software is a simple method for quantitative analysis of histological sections, which is highly robust to variations in section thickness, sectioning artifacts, and stain quality, eliminating sample-to-sample variation.
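    The core idea, classifying pixels by hue and saturation rather than by raw RGB values, can be sketched with the standard library's colorsys conversion. The tiny synthetic "image", the hue band and the saturation cutoff below are all invented for illustration and are not the paper's software.

```python
import colorsys
import numpy as np

# 2x2 synthetic image: two reddish pixels, one greenish, one near-white.
img = np.array([
    [[200, 40, 30], [60, 180, 70]],
    [[210, 50, 40], [250, 250, 250]],
], dtype=np.uint8)

def stain_fraction(img, hue_lo, hue_hi, min_sat=0.2):
    """Fraction of pixels with hue in [hue_lo, hue_hi] and enough
    saturation (low-saturation pixels, e.g. white background, are
    excluded regardless of hue)."""
    flat = img.reshape(-1, 3) / 255.0
    hsv = np.array([colorsys.rgb_to_hsv(*px) for px in flat])
    h, s = hsv[:, 0], hsv[:, 1]
    mask = (h >= hue_lo) & (h <= hue_hi) & (s >= min_sat)
    return mask.mean()

# Count the low-hue (red) band; the two reddish pixels qualify.
frac = stain_fraction(img, 0.0, 0.1)
```

Separating hue from saturation and value is what makes such a measure robust to stain intensity and section thickness, as the abstract argues.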

  14. Third version of a program for calculating the static interaction potential between an electron and a diatomic molecule

    International Nuclear Information System (INIS)

    Raseev, G.

    1980-01-01

    This program calculates the one-centre expansion of a two-centre wave function of a diatomic molecule and also the multipole expansion of its static interaction with a point charge. It is an extension to some classes of open-shell targets of the previous versions and it provides both the wave function and the potential in a form suitable for use in an electron-molecule scattering program. (orig./HSI)

  15. RepoSTAR. A Code package for control and evaluation of statistical calculations with the program package RepoTREND; RepoSTAR. Ein Codepaket zur Steuerung und Auswertung statistischer Rechenlaeufe mit dem Programmpaket RepoTREND

    Energy Technology Data Exchange (ETDEWEB)

    Becker, Dirk-Alexander

    2016-05-15

    The program package RepoTREND for integrated long-term safety analysis of final repositories allows, besides deterministic studies of defined problems, also statistical or probabilistic analyses. Probabilistic uncertainty and sensitivity analyses are realized in the program package RepoTREND by a specific statistical framework called RepoSTAR. The report covers the following issues: the concept, the sampling and data supply of the single simulations, and the evaluation of statistical calculations with the program RepoSUN.
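    A statistical frame of this kind wraps a deterministic model in three steps: draw a parameter sample, run the model once per draw, and summarize the outputs. The minimal sketch below uses a lognormal parameter and a toy exponential-decay "model" as invented stand-ins for RepoTREND transport simulations.

```python
import numpy as np

# Sample uncertain parameters, run the model per sample, summarize.
rng = np.random.default_rng(42)
n = 10_000
k = rng.lognormal(mean=0.0, sigma=0.5, size=n)  # uncertain rate constant
t = 10.0
output = np.exp(-k * t)                          # one model run per sample

mean_out = output.mean()                         # uncertainty statistics
p95 = np.quantile(output, 0.95)
# Input-output correlation as a simple sensitivity measure.
corr = np.corrcoef(k, output)[0, 1]
```

Real frameworks add stratified sampling schemes and more refined sensitivity measures, but the sample/run/evaluate loop is the same.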

  16. Calculational model used in the analysis of nuclear performance of the Light Water Breeder Reactor (LWBR) (LWBR Development Program)

    Energy Technology Data Exchange (ETDEWEB)

    Freeman, L.B. (ed.)

    1978-08-01

    The calculational model used in the analysis of LWBR nuclear performance is described. The model was used to analyze the as-built core and predict core nuclear performance prior to core operation. The qualification of the nuclear model using experiments and calculational standards is described. Features of the model include: an automated system of processing manufacturing data; an extensively analyzed nuclear data library; an accurate resonance integral calculation; space-energy corrections to infinite medium cross sections; an explicit three-dimensional diffusion-depletion calculation; a transport calculation for high energy neutrons; explicit accounting for fuel and moderator temperature feedback, clad diameter shrinkage, and fuel pellet growth; and an extensive testing program against experiments and a highly developed analytical standard.

  17. Burn-Up Calculation of the Fuel Element in RSG-GAS Reactor using Program Package BATAN-FUEL

    International Nuclear Information System (INIS)

    Mochamad Imron; Ariyawan Sunardi

    2012-01-01

    The burn-up distribution of 2.96 g U/cm³ silicide fuel elements in the 78th reactor cycle has been calculated using the BATAN-FUEL computer code. The calculation uses inputs such as generated power, operation time and a 5/1 core assumption model. With this calculation model the burn-up of all fuel elements in the reactor core can be calculated. The calculation shows that the minimum burn-up of 6.82% occurs for RI-50 at position A-9, while the maximum burn-up of 57.57% occurs for RI 467 at position 8-7. Based on the safety criteria specified in the Safety Analysis Report (SAR) of the RSG-GAS reactor, the maximum allowed fuel burn-up is 59.59%. It can therefore be concluded that the fuel element placement pattern in the reactor core is proper and optimal. (author)
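    For orientation, a single-element burn-up figure of this kind is, at zeroth order, the fraction of the initial U-235 loading consumed. The sketch below uses the textbook consumption rate of roughly 1.24 g of U-235 per MWd (fission plus capture); the power, time and loading are invented and are not RSG-GAS data, and a real code like the one above additionally tracks the spatial flux distribution.

```python
# Order-of-magnitude burn-up bookkeeping for one fuel element.
def burnup_percent(power_mw, days, u235_grams, g_per_mwd=1.24):
    """Percent of initial U-235 consumed after power_mw for `days`."""
    consumed = power_mw * days * g_per_mwd     # grams of U-235 consumed
    return 100.0 * consumed / u235_grams

bu = burnup_percent(power_mw=0.3, days=200, u235_grams=250.0)
```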

  18. Interactive programs with preschool children bring smiles and conversation to older adults: time-sampling study.

    Science.gov (United States)

    Morita, Kumiko; Kobayashi, Minako

    2013-10-18

    Keeping older adults healthy and active is an emerging challenge of an aging society. Despite the importance of personal relationships to their health and well-being, changes in family structure have resulted in a lower frequency of intergenerational interactions. Few studies have compared different styles of intergenerational interaction. The present study aimed to compare the changes in visual attention, facial expression, engagement/behaviour, and intergenerational conversation in older adults brought about by a performance-based intergenerational (IG) program and a social-oriented IG program, to determine a desirable interaction style for older adults. The subjects were 25 older adults who participated in intergenerational programs with preschool children aged 5 to 6 years at an adult day care centre in Tokyo. We used time sampling to perform a structured observation study. The 25 older participants were divided into two groups based on their interaction style: a performance-based IG program (children sing songs and dance) and a social-oriented IG program (older adults and children play games together). Based on the 5-minute video observations, we compared changes in visual attention, facial expression, engagement/behaviour, and intergenerational conversation between the two programs. Constructive behaviour and intergenerational conversation were significantly higher in the social-oriented IG programming group than in the performance-based IG programming group. Intergenerational programs with preschool children brought smiles and conversation to older adults. The social-oriented IG program allowed older adults to play more roles than the performance-based IG program.
The intergenerational programs provide opportunities to fulfil basic human needs and

  19. NASA Lunar Sample Education Disk Program - Space Rocks for Classrooms, Museums, Science Centers and Libraries

    Science.gov (United States)

    Allen, J. S.

    2009-12-01

    NASA is eager for students and the public to experience lunar Apollo rocks and regolith soils first hand. Lunar samples embedded in plastic are available for educators to use in their classrooms, museums, science centers, and public libraries for education activities and display. The sample education disks are valuable tools for engaging students in the exploration of the Solar System. Scientific research conducted on the Apollo rocks has revealed the early history of our Earth-Moon system. The rocks help educators make the connections to this ancient history of our planet as well as connections to the basic lunar surface processes - impact and volcanism. With these samples, educators in museums, science centers, libraries, and classrooms can help students and the public understand the key questions pursued by missions to the Moon. The Office of the Curator at Johnson Space Center is in the process of reorganizing and renewing the Lunar and Meteorite Sample Education Disk Program to increase reach, security and accountability. The new program expands the reach of these exciting extraterrestrial rocks through increased access to training and educator borrowing. One of the expanded opportunities is that trained certified educators from science centers, museums, and libraries may now borrow the extraterrestrial rock samples. Previously the loan program was only open to classroom educators, so the expansion will increase public access to the samples and allow educators to make the critical connections of the rocks to the exciting exploration missions taking place in our solar system. Each Lunar Disk contains three lunar rocks and three regolith soils embedded in Lucite. The anorthosite sample is a part of the magma ocean formed on the surface of the Moon in the early melting period, the basalt is part of the extensive lunar mare lava flows, and the breccia sample is an important example of the violent impact history of the Moon.
The disks also include two regolith soils and

  20. Variability of carotid artery measurements on 3-Tesla MRI and its impact on sample size calculation for clinical research.

    Science.gov (United States)

    Syed, Mushabbar A; Oshinski, John N; Kitchen, Charles; Ali, Arshad; Charnigo, Richard J; Quyyumi, Arshed A

    2009-08-01

Carotid MRI measurements are increasingly being employed in research studies for atherosclerosis imaging. The majority of carotid imaging studies use 1.5 T MRI. Our objective was to investigate intra-observer and inter-observer variability in carotid measurements using high resolution 3 T MRI. We performed 3 T carotid MRI on 10 patients (age 56 ± 8 years, 7 male) with atherosclerosis risk factors and ultrasound intima-media thickness ≥ 0.6 mm. A total of 20 transverse images of both right and left carotid arteries were acquired using a T2-weighted black-blood sequence. The lumen and outer wall of the common carotid and internal carotid arteries were manually traced; vessel wall area, vessel wall volume, and average wall thickness measurements were then assessed for intra-observer and inter-observer variability. Pearson and intraclass correlations were used in these assessments, along with Bland-Altman plots. For inter-observer variability, Pearson correlations ranged from 0.936 to 0.996 and intraclass correlations from 0.927 to 0.991. For intra-observer variability, Pearson correlations ranged from 0.934 to 0.954 and intraclass correlations from 0.831 to 0.948. Calculations showed that inter-observer variability and other sources of error would inflate sample size requirements for a clinical trial by no more than 7.9%, indicating that 3 T MRI is nearly optimal in this respect. In patients with subclinical atherosclerosis, 3 T carotid MRI measurements are highly reproducible and have important implications for clinical trial design.

  1. Reliability calculations

    International Nuclear Information System (INIS)

    Petersen, K.E.

    1986-03-01

Risk and reliability analysis is increasingly being used in evaluations of plant safety and plant reliability. The analysis can be performed either during the design process or during the operation time, with the purpose of improving the safety or the reliability. Due to plant complexity and safety and availability requirements, sophisticated tools, which are flexible and efficient, are needed. Such tools have been developed in the last 20 years and they have to be continuously refined to meet the growing requirements. Two different areas of application were analysed. In structural reliability, probabilistic approaches have been introduced in some cases for the calculation of the reliability of structures or components. A new computer program has been developed, based upon numerical integration in several variables. In systems reliability, Monte Carlo simulation programs are used, especially in the analysis of very complex systems. To increase the applicability of the programs, variance reduction techniques can be applied to speed up the calculation process. Variance reduction techniques have been studied, and procedures for implementation of importance sampling are suggested. (author)
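The importance-sampling idea mentioned above can be sketched in a few lines: instead of waiting for rare failures to occur under the true distribution, the simulation draws failures from a deliberately biased distribution and reweights each one by the likelihood ratio. The probabilities below are purely illustrative assumptions, not values from the report:

```python
import random

def crude_mc(p_fail, n, rng):
    """Crude Monte Carlo estimate of a failure probability."""
    hits = sum(1 for _ in range(n) if rng.random() < p_fail)
    return hits / n

def importance_sampling(p_fail, q_sample, n, rng):
    """Estimate a small failure probability with a biased sampler.

    Each trial 'fails' with probability q_sample instead of p_fail;
    the likelihood ratio p_fail/q_sample reweights each failure so
    the estimator stays unbiased while failures occur far more often.
    """
    total = 0.0
    for _ in range(n):
        if rng.random() < q_sample:        # biased draw: failure observed
            total += p_fail / q_sample     # correct with the likelihood ratio
        # non-failure trials contribute zero to the estimator
    return total / n

rng = random.Random(42)
p = 1e-3   # true failure probability (assumed, for illustration)
q = 0.5    # biased sampling probability
est = importance_sampling(p, q, 10_000, rng)
print(est)  # close to 1e-3, with far lower variance than crude MC at this n
```

With p = 1e-3, a crude run of 10,000 trials sees only about ten failures, while the biased run sees about five thousand, which is where the variance reduction comes from.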

  2. Technical basis and evaluation criteria for an air sampling/monitoring program

    International Nuclear Information System (INIS)

    Gregory, D.C.; Bryan, W.L.; Falter, K.G.

    1993-01-01

    Air sampling and monitoring programs at DOE facilities need to be reviewed in light of revised requirements and guidance found in, for example, DOE Order 5480.6 (RadCon Manual). Accordingly, the Oak Ridge National Laboratory (ORNL) air monitoring program is being revised and placed on a sound technical basis. A draft technical basis document has been written to establish placement criteria for instruments and to guide the ''retrospective sampling or real-time monitoring'' decision. Facility evaluations are being used to document air sampling/monitoring needs, and instruments are being evaluated in light of these needs. The steps used to develop this program and the technical basis for instrument placement are described

  3. A program for monitor unit calculation for high energy photon beams in isocentric condition based on measured data

    International Nuclear Information System (INIS)

    Gesheva-Atanasova, N.

    2008-01-01

The aim of this study is: 1) to propose a procedure and a program for monitor unit calculation for radiation therapy with high energy photon beams, based on data measured by the author; 2) to compare this data with published data; and 3) to evaluate the precision of the monitor unit calculation program. From this study it can be concluded that we reproduced the published data with good agreement, except for the TPR values at depths up to 5 cm. The measured relative weight of the upper and lower jaws (parameter A) was dramatically different from the published data, but perfectly described the collimator exchange effect for our treatment machine. No difference was found between the head scatter ratios measured in a mini phantom and those measured with a proper brass buildup cap. Our monitor unit calculation program was found to be reliable, and it can be applied to check patient plans for irradiation with high energy photon beams and for some fast calculations. Because of the identity in construction, design and characteristics of Siemens accelerators, and the agreement with published data for the same beam qualities, we hope that most of our experimental data and this program can be used, after verification, in other hospitals

  4. ERATO - a computer program for the calculation of induced eddy-currents in three-dimensional conductive structures

    International Nuclear Information System (INIS)

    Benner, J.

    1985-10-01

The computer code ERATO is used for the calculation of eddy-currents in three-dimensional conductive structures and their secondary magnetic field. ERATO is a revised version of the code FEDIFF, developed at IPP Garching. For the calculation, the Finite-Element-Network (FEN) method is used, where the structure is simulated by an equivalent electric network. In the ERATO code, the calculation of the finite-element discretization, the eddy-current analysis, and the final evaluation of the results are done in separate programs, so the eddy-current analysis, as the central step, is fully independent of any particular geometry. For the finite-element discretization there are two so-called preprocessors, which treat a torus segment and a rectangular flat plate. For the final evaluation, postprocessors are used, by which the current distributions can be printed and plotted. In the report, the theoretical foundation of the FEN method is discussed, the structure and the application of the programs (preprocessors, analysis program, postprocessors, supporting programs) are shown, and two examples of calculations are presented. (orig.) [de

  5. TISCON, a BASIC computer program for the calculation of the biodistribution of radionuclide-labelled drugs in rats and mice

    International Nuclear Information System (INIS)

    Maddalena, D.J.

    1983-09-01

Animal biodistribution studies on radionuclide-labelled drugs are labour-intensive and time-consuming. A method for rapidly carrying out these studies on rats and mice is presented. An interactive computer program, written in BASIC, is used to calculate parameters of interest, such as per cent injected dose (%ID), %ID per gram and target to non-target ratios
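The parameters named above reduce to simple ratios of count rates. A minimal sketch (in Python rather than BASIC), where the function names and numbers are hypothetical illustrations, not taken from TISCON:

```python
def percent_injected_dose(organ_counts, standard_counts, standard_fraction):
    """%ID: organ activity as a percentage of the injected activity.

    standard_counts is the count rate of a counting standard representing
    `standard_fraction` of the injected dose (e.g. 0.01 for a 1% standard),
    so the injected-dose count rate is standard_counts / standard_fraction.
    """
    injected = standard_counts / standard_fraction
    return 100.0 * organ_counts / injected

def percent_id_per_gram(pid, organ_mass_g):
    """Normalize %ID by organ mass."""
    return pid / organ_mass_g

def target_to_nontarget(target_pid_per_g, nontarget_pid_per_g):
    """Ratio of target to non-target uptake, per gram."""
    return target_pid_per_g / nontarget_pid_per_g

# hypothetical count rates for illustration
pid_liver = percent_injected_dose(4500.0, 9000.0, 0.01)   # 0.5 %ID
pid_per_g = percent_id_per_gram(pid_liver, 2.0)           # 0.25 %ID/g
```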

  6. Program MCU for Monte-Carlo calculations of neutron-physical characteristics of nuclear reactors

    International Nuclear Information System (INIS)

    Abagyan, L.P.; Alekseev, N.I.; Bryzgalov, V.I.; Glushkov, A.E.; Gomin, E.A.; Gurevich, M.I.; Kalugin, M.A.; Majorov, L.V.; Marin, S.V.; Yhdkevich, M.S.

    1994-01-01

A description of the MCU data modification is presented. The calculation results of the MCU-2 and MCU-3 codes are compared for critical assemblies of different reactor types. The full list of critical assembly calculation results obtained by all MCU code versions is given. 32 refs.; 32 tabs

  7. Sample triage : an overview of Environment Canada's program

    Energy Technology Data Exchange (ETDEWEB)

    Lambert, P.; Goldthorp, M.; Fingas, M. [Environment Canada, Ottawa, ON (Canada). Emergencies Science and Technology Division, Environmental Technology Centre, Science and Technology Branch

    2006-07-01

The Chemical, Biological and Radiological/Nuclear Research and Technology Initiative (CRTI) is a program led by Canada's Department of National Defence in an effort to improve the capability of providing technical and analytical support in the event of a terrorist-related event. This paper summarized the findings from the CRTI Sample Triage Working Group and reviewed information on Environment Canada's triage program and its mobile sample inspection facility, which was designed to help examine samples of hazardous materials in a controlled environment to minimize the risk of exposure. A sample triage program is designed to deal with administrative, health and safety issues by facilitating the safe transfer of samples to an analytical laboratory. It refers to the collation of all results, including field screening information, intelligence and observations, for the purpose of prioritizing and directing the sample to the appropriate laboratory for analysis. A central component of Environment Canada's Emergency Response Program has been its capacity to respond on site during an oil or chemical spill. As such, the Emergencies Science and Technology Division acquired a new mobile sample inspection facility in 2004. It is constructed to work with a custom designed decontamination unit and Ford F450 tow vehicle. The criteria and general design of the trailer facility were described. This paper also outlined the steps taken following a spill of hazardous materials into the environment so that potentially dangerous samples could be safely assessed. Several field trials will be carried out in order to develop standard operating procedures for the mobile sample inspection facility. 6 refs., 6 figs., 4 appendices.

  8. A comparison of fitness-case sampling methods for genetic programming

    Science.gov (United States)

    Martínez, Yuliana; Naredo, Enrique; Trujillo, Leonardo; Legrand, Pierrick; López, Uriel

    2017-11-01

    Genetic programming (GP) is an evolutionary computation paradigm for automatic program induction. GP has produced impressive results but it still needs to overcome some practical limitations, particularly its high computational cost, overfitting and excessive code growth. Recently, many researchers have proposed fitness-case sampling methods to overcome some of these problems, with mixed results in several limited tests. This paper presents an extensive comparative study of four fitness-case sampling methods, namely: Interleaved Sampling, Random Interleaved Sampling, Lexicase Selection and Keep-Worst Interleaved Sampling. The algorithms are compared on 11 symbolic regression problems and 11 supervised classification problems, using 10 synthetic benchmarks and 12 real-world data-sets. They are evaluated based on test performance, overfitting and average program size, comparing them with a standard GP search. Comparisons are carried out using non-parametric multigroup tests and post hoc pairwise statistical tests. The experimental results suggest that fitness-case sampling methods are particularly useful for difficult real-world symbolic regression problems, improving performance, reducing overfitting and limiting code growth. On the other hand, it seems that fitness-case sampling cannot improve upon GP performance when considering supervised binary classification.
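Lexicase Selection, one of the four fitness-case sampling methods compared above, is well described in the GP literature and can be sketched compactly: fitness cases are considered one at a time in random order, and only individuals that are elite on the current case survive to the next. The toy population and error matrix below are invented for illustration:

```python
import random

def lexicase_select(population, errors, rng):
    """Select one individual by lexicase selection.

    errors[i][j] is the error of individual i on fitness case j
    (lower is better). Cases are shuffled; each case filters the
    candidate pool down to those with the best error on that case.
    """
    candidates = list(range(len(population)))
    cases = list(range(len(errors[0])))
    rng.shuffle(cases)
    for c in cases:
        best = min(errors[i][c] for i in candidates)
        candidates = [i for i in candidates if errors[i][c] == best]
        if len(candidates) == 1:
            break
    # ties after all cases are broken at random
    return population[rng.choice(candidates)]

pop = ["progA", "progB", "progC"]
errs = [[0, 2],   # progA: perfect on case 0, poor on case 1
        [1, 0],   # progB: poor on case 0, perfect on case 1
        [1, 1]]   # progC: mediocre on both
winner = lexicase_select(pop, errs, random.Random(1))
```

Note how progC, which is never elite on any single case, can never win: lexicase rewards specialists on individual fitness cases rather than averaged performance.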

  9. Radioactive cloud dose calculations

    International Nuclear Information System (INIS)

    Healy, J.W.

    1984-01-01

Radiological dosage principles, as well as methods for calculating external and internal dose rates following dispersion and deposition of radioactive materials in the atmosphere, are described. Emphasis has been placed on analytical solutions that are appropriate for hand calculations. In addition, the methods for calculating dose rates from ingestion are discussed. A brief description of several computer programs is included for information on radionuclides. There has been no attempt to be comprehensive, and only a sampling of programs has been selected to illustrate the variety available
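A representative example of the hand-calculation genre such reports cover is the standard Gaussian-plume dilution factor χ/Q, from which air concentration (and hence cloud dose) follows once the release rate and dose-conversion factors are supplied. The numerical inputs below are illustrative assumptions, not values from this report:

```python
import math

def chi_over_q(y, eff_height, sigma_y, sigma_z, wind_speed):
    """Ground-level Gaussian-plume dilution factor chi/Q (s/m^3).

    Standard ground-reflection form for an elevated release of
    effective height eff_height; sigma_y and sigma_z are the lateral
    and vertical dispersion coefficients (m) at the downwind distance
    of interest, and y is the crosswind offset from the centerline (m).
    """
    return (math.exp(-y**2 / (2.0 * sigma_y**2))
            * math.exp(-eff_height**2 / (2.0 * sigma_z**2))
            / (math.pi * sigma_y * sigma_z * wind_speed))

# illustrative: plume centerline, 50 m stack, sigmas for roughly 1 km
# downwind in neutral conditions, 3 m/s wind
dilution = chi_over_q(0.0, 50.0, 80.0, 50.0, 3.0)  # ~1.6e-5 s/m^3
```

Multiplying χ/Q by the release rate Q (Bq/s) gives the air concentration, which the dose-rate formulas in the text then convert to external or inhalation dose.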

  10. Calculation of Absorbed Glandular Dose using a FORTRAN Program Based on Monte Carlo X-ray Spectra in Mammography

    Directory of Open Access Journals (Sweden)

    Ali Asghar Mowlavi

    2011-03-01

Introduction: Average glandular dose calculation in mammography with a Mo-Rh target-filter combination can be made accurate and fast for different exposure situations. Material and Methods: In this research, first of all, x-ray spectra of a Mo target bombarded by a 28 keV electron beam, with and without a Rh filter, were calculated using the MCNP code. Then, we used the Sobol-Wu parameters to write a FORTRAN code to calculate average glandular dose. Results: Average glandular dose variation was calculated against the voltage of the mammographic x-ray tube for d = 5 cm, HVL = 0.35 mm Al, and different values of g. Also, the results related to average glandular absorbed dose variation per unit roentgen radiation against the glandular fraction of breast tissue for kV = 28 and HVL = 0.400 mm Al and different values of d are presented. Finally, average glandular dose against d for g = 60% and three values of kV (23, 27, 35 kV) with corresponding HVLs has been calculated. Discussion and Conclusion: The absorbed dose computational program is accurate, complete, fast and user-friendly. This program can be used for optimization of exposure dose in mammography. Also, the results of this research are in good agreement with the computational results of others.

  11. User's guide to SERICPAC: A computer program for calculating electric-utility avoided costs rates

    Energy Technology Data Exchange (ETDEWEB)

    Wirtshafter, R.; Abrash, M.; Koved, M.; Feldman, S.

    1982-05-01

SERICPAC is a computer program developed to calculate average avoided cost rates for decentralized power producers and cogenerators that sell electricity to electric utilities. SERICPAC works in tandem with SERICOST, a program to calculate avoided costs, and determines the appropriate rates for buying and selling of electricity between electric utilities and qualifying facilities (QFs), as stipulated under Section 210 of PURPA. SERICPAC contains simulation models for eight technologies, including wind, hydro, biogas, and cogeneration. The simulations are converted into a diversified utility production profile, which can be either gross production or net production; the latter accounts for internal electricity usage by the QF. The program allows for adjustments to the production to be made for scheduled and forced outages. The final output of the model is a technology-specific average annual rate. The report contains a description of the technologies and the simulations as well as a complete user's guide to SERICPAC.

  12. Calculation of thermal neutron self-shielding correction factors for aqueous bulk sample prompt gamma neutron activation analysis using the MCNP code

    International Nuclear Information System (INIS)

    Nasrabadi, M.N.; Jalali, M.; Mohammadi, A.

    2007-01-01

In this work thermal neutron self-shielding in aqueous bulk samples containing neutron absorbing materials is studied using bulk sample prompt gamma neutron activation analysis (BSPGNAA) with the MCNP code. The code was used to perform three dimensional simulations of a neutron source, neutron detector and sample of various material compositions. The MCNP model was validated against experimental measurements of the neutron flux performed using a BF3 detector. Simulations were performed to predict thermal neutron self-shielding in aqueous bulk samples containing neutron absorbing solutes. In practice, the MCNP calculations are combined with experimental measurements of the relative thermal neutron flux over the sample's surface, with respect to a reference water sample, to derive the thermal neutron self-shielding within the sample. The proposed methodology can be used for the determination of the elemental concentration of unknown aqueous samples by BSPGNAA where knowledge of the average thermal neutron flux within the sample volume is required

  13. PYFLOW_2.0: a computer program for calculating flow properties and impact parameters of past dilute pyroclastic density currents based on field data

    Science.gov (United States)

    Dioguardi, Fabio; Mele, Daniela

    2018-03-01

    This paper presents PYFLOW_2.0, a hazard tool for the calculation of the impact parameters of dilute pyroclastic density currents (DPDCs). DPDCs represent the dilute turbulent type of gravity flows that occur during explosive volcanic eruptions; their hazard is the result of their mobility and the capability to laterally impact buildings and infrastructures and to transport variable amounts of volcanic ash along the path. Starting from data coming from the analysis of deposits formed by DPDCs, PYFLOW_2.0 calculates the flow properties (e.g., velocity, bulk density, thickness) and impact parameters (dynamic pressure, deposition time) at the location of the sampled outcrop. Given the inherent uncertainties related to sampling, laboratory analyses, and modeling assumptions, the program provides ranges of variations and probability density functions of the impact parameters rather than single specific values; from these functions, the user can interrogate the program to obtain the value of the computed impact parameter at any specified exceedance probability. In this paper, the sedimentological models implemented in PYFLOW_2.0 are presented, program functionalities are briefly introduced, and two application examples are discussed so as to show the capabilities of the software in quantifying the impact of the analyzed DPDCs in terms of dynamic pressure, volcanic ash concentration, and residence time in the atmosphere. The software and user's manual are made available as a downloadable electronic supplement.
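The exceedance-probability output described above is straightforward to illustrate: given samples of an impact parameter drawn from its probability density function, the program can report the probability of exceeding a threshold, or conversely the value at a specified exceedance probability. The dynamic-pressure values below are invented for demonstration and do not come from PYFLOW_2.0:

```python
def exceedance_probability(samples, threshold):
    """Empirical probability that the impact parameter exceeds threshold."""
    return sum(1 for s in samples if s > threshold) / len(samples)

def value_at_exceedance(samples, p):
    """Approximate value exceeded with probability p (empirical quantile)."""
    ordered = sorted(samples)
    k = min(int((1.0 - p) * len(ordered)), len(ordered) - 1)
    return ordered[k]

# toy dynamic-pressure samples (kPa), e.g. drawn from a fitted PDF
pressures = [1.2, 1.5, 1.8, 2.0, 2.3, 2.7, 3.1, 3.4, 3.9, 4.5]
p_exc = exceedance_probability(pressures, 3.0)     # 0.4: 4 of 10 exceed 3 kPa
q80 = value_at_exceedance(pressures, 0.2)          # exceeded ~20% of the time
```

Reporting a range or quantile in this way, rather than a single value, is what lets the tool propagate sampling and modeling uncertainty into the hazard estimate.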

  14. A new calculation method adapted to the experimental conditions for determining samples γ-activities induced by 14 MeV neutrons

    International Nuclear Information System (INIS)

    Rzama, A.; Erramli, H.; Misdaq, M.A.

    1994-01-01

Induced gamma-activities of different disk-shaped samples and standards irradiated with 14 MeV neutrons have been determined by using a Monte Carlo calculation method adapted to the experimental conditions. The self-absorption of the multienergetic emitted gamma rays has been taken into account in the final sample activities. The influence of the different activation parameters has been studied. Na, K, Cl and P contents in biological (red beet) samples have been determined. (orig.)

  15. Evaluation of Brazilian intercomparison program data from 1991 to 1995 of radionuclide assays in environmental samples

    International Nuclear Information System (INIS)

    Vianna, Maria Elizabeth Couto M.; Tauhata, Luiz; Oliveira, Antonio Eduardo de; Oliveira, Josue Peter de; Clain, Almir Faria; Ferreira, Ana Cristina M.

    1998-01-01

Historical radioanalytical data from the Institute of Radiation Protection and Dosimetry (IRD) national intercomparison program from 1991 to 1995 were analyzed to evaluate the performance of sixteen Brazilian laboratories in radionuclide analyses of environmental samples. The data comprise measurements of radionuclides in 435 spiked environmental samples distributed over fifteen intercomparison runs, comprising 955 analyses. The general and radionuclide-specific performances of the participating laboratories were evaluated relative to the reference values. The data analysis encourages improvements in beta emitter measurements

  16. QEDMOD: Fortran program for calculating the model Lamb-shift operator

    Science.gov (United States)

    Shabaev, V. M.; Tupitsyn, I. I.; Yerokhin, V. A.

    2018-02-01

We present the Fortran package QEDMOD for computing the model QED operator hQED, which can be used to account for the Lamb shift in accurate atomic-structure calculations. The package routines calculate the matrix elements of hQED with user-specified one-electron wave functions. The operator can be used to calculate the Lamb shift in many-electron atomic systems with a typical accuracy of a few percent, either by evaluating the matrix element of hQED with the many-electron wave function, or by adding hQED to the Dirac-Coulomb-Breit Hamiltonian.

  17. FIST - a suite of X-ray powder crystallography programs for use with a HP-65 calculator

    International Nuclear Information System (INIS)

    Ferguson, I.F.; Turek, M.

    1977-12-01

Programs for X-ray powder crystallography are defined for use with a Hewlett-Packard HP-65 (programmable) pocket calculator. These include the prediction of all Bragg reflections for defined P-, F-, I-cubic, tetragonal, hexagonal and orthorhombic cells; the calculation of the position of a specific Bragg reflection from defined unit cells with all symmetries except triclinic; interconversion of θ, 2θ, sin²θ and d, as well as the calculation of the Nelson-Riley function; the computation of crystal densities; the interconversion of rhombohedral and hexagonal unit cells; l_c determinations for graphite; the calculation of a and c for boron carbide; and Miller index transformations between various unit cells. (author)
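Two of the quantities named above, the Bragg-spacing interconversion and the Nelson-Riley extrapolation function, are standard and compact enough to sketch directly (shown here in Python rather than HP-65 key codes; the wavelength and peak position are illustrative):

```python
import math

def d_from_two_theta(two_theta_deg, wavelength):
    """Bragg's law: d = lambda / (2 sin theta), with 2-theta in degrees."""
    return wavelength / (2.0 * math.sin(math.radians(two_theta_deg / 2.0)))

def nelson_riley(theta_deg):
    """Nelson-Riley extrapolation function for lattice-parameter refinement:
    NR(theta) = (cos^2(theta)/sin(theta) + cos^2(theta)/theta) / 2,
    with theta in radians inside the formula."""
    t = math.radians(theta_deg)
    c2 = math.cos(t) ** 2
    return 0.5 * (c2 / math.sin(t) + c2 / t)

# illustrative peak: Cu K-alpha radiation, reflection near 2-theta = 44.5 deg
d = d_from_two_theta(44.5, 1.5406)   # d-spacing in angstroms, ~2.03
nr = nelson_riley(44.5 / 2.0)        # abscissa for the extrapolation plot
```

Plotting the refined lattice parameter against NR(θ) and extrapolating to NR = 0 (θ = 90°) removes the main systematic errors of the diffractometer geometry.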

  18. Lunar and Meteorite Sample Education Disk Program — Space Rocks for Classrooms, Museums, Science Centers, and Libraries

    Science.gov (United States)

    Allen, J.; Luckey, M.; McInturff, B.; Huynh, P.; Tobola, K.; Loftin, L.

    2010-03-01

NASA’s Lunar and Meteorite Sample Education Disk Program has Lucite disks containing Apollo lunar samples and meteorite samples that are available for trained educators to borrow for use in classrooms, museums, science centers, and libraries.

  19. Development and verification of an excel program for calculation of monitor units for tangential breast irradiation with external photon beams

    International Nuclear Information System (INIS)

    Woldemariyam, M.G.

    2015-07-01

The accuracy of MU calculations performed with the Prowess Panther TPS (for Co-60) and Oncentra (for 6 MV and 15 MV x-rays) for tangential breast irradiation was evaluated with measurements made in an anthropomorphic phantom using calibrated Gafchromic EBT2 films. An Excel program was developed which takes into account the external body surface irregularity of an intact breast or chest wall (hence the absence of full scatter conditions) using Clarkson's sector summation technique. A single surface contour of the patient, obtained in a transverse plane containing the MU calculation point, is required for effective implementation of the program. The outputs of the Excel program were validated against the respective outputs from the 3D treatment planning systems. The variations between the measured point doses and their calculated counterparts from the TPSs were within the range of -4.74% to 4.52% (mean of -1.33% and SD of 2.69) for the Prowess Panther TPS and -4.42% to 3.14% (mean of -1.47% and SD of 3.95) for the Oncentra TPS. The observed degree of deviation may be attributed to limitations of the dose calculation algorithm within the TPSs, setup inaccuracies of the phantom during irradiation, and inherent uncertainties associated with radiochromic film dosimetry. The percentage deviations between MUs calculated with the two TPSs and the Excel program were within the range of -3.45% to 3.82% (mean of 0.83% and SD of 2.25). The observed percentage deviations are within the 4% action level recommended by TG-114. This indicates that the Excel program can be confidently employed for checking patient plans for 2D-planned tangential breast irradiation or to independently verify MUs calculated with other methods. (au)
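Clarkson's sector summation, named above, replaces an irregular field with equal-angle sectors of circular fields: the scatter contribution at the calculation point is looked up for each sector's effective radius and averaged. The sketch below is a simplified illustration; the scatter-air-ratio table and radii are made up, not the author's measured beam data:

```python
def clarkson_scatter(sector_radii, sar_table):
    """Average scatter-air ratio by Clarkson sector integration.

    sector_radii: effective field radius (cm) in each equal-angle sector,
    measured from the calculation point to the irregular field edge.
    sar_table: hypothetical lookup mapping radius (cm, rounded) to the
    scatter-air ratio of a circular field of that radius, standing in
    for tabulated or measured beam data.
    """
    sars = [sar_table[round(r)] for r in sector_radii]
    return sum(sars) / len(sars)

# toy data: 8 sectors (45 degrees each) and an invented SAR table
radii = [5, 5, 6, 7, 8, 8, 7, 6]
sar = {5: 0.30, 6: 0.34, 7: 0.37, 8: 0.40}
avg_sar = clarkson_scatter(radii, sar)   # effective SAR for the irregular field
```

In a full MU calculation this averaged scatter term is combined with the primary (zero-area) component and the machine calibration data; real implementations use many more sectors and interpolate the SAR table.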

  20. Mathematical simulation of processes in horizontal steam generator and the program of calculation of its characteristics

    Energy Technology Data Exchange (ETDEWEB)

    Titov, V.F.; Zorin, V.M.; Gorburov, V.I. [OKB Gidropress, Moscow Energy Inst. (Russian Federation)

    1995-12-31

On the basis of mathematical models describing the processes in a horizontal steam generator (SG), a code has been developed that can calculate the hydrodynamic characteristics at any point of the water volume. The code simulates the processes in the SG in the stationary (or quasi-stationary) mode of operation only. The code may be used as a next step toward calculations of the SG characteristics in non-stationary modes of operation.

  2. A Proposal of Estimation Methodology to Improve Calculation Efficiency of Sampling-based Method in Nuclear Data Sensitivity and Uncertainty Analysis

    International Nuclear Information System (INIS)

    Song, Myung Sub; Kim, Song Hyun; Kim, Jong Kyung; Noh, Jae Man

    2014-01-01

The uncertainty in the sampling-based method is evaluated by repeating transport calculations with a number of cross section data sets sampled from the covariance uncertainty data. In the transport calculation with the sampling-based method, the transport equation is not modified; therefore, all uncertainties of the responses, such as k_eff, reaction rates, flux and power distribution, can be obtained directly, all at one time, without code modification. However, a major drawback of the sampling-based method is that it requires an expensive computational load to obtain statistically reliable results (within a 0.95 confidence level) in the uncertainty analysis. The purpose of this study is to develop a method for improving the computational efficiency and obtaining highly reliable uncertainty results when using the sampling-based method with Monte Carlo simulation. The proposed method reduces the convergence time of the response uncertainty by using multiple sets of sampled group cross sections in a single Monte Carlo simulation. The proposed method was verified on the GODIVA benchmark problem and the results were compared with those of the conventional sampling-based method. In this study, a sampling-based method based on the central limit theorem is proposed to improve calculation efficiency by reducing the number of repetitive Monte Carlo transport calculations required to obtain reliable uncertainty analysis results. Each set of sampled group cross sections is assigned to an active cycle group in a single Monte Carlo simulation. The criticality uncertainty for the GODIVA problem is evaluated by the proposed and the previous method. The results show that the proposed sampling-based method can efficiently decrease the number of Monte Carlo simulations required to evaluate the uncertainty of k_eff. It is expected that the proposed method will improve the computational efficiency of uncertainty analysis with the sampling-based method
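The core loop of the sampling-based method — draw a cross-section set from its covariance data, run the calculation, and take statistics over the responses — can be sketched with a toy one-parameter response standing in for a full Monte Carlo transport run. Everything below is an illustrative assumption, not the authors' model:

```python
import random
import statistics

def sampled_response(xs_sample):
    """Toy 'transport calculation': a response (standing in for k_eff)
    as a simple function of one sampled cross-section value."""
    return 1.0 + 0.5 * xs_sample

def sampling_based_uncertainty(n_samples, xs_mean, xs_sd, rng):
    """Propagate cross-section uncertainty by repeated sampling:
    draw a cross-section set, evaluate the response, and report the
    sample mean and standard deviation of the responses."""
    responses = [sampled_response(rng.gauss(xs_mean, xs_sd))
                 for _ in range(n_samples)]
    return statistics.mean(responses), statistics.stdev(responses)

mean_k, sd_k = sampling_based_uncertainty(500, 2.0, 0.1, random.Random(0))
# sd_k estimates the response uncertainty induced by the cross-section
# uncertainty; here it should approach 0.5 * 0.1 = 0.05
```

The paper's contribution is to fold these repeated evaluations into one simulation by assigning each sampled set to its own active-cycle group, rather than launching n_samples independent transport runs as in this naive sketch.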

  3. Using Set Covering with Item Sampling to Analyze the Infeasibility of Linear Programming Test Assembly Models

    Science.gov (United States)

    Huitzing, Hiddo A.

    2004-01-01

    This article shows how set covering with item sampling (SCIS) methods can be used in the analysis and preanalysis of linear programming models for test assembly (LPTA). LPTA models can construct tests, fulfilling a set of constraints set by the test assembler. Sometimes, no solution to the LPTA model exists. The model is then said to be…

  4. Key Design Considerations When Calculating Cost Savings for Population Health Management Programs in an Observational Setting.

    Science.gov (United States)

    Murphy, Shannon M E; Hough, Douglas E; Sylvia, Martha L; Dunbar, Linda J; Frick, Kevin D

    2018-02-08

    To illustrate the impact of key quasi-experimental design elements on cost savings measurement for population health management (PHM) programs. Population health management program records and Medicaid claims and enrollment data from December 2011 through March 2016. The study uses a difference-in-difference design to compare changes in cost and utilization outcomes between program participants and propensity score-matched nonparticipants. Comparisons of measured savings are made based on (1) stable versus dynamic population enrollment and (2) all eligible versus enrolled-only participant definitions. Options for the operationalization of time are also discussed. Individual-level Medicaid administrative and claims data and PHM program records are used to match study groups on baseline risk factors and assess changes in costs and utilization. Savings estimates are statistically similar but smaller in magnitude when eliminating variability based on duration of population enrollment and when evaluating program impact on the entire target population. Measurement in calendar time, when possible, simplifies interpretability. Program evaluation design elements, including population stability and participant definitions, can influence the estimated magnitude of program savings for the payer and should be considered carefully. Time specifications can also affect interpretability and usefulness. © Health Research and Educational Trust.
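The difference-in-difference design described above reduces to a simple contrast: the pre-to-post change among program participants minus the change among matched nonparticipants, which nets out shared time trends. A minimal sketch with hypothetical cost figures, not values from the study:

```python
def difference_in_differences(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """DiD estimate of program impact on an outcome such as mean cost.

    Subtracting the control group's change removes trends common to
    both groups; a negative value indicates savings attributable to
    the program (under the parallel-trends assumption).
    """
    return (treat_post - treat_pre) - (ctrl_post - ctrl_pre)

# hypothetical mean monthly Medicaid costs ($ per member) for illustration
savings = difference_in_differences(
    treat_pre=820.0, treat_post=760.0,   # participants: fell by $60
    ctrl_pre=810.0,  ctrl_post=800.0,    # matched nonparticipants: fell by $10
)
# savings = -50.0: a $50 per-member-month reduction beyond the secular trend
```

The design choices the paper examines (stable vs. dynamic enrollment, all-eligible vs. enrolled-only participants) change which people and person-months enter the four cell means, which is why they shift the measured savings.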

  5. The Power of Low Back Pain Trials: A Systematic Review of Power, Sample Size, and Reporting of Sample Size Calculations Over Time, in Trials Published Between 1980 and 2012.

    Science.gov (United States)

    Froud, Robert; Rajendran, Dévan; Patel, Shilpa; Bright, Philip; Bjørkli, Tom; Eldridge, Sandra; Buchbinder, Rachelle; Underwood, Martin

    2017-06-01

A systematic review of nonspecific low back pain trials published between 1980 and 2012. To explore what proportion of trials have been powered to detect different bands of effect size; whether there is evidence that sample size in low back pain trials has been increasing; what proportion of trial reports include a sample size calculation; and whether the likelihood of reporting sample size calculations has increased. Clinical trials should have a sample size sufficient to detect a minimally important difference for a given power and type I error rate. An underpowered trial is one within which the probability of type II error is too high. Meta-analyses do not mitigate underpowered trials. Reviewers independently abstracted data on sample size at the point of analysis, whether a sample size calculation was reported, and year of publication. Descriptive analyses were used to explore the ability to detect effect sizes, and regression analyses to explore the relationship between sample size, or the reporting of sample size calculations, and time. We included 383 trials. One-third were powered to detect a standardized mean difference of less than 0.5, and 5% were powered to detect less than 0.3. The average sample size was 153 people, which increased only slightly (∼4 people/yr) from 1980 to 2000, and declined slightly (∼4.5 people/yr) from 2005 to 2011. The power of low back pain trials and the reporting of sample size calculations may need to be increased. It may be justifiable to power a trial to detect only large effects in the case of novel interventions. Level of evidence: 3.
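The link between effect size and required sample size discussed above follows from the standard normal-approximation formula for comparing two group means: n per arm grows with the inverse square of the standardized effect. A short sketch (textbook formula, not the review's own analysis code):

```python
import math
from statistics import NormalDist

def n_per_group(delta, alpha=0.05, power=0.80):
    """Normal-approximation sample size per arm for a two-sample
    comparison of means with standardized effect size delta:
    n = 2 * (z_{1-alpha/2} + z_{power})^2 / delta^2, rounded up."""
    z = NormalDist()
    z_a = z.inv_cdf(1.0 - alpha / 2.0)   # two-sided type I error
    z_b = z.inv_cdf(power)               # 1 - type II error
    return math.ceil(2.0 * (z_a + z_b) ** 2 / delta ** 2)

print(n_per_group(0.5))   # ~63 per arm for a 'medium' effect (SMD 0.5)
print(n_per_group(0.3))   # ~175 per arm for SMD 0.3
```

With an average of 153 people per trial in total, it is clear from these figures why so few of the reviewed trials were powered to detect effects below 0.3.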

  6. Methods for collecting algal samples as part of the National Water-Quality Assessment Program

    Science.gov (United States)

    Porter, Stephen D.; Cuffney, Thomas F.; Gurtz, Martin E.; Meador, Michael R.

    1993-01-01

    Benthic algae (periphyton) and phytoplankton communities are characterized in the U.S. Geological Survey's National Water-Quality Assessment Program as part of an integrated physical, chemical, and biological assessment of the Nation's water quality. This multidisciplinary approach provides multiple lines of evidence for evaluating water-quality status and trends, and for refining an understanding of the factors that affect water-quality conditions locally, regionally, and nationally. Water quality can be characterized by evaluating the results of qualitative and quantitative measurements of the algal community. Qualitative periphyton samples are collected to develop a list of taxa present in the sampling reach. Quantitative periphyton samples are collected to measure algal community structure within selected habitats. These samples of benthic algal communities are collected from natural substrates, using the sampling methods that are most appropriate for the habitat conditions. Phytoplankton samples may be collected in large nonwadeable streams and rivers to meet specific program objectives. Estimates of algal biomass (chlorophyll content and ash-free dry mass) also are optional measures that may be useful for interpreting water-quality conditions. A nationally consistent approach provides guidance on site, reach, and habitat selection, as well as information on methods and equipment for qualitative and quantitative sampling. Appropriate quality-assurance and quality-control guidelines are used to maximize the ability to analyze data locally, regionally, and nationally.

  7. Preparation and validation of gross alpha/beta samples used in EML's quality assessment program

    International Nuclear Information System (INIS)

    Scarpitta, S.C.

    1997-10-01

    A set of water and filter samples have been incorporated into the existing Environmental Measurements Laboratory's (EML) Quality Assessment Program (QAP) for gross alpha/beta determinations by participating DOE laboratories. The participating laboratories are evaluated by comparing their results with the EML value. The preferred EML method for measuring water and filter samples, described in this report, uses gas flow proportional counters with 2 in. detectors. Procedures for sample preparation, quality control and instrument calibration are presented. Liquid scintillation (LS) counting is an alternative technique that is suitable for quantifying both the alpha ( 241 Am, 230 Th and 238 Pu) and beta ( 90 Sr/ 90 Y) activity concentrations in the solutions used to prepare the QAP water and air filter samples. Three LS counting techniques (Cerenkov, dual dpm and full spectrum analysis) are compared. These techniques may be used to validate the activity concentrations of each component in the alpha/beta solution before the QAP samples are actually prepared

  8. Construction of PWR nuclear cross sections for transient calculations. Test of the ANTI program against TWODIM

    International Nuclear Information System (INIS)

    Thorlaksen, B.

    1981-05-01

    Nuclear cross sections for fuel assemblies of the more recent Westinghouse designs, representing two different PWR reactor cores, are calculated as functions of average fuel temperature, moderator density, and moderator poison concentration. The cross-section functions are verified by referring to Westinghouse power-shape calculations and other analyses. Computations on the side reflector resulted in significantly higher albedo values than used previously for BWRs in similar nodal codes. This led to an investigation of the influence of the internodal coupling coefficients on the power shape. It is concluded that the calculated power shape is strongly dependent on the choice of coupling coefficients. However, it is shown that ''the correct'' set of coupling coefficients depends mostly on the nodal configuration, and that it is fairly independent of the power condition. (author)

  9. Program LATTICE for Calculation of Parameters of Targets with Heterogeneous (Lattice) Structure

    CERN Document Server

    Bznuni, S A; Soloviev, A G; Sosnin, A N

    2002-01-01

    The program LATTICE, which makes it possible to describe lattice structures for the program complex CASCAD, has been created in the C++ language. It is shown that, for a model electronuclear system based on a molten-salt reactor with graphite moderator, the transition from a homogeneous to a heterogeneous structure at the same chemical composition increases k_{eff} by approximately 6%.

  10. Comparison of CFD-calculations of centrifugal compressor stages by NUMECA Fine Turbo and ANSYS CFX programs

    Science.gov (United States)

    Galerkin, Y. B.; Voinov, I. B.; Drozdov, A. A.

    2017-08-01

    Computational Fluid Dynamics (CFD) methods are widely used for centrifugal compressor design and flow analysis. The calculation results depend on the chosen software, turbulence models, and solver settings. Two of the most widely applied programs are NUMECA Fine Turbo and ANSYS CFX. The objects of the study were two different stages. CFD calculations were made for a single blade channel and for full 360-degree flow paths. Stage 1, with a 3D impeller and vaneless diffuser, was tested experimentally. Its flow coefficient is 0.08 and its loading factor is 0.74. For stage 1, calculations were performed with different grid qualities, different numbers of cells, and different turbulence models. The best results were obtained with the Spalart-Allmaras model and a mesh of 1.854 million cells. Stage 2, with a return channel, vaneless diffuser, and 3D impeller, with flow coefficient 0.15 and loading factor 0.5, was designed by the known Universal Modeling Method. Its performances were calculated by the well-identified mathematical model. Stage 2 performances from the CFD calculations shift to higher flow rates in comparison with the design performances. The same result was obtained for stage 1 in comparison with the measured performances. The calculated loading factor is higher in both cases for a single blade channel. The loading-factor performance calculated for the full flow path (“360 degrees”) by ANSYS CFX is in satisfactory agreement with the stage 2 design performance. Maximum efficiency is predicted accurately by the ANSYS CFX “360 degrees” calculation. The “sector” calculation is less accurate. Further research is needed to resolve the performance mismatch.

  11. HETERO code, heterogeneous procedure for reactor calculation; Program Hetero, heterogeni postupak proracuna reaktora

    Energy Technology Data Exchange (ETDEWEB)

    Jovanovic, S M; Raisic, N M [Boris Kidric Institute of Nuclear Sciences Vinca, Beograd (Yugoslavia)

    1966-11-15

    This report describes the procedure for calculating the parameters of a heterogeneous reactor system, taking into account the interaction between fuel elements for a given geometry. The first part contains the analysis of a single fuel element in a diffusion medium, and the criticality condition of the reactor system described by superposition of element interactions. The possibility of performing such an analysis by determination of the heterogeneous system lattice is described in the second part. The computer code HETERO, with the code KETAP (calculation of the criticality factor η_n and flux distribution), is part of this report, together with the example of the RB reactor square lattice.

  12. BOKASUN: A fast and precise numerical program to calculate the Master Integrals of the two-loop sunrise diagrams

    Science.gov (United States)

    Caffo, Michele; Czyż, Henryk; Gunia, Michał; Remiddi, Ettore

    2009-03-01

    We present the program BOKASUN for fast and precise evaluation of the Master Integrals of the two-loop self-mass sunrise diagram for arbitrary values of the internal masses and the external four-momentum. We use a combination of two methods: a Bernoulli accelerated series expansion and a Runge-Kutta numerical solution of a system of linear differential equations. Program summary: Program title: BOKASUN Catalogue identifier: AECG_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AECG_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 9404 No. of bytes in distributed program, including test data, etc.: 104 123 Distribution format: tar.gz Programming language: FORTRAN77 Computer: Any computer with a Fortran compiler accepting FORTRAN77 standard. Tested on various PCs with LINUX Operating system: LINUX RAM: 120 kbytes Classification: 4.4 Nature of problem: Any integral arising in the evaluation of the two-loop sunrise Feynman diagram can be expressed in terms of a given set of Master Integrals, which should be calculated numerically. The program provides a fast and precise evaluation method of the Master Integrals for arbitrary (but not vanishing) masses and arbitrary value of the external momentum. Solution method: The integrals depend on three internal masses and the external momentum squared p. The method is a combination of an accelerated expansion in 1/p in its (pretty large!) region of fast convergence and of a Runge-Kutta numerical solution of a system of linear differential equations. Running time: To obtain 4 Master Integrals on PC with 2 GHz processor it takes 3 μs for series expansion with pre-calculated coefficients, 80 μs for series expansion without pre-calculated coefficients, from a few seconds up to a few minutes for Runge-Kutta method (depending
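The Runge-Kutta half of the method can be illustrated generically. The sketch below is a plain classical fourth-order RK step applied to a simple linear ODE, not BOKASUN's actual solver or its master-integral differential equations:

```python
def rk4_step(f, t, y, h):
    # One classical fourth-order Runge-Kutta step for y' = f(t, y),
    # with y a list (a system of first-order ODEs).
    k1 = f(t, y)
    k2 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k1)])
    k3 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k2)])
    k4 = f(t + h, [yi + h * ki for yi, ki in zip(y, k3)])
    return [yi + h / 6 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

# Accuracy check on y' = -y, y(0) = 1, whose exact solution is exp(-t).
f = lambda t, y: [-y[0]]
y, t, h = [1.0], 0.0, 0.01
for _ in range(100):
    y = rk4_step(f, t, y, h)
    t += h
print(y[0])  # ≈ exp(-1) ≈ 0.3678794
```

The fourth-order global error (here around 1e-10 at h = 0.01) is what makes RK integration practical for propagating the master integrals between kinematic points.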

  13. First-trimester risk calculation for trisomy 13, 18, and 21: comparison of the screening efficiency between 2 locally developed programs and commercial software

    DEFF Research Database (Denmark)

    Sørensen, Steen; Momsen, Günther; Sundberg, Karin

    2011-01-01

    -A) in maternal plasma from unaffected pregnancies. Means and SDs of these parameters in unaffected and affected pregnancies are used in the risk calculation program. Unfortunately, our commercial program for risk calculation (Astraia) did not allow use of local medians. We developed 2 alternative risk...... calculation programs to assess whether the screening efficacies for T13, T18, and T21 could be improved by using our locally estimated medians....

  14. Calculation of the distribution of the number of neutrons escaping from a fissionable sample when one fission neutron is introduced into the sample

    International Nuclear Information System (INIS)

    Dorlet, J.

    1991-01-01

    The algorithm described furnishes the probabilities of obtaining exactly N escaping neutrons in the descent of one fission neutron, using the point reactor model. Calculations can be performed even for N-values greater than 1000. Numerical results show that discrete neutron counting is unsuitable for obtaining the mean value of N, even for deeply subcritical devices. That mean value is still widely used in existing theoretical studies because of its obvious correlation with the effective multiplication coefficient of the device. For that reason, modelling coincidence neutron counting should rely not on the statistical-moments approach, but on the probabilities themselves [fr
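Probabilities of this kind can be reproduced, for a toy point model, by a branching-process recursion. The sketch below assumes every fission emits exactly ν = 2 neutrons and each neutron either escapes (probability q) or induces a fission; this is a simplification for illustration, not the paper's actual algorithm:

```python
def escape_distribution(q, nmax):
    # p[n] = P(exactly n neutrons escape), starting from one neutron.
    # With prob q the neutron escapes (N = 1); otherwise it causes a
    # fission emitting 2 neutrons, so N is the sum of two i.i.d. copies.
    p = [0.0] * (nmax + 1)
    p[1] = q
    for n in range(2, nmax + 1):
        p[n] = (1 - q) * sum(p[k] * p[n - k] for k in range(1, n))
    return p

p = escape_distribution(0.8, 200)       # subcritical: k = 2*(1-q) = 0.4
mean_escapes = sum(n * pn for n, pn in enumerate(p))
print(sum(p), mean_escapes)             # ≈ 1.0 and q/(2q-1) = 1.333...
```

The recursion is well defined because the convolution for p[n] only involves p[k] with k < n; for a subcritical system the truncated distribution sums to 1 to high accuracy, and its mean matches the analytic value q/(2q−1).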

  15. QM/MM hybrid calculation of biological macromolecules using a new interface program connecting QM and MM engines

    Energy Technology Data Exchange (ETDEWEB)

    Hagiwara, Yohsuke; Tateno, Masaru [Graduate School of Pure and Applied Sciences, University of Tsukuba, Tennodai 1-1-1, Tsukuba Science City, Ibaraki 305-8571 (Japan); Ohta, Takehiro [Center for Computational Sciences, University of Tsukuba, Tennodai 1-1-1, Tsukuba Science City, Ibaraki 305-8577 (Japan)], E-mail: tateno@ccs.tsukuba.ac.jp

    2009-02-11

    An interface program connecting a quantum mechanics (QM) calculation engine, GAMESS, and a molecular mechanics (MM) calculation engine, AMBER, has been developed for QM/MM hybrid calculations. A protein-DNA complex is used as a test system to investigate the following two types of QM/MM schemes. In a 'subtractive' scheme, electrostatic interactions between QM/MM regions are truncated in QM calculations; in an 'additive' scheme, long-range electrostatic interactions within a cut-off distance from QM regions are introduced into one-electron integration terms of a QM Hamiltonian. In these calculations, 338 atoms are assigned as QM atoms using Hartree-Fock (HF)/density functional theory (DFT) hybrid all-electron calculations. By comparing the results of the additive and subtractive schemes, it is found that electronic structures are perturbed significantly by the introduction of MM partial charges surrounding QM regions, suggesting that biological processes occurring in functional sites are modulated by the surrounding structures. This also indicates that the effects of long-range electrostatic interactions involved in the QM Hamiltonian are crucial for accurate descriptions of electronic structures of biological macromolecules.

  16. The Calculation of Standard Enthalpies of Formation of Alkanes: Illustrating Molecular Mechanics and Spreadsheet Programs

    Science.gov (United States)

    Hawk, Eric Leigh

    1999-02-01

    How group increment methods may be used to predict standard enthalpies of formation of alkanes is outlined as an undergraduate computational chemistry experiment. The experiment requires input and output data sets. Although users may create their own data sets, both sets are provided. The input data set contains experimentally determined gas-phase standard enthalpies of formation and calculated steric energies for 10 alkanes. The steric energy for an alkane is calculated via a Molecular Mechanics approach employing Allinger's MM3 force field. Linear regression analysis on data contained in the input data set generates the coefficients that are used with the output data set to calculate standard enthalpies of formation for 15 alkanes. The average absolute error for the calculated standard enthalpies of formation is 1.22 kcal/mol. The experiment is highly suited to those interested in incorporating more computational chemistry in their curricula. In this regard, it is ideally suited for a physical chemistry laboratory, but it may be used in an organic chemistry course as well.
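The regression step, fitting coefficients that map calculated steric energies onto experimental enthalpies of formation, can be sketched with ordinary least squares. The numbers below are invented for illustration and generated exactly from an assumed line; they are not Allinger MM3 data:

```python
def fit_line(xs, ys):
    # Ordinary least squares for dHf ≈ slope*SE + intercept.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sxy / sxx
    return slope, my - slope * mx

# Hypothetical steric energies (kcal/mol) and dHf values constructed
# exactly from dHf = 1.5*SE - 40, so the fit recovers those coefficients.
steric = [2.0, 3.2, 4.1, 5.5, 6.3]
dhf = [1.5 * se - 40.0 for se in steric]
slope, intercept = fit_line(steric, dhf)
pred = slope * 4.8 + intercept  # predict dHf for a new alkane's SE
print(slope, intercept, pred)
```

In the actual experiment the regression is over real experimental enthalpies, so the residuals (about 1.22 kcal/mol on average per the abstract) measure the quality of the group-increment model.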

  17. Using the Wolfsberg--Schactschneider program to calculate equilibrium constants for isotopic acetylenes

    International Nuclear Information System (INIS)

    Liu, D.K.K.; Pyper, J.W.

    1977-01-01

    Equilibrium constants were calculated for the gas-phase isotopic exchange reactions C2H2 + C2D2 = 2C2HD and C2H2 + D2O = C2D2 + H2O at temperatures ranging from 40 to 2000 K. No corrections to the harmonic approximation were made. The results agree quite well with experimental measurements.

  18. Elasto-plastic benchmark calculations. Step 1: verification of the numerical accuracy of the computer programs

    International Nuclear Information System (INIS)

    Corsi, F.

    1985-01-01

    In connection with the design of nuclear reactor components operating at elevated temperature, design criteria need a level of realism in the prediction of inelastic structural behaviour. This concept leads to the necessity of developing nonlinear computer programs and, as a consequence, to the problems of verification and qualification of these tools. Benchmark calculations allow these two actions to be carried out, bringing at the same time an increased level of confidence in the analysis of complex phenomena and in inelastic design calculations. With the financial and programmatic support of the Commission of the European Communities (CEE), a programme of elasto-plastic benchmark calculations relevant to the design of structural components for LMFBR has been undertaken by those Member States which are developing a fast reactor project. Four principal progressive aims were initially pointed out, which led to the decision to subdivide the benchmark effort into a series of four sequential steps: step 1 to step 4. The present document summarizes Step 1 of the benchmark exercise, derives some conclusions on Step 1 by comparing the results obtained with the various codes, and points out some concluding comments on the first action. It should be pointed out that although the work was designed to test the capabilities of the computer codes, another aim was to increase the skill of the users concerned

  19. The utilization of Quabox/Cubox computer program for calculating Angra 1 Reactor core

    International Nuclear Information System (INIS)

    Pina, C.M. de.

    1981-01-01

    The utilization of the Quabox/Cubox computer codes for calculating the Angra 1 reactor core is studied. The results show a dependency between the CPU time spent and the accuracy of the thermal power distribution as a function of the polynomial expansion used. Comparisons were made with the Citation code and with some results from Westinghouse [pt

  20. VMD-SS: A graphical user interface plug-in to calculate the protein secondary structure in VMD program.

    Science.gov (United States)

    Yahyavi, Masoumeh; Falsafi-Zadeh, Sajad; Karimi, Zahra; Kalatarian, Giti; Galehdari, Hamid

    2014-01-01

    The investigation on the types of secondary structure (SS) of a protein is important. The evolution of secondary structures during molecular dynamics simulations is a useful parameter to analyze protein structures. Therefore, it is of interest to describe VMD-SS (a software program) for the identification of secondary structure elements and its trajectories during simulation for known structures available at the Protein Data Bank (PDB). The program helps to calculate (1) percentage SS, (2) SS occurrence in each residue, (3) percentage SS during simulation, and (4) percentage residues in all SS types during simulation. The VMD-SS plug-in was designed using TCL script and stride to calculate secondary structure features. The database is available for free at http://science.scu.ac.ir/HomePage.aspx?TabID=13755.

  1. CAL3JHH: a Java program to calculate the vicinal coupling constants (3J H,H) of organic molecules.

    Science.gov (United States)

    Aguirre-Valderrama, Alonso; Dobado, José A

    2008-12-01

    Here, we present a free web-accessible application, developed in the JAVA programming language, for the calculation of vicinal coupling constants (3J(H,H)) of organic molecules with the H-Csp3-Csp3-H fragment. This JAVA applet is oriented to assist chemists in structural and conformational analyses, allowing the user to calculate the averaged 3J(H,H) values among conformers according to their Boltzmann populations. Thus, the CAL3JHH program uses the Haasnoot-Leeuw-Altona equation and, by reading the molecule geometry from a Protein Data Bank (PDB) file format or from multiple PDB files, automatically detects all the coupled hydrogens, evaluating the data needed for this equation. Moreover, a "Graphical viewer" menu allows the display of the results on the 3D molecule structure, as well as the plotting of the Newman projection for the couplings.
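The Boltzmann-weighted averaging over conformers that CAL3JHH performs can be sketched as follows. The two conformer couplings and relative energies below are hypothetical values, and the program's actual Haasnoot-Leeuw-Altona machinery is not reproduced here:

```python
from math import exp

def boltzmann_average_J(J_values, energies_kcal, T=298.15):
    # Population-weighted <3J(H,H)> over conformers, with conformer
    # energies in kcal/mol relative to any common reference.
    R = 0.0019872041  # gas constant, kcal/(mol*K)
    e0 = min(energies_kcal)
    w = [exp(-(e - e0) / (R * T)) for e in energies_kcal]
    return sum(J * wi for J, wi in zip(J_values, w)) / sum(w)

# Two hypothetical conformers: anti (J = 11.5 Hz, 0.0 kcal/mol) and
# gauche (J = 3.5 Hz, 0.6 kcal/mol above the anti form).
avg = boltzmann_average_J([11.5, 3.5], [0.0, 0.6])
print(avg)
```

Because the anti conformer dominates the population at 298 K, the averaged coupling lands much closer to 11.5 Hz than to 3.5 Hz.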

  2. SPLET - A program for calculating the space-lethargy distribution of epithermal neutrons in a reactor lattice cell

    International Nuclear Information System (INIS)

    Matausek, M.V.; Zmijatevic, I.

    1981-01-01

    A procedure to solve the space-single-lethargy dependent transport equation for epithermal neutrons in a cylindricized multi-region reactor lattice cell was developed and proposed in earlier papers. Here, the computational algorithm is presented and the computing program SPLET, which calculates the space-lethargy distribution of the spherical harmonics neutron flux moments, as well as related integral quantities such as reaction rates and resonance integrals, is described. (author)

  3. PCXMC. A PC-based Monte Carlo program for calculating patient doses in medical x-ray examinations

    International Nuclear Information System (INIS)

    Tapiovaara, M.; Lakkisto, M.; Servomaa, A.

    1997-02-01

    The report describes PCXMC, a Monte Carlo program for calculating patients' organ doses and the effective dose in medical x-ray examinations. The organs considered are: the active bone marrow, adrenals, brain, breasts, colon (upper and lower large intestine), gall bladder, heart, kidneys, liver, lungs, muscle, oesophagus, ovaries, pancreas, skeleton, skin, small intestine, spleen, stomach, testes, thymus, thyroid, urinary bladder, and uterus. (42 refs.)

  4. ORBITALES. A program for the calculation of wave functions with an analytical central potential; ORBITALES. Programa de calculo de Funciones de Onda para una Potencial Central Analitico

    Energy Technology Data Exchange (ETDEWEB)

    Carretero, Yunta; Rodriguez Mayquez, E

    1974-07-01

    In this paper the objective, basis, implementation in the FORTRAN language, and use of the program ORBITALES are described. This program calculates atomic wave functions in the case of an analytical central potential (Author) 8 refs.

  5. The Computer Program LIAR for Beam Dynamics Calculations in Linear Accelerators

    International Nuclear Information System (INIS)

    Assmann, R.W.; Adolphsen, C.; Bane, K.; Raubenheimer, T.O.; Siemann, R.H.; Thompson, K.

    2011-01-01

    Linear accelerators are the central components of the proposed next generation of linear colliders. They need to provide acceleration of up to 750 GeV per beam while maintaining very small normalized emittances. Standard simulation programs, mainly developed for storage rings, do not meet the specific requirements for high energy linear accelerators. We present a new program LIAR ('LInear Accelerator Research code') that includes wakefield effects, a 6D coupled beam description, specific optimization algorithms and other advanced features. Its modular structure allows it to be used and extended easily for different purposes. The program is available for UNIX workstations and Windows PCs. It can be applied to a broad range of accelerators. We present examples of simulations for SLC and NLC.

  6. Sample collection: an overview of the Hydrogeochemical and Stream Sediment Reconnaissance Program

    International Nuclear Information System (INIS)

    Bolivar, S.L.

    1979-01-01

    A Hydrogeochemical and Stream Sediment Reconnaissance (HSSR) for uranium is currently being conducted throughout the conterminous United States and Alaska. The HSSR is part of the National Uranium Resource Evaluation sponsored by the US Department of Energy. This ambitious geochemical reconnaissance program is conducted by four national laboratories: Los Alamos Scientific Laboratory, Lawrence Livermore Laboratory, Oak Ridge Gaseous Diffusion Plant, and Savannah River Laboratory. The program is based on an extensive review of world literature, reconnaissance work done in other countries, and pilot studies conducted by each laboratory. Sample-collection methods and sample density are determined to optimize the probability of detecting potential uranium mineralization. To achieve this aim, each laboratory has developed independent standardized field collection procedures that are designed for its section of the country. Field parameters such as pH, conductivity, climate, geography, and geology are recorded at each site. Most samples are collected at densities of one sample site per 10 to 23 km². The HSSR program has helped to improve existing hydrogeochemical reconnaissance exploration techniques. In addition to providing industry with data that may help to identify potential uranium districts and to extend known uranium provinces, the HSSR also provides multi-element analytical data, which can be used in water quality, soil, sediment, environmental, and base-metal exploration studies

  7. The Euler’s Graphical User Interface Spreadsheet Calculator for Solving Ordinary Differential Equations by Visual Basic for Application Programming

    Science.gov (United States)

    Gaik Tay, Kim; Cheong, Tau Han; Foong Lee, Ming; Kek, Sie Long; Abdul-Kahar, Rosmila

    2017-08-01

    In previous work on an Euler's spreadsheet calculator for solving an ordinary differential equation, Visual Basic for Application (VBA) programming was used; however, a graphical user interface was not developed to capture user input. This weakness may confuse users, since input and output are displayed in the same worksheet. Besides, the existing Euler's spreadsheet calculator is not interactive, as there is no prompt message if there is a mistake in inputting the parameters. On top of that, there are no user instructions to guide users in inputting the derivative function. Hence, in this paper, we address these limitations by developing a user-friendly and interactive graphical user interface. This improvement aims to capture user input, with instructions and interactive error prompts, by using VBA programming. This Euler's graphical user interface spreadsheet calculator does not act as a black box, as users can click on any cell in the worksheet to see the formula used to implement the numerical scheme. In this way, it can enhance self-learning and life-long learning in implementing the numerical scheme in a spreadsheet and later in any programming language.
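Independent of the spreadsheet interface, the numerical scheme being implemented is the classic Euler step y_{k+1} = y_k + h·f(x_k, y_k). A minimal sketch (the test problem is arbitrary, not from the paper):

```python
def euler(f, x0, y0, h, n):
    # Explicit Euler: advance y_{k+1} = y_k + h*f(x_k, y_k) for n steps.
    xs, ys = [x0], [y0]
    for _ in range(n):
        y0 = y0 + h * f(x0, y0)
        x0 = x0 + h
        xs.append(x0)
        ys.append(y0)
    return xs, ys

# dy/dx = y, y(0) = 1, h = 0.1 over [0, 1]; exact answer y(1) = e ≈ 2.71828.
xs, ys = euler(lambda x, y: y, 0.0, 1.0, 0.1, 10)
print(ys[-1])  # 1.1**10 ≈ 2.5937, illustrating Euler's first-order error
```

Each loop iteration corresponds to one spreadsheet row, which is why the scheme maps so naturally onto a worksheet with visible cell formulas.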

  8. SCINFI II A program to calculate the standardization curve in liquid scintillation counting

    Energy Technology Data Exchange (ETDEWEB)

    Grau Carles, A.; Grau Malonda, A.

    1985-07-01

    A code, SCINFI II, written in BASIC, has been developed to compute the efficiency-quench standardization curve for any beta radionuclide. The free parameter method has been applied. The program requires the standardization curve for ³H and the polynomial or tabulated relation between counting efficiency and figure of merit for both ³H and the problem radionuclide. The program is applied to the computation of the counting efficiency for different values of quench when the problem radionuclide is ¹⁴C. The results of four different computation methods are compared. (Author) 17 refs.

  9. SCINFI II A program to calculate the standardization curve in liquid scintillation counting

    International Nuclear Information System (INIS)

    Grau Carles, A.; Grau Malonda, A.

    1985-01-01

    A code, SCINFI II, written in BASIC, has been developed to compute the efficiency-quench standardization curve for any beta radionuclide. The free parameter method has been applied. The program requires the standardization curve for ³H and the polynomial or tabulated relation between counting efficiency and figure of merit for both ³H and the problem radionuclide. The program is applied to the computation of the counting efficiency for different values of quench when the problem radionuclide is ¹⁴C. The results of four different computation methods are compared. (Author) 17 refs.
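The use of an efficiency-quench standardization curve can be illustrated with a quadratic through three calibration points. The calibration numbers below are invented, and SCINFI II's actual free-parameter method is not reproduced:

```python
def quench_efficiency(q, pts):
    # Lagrange quadratic through three (quench, efficiency) calibration
    # points -- a minimal stand-in for a standardization curve.
    (x0, y0), (x1, y1), (x2, y2) = pts
    return (y0 * (q - x1) * (q - x2) / ((x0 - x1) * (x0 - x2))
            + y1 * (q - x0) * (q - x2) / ((x1 - x0) * (x1 - x2))
            + y2 * (q - x0) * (q - x1) / ((x2 - x0) * (x2 - x1)))

# Hypothetical 14C calibration: efficiency falls as quench increases.
pts = [(0.1, 0.95), (0.3, 0.83), (0.5, 0.63)]
eff = quench_efficiency(0.25, pts)  # efficiency at a measured quench of 0.25
dpm = 1000.0 / eff                  # convert 1000 cpm to disintegrations/min
print(eff, dpm)
```

Once the curve is established, every sample's observed count rate is divided by the efficiency read off at its measured quench level.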

  10. A note on power and sample size calculations for the Kruskal-Wallis test for ordered categorical data.

    Science.gov (United States)

    Fan, Chunpeng; Zhang, Donghui

    2012-01-01

    Although the Kruskal-Wallis test has been widely used to analyze ordered categorical data, power and sample size methods for this test have been investigated to a much lesser extent when the underlying multinomial distributions are unknown. This article generalizes the power and sample size procedures proposed by Fan et al. (2011) for continuous data to ordered categorical data, when estimates from a pilot study are used in place of knowledge of the true underlying distribution. Simulations show that the proposed power and sample size formulas perform well. A myelin oligodendrocyte glycoprotein (MOG) induced experimental autoimmune encephalomyelitis (EAE) mouse study is used to demonstrate the application of the methods.
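The underlying idea of plugging pilot-study category probabilities into repeated Kruskal-Wallis tests and counting rejections can be sketched as a pure-Python Monte Carlo power estimate. This is a generic simulation, not the closed-form procedure of Fan et al., and the pilot probabilities are invented:

```python
import random
from collections import Counter

def midranks(data):
    # Average (mid-) ranks for tied ordered-category observations.
    counts = Counter(data)
    ranks, r = {}, 0
    for v in sorted(counts):
        t = counts[v]
        ranks[v] = r + (t + 1) / 2.0
        r += t
    return [ranks[v] for v in data]

def kw_H(groups):
    # Kruskal-Wallis H statistic with the standard tie correction.
    data = [x for g in groups for x in g]
    N = len(data)
    r = midranks(data)
    H, i = 0.0, 0
    for g in groups:
        n = len(g)
        H += sum(r[i:i + n]) ** 2 / n
        i += n
    H = 12.0 / (N * (N + 1)) * H - 3 * (N + 1)
    tie = 1 - sum(t ** 3 - t for t in Counter(data).values()) / (N ** 3 - N)
    return H / tie

def simulated_power(group_probs, n=25, reps=400, crit=5.991, seed=1):
    # crit = chi-square(df=2) 0.95 quantile, i.e. three groups assumed.
    rng = random.Random(seed)
    cats = list(range(1, len(group_probs[0]) + 1))
    rej = 0
    for _ in range(reps):
        groups = [rng.choices(cats, weights=p, k=n) for p in group_probs]
        if kw_H(groups) > crit:
            rej += 1
    return rej / reps

# Hypothetical pilot estimates over 5 ordered categories, 3 groups:
p_alt = [[0.4, 0.3, 0.15, 0.1, 0.05], [0.2] * 5, [0.05, 0.1, 0.15, 0.3, 0.4]]
print(simulated_power(p_alt))
```

Under identical group distributions the rejection rate should hover near the nominal 0.05, while the shifted alternative above yields high power at n = 25 per group.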

  11. REGRES: A FORTRAN-77 program to calculate nonparametric and ``structural'' parametric solutions to bivariate regression equations

    Science.gov (United States)

    Rock, N. M. S.; Duffy, T. R.

    REGRES allows a range of regression equations to be calculated for paired sets of data values in which both variables are subject to error (i.e. neither is the "independent" variable). Nonparametric regressions, based on medians of all possible pairwise slopes and intercepts, are treated in detail. Estimated slopes and intercepts are output, along with confidence limits and Spearman and Kendall rank correlation coefficients. Outliers can be rejected with user-determined stringency. Parametric regressions can be calculated for any value of λ (the ratio of the variances of the random errors for y and x), including: (1) major axis (λ = 1); (2) reduced major axis (λ = variance of y/variance of x); (3) Y on X (λ = ∞); or (4) X on Y (λ = 0) solutions. Pearson linear correlation coefficients also are output. REGRES provides an alternative to conventional isochron assessment techniques where bivariate normal errors cannot be assumed, or weighting methods are inappropriate.

  12. Reassessment of calculation of effective dose equivalent for the CRCN-CO Environmental Radiological Monitoring Program

    International Nuclear Information System (INIS)

    Carneiro, L.B.; Dourado, M.A.; Barbosa, R.C.

    2017-01-01

    The aim is to reassess the calculations of the effective dose equivalent in order to obtain dosimetry data, and to compare the data of several techniques that record radiation doses originating from the cosmogenic and terrestrial contributions that make up the so-called background radiation. The basic information to be obtained is the contribution of the difference between the terrestrial dose equivalents, even at the lowest concentrations of primordial radionuclides, and the dose equivalent deduced from TLD readings. (author)

  13. ANLECIS-1: Version of the ANLECIS Program for Calculations with the Asymmetric Rotational Model

    International Nuclear Information System (INIS)

    Lopez Mendez, R.; Garcia Moruarte, F.

    1986-01-01

    A new modified version of the ANLECIS code is reported. This version allows simultaneous fitting of the cross section of the direct process with the asymmetric rotational model and of the cross section of the compound nucleus process with the Hauser-Feshbach formalism including modern statistical corrections. Calculations based on this version show a dependence of the compound nucleus cross section on the asymmetry parameter γ. (author). 19 refs

  14. Program realization of mathematical model of kinematic calculation of flat lever mechanisms

    Directory of Open Access Journals (Sweden)

    M. A. Vasechkin

    2016-01-01

    Full Text Available The kinematic calculation of mechanisms is very time-consuming work. Because it contains a large number of similar operations, it can be automated using computers. For this purpose, a software implementation of the mathematical model for the kinematic calculation of second-class mechanisms is necessary. The article presents, in Turbo Pascal, the text of a module containing library procedures for all kinematic studies of planar lever mechanisms of the second class. The determination of the kinematic characteristics of a mechanism and the construction of its position, velocity, and acceleration plans are carried out on the example of a six-link mechanism. The origin of the fixed coordinate system coincides with the axis of rotation of the crank AB. It is assumed that the lengths of all links, the positions of all additional points of the links, and the coordinates of all kinematic pairs of the mechanism frame are known; that is, this stage of the work, determining the kinematics of the mechanism, must be preceded by a synthesis stage (determination of the missing link dimensions). Specifying the coordinates of point C, and considering that the velocity and acceleration analogues of this point are 0 (a stationary point), the procedure that computes the kinematics of an Assur group (GA) of the third class is called. The kinematic parameters of point D are then specified; taking the beginning of the slider guide E at point C, with the angle, the angular-velocity analogue, and the angular-acceleration analogue of the guide equal to zero, and knowing the length of the connecting rod DE and the length of link 5, the procedure for a GA of the second class is called. The use of the library routines of the kinematic-calculation module makes it relatively simple to organize a simulation of the mechanism's motion, to calculate the projections of the velocity and acceleration analogues of all links of the mechanism, and to build velocity and acceleration plans at each position of the mechanism.
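The core geometric step in the position analysis of a second-class (RRR dyad) group is intersecting two circles whose radii are the link lengths. A minimal sketch in Python, not the Turbo Pascal module itself:

```python
from math import hypot, sqrt

def dyad_positions(A, B, la, lb):
    # Position solution of an RRR dyad: find joint C with |AC| = la and
    # |BC| = lb, i.e. the intersection of two circles centred at A and B.
    ax, ay = A
    bx, by = B
    d = hypot(bx - ax, by - ay)
    a = (la ** 2 - lb ** 2 + d ** 2) / (2 * d)  # foot of altitude along AB
    h2 = la ** 2 - a ** 2
    if h2 < 0:
        raise ValueError("links cannot close the dyad at this position")
    h = sqrt(h2)
    mx, my = ax + a * (bx - ax) / d, ay + a * (by - ay) / d
    ux, uy = (by - ay) / d, -(bx - ax) / d  # unit normal to AB
    # Two assembly configurations (elbow-up / elbow-down):
    return (mx + h * ux, my + h * uy), (mx - h * ux, my - h * uy)

# Example: A = (0,0), B = (4,0), equal link lengths sqrt(8) -> C = (2, ±2).
sols = dyad_positions((0, 0), (4, 0), 8 ** 0.5, 8 ** 0.5)
print(sols)
```

Sweeping the crank angle and re-solving the dyad at each step is what produces the position plans; velocity and acceleration analogues follow by differentiating the same loop equations.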

  15. Reassessment of calculation of effective dose equivalent for the CRCN-CO Environmental Radiological Monitoring Program

    Energy Technology Data Exchange (ETDEWEB)

    Carneiro, L.B.; Dourado, M.A.; Barbosa, R.C., E-mail: research.photonics@gmail.com [Centro Regional de Ciências Nucleares do Centro-Oeste (CRCN-CO/CNEN-GO), Abadia de Goiás, GO (Brazil)

    2017-07-01

    The aim is to reassess the calculations of the effective dose equivalent in order to obtain dosimetry data and to compare data from several techniques that record radiation doses arising from the cosmogenic and terrestrial contributions which make up the so-called background radiation. The basic information to be obtained is the difference between the terrestrial dose equivalent, even at the lowest concentration of primordial radionuclides, and the dose equivalent deduced from TLD readings. (author)

  16. Characterization program: management and calculation of the isotopic, radiological and thermal inventory of irradiated fuel at the Cofrentes nuclear power plant

    International Nuclear Information System (INIS)

    Albendea, M.; Diego, J. L. de; Urrea, M.

    2012-01-01

    Characterization is a very detailed and user-friendly program that takes into account the individual, actual irradiation history of all the fuel, including the interim discharge and recharge periods and the cycles in which assemblies were not used.

  17. Universal algorithms and programs for calculating the motion parameters in the two-body problem

    Science.gov (United States)

    Bakhshiyan, B. T.; Sukhanov, A. A.

    1979-01-01

    The algorithms and FORTRAN programs for computing positions and velocities, orbital elements and first and second partial derivatives in the two-body problem are presented. The algorithms are applicable for any value of eccentricity and are convenient for computing various navigation parameters.
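    The paper's FORTRAN routines use a formulation valid for any eccentricity; as a simpler illustration of the same two-body machinery, the classical elliptic case can be sketched in Python (the function names and tolerance are illustrative, not taken from the paper):

```python
import math

def kepler_E(M, e, tol=1e-12):
    """Solve Kepler's equation M = E - e*sin(E) for the eccentric
    anomaly E by Newton iteration (elliptic case, 0 <= e < 1)."""
    E = M if e < 0.8 else math.pi   # standard starting guess
    for _ in range(60):
        dE = (E - e * math.sin(E) - M) / (1.0 - e * math.cos(E))
        E -= dE
        if abs(dE) < tol:
            return E
    raise RuntimeError("Kepler iteration did not converge")

def radius(a, e, E):
    """Orbital radius from the eccentric anomaly: r = a*(1 - e*cos(E))."""
    return a * (1.0 - e * math.cos(E))
```

    The universal-variable approach of the paper replaces the eccentric anomaly with a variable that remains well defined for parabolic and hyperbolic orbits, which is what makes the published algorithms applicable for any value of eccentricity.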

  18. Calculation of quantities of interest in high energy physics using Visual Basic 3.0. The Flow Tensor Program

    International Nuclear Information System (INIS)

    Besliu, C.; Jipa, A.; Zaharia, R.

    1995-01-01

    The flow tensor is an important physical quantity in high energy physics. The program used for its calculation is a complex menu-driven application that allows the selection of various triggers for the studied reaction, of the number of tracks in each studied event, of the momentum cut values for the particles or fragments produced in each event, etc. We implemented all these requirements using a modern and powerful tool: Visual Basic 3.0. This programming system allows the creation of Windows programs and offers numerous facilities: OLE (Object Linking and Embedding), professional graphics, database access, and the creation and compilation of Windows help files. All these advantages make the effort of learning Visual Basic 3.0 worthwhile. (author)
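    A minimal Python sketch of the quantity such a program evaluates after the trigger and momentum cuts are applied (the weight 1/|p| is one common convention, assumed here; `flow_tensor` is an illustrative name, not from the original VB code):

```python
import math

def flow_tensor(momenta):
    """Kinematic flow tensor F_ij = sum_n w_n * p_i(n) * p_j(n) built
    from the track momenta of one event, with weight w_n = 1/|p(n)|."""
    F = [[0.0] * 3 for _ in range(3)]
    for p in momenta:
        norm = math.sqrt(p[0] * p[0] + p[1] * p[1] + p[2] * p[2])
        if norm == 0.0:
            continue  # skip tracks with zero momentum
        for i in range(3):
            for j in range(3):
                F[i][j] += p[i] * p[j] / norm
    return F
```

    The resulting 3x3 tensor is symmetric; diagonalizing it (omitted here) yields the principal axes of the event's momentum flow, from which flow angles and shape variables are derived.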

  19. Lunar and Meteorite Sample Education Disk Program - Space Rocks for Classrooms, Museums, Science Centers, and Libraries

    Science.gov (United States)

    Allen, Jaclyn; Luckey, M.; McInturff, B.; Huynh, P.; Tobola, K.; Loftin, L.

    2010-01-01

    NASA is eager for students and the public to experience lunar Apollo samples and meteorites first hand. Lunar rocks and soil, embedded in Lucite disks, are available for educators to use in their classrooms, museums, science centers, and public libraries for education activities and display. The sample education disks are valuable tools for engaging students in the exploration of the Solar System. Scientific research conducted on the Apollo rocks reveals the early history of our Earth-Moon system, and meteorites reveal much of the history of the early solar system. The rocks help educators make the connections to this ancient history of our planet and solar system and to the basic processes of accretion, differentiation, impact and volcanism. With these samples, educators in museums, science centers, libraries, and classrooms can help students and the public understand the key questions pursued by many NASA planetary missions. The Office of the Curator at Johnson Space Center is in the process of reorganizing and renewing the Lunar and Meteorite Sample Education Disk Program to increase reach, security and accountability. The new program expands the reach of these exciting extraterrestrial rocks through increased access to training and educator borrowing. One of the expanded opportunities is that trained, certified educators from science centers, museums, and libraries may now borrow the extraterrestrial rock samples. Previously the loan program was only open to classroom educators, so the expansion will increase public access to the samples and allow educators to make the critical connections to the exciting exploration missions taking place in our solar system. Each Lunar Disk contains three lunar rocks and three regolith soils embedded in Lucite. The anorthosite sample is a part of the magma ocean formed on the surface of the Moon in the early melting period, the basalt is part of the extensive lunar mare lava flows, and the breccia sample is an important example of the

  20. Evaluation of ΔGsub(f) values for unstable compounds: a Fortran program for the calculation of ternary phase equilibria

    International Nuclear Information System (INIS)

    Throop, G.J.; Rogl, P.; Rudy, E.

    1978-01-01

    A Fortran IV program was set up for the calculation of phase equilibria and tie-line distributions in ternary systems of the type transition metal-transition metal-nonmetal (interstitial solid solutions). The method offers the possibility of determining the thermodynamic values for unstable compounds through their influence upon ternary phase equilibria. The variation of the free enthalpy of formation of ternary solid solutions is calculated as a function of nonmetal content, thus describing the actual curvature of the phase boundaries. The integral and partial molar free enthalpies of formation of binary nonstoichiometric compounds and of phase solutions are expressed as analytical functions of the nonmetal content within their homogeneity range. The coefficients of these analytical expressions are obtained by using either the Wagner-Schottky vacancy model or polynomials of second order in composition (parabolic approach). The free energy of formation ΔGsub(f) has been calculated for the systems Ti-C, Zr-C, and Ta-C. Calculations of the ternary phase equilibria yielded ΔGsub(f) values for the unstable compounds Ti₂C at 1500 °C and Zr₂C at 1775 °C of -22.3 and 22.7 kcal per g-atom metal, respectively. These values were used for the calculation of isothermal sections of the ternary systems Ti-Ta-C (at 1500 °C) and Zr-Ta-C (at 1775 °C). The ideal case of ternary phase solutions is extended to regular solutions. (author)
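    The "parabolic approach" amounts to representing ΔG within a homogeneity range by a second-order polynomial in nonmetal content, fixed by three known (composition, free-enthalpy) points. The original Fortran IV code is not given in the abstract; a Python sketch of that fitting step (function name and the numbers in the test are illustrative, not values from the paper):

```python
def parabola_through(p0, p1, p2):
    """Coefficients (a, b, c) of G(x) = a + b*x + c*x**2 through three
    (composition, free-enthalpy) points, i.e. the 'parabolic approach'
    for ΔG within a phase's homogeneity range."""
    (x0, y0), (x1, y1), (x2, y2) = p0, p1, p2
    # Lagrange interpolation expanded to monomial coefficients
    d0 = (x0 - x1) * (x0 - x2)
    d1 = (x1 - x0) * (x1 - x2)
    d2 = (x2 - x0) * (x2 - x1)
    c = y0 / d0 + y1 / d1 + y2 / d2
    b = -(y0 * (x1 + x2) / d0 + y1 * (x0 + x2) / d1 + y2 * (x0 + x1) / d2)
    a = y0 * x1 * x2 / d0 + y1 * x0 * x2 / d1 + y2 * x0 * x1 / d2
    return a, b, c

def partial_molar(b, c, x):
    """Partial molar quantity dG/dx = b + 2*c*x from the fitted parabola."""
    return b + 2.0 * c * x
```

    With the integral free enthalpy in this analytical form, the partial molar derivative is available in closed form, which is what the common-tangent construction for the ternary phase boundaries requires.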