WorldWideScience

Sample records for materials statistical calculators

  1. Nuclear material statistical accountancy system

    International Nuclear Information System (INIS)

    Argentest, F.; Casilli, T.; Franklin, M.

    1979-01-01

    The statistical accountancy system developed at JRC Ispra is referred to as 'NUMSAS', i.e. Nuclear Material Statistical Accountancy System. The principal feature of NUMSAS is that, in addition to an ordinary material balance calculation, it can calculate an estimate of the standard deviation of the measurement error accumulated in the material balance calculation. The purpose of the report is to describe in detail the statistical model on which the standard deviation calculation is based; the computational formula used by NUMSAS in calculating the standard deviation; and the information about nuclear material measurements and the plant measurement system which is required as input data for NUMSAS. The material balance records require processing and interpretation before the material balance calculation is begun. The material balance calculation is the last of four phases of data processing undertaken by NUMSAS, each implemented by a different computer program: the pre-processing phase, the selection and update phase, the transformation phase, and the computation phase.
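    The error-propagation idea described above can be sketched in a few lines. This is an illustrative sketch, not NUMSAS itself: it assumes independent measurement errors, each entry given as a measured mass with a relative standard deviation, and all names and numbers are invented for the example.

    ```python
    import math

    def muf_std(openings, inputs, outputs, closings):
        """Material balance (MUF = opening + input - output - closing) and
        the standard deviation of its accumulated measurement error,
        assuming independent errors. Each argument is a list of
        (measured_value, relative_std) pairs."""
        def total(entries):
            return sum(v for v, _ in entries)
        def var(entries):
            return sum((v * r) ** 2 for v, r in entries)
        muf = total(openings) + total(inputs) - total(outputs) - total(closings)
        sigma = math.sqrt(var(openings) + var(inputs) + var(outputs) + var(closings))
        return muf, sigma

    # Illustrative numbers (kg of heavy metal, relative std of each measurement)
    muf, sigma = muf_std(
        openings=[(120.0, 0.01)],
        inputs=[(40.0, 0.02)],
        outputs=[(35.0, 0.02)],
        closings=[(124.5, 0.01)],
    )
    ```

    A MUF of 0.5 kg with sigma near 2 kg would not, by itself, indicate a loss; comparing MUF against its standard deviation is the point of the calculation.
    
    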

  2. Implementation of the INSPECT software package for statistical calculation in nuclear material accountability

    International Nuclear Information System (INIS)

    Marzo, M.A.S.

    1986-01-01

    The INSPECT software package was developed at the Pacific Northwest Laboratory for statistical calculations in nuclear material accountability. The programs apply the inspection and evaluation methodology described in the relevant Part of the Safeguards Technical Manual. This paper describes the implementation of INSPECT at the Safeguards Division of CNEN and the main characteristics of the package, and presents its potential applications to nuclear material accountability. (Author)

  3. Statistical calculation of hot channel factors

    International Nuclear Information System (INIS)

    Farhadi, K.

    2007-01-01

    It is conventional practice in the design of nuclear reactors to introduce hot channel factors to allow for spatial variations of power generation and flow distribution. Consequently, it is not enough to be able to calculate the nominal temperature distributions of the fuel element, cladding, coolant, and fuel centre. Indeed, one must be able to calculate the probability that the imposed temperature or heat flux limits are not exceeded anywhere in the core. In this paper, statistical methods are used to calculate hot channel factors for the particular case of a heterogeneous Material Testing Reactor (MTR), and the results obtained from different statistical methods are compared. It is shown that, among the statistical methods available, the semi-statistical method is the most reliable.
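    A common textbook form of the semi-statistical combination, sketched below on invented subfactor values, treats systematic subfactors multiplicatively and combines random subfactors as a root-sum-square of their deviations from unity. This is an assumed illustration, not necessarily the paper's exact formulation.

    ```python
    import math

    def hot_channel_factor(systematic, random_subfactors):
        """Semi-statistical hot channel factor: systematic subfactors
        multiply directly; random subfactors combine statistically as
        the root-sum-square of (f - 1)."""
        f_sys = 1.0
        for f in systematic:
            f_sys *= f
        rss = math.sqrt(sum((f - 1.0) ** 2 for f in random_subfactors))
        return f_sys * (1.0 + rss)

    # e.g. one systematic subfactor of 1.02, two random subfactors
    f_hot = hot_channel_factor([1.02], [1.05, 1.03])
    ```

    The statistical combination yields a smaller (less conservative) overall factor than multiplying all subfactors together, which is the motivation for moving beyond the fully deterministic product.
    
    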

  4. Statistical Analysis Of Failure Strength Of Material Using Weibull Distribution

    International Nuclear Information System (INIS)

    Entin Hartini; Mike Susmikanti; Antonius Sitompul

    2008-01-01

    In evaluating the strength of ceramic and glass materials, a statistical approach is necessary. The strength of ceramics and glass depends on the size and size distribution of the flaws in the material. The distribution of strength for ductile materials is narrow and close to a Gaussian distribution, while the strength of brittle materials such as ceramics and glass follows a Weibull distribution. The Weibull distribution is an indicator of the failure of material strength resulting from a distribution of flaw sizes. In this paper, the cumulative failure probability of material strength, the cumulative probability of failure versus fracture stress, and the cumulative reliability of the material were calculated. The statistical calculations supporting the strength analysis of the silicon nitride material were performed using MATLAB. (author)
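    The quantities the abstract lists follow from the standard two-parameter Weibull form, sketched here in Python (the original work used MATLAB; the parameter values below are illustrative, not the paper's silicon nitride data):

    ```python
    import math

    def weibull_failure_probability(stress, sigma0, m):
        """Cumulative probability of failure at a given stress for a
        two-parameter Weibull distribution: P_f = 1 - exp(-(s/sigma0)^m).
        sigma0: characteristic strength; m: Weibull modulus."""
        return 1.0 - math.exp(-((stress / sigma0) ** m))

    def weibull_reliability(stress, sigma0, m):
        """Cumulative reliability (survival probability) at a given stress."""
        return 1.0 - weibull_failure_probability(stress, sigma0, m)
    ```

    At the characteristic strength (stress = sigma0) the failure probability is 1 - 1/e, about 63.2%, regardless of the modulus m; a larger m means a narrower strength distribution.
    
    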

  5. Effect of Embolization Material in the Calculation of Dose Deposition in Arteriovenous Malformations

    International Nuclear Information System (INIS)

    De la Cruz, O. O. Galvan; Moreno-Jimenez, S.; Larraga-Gutierrez, J. M.; Celis-Lopez, M. A.

    2010-01-01

    This work studies the impact of incorporating high-Z materials (embolization material) in the dose calculation for stereotactic radiosurgery treatment of arteriovenous malformations. A statistical analysis is performed to establish the variables that may affect the dose calculation. Pencil beam (PB) and Monte Carlo (MC) calculation algorithms were used for the comparison. The comparison between the two dose calculations shows that PB overestimates the deposited dose. For the number of patients in the study (20), the statistical analysis shows that the variable that may affect the dose calculation is the volume of high-Z material in the arteriovenous malformation. Further studies are needed to establish the clinical impact on the radiosurgery outcome.

  6. The application of statistical techniques to nuclear materials accountancy

    International Nuclear Information System (INIS)

    Annibal, P.S.; Roberts, P.D.

    1990-02-01

    Over the past decade much theoretical research has been carried out on the development of statistical methods for nuclear materials accountancy. In practice, plant operation may differ substantially from the idealized models often cited. This paper demonstrates the importance of taking account of plant operation in applying the statistical techniques, to improve the accuracy of the estimates and the knowledge of the errors. The benefits are quantified either by theoretical calculation or by simulation. Two different aspects are considered. Firstly, the use of redundant measurements to reduce the error on the estimate of the mass of heavy metal in an accountancy tank is investigated. Secondly, a means of improving the knowledge of the 'Material Unaccounted For' (the difference between the inventory calculated from input/output data and the measured inventory), using information about the plant measurement system, is developed and compared with existing general techniques. (author)

  7. Calculating statistical distributions from operator relations: The statistical distributions of various intermediate statistics

    International Nuclear Information System (INIS)

    Dai, Wu-Sheng; Xie, Mi

    2013-01-01

    In this paper, we give a general discussion on the calculation of the statistical distribution from a given operator relation of creation, annihilation, and number operators. Our result shows that as long as the relation between the number operator and the creation and annihilation operators can be expressed as a†b = Λ(N) or N = Λ⁻¹(a†b), where N, a†, and b denote the number, creation, and annihilation operators, i.e., N is a function of a quadratic product of the creation and annihilation operators, the corresponding statistical distribution is the Gentile distribution, a statistical distribution in which the maximum occupation number is an arbitrary integer. As examples, we discuss the statistical distributions corresponding to various operator relations. In particular, besides the Bose-Einstein and Fermi-Dirac cases, we discuss the statistical distributions for various schemes of intermediate statistics, especially various q-deformation schemes. Our result shows that the statistical distributions corresponding to various q-deformation schemes are various Gentile distributions with different maximum occupation numbers, which are determined by the deformation parameter q. This result shows that the results given in much of the literature on the q-deformation distribution are inaccurate or incomplete. -- Highlights: ► A general discussion on calculating statistical distributions from relations of creation, annihilation, and number operators. ► A systematic study of the statistical distributions corresponding to various q-deformation schemes. ► Arguments that many results on q-deformation distributions in the literature are inaccurate or incomplete.

  8. IMPORTANCE OF MATERIAL BALANCES AND THEIR STATISTICAL EVALUATION IN RUSSIAN MATERIAL PROTECTION, CONTROL AND ACCOUNTING

    International Nuclear Information System (INIS)

    Fishbone, L.G.

    1999-01-01

    While substantial work has been performed in the Russian MPC and A Program, much more needs to be done at Russian nuclear facilities to complete four necessary steps. These are (1) periodically measuring the physical inventory of nuclear material, (2) continuously measuring the flows of nuclear material, (3) using the results to close the material balance, particularly at bulk processing facilities, and (4) statistically evaluating any apparent loss of nuclear material. The periodic closing of material balances provides an objective test of the facility's system of nuclear material protection, control and accounting. The statistical evaluation using the uncertainties associated with individual measurement systems involved in the calculation of the material balance provides a fair standard for concluding whether the apparent loss of nuclear material means a diversion or whether the facility's accounting system needs improvement. In particular, if unattractive flow material at a facility is not measured well, the accounting system cannot readily detect the loss of attractive material if the latter substantially derives from the former

  9. Statistical methods for nuclear material management

    International Nuclear Information System (INIS)

    Bowen, W.M.; Bennett, C.A.

    1988-12-01

    This book is intended as a reference manual of statistical methodology for nuclear material management practitioners. It describes statistical methods currently or potentially important in nuclear material management, explains the choice of methods for specific applications, and provides examples of practical applications to nuclear material management problems. Together with the accompanying training manual, which contains fully worked out problems keyed to each chapter, this book can also be used as a textbook for courses in statistical methods for nuclear material management. It should provide increased understanding and guidance to help improve the application of statistical methods to nuclear material management problems

  10. Statistical methods for nuclear material management

    Energy Technology Data Exchange (ETDEWEB)

    Bowen W.M.; Bennett, C.A. (eds.)

    1988-12-01

    This book is intended as a reference manual of statistical methodology for nuclear material management practitioners. It describes statistical methods currently or potentially important in nuclear material management, explains the choice of methods for specific applications, and provides examples of practical applications to nuclear material management problems. Together with the accompanying training manual, which contains fully worked out problems keyed to each chapter, this book can also be used as a textbook for courses in statistical methods for nuclear material management. It should provide increased understanding and guidance to help improve the application of statistical methods to nuclear material management problems.

  11. Statistical methods and materials characterisation

    International Nuclear Information System (INIS)

    Wallin, K.R.W.

    2010-01-01

    Statistics is a wide mathematical area covering a myriad of analysis and estimation options, some of which suit special cases better than others. A comprehensive coverage of the whole area of statistics would be an enormous effort and is outside the capabilities of this author. This text is therefore not intended as a textbook on statistical methods for general data analysis and decision making. Instead, it highlights a particular statistical case applicable to mechanical materials characterization. The methods presented here do not in any way rule out other statistical methods for analyzing mechanical material property data. (orig.)

  12. Statistics of Monte Carlo methods used in radiation transport calculation

    International Nuclear Information System (INIS)

    Datta, D.

    2009-01-01

    Radiation transport calculations can be carried out using either deterministic or statistical methods. Radiation transport calculation based on statistical methods is the basic theme of the Monte Carlo method. The aim of this lecture is to describe the fundamental statistics required to build the foundations of the Monte Carlo technique for radiation transport calculation. The lecture note is organized as follows. Section 1 introduces basic Monte Carlo and its classification in the respective fields. Section 2 describes random sampling methods, a key component of Monte Carlo radiation transport calculation. Section 3 covers the statistical uncertainty of Monte Carlo estimates, and Section 4 briefly describes the importance of variance reduction techniques when sampling particles such as photons or neutrons in the radiation transport process.
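    The statistical uncertainty of a Monte Carlo estimate (the topic of Section 3) can be illustrated with a toy tally. The integrand below is an arbitrary stand-in for a transport tally, not anything from the lecture itself:

    ```python
    import math
    import random

    random.seed(1)

    # Toy tally: estimate the mean of f(x) = x^2 for x uniform on [0, 1]
    # (true value 1/3). In a transport code each "sample" would be the
    # score of one particle history.
    n = 100_000
    samples = [random.random() ** 2 for _ in range(n)]

    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / (n - 1)   # sample variance
    std_error = math.sqrt(var / n)   # statistical uncertainty of the estimate
    ```

    The standard error shrinks as 1/sqrt(n), which is why variance reduction techniques (Section 4) matter: they reduce the per-history variance rather than brute-forcing more histories.
    
    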

  13. Fracture criterion for brittle materials based on statistical cells of finite volume

    International Nuclear Information System (INIS)

    Cords, H.; Kleist, G.; Zimmermann, R.

    1986-06-01

    An analytical consideration of the Weibull statistical analysis of brittle materials established the necessity of including one additional material constant for a more comprehensive description of the failure behaviour. The Weibull analysis is restricted to infinitesimal volume elements as a consequence of the differential calculus applied. It was found that infinitesimally small elements are in conflict with the basic statistical assumption, and that the differential calculus is in fact not needed, since nowadays most stress analyses are based on finite element calculations, which are most suitable for a subsequent statistical analysis of strength. The size of a finite statistical cell has been introduced as the third material parameter. It should represent the minimum volume containing all statistical features of the material, such as the distribution of pores, flaws and grains. The new approach also contains a unique treatment of failure under multiaxial stresses. The quantity responsible for failure under multiaxial stresses is introduced as a modified strain energy. Sixteen different tensile specimens, including CT specimens, have been investigated experimentally and analyzed with the probabilistic fracture criterion. As a result, it can be stated that the failure rates of all types of specimens made from three different grades of graphite are predictable, with an accuracy of one standard deviation. (orig.)

  14. Application of nonparametric statistic method for DNBR limit calculation

    International Nuclear Information System (INIS)

    Dong Bo; Kuang Bo; Zhu Xuenong

    2013-01-01

    Background: Nonparametric statistical methods are statistical inference methods that do not depend on an assumed distribution; they calculate tolerance limits at a given probability level and confidence through sampling. The DNBR margin is an important parameter in NPP design, representing the safety level of the NPP. Purpose and Methods: This paper uses a nonparametric statistical method based on the Wilks formula and the VIPER-01 subchannel analysis code to calculate the DNBR design limit (DL) of a 300 MW NPP (Nuclear Power Plant) during the complete loss of flow accident, and compares it with the DNBR DL obtained by means of ITDP to quantify the DNBR margin. Results: The results indicate that this method gains 2.96% more DNBR margin than the ITDP methodology. Conclusions: Because of the reduced conservatism of the analysis process, the nonparametric statistical method provides greater DNBR margin, and the increased DNBR margin benefits the upgrading of the core refueling scheme. (authors)
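    The first-order Wilks formula behind such tolerance-limit methods is short enough to sketch (an illustration of the formula only, not the paper's VIPER-01 workflow): the sample maximum of n code runs is a one-sided (coverage, confidence) tolerance limit once coverage**n drops below 1 - confidence.

    ```python
    def wilks_first_order_n(coverage=0.95, confidence=0.95):
        """Smallest number of runs n such that coverage**n <= 1 - confidence,
        i.e. the largest observed value bounds the `coverage` quantile
        with the stated confidence (first-order Wilks formula)."""
        n = 1
        while coverage ** n > 1.0 - confidence:
            n += 1
        return n
    ```

    For the classic 95%/95% criterion this gives the well-known 59 runs; tightening the confidence to 99% raises the requirement to 90 runs.
    
    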

  15. Radiation damage calculations for compound materials

    International Nuclear Information System (INIS)

    Greenwood, L.R.

    1989-01-01

    Displacement damage calculations can be performed for 40 elements in the energy range up to 20 MeV with the SPECTER computer code. A recent addition to the code, called SPECOMP, can intermix atomic recoil energy distributions for any four elements to calculate the proper displacement damage for compound materials. The calculations take advantage of the atomic recoil data in the SPECTER libraries, which were determined by the DISCS computer code, using evaluated neutron cross section and angular distribution data in ENDF/B-V. Resultant damage cross sections for any compound can be added to the SPECTER libraries for the routine calculation of displacements in any given neutron field. Users do not require access to neutron cross section files. Results are presented for a variety of fusion materials and a new ceramic superconductor material. Future plans and nuclear data needs are discussed. 11 refs., 6 figs., 1 tab

  16. Am/Cm Vitrification Process: Vitrification Material Balance Calculations

    International Nuclear Information System (INIS)

    Smith, F.G.

    2000-01-01

    This report documents material balance calculations for the Americium/Curium vitrification process and describes the basis used to make the calculations. The material balance calculations reported here start with the solution produced by the Am/Cm pretreatment process as described in "Material Balance Calculations for Am/Cm Pretreatment Process (U)", SRT-AMC-99-0178 [1]. Following pretreatment, small batches of the product will be further treated with an additional oxalic acid precipitation and washing. The precipitate from each batch will then be charged to the Am/Cm melter with glass cullet and vitrified to produce the final product. The material balance calculations in this report are designed to provide projected compositions of the melter glass and off-gas streams. Except for decanted supernate collected from precipitation and precipitate washing, the flowsheet neglects side streams such as acid washes of empty tanks that would go directly to waste. Complete listings of the results of the material balance calculations are provided in the Appendices to this report

  17. Statistical approach for calculating opacities of high-Z plasmas

    International Nuclear Information System (INIS)

    Nishikawa, Takeshi; Nakamura, Shinji; Takabe, Hideaki; Mima, Kunioki

    1992-01-01

    For simulating the X-ray radiation from laser-produced high-Z plasma, an appropriate atomic model is necessary. Based on the average ion model, we have used a rather simple atomic model for the opacity calculation in a hydrodynamic code and obtained fairly good agreement with experiment on the X-ray spectra from laser-produced plasmas. We have investigated the accuracy of the atomic model used in the hydrodynamic code. It is found that the transition energies of 4p-4d, 4d-4f, 4p-5d, 4d-5f and 4f-5g, which are important in laser-produced high-Z plasma, can be given within an error of 15% compared to the values from the Hartree-Fock-Slater (HFS) calculation, while their oscillator strengths obtained by the HFS calculation vary by a factor of two according to the charge state. We also propose a statistical method to carry out detailed configuration accounting for the electronic state, using the population of bound electrons calculated with the average ion model. The statistical method is relatively simple and provides much improvement in calculating the spectral opacities of line radiation when the average ion model is used to determine the electronic state. (author)

  18. Statistic method of research reactors maximum permissible power calculation

    International Nuclear Information System (INIS)

    Grosheva, N.A.; Kirsanov, G.A.; Konoplev, K.A.; Chmshkyan, D.V.

    1998-01-01

    The technique for calculating the maximum permissible power of a research reactor, at which the probability of a thermal-process accident does not exceed a specified value, is presented. A statistical method is used for the calculations. The determining function related to reactor safety is regarded as a known function of the reactor power and of many statistically independent values, whose list includes the reactor process parameters, geometrical characteristics of the reactor core and fuel elements, as well as random factors connected with the reactor's specific features. Heat flux density or temperature is taken as the limiting factor. The program implementation of the method is briefly described. The results of calculating the PIK reactor margin coefficients for different probabilities of a thermal-process accident are considered as an example. It is shown that the probability of an accident with fuel element melting in the hot zone is lower than 10⁻⁸ per year at the reactor rated power

  19. Statistical multistep direct and statistical multistep compound models for calculations of nuclear data for applications

    International Nuclear Information System (INIS)

    Seeliger, D.

    1993-01-01

    This contribution contains a brief presentation and comparison of the different Statistical Multistep Approaches, presently available for practical nuclear data calculations. (author). 46 refs, 5 figs

  20. Am/Cm Vitrification Process: Pretreatment Material Balance Calculations

    International Nuclear Information System (INIS)

    Smith, F.G.

    2001-01-01

    This report documents material balance calculations for the pretreatment steps required to prepare the Americium/Curium solution currently stored in Tank 17.1 in the F-Canyon for vitrification. The material balance uses the latest analysis of the tank contents to provide a best estimate calculation of the expected plant operations during the pretreatment process. The material balance calculations primarily follow the material that directly leads to melter feed. Except for vapor products of the denitration reactions and treatment of supernate from precipitation and precipitate washing, the flowsheet does not include side streams such as acid washes of the empty tanks that would go directly to waste. The calculation also neglects tank heels. This report consolidates previously reported results, corrects some errors found in the spreadsheet and provides a more detailed discussion of the calculation basis

  1. Statistical Inference for Porous Materials using Persistent Homology.

    Energy Technology Data Exchange (ETDEWEB)

    Moon, Chul [Univ. of Georgia, Athens, GA (United States); Heath, Jason E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Mitchell, Scott A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-12-01

    We propose a porous materials analysis pipeline using persistent homology. We first compute the persistent homology of binarized 3D images of sampled material subvolumes. For each image we compute sets of homology intervals, which are represented as summary graphics called persistence diagrams. We convert persistence diagrams into image vectors in order to analyze the similarity of the homology of the material images using mature tools for image analysis. Each image is treated as a vector, and we compute its principal components to extract features. We fit a statistical model using the loadings of the principal components to estimate material porosity, permeability, anisotropy, and tortuosity. We also propose an adaptive version of the structural similarity index (SSIM), a similarity metric for images, as a measure to determine the statistical representative elementary volumes (sREV) for persistent homology. We thus provide a capability for making statistical inferences about the fluid flow and transport properties of porous materials based on their geometry and connectivity.

  2. GWAPower: a statistical power calculation software for genome-wide association studies with quantitative traits.

    Science.gov (United States)

    Feng, Sheng; Wang, Shengchu; Chen, Chia-Cheng; Lan, Lan

    2011-01-21

    In designing genome-wide association (GWA) studies it is important to calculate statistical power. General statistical power calculation procedures for quantitative measures often require information concerning summary statistics of distributions such as the mean and variance. With genetic studies, however, the effect size of quantitative traits is traditionally expressed as heritability, a quantity defined as the proportion of phenotypic variation in the population that can be ascribed to the genetic variants among individuals. Heritability is hard to transform into summary statistics, so general power calculation procedures cannot be used directly in GWA studies. The development of appropriate statistical methods and a user-friendly software package to address this problem would be welcome. This paper presents GWAPower, a statistical software package for power calculation designed for GWA studies with quantitative traits, where the genetic effect is defined as heritability. Based on several popular one-degree-of-freedom genetic models, the method avoids the need to specify the non-centrality parameter of the F-distribution under the alternative hypothesis, and can therefore use heritability information directly without approximation. In GWAPower, the power calculation can easily be adjusted for covariates and linkage disequilibrium information. An example is provided to illustrate GWAPower, followed by discussion. GWAPower is a user-friendly free software package for calculating statistical power based on heritability in GWA studies with quantitative traits. The software is freely available at: http://dl.dropbox.com/u/10502931/GWAPower.zip.
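    The heritability-to-power mapping can be sketched with a simple normal approximation for a one-degree-of-freedom test. This is an illustrative stand-in, not GWAPower's actual algorithm (GWAPower's point is precisely to avoid the non-centrality approximation used here), and `gwa_power` and its defaults are invented for the example:

    ```python
    import math

    def normal_cdf(x):
        """Standard normal CDF via the error function."""
        return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

    def gwa_power(n, h2, alpha=5e-8):
        """Approximate power of a 1-df association test for a variant
        explaining a fraction h2 of trait variance in n individuals,
        using a chi-square(1 df) normal approximation with
        non-centrality parameter ncp = n * h2 / (1 - h2)."""
        ncp = n * h2 / (1.0 - h2)
        # Critical value: z with 2 * (1 - Phi(z)) = alpha, found by bisection.
        lo, hi = 0.0, 10.0
        for _ in range(100):
            mid = (lo + hi) / 2.0
            if 2.0 * (1.0 - normal_cdf(mid)) > alpha:
                lo = mid
            else:
                hi = mid
        z_crit = (lo + hi) / 2.0
        s = math.sqrt(ncp)
        return (1.0 - normal_cdf(z_crit - s)) + (1.0 - normal_cdf(z_crit + s))
    ```

    With the genome-wide threshold alpha = 5e-8, a variant explaining 1% of trait variance needs a few thousand samples for high power, and power rises monotonically with n.
    
    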

  3. First-principles calculations of novel materials

    Science.gov (United States)

    Sun, Jifeng

    Computational materials simulation is becoming more and more important as a branch of materials science. Depending on the scale of the system, there are many simulation methods: first-principles (or ab initio) calculation, molecular dynamics, mesoscale methods, and continuum methods. Among them, first-principles calculation, which involves density functional theory (DFT) and is based on quantum mechanics, has become a reliable tool in condensed matter physics. DFT is a single-electron approximation for solving many-body problems. Strictly speaking, both DFT and ab initio methods belong to first-principles calculation, since the theoretical background of ab initio is the Hartree-Fock (HF) approximation, and both aim to solve the Schrodinger equation of the many-body system using the self-consistent field (SCF) method and to calculate ground-state properties. The difference is that DFT introduces parameters, either from experiments or from other molecular dynamics (MD) calculations, to approximate the expressions of the exchange-correlation terms, whereas in HF the exchange term is calculated exactly but the correlation term is neglected. In this dissertation, DFT-based first-principles calculations were performed for all the novel and interesting materials introduced. Specifically, DFT theory, together with the rationale behind related properties (e.g. electronic, optical, defect, thermoelectric, magnetic), is introduced in Chapter 2. From Chapter 3 to Chapter 5, several representative materials are studied. In particular, a new semiconducting oxytelluride, Ba2TeO, is studied in Chapter 3. Our calculations indicate a direct semiconducting character with a band gap value of 2.43 eV, which agrees well with the optical experiment (~2.93 eV). Moreover, the optical and defect properties of Ba2TeO are also systematically investigated with a view to understanding its potential as an optoelectronic or transparent conducting material.
We find

  4. Development of a simplified statistical methodology for nuclear fuel rod internal pressure calculation

    International Nuclear Information System (INIS)

    Kim, Kyu Tae; Kim, Oh Hwan

    1999-01-01

    A simplified statistical methodology is developed in order both to reduce the over-conservatism of the deterministic methodologies employed for PWR fuel rod internal pressure (RIP) calculation and to simplify the complicated calculation procedure of the widely used statistical methodology, which employs the response surface method and Monte Carlo simulation. The simplified statistical methodology employs the system moment method with a deterministic approach in determining the maximum variance of RIP. The maximum RIP variance is determined from the square sum of each input variable's maximum deviation from its mean times its RIP sensitivity factor, over all input variables considered. This approach makes the simplified statistical methodology much more efficient in routine reload core design analysis, since it eliminates the numerous calculations required for the power-history-dependent RIP variance determination. The simplified statistical methodology is shown to be more conservative in generating the RIP distribution than the widely used statistical methodology. Comparison of the significance of each input variable to RIP indicates that the fission gas release model is the most significant input variable. (author). 11 refs., 6 figs., 2 tabs
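    The system-moment combination described above amounts to a root-sum-square over sensitivity-weighted deviations. The sketch below is an interpretation of that scheme on invented numbers, not the paper's qualified methodology:

    ```python
    import math

    def rip_upper_bound(nominal_rip, max_deviations, sensitivities):
        """Upper-bound RIP estimate: nominal value plus the square root of
        the sum of squares of (each input variable's maximum deviation
        times its RIP sensitivity factor)."""
        rss = math.sqrt(sum((d * s) ** 2
                            for d, s in zip(max_deviations, sensitivities)))
        return nominal_rip + rss

    # e.g. two input variables with maximum deviations 0.5 and 0.2 (in their
    # own units) and RIP sensitivities 2.0 and 3.0 (MPa per unit)
    bound = rip_upper_bound(10.0, [0.5, 0.2], [2.0, 3.0])
    ```

    Because the sensitivities are evaluated once at their maxima, this avoids re-running the variance calculation for every power history, which is the efficiency gain the abstract claims.
    
    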

  5. Virial-statistic method for calculation of atom and molecule energies

    International Nuclear Information System (INIS)

    Borisov, Yu.A.

    1977-01-01

    A virial-statistical method has been applied to the calculation of the atomization energies of the following molecules: Mo(CO)6, Cr(CO)6, Fe(CO)5, MnH(CO)5, CoH(CO)4, and Ni(CO)4. The principles of this method are briefly presented. Calculation results are given for the individual contributions to the atomization energies, together with the calculated and experimental atomization energies (D). For the Mo(CO)6 complex, D(calc) = 1759 and D(exp) = 1763 kcal/mole. Calculated and experimental combination heat values for the carbonyl complexes are presented; these values are shown to be adequately consistent.

  6. 100 statistical tests

    CERN Document Server

    Kanji, Gopal K

    2006-01-01

    This expanded and updated Third Edition of Gopal K. Kanji's best-selling resource on statistical tests covers all the most commonly used tests with information on how to calculate and interpret results with simple datasets. Each entry begins with a short summary statement about the test's purpose, and contains details of the test objective, the limitations (or assumptions) involved, a brief outline of the method, a worked example, and the numerical calculation. 100 Statistical Tests, Third Edition is the one indispensable guide for users of statistical materials and consumers of statistical information at all levels and across all disciplines.

  7. Inclusion of temperature dependence of fission barriers in statistical model calculations

    International Nuclear Information System (INIS)

    Newton, J.O.; Popescu, D.G.; Leigh, J.R.

    1990-08-01

    The temperature dependence of fission barriers has been interpolated from the results of recent theoretical calculations and included in the statistical model code PACE2. It is shown that the inclusion of temperature dependence causes significant changes to the values of the statistical model parameters deduced from fits to experimental data. 21 refs., 2 figs

  8. STATISTICAL DISTRIBUTION PATTERNS IN MECHANICAL AND FATIGUE PROPERTIES OF METALLIC MATERIALS

    OpenAIRE

    Tatsuo, SAKAI; Masaki, NAKAJIMA; Keiro, TOKAJI; Norihiko, HASEGAWA; Department of Mechanical Engineering, Ritsumeikan University; Department of Mechanical Engineering, Toyota College of Technology; Department of Mechanical Engineering, Gifu University; Department of Mechanical Engineering, Gifu University

    1997-01-01

    Many papers on the statistical aspects of materials strength have been collected and reviewed by The Research Group for Statistical Aspects of Materials Strength. A book, "Statistical Aspects of Materials Strength", was written by this group and published in 1992. Based on the experimental data compiled in this book, distribution patterns of mechanical properties are systematically surveyed, paying particular attention to metallic materials. Thus one can obtain the fundamental knowledge for a reliabilit...

  9. Radiation damage calculations for compound materials

    International Nuclear Information System (INIS)

    Greenwood, L.R.

    1990-01-01

    This paper reports on the SPECOMP computer code, developed to calculate neutron-induced displacement damage cross sections for compound materials such as alloys, insulators, and ceramic tritium breeders for fusion reactors. These new calculations rely on recoil atom energy distributions previously computed with the DISCS computer code, the results of which are stored in SPECTER computer code master libraries. All reaction channels were considered in the DISCS calculations and the neutron cross sections were taken from ENDF/B-V. Compound damage calculations with SPECOMP thus do not need to perform any recoil atom calculations and consequently need no access to ENDF or other neutron data bases. The calculations proceed by determining secondary displacements for each combination of recoil atom and matrix atom using the Lindhard partition of the recoil energy to establish the damage energy
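    The secondary-displacement step described above can be illustrated with the standard NRT displacement model combined with Robinson's fit to the Lindhard energy partition. This is a generic textbook sketch, not SPECOMP's actual algorithm, and the iron parameters (Z = 26, A = 56, E_d = 40 eV) are illustrative:

```python
import math

def lindhard_partition(T_ev, Z, A):
    """Robinson's fit to the Lindhard partition: fraction of a recoil's
    kinetic energy T (eV) that produces displacements rather than
    electronic excitation (same-species recoil and lattice atoms)."""
    eps = T_ev / (86.93 * Z ** (7.0 / 3.0))           # reduced energy
    k = 0.1337 * Z ** (1.0 / 6.0) * math.sqrt(Z / A)  # electronic-stopping factor
    g = eps + 0.40244 * eps ** 0.75 + 3.4008 * eps ** (1.0 / 6.0)
    return 1.0 / (1.0 + k * g)

def nrt_displacements(T_ev, Z, A, E_d):
    """NRT estimate of displacements from a recoil of energy T (eV);
    E_d is the displacement threshold energy (eV)."""
    T_dam = T_ev * lindhard_partition(T_ev, Z, A)
    if T_dam < E_d:
        return 0.0
    if T_dam < 2.0 * E_d / 0.8:
        return 1.0
    return 0.8 * T_dam / (2.0 * E_d)

# Illustrative: a 50 keV self-recoil in iron
nd = nrt_displacements(5.0e4, 26, 56.0, 40.0)
```

    Summing such contributions over every recoil-atom/matrix-atom combination, weighted by the recoil energy distributions, is what yields the compound displacement cross section.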

  10. Statistical theory for calculating energy spectra of β-delayed neutrons

    International Nuclear Information System (INIS)

    Kawano, Toshihiko; Moeller, Peter; Wilson, William B.

    2008-01-01

    Theoretical β-delayed neutron spectra are calculated based on the Quasi-particle Random Phase Approximation (QRPA) and the Hauser-Feshbach statistical model. Neutron emissions from an excited daughter nucleus after β-decay to the granddaughter residual are more accurately calculated than previous evaluations, including all the microscopic nuclear structure information, such as a Gamow-Teller strength distribution and discrete states in the granddaughter. The calculated delayed-neutron spectra reasonably agree with those evaluations in the ENDF decay library, which are based on experimental data. The model was adopted to generate the delayed-neutron spectra for all 271 precursors. (authors)

  11. Propagation of statistical and nuclear data uncertainties in Monte Carlo burn-up calculations

    International Nuclear Information System (INIS)

    Garcia-Herranz, Nuria; Cabellos, Oscar; Sanz, Javier; Juan, Jesus; Kuijper, Jim C.

    2008-01-01

    Two methodologies to propagate the uncertainties in the nuclide inventory in combined Monte Carlo-spectrum and burn-up calculations are presented, based on sensitivity/uncertainty and random sampling techniques (uncertainty Monte Carlo method). Both enable the assessment of the impact of uncertainties in the nuclear data as well as uncertainties due to the statistical nature of the Monte Carlo neutron transport calculation. The methodologies are implemented in our MCNP-ACAB system, which combines the neutron transport code MCNP-4C and the inventory code ACAB. A high burn-up benchmark problem is used to test the MCNP-ACAB performance in inventory predictions, with no uncertainties. A good agreement is found with the results of other participants. This benchmark problem is also used to assess the impact of nuclear data uncertainties and statistical flux errors in high burn-up applications. A detailed calculation is performed to evaluate the effect of cross-section uncertainties in the inventory prediction, taking into account the temporal evolution of the neutron flux level and spectrum. Very large uncertainties are found at the unusually high burn-up of this exercise (800 MWd/kgHM). To compare the impact of the statistical errors in the calculated flux with respect to the cross-section uncertainties, a simplified problem is considered, taking a constant neutron flux level and spectrum. It is shown that, provided that the flux statistical deviations in the Monte Carlo transport calculation do not exceed a given value, the effect of the flux errors in the calculated isotopic inventory is negligible (even at very high burn-up) compared to the effect of the large cross-section uncertainties available at present in the data files.
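    The random-sampling (uncertainty Monte Carlo) idea can be shown on a deliberately tiny depletion problem: a single nuclide burned at constant flux, with a normally distributed relative uncertainty on its one-group cross section. All numbers are invented, and the model is of course far simpler than MCNP-ACAB:

```python
import math
import random
import statistics

random.seed(1)

def depleted_inventory(n0, sigma_b, flux, t_s):
    """Exponential depletion N(t) = N0 * exp(-sigma * phi * t);
    cross section in barns (1 b = 1e-24 cm^2), flux in n/cm^2/s."""
    return n0 * math.exp(-sigma_b * 1e-24 * flux * t_s)

# Nominal problem (illustrative values only)
N0, SIGMA, FLUX, T = 1.0e24, 50.0, 1.0e14, 3.0e7
REL_UNC = 0.10  # assumed 10 % (1-sigma) relative cross-section uncertainty

# Sample the cross section, re-run the "depletion", collect the spread
samples = [depleted_inventory(N0, random.gauss(SIGMA, REL_UNC * SIGMA), FLUX, T)
           for _ in range(2000)]
mean_n = statistics.fmean(samples)
rel_spread = statistics.stdev(samples) / mean_n
```

    The standard deviation of the sampled inventories is the propagated uncertainty; in a real burn-up chain, correlated cross sections and the flux renormalisation make the bookkeeping far heavier, but the principle is the same.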

  12. Propagation of statistical and nuclear data uncertainties in Monte Carlo burn-up calculations

    Energy Technology Data Exchange (ETDEWEB)

    Garcia-Herranz, Nuria [Departamento de Ingenieria Nuclear, Universidad Politecnica de Madrid, UPM (Spain)], E-mail: nuria@din.upm.es; Cabellos, Oscar [Departamento de Ingenieria Nuclear, Universidad Politecnica de Madrid, UPM (Spain); Sanz, Javier [Departamento de Ingenieria Energetica, Universidad Nacional de Educacion a Distancia, UNED (Spain); Juan, Jesus [Laboratorio de Estadistica, Universidad Politecnica de Madrid, UPM (Spain); Kuijper, Jim C. [NRG - Fuels, Actinides and Isotopes Group, Petten (Netherlands)

    2008-04-15

    Two methodologies to propagate the uncertainties in the nuclide inventory in combined Monte Carlo-spectrum and burn-up calculations are presented, based on sensitivity/uncertainty and random sampling techniques (uncertainty Monte Carlo method). Both enable the assessment of the impact of uncertainties in the nuclear data as well as uncertainties due to the statistical nature of the Monte Carlo neutron transport calculation. The methodologies are implemented in our MCNP-ACAB system, which combines the neutron transport code MCNP-4C and the inventory code ACAB. A high burn-up benchmark problem is used to test the MCNP-ACAB performance in inventory predictions, with no uncertainties. A good agreement is found with the results of other participants. This benchmark problem is also used to assess the impact of nuclear data uncertainties and statistical flux errors in high burn-up applications. A detailed calculation is performed to evaluate the effect of cross-section uncertainties in the inventory prediction, taking into account the temporal evolution of the neutron flux level and spectrum. Very large uncertainties are found at the unusually high burn-up of this exercise (800 MWd/kgHM). To compare the impact of the statistical errors in the calculated flux with respect to the cross-section uncertainties, a simplified problem is considered, taking a constant neutron flux level and spectrum. It is shown that, provided that the flux statistical deviations in the Monte Carlo transport calculation do not exceed a given value, the effect of the flux errors in the calculated isotopic inventory is negligible (even at very high burn-up) compared to the effect of the large cross-section uncertainties available at present in the data files.

  13. Ab initio calculations of cross luminescence materials

    International Nuclear Information System (INIS)

    Kanchana, V.

    2016-01-01

    Ab initio calculations have been performed to study the structural, electronic, and optical properties of ABX3 (A=alkali, B=alkaline-earth, and X=halide) compounds. The ground state properties are calculated using the pseudopotential method with the inclusion of the van der Waals interaction, which we find essential for reproducing the experimental structural properties of the alkali iodides because of their layered structure. All calculations were performed using the Full-Potential Linearized Augmented Plane Wave method. The band structures are plotted with various functionals, and we find the newly developed Tran and Blaha modified Becke-Johnson potential to improve the band gap significantly. The optical properties, such as the complex dielectric function, refractive index, and absorption spectra, are calculated and clearly reveal the optically isotropic nature of these materials despite their structural anisotropy, which is the key requirement for ceramic scintillators. Cross luminescence materials are very interesting because of their fast decay. One of the major criteria for cross luminescence to occur is that the energy difference between the valence band and the next deeper core valence band be smaller than the energy gap of the compound, so that a radiative electronic transition may occur between the valence band and the core valence band. We found this criterion to be satisfied, leading to cross luminescence, in all the studied compounds except KSrI3 and RbSrI3. The present study suggests that, among the six compounds studied, CsSrI3, CsMgCl3, CsCaCl3, and CsSrCl3 are cross luminescence materials, which is well explained by the band structure and optical property calculations. The chlorides are better scintillators than the iodides, and CsMgCl3 is found to be the most promising of the studied compounds. Apart from these materials, we have also discussed the electronic structure and optical properties of other scintillator compounds. (author)

  14. Development of a computer model using the EGS4 simulation code to calculate scattered X-rays through some materials

    International Nuclear Information System (INIS)

    Al-Ghorabie, F.H.H.

    2003-01-01

    In this paper a computer model based on the use of the well-known Monte Carlo simulation code EGS4 was developed to simulate the scattering of polyenergetic X-ray beams through some materials. These materials are: lucite, polyethylene, polypropylene and aluminium. In particular, the ratio of the scattered to total X-ray fluence (scatter fraction) has been calculated for X-ray beams in the energy region 30-120 keV. In addition scatter fractions have been determined experimentally using a polyenergetic superficial X-ray unit. Comparison of the measured and the calculated results has been performed. The Monte Carlo calculations have also been carried out for water, bakelite and bone to examine the dependence of scatter fraction on the density of the scatterer. Good agreement (estimated statistical error < 5%) was obtained between the measured and the calculated values of the scatter fractions for materials with Z < 20 that were studied in this paper. Copyright (2003) Australasian College of Physical Scientists and Engineers in Medicine

  15. Virtual materials design using databases of calculated materials properties

    International Nuclear Information System (INIS)

    Munter, T R; Landis, D D; Abild-Pedersen, F; Jones, G; Wang, S; Bligaard, T

    2009-01-01

    Materials design is most commonly carried out by experimental trial and error techniques. Current trends indicate that the increased complexity of newly developed materials, the exponential growth of the available computational power, and the constantly improving algorithms for solving the electronic structure problem, will continue to increase the relative importance of computational methods in the design of new materials. One possibility for utilizing electronic structure theory in the design of new materials is to create large databases of materials properties, and subsequently screen these for new potential candidates satisfying given design criteria. We utilize a database of more than 81 000 electronic structure calculations. This alloy database is combined with other published materials properties to form the foundation of a virtual materials design framework (VMDF). The VMDF offers a flexible collection of materials databases, filters, analysis tools and visualization methods, which are particularly useful in the design of new functional materials and surface structures. The applicability of the VMDF is illustrated by two examples. One is the determination of the Pareto-optimal set of binary alloy methanation catalysts with respect to catalytic activity and alloy stability; the other is the search for new alloy mercury absorbers.
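    The Pareto-optimal screening mentioned for the methanation-catalyst example reduces to a generic two-objective dominance filter over the property database. The sketch below is such a filter; the candidate tuples are invented, not entries from the actual alloy database:

```python
def pareto_front(candidates):
    """Return the Pareto-optimal subset of (name, activity, stability)
    tuples, where higher activity and higher stability are both better.
    A candidate survives unless some other candidate is at least as good
    in both objectives and strictly better in one."""
    front = []
    for name, act, stab in candidates:
        dominated = any(a >= act and s >= stab and (a > act or s > stab)
                        for _, a, s in candidates)
        if not dominated:
            front.append((name, act, stab))
    return front

# Invented candidate alloys: (label, catalytic activity, alloy stability)
alloys = [("A", 0.9, 0.2), ("B", 0.6, 0.6), ("C", 0.3, 0.9),
          ("D", 0.5, 0.5), ("E", 0.2, 0.3)]
best = pareto_front(alloys)
```

    "D" and "E" are dominated by "B" and are filtered out; the survivors are exactly the trade-off set a designer would inspect further.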

  16. The first principle calculation of two-dimensional Dirac materials

    Science.gov (United States)

    Lu, Jin

    2017-12-01

    As integrated devices become increasingly small, the semiconductor industry has, since the last century, faced the enormous challenge of keeping pace with Moore's law. Developments in computation, communication, and automatic control have raised expectations for new materials in semiconductor industrial technology and science. Beyond silicon devices, the search for alternative materials with outstanding electronic properties has always been an active research topic. With the discovery of graphene, research on two-dimensional Dirac materials has taken on new vitality. This essay reviews the development of mobility calculations for 2D materials and introduces some approximation methods used in first-principles calculations.

  17. Automated Material Accounting Statistics System at Rockwell Hanford Operations

    International Nuclear Information System (INIS)

    Eggers, R.F.; Giese, E.W.; Kodman, G.P.

    1986-01-01

    The Automated Material Accounting Statistics System (AMASS) was developed under the sponsorship of the U.S. Nuclear Regulatory Commission. The AMASS was developed when it was realized that classical methods of error propagation, based only on measured quantities, did not properly control false alarm rate and that errors other than measurement errors affect inventory differences. The classical assumptions that (1) the mean value of the inventory difference (ID) for a particular nuclear material processing facility is zero, and (2) the variance of the inventory difference is due only to errors in measured quantities are overly simplistic. The AMASS provides a valuable statistical tool for estimating the true mean value and variance of the ID data produced by a particular material balance area. In addition it provides statistical methods of testing both individual and cumulative sums of IDs, taking into account the estimated mean value and total observed variance of the ID
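    The cumulative-sum testing of IDs described above can be sketched as a standardized cumulative statistic: under the no-loss hypothesis, the cumulative ID should stay within a few standard deviations of its estimated mean. This is a generic sketch in the spirit of the abstract, not AMASS's actual algorithm, and every number below is invented:

```python
import math

def cusum_ids(ids, mean_id, var_id, threshold=3.0):
    """Standardized cumulative sums of inventory differences (IDs).
    mean_id and var_id are the estimated per-period ID mean and variance
    (independent periods assumed); returns the z-score of each cumulative
    sum and whether any exceeds the alarm threshold."""
    zs = []
    cum = 0.0
    for k, d in enumerate(ids, start=1):
        cum += d
        zs.append((cum - k * mean_id) / math.sqrt(k * var_id))
    return zs, any(abs(z) > threshold for z in zs)

# Invented example: small IDs followed by a protracted diversion
quiet = [0.1, -0.2, 0.05, -0.1]
diverted = quiet + [0.9, 1.1, 1.0]
_, alarm_quiet = cusum_ids(quiet, 0.0, 0.04)
_, alarm_diverted = cusum_ids(diverted, 0.0, 0.04)
```

    Using the estimated (possibly non-zero) ID mean and the total observed variance, rather than the classical zero-mean/measurement-only assumptions, is precisely the refinement the abstract attributes to AMASS.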

  18. Image Statistics and the Representation of Material Properties in the Visual Cortex.

    Science.gov (United States)

    Baumgartner, Elisabeth; Gegenfurtner, Karl R

    2016-01-01

    We explored perceived material properties (roughness, texturedness, and hardness) with a novel approach that compares perception, image statistics and brain activation, as measured with fMRI. We initially asked participants to rate 84 material images with respect to the above mentioned properties, and then scanned 15 of the participants with fMRI while they viewed the material images. The images were analyzed with a set of image statistics capturing their spatial frequency and texture properties. Linear classifiers were then applied to the image statistics as well as the voxel patterns of visually responsive voxels and early visual areas to discriminate between images with high and low perceptual ratings. Roughness and texturedness could be classified above chance level based on image statistics. Roughness and texturedness could also be classified based on the brain activation patterns in visual cortex, whereas hardness could not. Importantly, the agreement in classification based on image statistics and brain activation was also above chance level. Our results show that information about visual material properties is to a large degree contained in low-level image statistics, and that these image statistics are also partially reflected in brain activity patterns induced by the perception of material images.

  19. Reversible Statistics

    DEFF Research Database (Denmark)

    Tryggestad, Kjell

    2004-01-01

    The study aims to describe how the inclusion and exclusion of materials and calculative devices construct the boundaries and distinctions between statistical facts and artifacts in economics. My methodological approach is inspired by John Graunt's (1667) Political arithmetic and more recent work...... within constructivism and the field of Science and Technology Studies (STS). The result of this approach is here termed reversible statistics, reconstructing the findings of a statistical study within economics in three different ways. It is argued that all three accounts are quite normal, albeit...... in different ways. The presence and absence of diverse materials, both natural and political, is what distinguishes them from each other. Arguments are presented for a more symmetric relation between the scientific statistical text and the reader. I will argue that a more symmetric relation can be achieved...

  20. Review and statistical analysis of the ultrasonic velocity method for estimating the porosity fraction in polycrystalline materials

    International Nuclear Information System (INIS)

    Roth, D.J.; Swickard, S.M.; Stang, D.B.; Deguire, M.R.

    1990-03-01

    A review and statistical analysis of the ultrasonic velocity method for estimating the porosity fraction in polycrystalline materials is presented. Initially, a semi-empirical model is developed showing the origin of the linear relationship between ultrasonic velocity and porosity fraction. Then, from a compilation of data produced by many researchers, scatter plots of velocity versus percent porosity data are shown for Al2O3, MgO, porcelain-based ceramics, PZT, SiC, Si3N4, steel, tungsten, UO2,(U0.30Pu0.70)C, and YBa2Cu3O(7-x). Linear regression analysis produced predicted slope, intercept, correlation coefficient, level of significance, and confidence interval statistics for the data. Velocity values predicted from regression analysis for fully-dense materials are in good agreement with those calculated from elastic properties
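    The regression described above is ordinary least squares on (porosity, velocity) pairs; the intercept of the fit is the extrapolated fully-dense velocity that the authors compare with elastic-property values. A minimal sketch, with invented data chosen to lie exactly on a line:

```python
def linear_fit(xs, ys):
    """Ordinary least-squares fit y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

# Invented data: porosity fraction vs longitudinal velocity (km/s)
porosity = [0.02, 0.05, 0.10, 0.15, 0.20]
velocity = [10.4, 10.1, 9.6, 9.1, 8.6]
a, b = linear_fit(porosity, velocity)
v_fully_dense = a  # extrapolated velocity at zero porosity
```

    Real data scatter about the line, which is why the paper reports correlation coefficients, significance levels, and confidence intervals alongside the slope and intercept.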

  1. Variance and covariance calculations for nuclear materials accounting using ''MAVARIC''

    International Nuclear Information System (INIS)

    Nasseri, K.K.

    1987-07-01

    Determination of the detection sensitivity of a materials accounting system to the loss of special nuclear material (SNM) requires (1) obtaining a relation for the variance of the materials balance by propagation of the instrument errors for the measured quantities that appear in the materials balance equation and (2) substituting measured values and their error standard deviations into this relation and calculating the variance of the materials balance. MAVARIC (Materials Accounting VARIance Calculations) is a custom spreadsheet, designed using the second release of Lotus 1-2-3, that significantly reduces the effort required to make the necessary variance (and covariance) calculations needed to determine the detection sensitivity of a materials accounting system. Predefined macros within the spreadsheet allow the user to carry out long, tedious procedures with only a few keystrokes. MAVARIC requires that the user enter the following data into one of four data tables, depending on the type of the term in the materials balance equation: the SNM concentration, the bulk mass (or solution volume), the measurement error standard deviations, and the number of measurements made during an accounting period. The user can also specify if there are correlations between transfer terms. Based on these data entries, MAVARIC can calculate the variance of the materials balance and the square root of this variance, from which the detection sensitivity of the accounting system can be determined.
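    The variance-propagation step can be illustrated for the simplest case of statistically independent measurement errors, where the variance of the materials balance is just the sum of the term variances (each term enters the balance with coefficient +1 or -1, so the sign does not matter). This generic sketch ignores the correlated-transfer terms MAVARIC also handles, and the numbers are invented:

```python
import math

def materials_balance_variance(terms):
    """terms: list of (value_kg, std_dev_kg) for beginning inventory,
    receipts, ending inventory and shipments.  With independent errors,
    Var(MB) is the sum of the individual variances regardless of the
    sign each term carries in the balance equation."""
    return sum(sd ** 2 for _, sd in terms)

# Invented example: MB = begin + receipts - ending - shipments (kg SNM)
begin, receipts = (100.0, 0.4), (50.0, 0.3)
ending, shipments = (97.0, 0.4), (52.0, 0.2)
mb = begin[0] + receipts[0] - ending[0] - shipments[0]
sigma_mb = math.sqrt(materials_balance_variance([begin, receipts,
                                                 ending, shipments]))
# An |MB| well below ~3*sigma_mb is consistent with measurement error alone
```

    The detection sensitivity then follows from sigma_mb: a loss must exceed a few multiples of it before the balance test can flag it.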

  2. Variance and covariance calculations for nuclear materials accounting using 'MAVARIC'

    International Nuclear Information System (INIS)

    Nasseri, K.K.

    1987-01-01

    Determination of the detection sensitivity of a materials accounting system to the loss of special nuclear material (SNM) requires (1) obtaining a relation for the variance of the materials balance by propagation of the instrument errors for the measured quantities that appear in the materials balance equation and (2) substituting measured values and their error standard deviations into this relation and calculating the variance of the materials balance. MAVARIC (Materials Accounting VARIance Calculations) is a custom spreadsheet, designed using the second release of Lotus 1-2-3, that significantly reduces the effort required to make the necessary variance (and covariance) calculations needed to determine the detection sensitivity of a materials accounting system. Predefined macros within the spreadsheet allow the user to carry out long, tedious procedures with only a few keystrokes. MAVARIC requires that the user enter the following data into one of four data tables, depending on the type of the term in the materials balance equation: the SNM concentration, the bulk mass (or solution volume), the measurement error standard deviations, and the number of measurements made during an accounting period. The user can also specify if there are correlations between transfer terms. Based on these data entries, MAVARIC can calculate the variance of the materials balance and the square root of this variance, from which the detection sensitivity of the accounting system can be determined.

  3. Molecular dynamics and Monte Carlo calculations in statistical mechanics

    International Nuclear Information System (INIS)

    Wood, W.W.; Erpenbeck, J.J.

    1976-01-01

    Monte Carlo and molecular dynamics calculations on statistical mechanical systems are reviewed, covering some of the more significant recent developments. It is noted that the term molecular dynamics refers to the time-averaging technique for hard-core and square-well interactions and for continuous force-law interactions. Ergodic questions, methodology, quantum mechanical, Lorentz, and one-dimensional, hard-core, and square and triangular-well systems, short-range soft potentials, and other systems are included. 268 references

  4. Application of nonparametric statistics to material strength/reliability assessment

    International Nuclear Information System (INIS)

    Arai, Taketoshi

    1992-01-01

    An advanced material technology requires a data base on a wide variety of material behavior, which needs to be established experimentally. It may often happen that experiments are practically limited in terms of reproducibility or the range of test parameters. Statistical methods can be applied to quantify such uncertainties in the manner required from the reliability point of view. Statistical assessment involves determination of a most probable value and of the maximum and/or minimum value as a one-sided or two-sided confidence limit. A scatter of test data can be approximated by a theoretical distribution only if the goodness of fit satisfies a test criterion. Alternatively, nonparametric statistics (NPS), or distribution-free statistics, can be applied. Mathematical procedures for NPS are well established for dealing with most reliability problems; they handle only the order statistics of a sample. Mathematical formulas and some applications to engineering assessments are described. These include confidence limits of the median, population coverage of a sample, the required minimum sample size, and confidence limits of fracture probability. These applications demonstrate that nonparametric statistical estimation is useful for logical decision making when a large uncertainty exists. (author)
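    The distribution-free confidence limits for a median mentioned above come straight from order statistics: the interval between the k-th smallest and k-th largest observations covers the median with a confidence computed from the binomial distribution with p = 1/2, with no assumption about the underlying distribution. A minimal sketch:

```python
import math

def median_ci_ranks(n, conf=0.95):
    """Largest k such that the interval (x_(k), x_(n-k+1)) between the
    k-th smallest and k-th largest of n observations covers the median
    with at least the requested two-sided confidence (binomial, p=1/2)."""
    def coverage(k):
        # P(k <= #obs below median <= n-k) = 1 - 2*P(Bin(n,1/2) <= k-1)
        tail = sum(math.comb(n, i) for i in range(k)) / 2 ** n
        return 1.0 - 2.0 * tail
    k = 1
    while k + 1 <= n // 2 and coverage(k + 1) >= conf:
        k += 1
    return k, n - k + 1, coverage(k)

low, high, achieved = median_ci_ranks(10)  # ranks for a sample of 10
```

    For a sample of ten, the 2nd and 9th order statistics bound the median with about 97.9 % confidence, which is why such small samples can still support reliability statements.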

  5. Some calculations of the failure statistics of coated fuel particles

    International Nuclear Information System (INIS)

    Martin, D.G.; Hobbs, J.E.

    1977-03-01

    Statistical variations of coated fuel particle parameters were considered in stress model calculations and the resulting particle failure fraction versus burn-up evaluated. Variations in the following parameters were considered simultaneously: kernel diameter and porosity, thickness of the buffer, seal, silicon carbide and inner and outer pyrocarbon layers, which were all assumed to be normally distributed, and the silicon carbide fracture stress which was assumed to follow a Weibull distribution. Two methods, based respectively on random sampling and convolution of the variations were employed and applied to particles manufactured by Dragon Project and RFL Springfields. Convolution calculations proved the more satisfactory. In the present calculations variations in the silicon carbide fracture stress caused the greatest spread in burn-up for a given change in failure fraction; kernel porosity is the next most important parameter. (author)
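    The random-sampling route described above can be illustrated for a single parameter, the silicon carbide fracture stress: sample strengths from a Weibull distribution and count particles whose strength falls below the layer stress. The Weibull parameters and applied stress below are invented, and a real calculation varies all the kernel and layer parameters simultaneously:

```python
import math
import random

random.seed(7)

def weibull_strength(sigma0, m):
    """One sampled fracture stress from P_f(s) = 1 - exp(-(s/sigma0)^m),
    by inverting the CDF at a uniform random deviate."""
    return sigma0 * (-math.log(1.0 - random.random())) ** (1.0 / m)

SIGMA0, M = 400.0, 6.0  # characteristic strength (MPa) and Weibull modulus
APPLIED = 250.0         # assumed tensile stress in the SiC layer (MPa)

n = 20000
failed = sum(weibull_strength(SIGMA0, M) < APPLIED for _ in range(n))
mc_fraction = failed / n
analytic = 1.0 - math.exp(-(APPLIED / SIGMA0) ** M)
```

    With one varying parameter the sampled failure fraction simply converges on the Weibull CDF; the convolution approach the authors preferred computes the same fraction by integrating over the joint distribution of all parameters instead of sampling it.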

  6. Subcritical calculation of the nuclear material warehouse

    International Nuclear Information System (INIS)

    Garcia M, T.; Mazon R, R.

    2009-01-01

    In this work the subcriticality calculation for the nuclear material warehouse in the labyrinth of the TRIGA Mark III reactor at the Mexico Nuclear Center is presented. During the adaptation of the nuclear warehouse (vault I), the fuel was temporarily moved to the other warehouse (vault II), and the subcriticality calculation was also carried out for this temporary arrangement. The code used to calculate the effective multiplication factor was the Monte Carlo N-Particle Extended (MCNPX) particle transport code, developed by Los Alamos National Laboratory. (Author)

  7. Calculations on neutron irradiation damage in reactor materials

    International Nuclear Information System (INIS)

    Sone, Kazuho; Shiraishi, Kensuke

    1976-01-01

    Neutron irradiation damage calculations were made for Mo, Nb, V, Fe, Ni and Cr. Firstly, damage functions were calculated as a function of neutron energy with neutron cross sections of elastic and inelastic scattering, and (n,2n) and (n,γ) reactions filed in ENDF/B-III. Secondly, displacement damage expressed in displacements per atom (DPA) was estimated for neutron environments such as a fission spectrum, a thermal neutron reactor (JMTR), a fast breeder reactor (MONJU) and two fusion reactors (the Conceptual Design of Fusion Reactor in JAERI and ORNL-Benchmark). Then, the damage cross section, in units of dpa·barn, was defined as a factor to convert a given neutron fluence to the DPA value, and was calculated for the materials in the above neutron environments. Finally, production rates of helium and hydrogen atoms were calculated with the (n,α) and (n,p) cross sections in ENDF/B-III for the materials irradiated in the above reactors. (auth.)

  8. Automated material accounting statistics system (AMASS)

    International Nuclear Information System (INIS)

    Messinger, M.; Lumb, R.F.; Tingey, F.H.

    1981-01-01

    In this paper the modeling and statistical analysis of measurement and process data for nuclear material accountability is readdressed under a more general framework than that provided in the literature. The result of this effort is a computer program (AMASS) which uses the algorithms and equations of this paper to accomplish the analyses indicated. The actual application of the method to process data is emphasized

  9. Nomogram for sample size calculation on a straightforward basis for the kappa statistic.

    Science.gov (United States)

    Hong, Hyunsook; Choi, Yunhee; Hahn, Seokyung; Park, Sue Kyung; Park, Byung-Joo

    2014-09-01

    Kappa is a widely used measure of agreement. However, it may not be straightforward in some situations, such as sample size calculation, due to the kappa paradox: high agreement but low kappa. Hence, it seems reasonable in sample size calculation that the level of agreement under a certain marginal prevalence be considered in terms of a simple proportion of agreement rather than a kappa value. Therefore, sample size formulae and nomograms using a simple proportion of agreement rather than a kappa under certain marginal prevalences are proposed. A sample size formula was derived using the kappa statistic under the common correlation model and a goodness-of-fit statistic. The nomogram for the sample size formula was developed using SAS 9.3. Sample size formulae using a simple proportion of agreement instead of a kappa statistic, and nomograms that eliminate the inconvenience of using a mathematical formula, were produced. A nomogram for sample size calculation with a simple proportion of agreement should be useful in the planning stages when the focus of interest is on testing the hypothesis of interobserver agreement involving two raters and nominal outcome measures. Copyright © 2014 Elsevier Inc. All rights reserved.
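    The kappa paradox that motivates the authors' proportion-of-agreement formulation is easy to reproduce: two tables with the same raw agreement can give very different kappas when the marginal prevalences are skewed. Below is standard Cohen's kappa for a 2 x 2 two-rater table (not the authors' sample-size formula), with invented counts:

```python
def cohens_kappa(table):
    """Cohen's kappa for a 2x2 agreement table [[a, b], [c, d]]:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    (a, b), (c, d) = table
    n = a + b + c + d
    po = (a + d) / n                                       # observed agreement
    pe = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2  # chance agreement
    return (po - pe) / (1.0 - pe)

balanced = [[45, 5], [5, 45]]   # 90 % agreement, balanced marginals
skewed = [[90, 5], [5, 0]]      # 90 % agreement, extreme prevalence
k_balanced = cohens_kappa(balanced)
k_skewed = cohens_kappa(skewed)  # far lower despite identical agreement
```

    Both tables show 90 % raw agreement, yet kappa is 0.8 for the balanced marginals and slightly negative for the skewed ones, which is exactly why a sample size keyed to a kappa value can be misleading.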

  10. Statistics of ductile fracture surfaces: the effect of material parameters

    DEFF Research Database (Denmark)

    Ponson, Laurent; Cao, Yuanyuan; Bouchaud, Elisabeth

    2013-01-01

    distributed. The three dimensional analysis permits modeling of a three dimensional material microstructure and of the resulting three dimensional stress and deformation states that develop in the fracture process region. Material parameters characterizing void nucleation are varied and the statistics...... of the resulting fracture surfaces is investigated. All the fracture surfaces are found to be self-affine over a size range of about two orders of magnitude with a very similar roughness exponent of 0.56 ± 0.03. In contrast, the full statistics of the fracture surfaces is found to be more sensitive to the material...

  11. Numerical consideration for multiscale statistical process control method applied to nuclear material accountancy

    International Nuclear Information System (INIS)

    Suzuki, Mitsutoshi; Hori, Masato; Asou, Ryoji; Usuda, Shigekazu

    2006-01-01

    The multiscale statistical process control (MSSPC) method is applied to clarify the elements of material unaccounted for (MUF) in large scale reprocessing plants using numerical calculations. Continuous wavelet functions are used to decompose the process data, which simulate batch operation superimposed by various types of disturbance, and the disturbance components included in the data are divided into time and frequency spaces. The diagnosis of MSSPC is applied to distinguish abnormal events from the process data and shows how to detect abrupt and protracted diversions using principal component analysis. Quantitative performance of MSSPC for the time series data is shown with average run lengths given by Monte Carlo simulation, to compare with the non-detection probability β. Recent discussion about bias corrections in material balances is introduced and another approach is presented to evaluate MUF without assuming the measurement error model. (author)
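    The time-frequency decomposition underlying MSSPC can be illustrated with the simplest possible case: a one-level discrete Haar transform that splits a record into a smooth (approximation) part and a detail part that carries abrupt changes. This is a generic sketch, not the continuous-wavelet machinery of the paper, and the data are invented:

```python
import math

def haar_level(x):
    """One-level orthonormal Haar transform of an even-length sequence:
    returns (approximation, detail) coefficient lists."""
    s = math.sqrt(2.0)
    approx = [(x[2 * i] + x[2 * i + 1]) / s for i in range(len(x) // 2)]
    detail = [(x[2 * i] - x[2 * i + 1]) / s for i in range(len(x) // 2)]
    return approx, detail

# A step change (an 'abrupt diversion') shows up in the detail coefficients
data = [1.0, 1.0, 1.0, 4.0, 4.0, 4.0, 4.0, 4.0]
approx, detail = haar_level(data)

# Orthonormality preserves energy: sum of squares is unchanged
energy_in = sum(v * v for v in data)
energy_out = sum(v * v for v in approx + detail)
```

    The step produces one large detail coefficient at its location while the approximation tracks the level, which is the basic mechanism that lets MSSPC separate abrupt diversions from slow drifts before applying control charts at each scale.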

  12. Statistical Analysis of Reactor Pressure Vessel Fluence Calculation Benchmark Data Using Multiple Regression Techniques

    International Nuclear Information System (INIS)

    Carew, John F.; Finch, Stephen J.; Lois, Lambros

    2003-01-01

    The calculated >1-MeV pressure vessel fluence is used to determine the fracture toughness and integrity of the reactor pressure vessel. It is therefore of the utmost importance to ensure that the fluence prediction is accurate and unbiased. In practice, this assurance is provided by comparing the predictions of the calculational methodology with an extensive set of accurate benchmarks. A benchmarking database is used to provide an estimate of the overall average measurement-to-calculation (M/C) bias in the calculations. This average is used as an ad hoc multiplicative adjustment to the calculations to correct for the observed calculational bias. However, this average only provides a well-defined and valid adjustment of the fluence if the M/C data are homogeneous; i.e., the data are statistically independent and there is no correlation between subsets of M/C data. Typically, the identification of correlations between the errors in the database M/C values is difficult because the correlation is of the same magnitude as the random errors in the M/C data and varies substantially over the database. In this paper, an evaluation of a reactor dosimetry benchmark database is performed to determine the statistical validity of the adjustment to the calculated pressure vessel fluence. Physical mechanisms that could potentially introduce a correlation between the subsets of M/C ratios are identified and included in a multiple regression analysis of the M/C data. Rigorous statistical criteria are used to evaluate the homogeneity of the M/C data and determine the validity of the adjustment. For the database evaluated, the M/C data are found to be strongly correlated with dosimeter response threshold energy and dosimeter location (e.g., cavity versus in-vessel).
It is shown that because of the inhomogeneity in the M/C data, for this database, the benchmark data do not provide a valid basis for adjusting the pressure vessel fluence.The statistical criteria and methods employed in
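
    The regression step described above can be sketched in a few lines; the M/C values, threshold energies and location flags below are invented for illustration, not taken from the benchmark database:

```python
import numpy as np

# Hypothetical M/C ratios with two candidate predictors: dosimeter
# response threshold energy (MeV) and location (0 = in-vessel, 1 = cavity).
mc       = np.array([0.95, 0.97, 1.02, 1.05, 0.93, 1.08, 1.01, 0.99])
e_thresh = np.array([0.5,  1.0,  2.3,  3.0,  0.7,  3.5,  2.0,  1.5])
location = np.array([0,    0,    1,    1,    0,    1,    1,    0], dtype=float)

# Design matrix with intercept; fit M/C = b0 + b1*E + b2*loc by least squares.
X = np.column_stack([np.ones_like(mc), e_thresh, location])
beta, res, rank, sv = np.linalg.lstsq(X, mc, rcond=None)

# If the slope coefficients are statistically nonzero, the M/C data are
# not homogeneous and a single average bias adjustment is not justified.
pred = X @ beta
r2 = 1.0 - np.sum((mc - pred) ** 2) / np.sum((mc - mc.mean()) ** 2)
print(beta, r2)
```

    In the full analysis each coefficient would be accompanied by a significance test; the sketch only shows the fitted dependence.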

  13. Calculation of atom displacement cross section for structure material

    International Nuclear Information System (INIS)

    Liu Ping; Xu Yiping

    2015-01-01

    Neutron radiation damage in materials is an important consideration in reactor design. The radiation damage of materials mainly comes from atom displacements in the crystal structure. The reaction cross sections of charged particles, the cross sections of displacements per atom (DPA) and the KERMA factors are the basis of radiation damage calculations. In order to study the differences in DPA cross sections obtained with different codes and different evaluated nuclear data libraries, the DPA cross sections for structural materials were calculated with the UNF and NJOY codes, and comparisons of the results are given. The DPA cross sections from different evaluated nuclear data libraries were compared, and a comparison of DPA cross sections between NJOY and Monte Carlo codes was also performed. The results show that differences among these evaluated nuclear data libraries exist. (authors)

  14. Feasibility of real-time calculation of correlation integral derived statistics applied to EEG time series

    NARCIS (Netherlands)

    Broek, P.L.C. van den; Egmond, J. van; Rijn, C.M. van; Takens, F.; Coenen, A.M.L.; Booij, L.H.D.J.

    2005-01-01

    This study assessed the feasibility of online calculation of the correlation integral (C(r)) aiming to apply C(r)-derived statistics. For real-time application it is important to reduce calculation time. It is shown how our method works for EEG time series. Methods: To achieve online calculation of

  15. Statistics of foreign trade in radioactive materials

    International Nuclear Information System (INIS)

    Anon.

    2001-01-01

    The German Federal Office for Industry and Foreign Trade Control (BAFA) keeps annual statistics of the imports and exports of radioactive materials, nuclear fuels included. The entries, some of them with precise details, cover the participating countries and the radionuclides concerned as well as all kinds of radioactive materials. The tables listed in the article represent the overall balance of the development of imports and exports of radioactive materials for the years 1983 to 2000 arranged by activity levels, including the development of nuclear fuel imports and exports. For the year 2000, an additional trade balance for irradiated and unirradiated nuclear fuels and source materials differentiated by enrichment is presented for the countries involved. In 2000, some 2446 t of nuclear fuels and source materials were imported into the Federal Republic, while approx. 2720 t were exported. The chief trading partners are countries of the European Union and Russia, South Korea, and Brazil. (orig.) [de

  16. Statistics for lawyers

    CERN Document Server

    Finkelstein, Michael O

    2015-01-01

    This classic text, first published in 1990, is designed to introduce law students, law teachers, practitioners, and judges to the basic ideas of mathematical probability and statistics as they have been applied in the law. The third edition includes over twenty new sections, including the addition of timely topics, like New York City police stops, exonerations in death-sentence cases, projecting airline costs, and new material on various statistical techniques such as the randomized response survey technique, rare-events meta-analysis, competing risks, and negative binomial regression. The book consists of sections of exposition followed by real-world cases and case studies in which statistical data have played a role. The reader is asked to apply the theory to the facts, to calculate results (a hand calculator is sufficient), and to explore legal issues raised by quantitative findings. The authors' calculations and comments are given in the back of the book. As with previous editions, the cases and case stu...

  17. Calculation of the dynamic air flow resistivity of fibre materials

    DEFF Research Database (Denmark)

    Tarnow, Viggo

    1997-01-01

    The acoustic attenuation of acoustic fibre materials is mainly determined by the dynamic resistivity to an oscillating air flow. The dynamic resistance is calculated for a model with geometry close to that of real fibre material. The model consists of parallel cylinders placed randomly... The second procedure is an extension to oscillating air flow of the Brinkman self-consistent procedure for dc flow. The procedures are valid for volume concentrations of cylinders less than 0.1. The calculations show that for the density of fibres of interest for acoustic fibre materials the simple self...

  18. Calculation of radiation dose rate arisen from radionuclide contained in building materials

    International Nuclear Information System (INIS)

    Lai Tien Thinh; Nguyen Hao Quang

    2008-01-01

    This paper presents some results obtained using the MCNP5 program to calculate the radiation dose rate arising from radionuclides in building materials. On this basis, the limits of radionuclide content in building materials are discussed. The calculation results by MCNP are compared with those obtained by an analytical method. (author)

  19. MATERIAL COMPOSITIONS AND NUMBER DENSITIES FOR NEUTRONICS CALCULATIONS

    International Nuclear Information System (INIS)

    D. A. Thomas

    1996-01-01

    The purpose of this analysis is to calculate the number densities and isotopic weight percentages of the standard materials to be used in the neutronics (criticality and radiation shielding) evaluations by the Waste Package Development Department. The objective of this analysis is to provide material number density information which can be referenced by future neutronics design analyses, such as those supporting the Conceptual Design Report.
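
    The underlying conversion is a one-line formula, N = rho * N_A / M. A minimal sketch follows (the iron density and molar mass are standard reference values, not figures from this analysis):

```python
# Number density of a material: N = rho * N_A / M, commonly quoted in
# atoms/(barn*cm) for neutronics codes (1 barn = 1e-24 cm^2).
AVOGADRO = 6.02214076e23  # atoms/mol

def number_density(rho_g_cm3, molar_mass_g_mol):
    """Return atom density in atoms/(barn*cm)."""
    return rho_g_cm3 * AVOGADRO / molar_mass_g_mol * 1e-24

# Example: natural iron, rho = 7.87 g/cm^3, M = 55.845 g/mol.
n_fe = number_density(7.87, 55.845)
print(round(n_fe, 5))  # ~0.08487 atoms/(barn*cm)
```

    For a compound, the same formula is applied per element, weighted by stoichiometry and isotopic abundance.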

  20. Statistical methods in nuclear material accountancy: Past, present and future

    International Nuclear Information System (INIS)

    Pike, D.J.; Woods, A.J.

    1983-01-01

    The analysis of nuclear material inventory data is motivated by the desire to detect any loss or diversion of nuclear material, insofar as such detection may be feasible by statistical analysis of repeated inventory and throughput measurements. The early regulations, which laid down the specifications for the analysis of inventory data, were framed without acknowledging the essentially sequential nature of the data. It is the broad aim of this paper to discuss the historical nature of statistical analysis of inventory data including an evaluation of why statistical methods should be required at all. If it is accepted that statistical techniques are required, then two main areas require extensive discussion. First, it is important to assess the extent to which stated safeguards aims can be met in practice. Second, there is a vital need for reassessment of the statistical techniques which have been proposed for use in nuclear material accountancy. Part of this reassessment must involve a reconciliation of the apparent differences in philosophy shown by statisticians; but, in addition, the techniques themselves need comparative study to see to what extent they are capable of meeting realistic safeguards aims. This paper contains a brief review of techniques with an attempt to compare and contrast the approaches. It will be suggested that much current research is following closely similar lines, and that national and international bodies should encourage collaborative research and practical in-plant implementations. The techniques proposed require credibility and power; but at this point in time statisticians require credibility and a greater level of unanimity in their approach. A way ahead is proposed based on a clear specification of realistic safeguards aims, and a development of a unified statistical approach with encouragement for the performance of joint research. (author)

  1. Aspects of modern fracture statistics

    International Nuclear Information System (INIS)

    Tradinik, W.; Pabst, R.F.; Kromp, K.

    1981-01-01

    This contribution begins with introductory general remarks about fracture statistics. Then the fundamentals of the distribution of fracture probability are described. In the following part the application of Weibull statistics is justified. In the fourth chapter the microstructure of the material is considered in connection with calculations made to determine the fracture probability or risk of fracture. (RW) [de
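
    The two-parameter Weibull expression on which such fracture-probability calculations rest can be sketched directly (the stress values below are illustrative):

```python
import math

def weibull_failure_probability(sigma, sigma0, m):
    """Two-parameter Weibull fracture probability for applied stress sigma,
    characteristic strength sigma0, and Weibull modulus m:
    P_f = 1 - exp(-(sigma/sigma0)**m)."""
    return 1.0 - math.exp(-((sigma / sigma0) ** m))

# At sigma = sigma0 the failure probability is 1 - 1/e ~ 0.632 for any m;
# the modulus m controls how sharply P_f rises around sigma0.
p = weibull_failure_probability(300.0, 300.0, 10.0)
print(round(p, 3))  # 0.632
```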

  2. Effects of lattice parameters on piezoelectric constants in wurtzite materials: A theoretical study using first-principles and statistical-learning methods

    Science.gov (United States)

    Momida, Hiroyoshi; Oguchi, Tamio

    2018-04-01

    Longitudinal piezoelectric constant (e33) values of wurtzite materials listed in a structure database are calculated and analyzed by using first-principles and statistical-learning methods. It is theoretically shown that wurtzite materials with high e33 generally have small lattice constant ratios (c/a), almost independently of the constituent elements, and that e33 is approximately expressed as e33 ∝ c/a - (c/a)0, where (c/a)0 is the ideal lattice constant ratio. This relation also holds for highly piezoelectric ternary materials such as ScxAl1-xN. We conducted a search for high-piezoelectric wurtzite materials by identifying materials with smaller c/a values. It is proposed that the piezoelectricity of ZnO can be significantly enhanced by substitution of Zn with Ca.

  3. Feasibility of real-time calculation of correlation integral derived statistics applied to EEG time series

    NARCIS (Netherlands)

    van den Broek, PLC; van Egmond, J; van Rijn, CM; Takens, F; Coenen, AML; Booij, LHDJ

    2005-01-01

    Background: This study assessed the feasibility of online calculation of the correlation integral (C(r)) aiming to apply C(r)-derived statistics. For real-time application it is important to reduce calculation time. It is shown how our method works for EEG time series. Methods: To achieve online
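
    For reference, the correlation integral itself can be computed by a brute-force pair count on a delay-embedded series; this sketch is the naive O(N^2) version, not the optimized online method the study develops, and the test signal and embedding parameters are illustrative:

```python
import numpy as np

def correlation_integral(series, r, dim=2, delay=1):
    """Grassberger-Procaccia correlation integral C(r): fraction of pairs of
    delay-embedded vectors whose max-norm distance is below r."""
    n = len(series) - (dim - 1) * delay
    emb = np.column_stack([series[i * delay : i * delay + n] for i in range(dim)])
    # Pairwise Chebyshev (max-norm) distances between embedded vectors.
    d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
    iu = np.triu_indices(n, k=1)  # each unordered pair counted once
    return float(np.mean(d[iu] < r))

rng = np.random.default_rng(0)
x = rng.standard_normal(500)   # stand-in for an EEG epoch
c = correlation_integral(x, r=1.0)
print(c)
```

    C(r) is nondecreasing in r, which is the property exploited when C(r)-derived statistics such as correlation dimension estimates are formed.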

  4. Monte Carlo calculations of electron diffusion in materials

    International Nuclear Information System (INIS)

    Schroeder, U.G.

    1976-01-01

    By means of simulated experiments, various transport problems for 10 MeV electrons are investigated. For this purpose, a special Monte Carlo programme is developed, and with this programme calculations are made for several material arrangements. (orig./LN) [de

  5. Statistics of foreign trade in radioactive materials 2004

    International Nuclear Information System (INIS)

    Anon.

    2006-01-01

    The German Federal Office for Industry and Foreign Trade Control (BAFA) keeps annual statistics of the imports and exports of radioactive materials, nuclear fuels included. The entries, some of them with precise details, cover the participating countries and the radionuclides concerned as well as all kinds of radioactive materials. The tables listed in the article represent the overall balance of the development of imports and exports of radioactive materials for the years 1986 to 2004 arranged by activity levels, including the development of nuclear fuel imports and exports. For the year 2004, an additional trade balance for irradiated and unirradiated nuclear fuels and source materials differentiated by enrichment is presented for the countries involved. In 2004, some 2,558 t of nuclear fuels and source materials were imported into the Federal Republic, while approx. 1,971 t were exported. The chief trading partners are countries of the European Union, Canada, Russia and the USA. (orig.)

  6. Statistics of foreign trade in radioactive materials 2002

    International Nuclear Information System (INIS)

    Anon.

    2003-01-01

    The German Federal Office for Industry and Foreign Trade Control (BAFA) keeps annual statistics of the imports and exports of radioactive materials, nuclear fuels included. The entries, some of them with precise details, cover the participating countries and the radionuclides concerned as well as all kinds of radioactive materials. The tables listed in the article represent the overall balance of the development of imports and exports of radioactive materials for the years 1983 to 2002 arranged by activity levels, including the development of nuclear fuel imports and exports. For the year 2002, an additional trade balance for irradiated and unirradiated nuclear fuels and source materials differentiated by enrichment is presented for the countries involved. In 2002, some 3 070 t of nuclear fuels and source materials were imported into the Federal Republic, while approx. 3 052 t were exported. The chief trading partners are countries of the European Union, Russia, and the USA. (orig.)

  7. New Light-Harvesting Materials Using Accurate and Efficient Bandgap Calculations

    DEFF Research Database (Denmark)

    Castelli, Ivano Eligio; Hüser, Falco; Pandey, Mohnish

    2014-01-01

    Electronic bandgap calculations are presented for 2400 experimentally known materials from the Materials Project database and the bandgaps, obtained with different types of functionals within density functional theory and (partial) self-consistent GW approximation, are compared for 20 randomly...

  8. Statistical study on the strength of structural materials and elements

    International Nuclear Information System (INIS)

    Blume, J.A.; Dalal, J.S.; Honda, K.K.

    1975-07-01

    Strength data for structural materials and elements including concrete, reinforcing steel, structural steel, plywood elements, reinforced concrete beams, reinforced concrete columns, brick masonry elements, and concrete masonry walls were statistically analyzed. Sample statistics were computed for these data, and distribution parameters were derived for normal, lognormal, and Weibull distributions. Goodness-of-fit tests were performed on these distributions. Most data, except those for masonry elements, displayed fairly small dispersion. Dispersion in data for structural materials was generally found to be smaller than for structural elements. Lognormal and Weibull distributions displayed better overall fits to data than normal distribution, although either Weibull or lognormal distribution can be used to represent the data analyzed. (auth)

  9. Statistical analysis and interpolation of compositional data in materials science.

    Science.gov (United States)

    Pesenson, Misha Z; Suram, Santosh K; Gregoire, John M

    2015-02-09

    Compositional data are ubiquitous in chemistry and materials science: analysis of elements in multicomponent systems, combinatorial problems, etc., lead to data that are non-negative and sum to a constant (for example, atomic concentrations). The constant sum constraint restricts the sampling space to a simplex instead of the usual Euclidean space. Since statistical measures such as mean and standard deviation are defined for the Euclidean space, traditional correlation studies, multivariate analysis, and hypothesis testing may lead to erroneous dependencies and incorrect inferences when applied to compositional data. Furthermore, composition measurements that are used for data analytics may not include all of the elements contained in the material; that is, the measurements may be subcompositions of a higher-dimensional parent composition. Physically meaningful statistical analysis must yield results that are invariant under the number of composition elements, requiring the application of specialized statistical tools. We present specifics and subtleties of compositional data processing through discussion of illustrative examples. We introduce basic concepts, terminology, and methods required for the analysis of compositional data and utilize them for the spatial interpolation of composition in a sputtered thin film. The results demonstrate the importance of this mathematical framework for compositional data analysis (CDA) in the fields of materials science and chemistry.
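
    One standard CDA tool alluded to above is the centered log-ratio (clr) transform, which maps simplex-valued compositions into ordinary Euclidean space where means, covariances and interpolation are meaningful. A minimal sketch (the ternary composition is illustrative):

```python
import numpy as np

def clr(composition):
    """Centered log-ratio transform of a composition (strictly positive
    parts summing to a constant): log of each part over the geometric mean."""
    x = np.asarray(composition, dtype=float)
    g = np.exp(np.mean(np.log(x)))  # geometric mean of the parts
    return np.log(x / g)

# Atomic fractions of a hypothetical ternary composition.
comp = np.array([0.2, 0.3, 0.5])
z = clr(comp)
print(z, z.sum())  # clr coordinates always sum to zero
```

    The zero-sum constraint of clr coordinates is the image of the constant-sum constraint of the original composition; statistics computed on z avoid the spurious correlations that raw fractions induce.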

  10. A Statistical Learning Framework for Materials Science: Application to Elastic Moduli of k-nary Inorganic Polycrystalline Compounds.

    Science.gov (United States)

    de Jong, Maarten; Chen, Wei; Notestine, Randy; Persson, Kristin; Ceder, Gerbrand; Jain, Anubhav; Asta, Mark; Gamst, Anthony

    2016-10-03

    Materials scientists increasingly employ machine or statistical learning (SL) techniques to accelerate materials discovery and design. Such pursuits benefit from pooling training data across, and thus being able to generalize predictions over, k-nary compounds of diverse chemistries and structures. This work presents a SL framework that addresses challenges in materials science applications, where datasets are diverse but of modest size, and extreme values are often of interest. Our advances include the application of power or Hölder means to construct descriptors that generalize over chemistry and crystal structure, and the incorporation of multivariate local regression within a gradient boosting framework. The approach is demonstrated by developing SL models to predict bulk and shear moduli (K and G, respectively) for polycrystalline inorganic compounds, using 1,940 compounds from a growing database of calculated elastic moduli for metals, semiconductors and insulators. The usefulness of the models is illustrated by screening for superhard materials.
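
    The Hölder (power) mean used above to pool per-element properties into compound-level descriptors is simple to state; a sketch with hypothetical per-element values follows:

```python
import numpy as np

def holder_mean(values, p, weights=None):
    """Weighted Hoelder (power) mean M_p: p=1 arithmetic, p=-1 harmonic,
    p->0 geometric. Pools per-element properties (optionally weighted by
    stoichiometry) into one compound-level descriptor."""
    v = np.asarray(values, dtype=float)
    w = np.ones_like(v) if weights is None else np.asarray(weights, dtype=float)
    w = w / w.sum()
    if p == 0:
        return float(np.exp(np.sum(w * np.log(v))))  # geometric-mean limit
    return float(np.sum(w * v ** p) ** (1.0 / p))

# Hypothetical per-element property values pooled at several exponents.
vals = [10.0, 40.0]
print([round(holder_mean(vals, p), 2) for p in (-1, 0, 1)])  # [16.0, 20.0, 25.0]
```

    Sweeping the exponent p generates a family of descriptors, from which the learning algorithm can select the most predictive pooling for each property.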

  11. Statistical analysis and Kalman filtering applied to nuclear materials accountancy

    International Nuclear Information System (INIS)

    Annibal, P.S.

    1990-08-01

    Much theoretical research has been carried out on the development of statistical methods for nuclear material accountancy. In practice, physical, financial and time constraints mean that the techniques must be adapted to give an optimal performance in plant conditions. This thesis aims to bridge the gap between theory and practice, to show the benefits to be gained from a knowledge of the facility operation. Four different aspects are considered; firstly, the use of redundant measurements to reduce the error on the estimate of the mass of heavy metal in an 'accountancy tank' is investigated. Secondly, an analysis of the calibration data for the same tank is presented, establishing bounds for the error and suggesting a means of reducing them. Thirdly, a plant-specific method of producing an optimal statistic from the input, output and inventory data, to help decide between 'material loss' and 'no loss' hypotheses, is developed and compared with existing general techniques. Finally, an application of the Kalman Filter to materials accountancy is developed, to demonstrate the advantages of state-estimation techniques. The results of the analyses and comparisons illustrate the importance of taking into account a complete and accurate knowledge of the plant operation, measurement system, and calibration methods, to derive meaningful results from statistical tests on materials accountancy data, and to give a better understanding of critical random and systematic error sources. The analyses were carried out on the head-end of the Fast Reactor Reprocessing Plant, where fuel from the prototype fast reactor is cut up and dissolved. However, the techniques described are general in their application. (author)
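
    The state-estimation idea can be illustrated with a minimal scalar Kalman filter tracking a (possibly drifting) loss rate from a sequence of period MUF values; this is a generic textbook sketch, not the plant-specific filter of the thesis, and the MUF numbers are invented:

```python
import numpy as np

def kalman_1d(observations, q=0.01, r=1.0, x0=0.0, p0=10.0):
    """Scalar Kalman filter with a random-walk state model.
    q: process variance, r: measurement variance; returns state estimates."""
    x, p, out = x0, p0, []
    for z in observations:
        p = p + q                    # predict: state may drift
        k = p / (p + r)              # Kalman gain
        x = x + k * (z - x)          # update with the innovation z - x
        p = (1.0 - k) * p
        out.append(x)
    return np.array(out)

# Hypothetical per-period MUF (material unaccounted for) values.
muf = np.array([0.3, -0.1, 0.4, 0.2, 0.5, 0.1, 0.3, 0.2])
est = kalman_1d(muf)
print(est[-1])  # smoothed estimate of the per-period loss
```

    A persistent positive filtered estimate, relative to its error variance, is the kind of evidence a 'material loss' test statistic is built on.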

  12. Statistical model calculation of fission isomer excitation functions in (n,n') and (n,γ) reactions

    International Nuclear Information System (INIS)

    Chatterjee, A.; Athougies, A.L.; Mehta, M.K.

    1977-01-01

    A statistical model developed by Britt and others (1971, 1973) to analyze isomer excitation functions in spallation-type reactions like (α,2n) has been adapted to fission isomer calculations for (n,n') and (n,γ) reactions. Calculations done for the 238U(n,n')238mU and 235U(n,γ)236mU reactions have been compared with experimental measurements. A listing of the computer program ISOMER, written in FORTRAN IV to calculate the isomer-to-prompt ratios, is given. (M.G.B.)

  13. Optimal systematics of single-humped fission barriers for statistical calculations

    International Nuclear Information System (INIS)

    Mashnik, S.G.

    1993-01-01

    A systematic comparison of the existing phenomenological approaches and models for describing single-humped fission barriers suitable for fast computation is given. The experimental data on the excitation energy dependence of the fissility of compound nuclei are analyzed in the framework of the statistical approach, using different models for fission barriers, shell and pairing corrections and the level-density parameter, in order to identify their reliability and region of applicability for Monte Carlo calculations of evaporative cascades. The energy dependence of fission cross sections for reactions induced by intermediate-energy protons has been analyzed in the framework of the cascade-exciton model. 53 refs., 15 figs., 3 tabs

  14. On the calculation of Lorenz numbers for complex thermoelectric materials

    Science.gov (United States)

    Wang, Xufeng; Askarpour, Vahid; Maassen, Jesse; Lundstrom, Mark

    2018-02-01

    A first-principles informed approach to the calculation of Lorenz numbers for complex thermoelectric materials is presented and discussed. Example calculations illustrate the importance of using accurate band structures and energy-dependent scattering times. Results obtained by assuming that the scattering rate follows the density of states show that in the non-degenerate limit, Lorenz numbers below the commonly assumed lower limit of 2(kB/q)2 can occur. The physical cause of low Lorenz numbers is explained by the shape of the transport distribution. The numerical and physical issues that need to be addressed in order to produce accurate calculations of the Lorenz number are identified. The results of this study provide a general method that should contribute to the interpretation of measurements of total thermal conductivity and to the search for materials with low Lorenz numbers, which may provide improved thermoelectric figures of merit, zT.
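
    The role of the transport distribution can be illustrated with a textbook simplification (not the paper's first-principles method): a parabolic band with power-law scattering time tau ~ E^r in the non-degenerate limit, where the Lorenz number reduces to moments of the transport distribution and recovers the 2(kB/q)2 value quoted above for r = -1/2:

```python
import numpy as np

# Nondegenerate-limit Lorenz number for a parabolic band with power-law
# scattering, computed numerically from transport-integral moments.
# Returned in multiples of (kB/q)^2.
def lorenz_nondegenerate(r, xmax=60.0, n=200000):
    x = np.linspace(1e-6, xmax, n)          # reduced energy E/(kB*T)
    dx = x[1] - x[0]
    sigma = x ** (r + 1.5) * np.exp(-x)     # transport distribution * MB weight
    m0 = np.sum(sigma) * dx
    m1 = np.sum(x * sigma) * dx
    m2 = np.sum(x * x * sigma) * dx
    # L = <x^2> - <x>^2 over the sigma-weighted average.
    return m2 / m0 - (m1 / m0) ** 2

# Acoustic-phonon-like scattering (r = -1/2) gives the classical value 2;
# analytically L = r + 5/2 in these units.
print(round(lorenz_nondegenerate(-0.5), 3))  # ~2.0
```

    Scattering that falls faster with energy (more negative r) narrows the effective transport window and pushes L below 2, the effect the paper analyzes with realistic band structures.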

  15. International fusion materials irradiation facility and neutronic calculations for its test modules

    International Nuclear Information System (INIS)

    Sokcic-Kostic, M.

    1997-01-01

    The International Fusion Materials Irradiation Facility (IFMIF) is a projected high-intensity neutron source for materials testing. Neutron transport calculations for the IFMIF project are performed for a variety of reasons explained here. The results of MCNP neutronic calculations for IFMIF test modules with NaK- and He-cooled high flux test cells are presented in this paper. (author). 3 refs., 2 figs., 3 tabs

  16. Polychromatic Iterative Statistical Material Image Reconstruction for Photon-Counting Computed Tomography

    Directory of Open Access Journals (Sweden)

    Thomas Weidinger

    2016-01-01

    This work proposes a dedicated statistical algorithm to perform a direct reconstruction of material-decomposed images from data acquired with photon-counting detectors (PCDs) in computed tomography. It is based on local approximations (surrogates) of the negative logarithmic Poisson probability function. Exploiting the convexity of this function allows for parallel updates of all image pixels. Parallel updates can compensate for the rather slow convergence that is intrinsic to statistical algorithms. We investigate the accuracy of the algorithm for ideal photon-counting detectors. Complementarily, we apply the algorithm to simulation data of a realistic PCD with its spectral resolution limited by K-escape, charge sharing, and pulse-pileup. For data from both an ideal and realistic PCD, the proposed algorithm is able to correct beam-hardening artifacts and quantitatively determine the material fractions of the chosen basis materials. Via regularization we were able to achieve a reduction of image noise for the realistic PCD that is up to 90% lower compared to material images from a linear, image-based material decomposition using FBP images. Additionally, we find a dependence of the algorithm's convergence speed on the threshold selection within the PCD.

  17. Hybrid functional calculations of potential hydrogen storage material: Complex dimagnesium iron hydride

    KAUST Repository

    Ul Haq, Bakhtiar

    2014-06-01

    By employing state-of-the-art first-principles approaches, comprehensive investigations of a very promising hydrogen storage material, the Mg2FeH6 hydride, are presented. To expose its hydrogen storage capabilities, detailed structural, elastic, electronic, optical and dielectric aspects have been analysed in depth. The electronic band structure calculations demonstrate that Mg2FeH6 is a semiconducting material. The obtained optical bandgap (4.19 eV) also indicates that it is a transparent material for ultraviolet light, thus demonstrating its potential for optoelectronics applications. The calculated elastic properties reveal that Mg2FeH6 is a highly stiff and stable hydride. Finally, the calculated hydrogen (H2) storage capacity (5.47 wt.%), within a reasonable formation energy of -78 kJ mol-1 at room temperature, can be easily achieved, thus making Mg2FeH6 a potential material for practical H2 storage applications. Copyright © 2014, Hydrogen Energy Publications, LLC. Published by Elsevier Ltd. All rights reserved.

  18. PopSc: Computing Toolkit for Basic Statistics of Molecular Population Genetics Simultaneously Implemented in Web-Based Calculator, Python and R.

    Science.gov (United States)

    Chen, Shi-Yi; Deng, Feilong; Huang, Ying; Li, Cao; Liu, Linhai; Jia, Xianbo; Lai, Song-Jia

    2016-01-01

    Although various computer tools have been elaborately developed to calculate a series of statistics in molecular population genetics for both small- and large-scale DNA data, there is no efficient and easy-to-use toolkit available yet for exclusively focusing on the steps of mathematical calculation. Here, we present PopSc, a bioinformatic toolkit for calculating 45 basic statistics in molecular population genetics, which could be categorized into three classes, including (i) genetic diversity of DNA sequences, (ii) statistical tests for neutral evolution, and (iii) measures of genetic differentiation among populations. In contrast to the existing computer tools, PopSc was designed to directly accept the intermediate metadata, such as allele frequencies, rather than the raw DNA sequences or genotyping results. PopSc is first implemented as the web-based calculator with user-friendly interface, which greatly facilitates the teaching of population genetics in class and also promotes the convenient and straightforward calculation of statistics in research. Additionally, we also provide the Python library and R package of PopSc, which can be flexibly integrated into other advanced bioinformatic packages of population genetics analysis.
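
    As a sketch of the kind of statistic computed from such intermediate metadata, expected heterozygosity (gene diversity) can be obtained directly from allele frequencies; this is a generic implementation, not the PopSc API itself:

```python
# Expected heterozygosity at a locus from allele frequencies, the kind of
# intermediate metadata PopSc accepts instead of raw sequences.
# Basic form: H = 1 - sum(p_i^2); optional Nei small-sample correction.
def expected_heterozygosity(freqs, n=None):
    h = 1.0 - sum(p * p for p in freqs)
    if n is not None:              # n = number of sampled gene copies
        h *= n / (n - 1.0)
    return h

print(round(expected_heterozygosity([0.5, 0.5]), 3))        # 0.5
print(round(expected_heterozygosity([0.7, 0.2, 0.1]), 3))   # 0.46
```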

  19. PopSc: Computing Toolkit for Basic Statistics of Molecular Population Genetics Simultaneously Implemented in Web-Based Calculator, Python and R.

    Directory of Open Access Journals (Sweden)

    Shi-Yi Chen

    Although various computer tools have been elaborately developed to calculate a series of statistics in molecular population genetics for both small- and large-scale DNA data, there is no efficient and easy-to-use toolkit available yet for exclusively focusing on the steps of mathematical calculation. Here, we present PopSc, a bioinformatic toolkit for calculating 45 basic statistics in molecular population genetics, which could be categorized into three classes, including (i) genetic diversity of DNA sequences, (ii) statistical tests for neutral evolution, and (iii) measures of genetic differentiation among populations. In contrast to the existing computer tools, PopSc was designed to directly accept the intermediate metadata, such as allele frequencies, rather than the raw DNA sequences or genotyping results. PopSc is first implemented as the web-based calculator with user-friendly interface, which greatly facilitates the teaching of population genetics in class and also promotes the convenient and straightforward calculation of statistics in research. Additionally, we also provide the Python library and R package of PopSc, which can be flexibly integrated into other advanced bioinformatic packages of population genetics analysis.

  20. Error calculations statistics in radioactive measurements

    International Nuclear Information System (INIS)

    Verdera, Silvia

    1994-01-01

    Basic approaches and procedures frequently used in the practice of radioactive measurements. The statistical principles applied are part of Good Radiopharmaceutical Practices and quality assurance. Concept of error; classification into systematic and random errors. Statistical fundamentals: probability theory, population distributions, Bernoulli, Poisson, Gauss and Student's t distributions, the χ2 test, and error propagation based on analysis of variance. Bibliography. z table, t-test table, Poisson index, χ2 test table

  1. Calculation of statistic estimates of kinetic parameters from substrate uncompetitive inhibition equation using the median method

    Directory of Open Access Journals (Sweden)

    Pedro L. Valencia

    2017-04-01

    Full Text Available We provide initial rate data from enzymatic reaction experiments and tis processing to estimate the kinetic parameters from the substrate uncompetitive inhibition equation using the median method published by Eisenthal and Cornish-Bowden (Cornish-Bowden and Eisenthal, 1974; Eisenthal and Cornish-Bowden, 1974. The method was denominated the direct linear plot and consists in the calculation of the median from a dataset of kinetic parameters Vmax and Km from the Michaelis–Menten equation. In this opportunity we present the procedure to applicate the direct linear plot to the substrate uncompetitive inhibition equation; a three-parameter equation. The median method is characterized for its robustness and its insensibility to outlier. The calculations are presented in an Excel datasheet and a computational algorithm was developed in the free software Python. The kinetic parameters of the substrate uncompetitive inhibition equation Vmax, Km and Ks were calculated using three experimental points from the dataset formed by 13 experimental points. All the 286 combinations were calculated. The dataset of kinetic parameters resulting from this combinatorial was used to calculate the median which corresponds to the statistic estimator of the real kinetic parameters. A comparative statistical analyses between the median method and the least squares was published in Valencia et al. [3].

  2. Package of programs for calculating accidents involving melting of the materials in a fast-reactor vessel

    International Nuclear Information System (INIS)

    Vlasichev, G.N.

    1994-01-01

    Methods for calculating one-dimensional nonstationary temperature distribution in a system of physically coupled materials are described. Six computer programs developed for calculating accident processes for fast reactor core melt are described in the article. The methods and computer programs take into account melting, solidification, and, in some cases, vaporization of materials. The programs perform calculations for heterogeneous systems consisting of materials with arbitrary but constant composition and heat transfer conditions at material boundaries. Additional modules provide calculations of specific conditions of heat transfer between materials, the change in these conditions and configuration of the materials as a result of coolant boiling, melting and movement of the fuel and structural materials, temperature dependences of thermophysical properties of the materials, and heat release in the fuel. 11 refs., 3 figs
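
    The heat-conduction kernel underlying such programs can be sketched with an explicit finite-difference scheme; this is a single-material sketch with melting, vaporization and multi-material coupling omitted, and all numbers are illustrative:

```python
import numpy as np

# Explicit (FTCS) scheme for 1-D transient heat conduction, T_t = alpha*T_xx,
# with fixed-temperature boundaries.
def heat_1d(T0, alpha, dx, dt, steps, t_left, t_right):
    T = np.array(T0, dtype=float)
    r = alpha * dt / dx ** 2
    assert r <= 0.5, "explicit scheme unstable"  # stability limit
    for _ in range(steps):
        T[1:-1] += r * (T[2:] - 2.0 * T[1:-1] + T[:-2])
        T[0], T[-1] = t_left, t_right  # re-impose boundary temperatures
    return T

# Slab initially at 20 C, ends held at 500 C and 20 C; alpha in m^2/s.
T = heat_1d([20.0] * 21, alpha=1e-5, dx=0.005, dt=1.0, steps=2000,
            t_left=500.0, t_right=20.0)
print(T[10])  # mid-plane temperature after 2000 s (near the linear steady state)
```

    Phase change is typically layered on top of such a kernel via an enthalpy formulation or moving-boundary tracking, as in the programs described above.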

  3. Simple method of calculating the transient thermal performance of composite material and its applicable condition

    Institute of Scientific and Technical Information of China (English)

    张寅平; 梁新刚; 江忆; 狄洪发; 宁志军

    2000-01-01

    The degree of mixing of a composite material is defined, and the condition under which the effective thermal diffusivity can be used to calculate the transient thermal performance of the composite is studied. The analytical result shows that, for a prescribed temperature precision, there is a condition under which the transient temperature distribution in the composite material can be calculated using the effective thermal diffusivity. As an illustration, for a composite material whose two ends are held at constant temperatures, the condition is presented, and the factors affecting the relative error of the temperature calculated using the effective thermal diffusivity are discussed.

  4. Geant4 calculations for space radiation shielding material Al2O3

    Science.gov (United States)

    Capali, Veli; Acar Yesil, Tolga; Kaya, Gokhan; Kaplan, Abdullah; Yavuz, Mustafa; Tilki, Tahir

    2015-07-01

    Aluminium oxide (Al2O3) is one of the most widely used materials in engineering applications. It is a significant aluminium compound because of its hardness, and a refractory material owing to its high melting point. It has engineering applications in diverse fields such as ballistic armour systems, wear components, electrical and electronic substrates, automotive parts, components for the electrical industry and aero-engines. It is also used as a dosimeter for radiation protection and therapy applications owing to its optically stimulated luminescence properties. In this study, stopping powers and penetration distances have been calculated for alpha particles, protons, electrons and gamma rays in the space radiation shielding material Al2O3 for incident energies of 1 keV - 1 GeV using the GEANT4 calculation code.

  5. Implementation of decommissioning materials conditional clearance process to the OMEGA calculation code

    International Nuclear Information System (INIS)

    Zachar, Matej; Necas, Vladimir; Daniska, Vladimir

    2011-01-01

    The activities performed during the decommissioning of a nuclear installation inevitably produce a large amount of radioactive material to be managed. A significant part of this material has a radioactivity level low enough to allow its release to the environment without any restriction on further use. For materials with radioactivity slightly above the defined unconditional clearance level, on the other hand, there is a possibility of releasing them conditionally for a specific purpose, in accordance with a developed scenario ensuring that radiation exposure limits for the population are not exceeded. Managing decommissioning materials in this way could increase the recycling and reuse of solid materials and save radioactive waste repository volume. In the paper, the implementation of the conditional release process in the OMEGA code, which is used for the calculation of decommissioning parameters, is analyzed in detail. The analytical approach to the material parameter assessment first assumes a definition of radiological limit conditions, based on the evaluation of possible scenarios for conditionally released materials, and their application to the appropriate sorter type in the existing material and radioactivity flow system. Other calculation procedures with the relevant technological and economic parameters, mathematically describing e.g. final radiation monitoring or transport outside the site, are applied to the OMEGA code in the next step. Together with the limits, the new procedures, which create an independent material stream, allow evaluation of the conditional material release process during decommissioning. Model calculations evaluating various scenarios with different input parameters and considering conditional release of materials to the environment are performed to verify the implemented methodology. Output parameters and results of the model assessment are presented and discussed in the final part of the paper.

  6. Raw material consumption of the European Union--concept, calculation method, and results.

    Science.gov (United States)

    Schoer, Karl; Weinzettel, Jan; Kovanda, Jan; Giegrich, Jürgen; Lauwigi, Christoph

    2012-08-21

    This article presents the concept, calculation method, and first results of the "Raw Material Consumption" (RMC) economy-wide material flow indicator for the European Union (EU). The RMC measures the final domestic consumption of products in terms of raw material equivalents (RME), i.e. raw materials used in the complete production chain of consumed products. We employed the hybrid input-output life cycle assessment method to calculate RMC. We first developed a highly disaggregated environmentally extended mixed unit input output table and then applied life cycle inventory data for imported products without appropriate representation of production within the domestic economy. Lastly, we treated capital formation as intermediate consumption. Our results show that services, often considered as a solution for dematerialization, account for a significant part of EU raw material consumption, which emphasizes the need to focus on the full production chains and dematerialization of services. Comparison of the EU's RMC with its domestic extraction shows that the EU is nearly self-sufficient in biomass and nonmetallic minerals but extremely dependent on direct and indirect imports of fossil energy carriers and metal ores. This implies an export of environmental burden related to extraction and primary processing of these materials to the rest of the world. Our results demonstrate that internalizing capital formation has significant influence on the calculated RMC.
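
    The raw-material-equivalents calculation rests on the standard Leontief input-output relation, total output x = (I - A)^-1 y, with raw-material extraction intensities applied to x. A toy two-sector sketch of this idea (illustrative numbers; the EU calculation uses a highly disaggregated hybrid table):

```python
import numpy as np

def raw_material_consumption(A, e, y):
    """Raw Material Consumption of a final-demand vector y in a Leontief
    input-output model: A is the technical-coefficient matrix, e holds
    raw-material extraction intensities per unit of sector output, and
    x = (I - A)^-1 y is the total output required to deliver y."""
    n = A.shape[0]
    x = np.linalg.solve(np.eye(n) - A, y)  # total (direct + indirect) output
    return float(e @ x)                    # raw-material equivalents of y
```

    Because x includes indirect requirements along the whole production chain, the RMC always exceeds the purely direct raw-material content e @ y, which is the point of measuring consumption in raw material equivalents.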

  7. Statistical mechanics

    CERN Document Server

    Schwabl, Franz

    2006-01-01

    The completely revised new edition of the classical book on Statistical Mechanics covers the basic concepts of equilibrium and non-equilibrium statistical physics. In addition to a deductive approach to equilibrium statistics and thermodynamics based on a single hypothesis - the form of the microcanonical density matrix - this book treats the most important elements of non-equilibrium phenomena. Intermediate calculations are presented in complete detail. Problems at the end of each chapter help students to consolidate their understanding of the material. Beyond the fundamentals, this text demonstrates the breadth of the field and its great variety of applications. Modern areas such as renormalization group theory, percolation, stochastic equations of motion and their applications to critical dynamics, kinetic theories, as well as fundamental considerations of irreversibility, are discussed. The text will be useful for advanced students of physics and other natural sciences; a basic knowledge of quantum mechan...

  8. Geant4 calculations for space radiation shielding material Al2O3

    Directory of Open Access Journals (Sweden)

    Capali Veli

    2015-01-01

    Full Text Available Aluminium oxide (Al2O3) is one of the most widely used materials in engineering applications. It is a significant aluminium compound because of its hardness, and a refractory material owing to its high melting point. It has engineering applications in diverse fields such as ballistic armour systems, wear components, electrical and electronic substrates, automotive parts, components for the electrical industry and aero-engines. It is also used as a dosimeter for radiation protection and therapy applications owing to its optically stimulated luminescence properties. In this study, stopping powers and penetration distances have been calculated for alpha particles, protons, electrons and gamma rays in the space radiation shielding material Al2O3 for incident energies of 1 keV – 1 GeV using the GEANT4 calculation code.

  9. Calculating Parameters of Chip Formation and Cutting Forces of Plastic Materials

    Directory of Open Access Journals (Sweden)

    S. V Grubyi

    2017-01-01

    Full Text Available Together with the kinematics and the geometric parameters of the tool, the parameters of chip formation and the cutting forces form the basis for theoretical analysis of various types of machining. The objective of this research is to develop a calculation technique for evaluating the parameters of chip formation and the cutting forces when machining plastic materials such as structural carbon and alloy steels and aluminum alloys. The subject of the research is the cutting process itself, together with algorithms and calculation methods in the field under consideration. A theoretical (calculation) method was used to analyse the parameters, and the results of qualitative and quantitative calculations were compared with published experimental data. For chip formation and cutting forces, a model with a single shear plane is analyzed, which allows a quantitative evaluation of the parameters and process factors. Modern domestic and foreign publications on metal cutting use this model on reasonable grounds. The novelty of the proposed technique is that the calculation of parameters and cutting forces does not require experimental work and is based on the known mechanical characteristics of the machined and tool materials. The calculated parameters are the shear angle, the velocity factor of the chip, the relative shift, the friction coefficient at the front surface, the cutting forces, etc. Calculation of these parameters makes it possible to move on to thermo-physical problems and to the analysis of tool wear and durability, accuracy, quality and performance. The sequence of calculations is implemented in a user program written in an algorithmic programming language, with results presented in graphical or tabular form. The calculation technique is a structural component of cutting theory and is intended for use in research and engineering calculations in this subject area.
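
    The single-shear-plane model admits closed-form textbook relations. The sketch below uses Merchant's classical shear-angle solution, which may differ in detail from the author's exact formulation; all names and units are illustrative:

```python
from math import atan, cos, degrees, pi, radians, sin

def merchant_model(tau_s, b, h, gamma_deg, mu):
    """Single-shear-plane (Merchant) model of orthogonal cutting.
    tau_s: shear flow stress of the work material; b, h: width of cut and
    undeformed chip thickness; gamma_deg: rake angle; mu: tool-chip
    friction coefficient.  Returns shear angle [deg], chip-thickness
    ratio, relative shift (shear strain) and cutting force."""
    gamma = radians(gamma_deg)
    beta = atan(mu)                    # friction angle on the rake face
    phi = pi / 4 - (beta - gamma) / 2  # Merchant's shear-angle solution
    r = sin(phi) / cos(phi - gamma)    # chip ratio h / h_chip
    strain = cos(gamma) / (sin(phi) * cos(phi - gamma))  # relative shift
    Fs = tau_s * b * h / sin(phi)      # force along the shear plane
    Fc = Fs * cos(beta - gamma) / cos(phi + beta - gamma)  # cutting force
    return degrees(phi), r, strain, Fc
```

    In the frictionless, zero-rake limit the model reduces to the familiar 45-degree shear plane with unit chip ratio, a convenient sanity check.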

  10. First-principles Electronic Structure Calculations for Scintillation Phosphor Nuclear Detector Materials

    Science.gov (United States)

    Canning, Andrew

    2013-03-01

    Inorganic scintillation phosphors (scintillators) are extensively employed as radiation detector materials in many fields of applied and fundamental research such as medical imaging, high energy physics, astrophysics, oil exploration and nuclear materials detection for homeland security and other applications. The ideal scintillator for gamma ray detection must have exceptional performance in terms of stopping power, luminosity, proportionality, speed, and cost. Recently, trivalent lanthanide dopants such as Ce and Eu have received greater attention for fast and bright scintillators as the optical 5d to 4f transition is relatively fast. However, crystal growth and production costs remain challenging for these new materials, so there is still a need for new higher-performing scintillators that meet the needs of the different application areas. First-principles calculations can provide useful insight into the chemical and electronic properties of such materials and hence can aid in the search for better new scintillators. In the past there has been little first-principles work done on scintillator materials, in part because it means modeling f electrons in lanthanides as well as complex excited-state and scattering processes. In this talk I will give an overview of the scintillation process and show how first-principles calculations can be applied to such systems to gain a better understanding of the physics involved. I will also present work on a high-throughput first-principles approach to select new scintillator materials for fabrication, as well as more detailed calculations to study trapping processes etc. that can limit their brightness. This work, in collaboration with experimental groups, has led to the discovery of some new bright scintillators. Work supported by the U.S. Department of Homeland Security and carried out under U.S. Department of Energy Contract no. DE-AC02-05CH11231 at Lawrence Berkeley National Laboratory.

  11. Statistical methods for including two-body forces in large system calculations

    International Nuclear Information System (INIS)

    Grimes, S.M.

    1980-07-01

    Large systems of interacting particles are often treated by assuming that the effect on any one particle of the remaining N-1 may be approximated by an average potential. This approach reduces the problem to that of finding the bound-state solutions for a particle in a potential; statistical mechanics is then used to obtain the properties of the many-body system. In some physical systems this approach may not be acceptable, because the two-body force component cannot be treated in this one-body limit. A technique for incorporating two-body forces in such calculations in a more realistic fashion is described. 1 figure

  12. Statistical properties of material strength for reliability evaluation of components of fast reactors. Austenitic stainless steels

    International Nuclear Information System (INIS)

    Takaya, Shigeru; Sasaki, Naoto; Tomobe, Masato

    2015-03-01

    Many efforts have been made to implement the System Based Code concept, whose objective is to optimize the margins dispersed among several codes and standards. Failure probability is expected to be a promising quantitative index for the optimization of margins, and statistical information on the random variables is needed to evaluate failure probability. Material strength, such as tensile strength, is an important random variable, but sufficient statistical information has not yet been provided. In this report, statistical properties of material strength, such as creep rupture time, steady creep strain rate, yield stress, tensile stress, flow stress, fatigue life and the cyclic stress-strain curve, were estimated for SUS304 and 316FR steel, which are typical structural materials for fast reactors. Other austenitic stainless steels such as SUS316 were also used for the statistical estimation of some material properties, such as fatigue life. These materials are registered in the JSME code for design and construction of fast reactors, so test data used in developing the code were used as much as possible in this report. (author)

  13. First Principles Calculations of Electronic Excitations in 2D Materials

    DEFF Research Database (Denmark)

    Rasmussen, Filip Anselm

    electronic transport, optical and chemical properties. On the other hand it has been shown to be a great starting point for a systematic perturbation-theory approach to obtain the so-called quasiparticle spectrum. In the GW approximation one considers the potential from a charged excitation...... as if it is being screened by the electrons in the material. This method has been very successful for calculating quasiparticle energies of bulk materials but results have been more varied for 2D materials. The reason is that the 2D confined electrons are less able to screen the added charge and some...

  14. Vectorization of nuclear codes for atmospheric transport and exposure calculation of radioactive materials

    International Nuclear Information System (INIS)

    Asai, Kiyoshi; Shinozawa, Naohisa; Ishikawa, Hirohiko; Chino, Masamichi; Hayashi, Takashi

    1983-02-01

    Three computer codes, MATHEW and ADPIC of LLNL and GAMPUL of JAERI, for the prediction of the wind field, concentration and external exposure rate of airborne radioactive materials, are vectorized and the results are presented. Using the continuity equation of incompressible flow as a constraint, MATHEW calculates the three-dimensional wind field by a variational method. Using the particle-in-cell method, ADPIC calculates the advection and diffusion of radioactive materials in a three-dimensional wind field over terrain, and gives the concentration of the materials in each cell of the domain. GAMPUL calculates the external exposure rate assuming a Gaussian plume type distribution of concentration. The vectorized code MATHEW attained a 7.8-fold speedup on a FACOM 230-75 APU vector processor. ADPIC and GAMPUL are estimated to attain 1.5-fold and 4-fold speedups, respectively, on a CRAY-1 type vector processor. (author)
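
    The Gaussian plume distribution assumed by GAMPUL-type exposure calculations has a standard closed form. A sketch with a ground-reflection term follows; parameter names are illustrative, and this is the generic textbook plume, not the code's actual implementation:

```python
from math import exp, pi

def gaussian_plume(Q, u, y, z, H, sigma_y, sigma_z):
    """Ground-reflected Gaussian plume concentration (e.g. Bq/m^3) at
    crosswind distance y and height z, for release rate Q, wind speed u
    and effective release height H.  sigma_y, sigma_z are the dispersion
    parameters at the downwind distance of interest."""
    lateral = exp(-y**2 / (2.0 * sigma_y**2))
    vertical = (exp(-(z - H)**2 / (2.0 * sigma_z**2))
                + exp(-(z + H)**2 / (2.0 * sigma_z**2)))  # image source
    return Q / (2.0 * pi * sigma_y * sigma_z * u) * lateral * vertical
```

    On the centerline of a ground-level release the reflection term simply doubles the unreflected concentration.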

  15. A unified statistical framework for material decomposition using multienergy photon counting x-ray detectors

    International Nuclear Information System (INIS)

    Choi, Jiyoung; Kang, Dong-Goo; Kang, Sunghoon; Sung, Younghun; Ye, Jong Chul

    2013-01-01

    Purpose: Material decomposition using multienergy photon counting x-ray detectors (PCXD) has been an active research area over the past few years. Even with some success, the problem of optimal energy selection and three-material decomposition including malignant tissue is still an ongoing research topic, and more systematic studies are required. This paper aims to address this in a unified statistical framework in a mammographic environment. Methods: A unified statistical framework for energy level optimization and decomposition of three materials is proposed. In particular, an energy level optimization algorithm is derived using the theory of the minimum variance unbiased estimator, and an iterative algorithm is proposed for material composition as well as system parameter estimation under the unified statistical estimation framework. To verify the performance of the proposed algorithm, the authors performed simulation studies as well as real experiments using a physical breast phantom and an ex vivo breast specimen. Quantitative comparisons using various performance measures were conducted, and qualitative performance evaluations for the ex vivo breast specimen were also performed by comparing against the ground-truth malignant tissue areas identified by radiologists. Results: Both simulation and real experiments confirmed that the energy bins optimized by the proposed method allow better material decomposition quality. Moreover, for specimen thickness estimation errors up to 2 mm, the proposed method provides good reconstruction results in both simulation and real ex vivo breast phantom experiments compared to existing methods. Conclusions: The proposed statistical framework for PCXD has been successfully applied to the energy optimization and decomposition of three materials in a mammographic environment. Experimental results using the physical breast phantom and ex vivo specimen support the practicality of the proposed algorithm

  16. PROFESSIONAL CHALLENGES CONCERNING THE CALCULATION AND USE OF MATERIALITY

    OpenAIRE

    Daniel Botez

    2013-01-01

    Materiality is an essential reference point in judgments about the economic environment: one speaks of significant influence, significant risk, significant accounting policies, and the like. In accounting and auditing, the term "materiality" is used when presenting financial information, when evaluating risk or partial information, and when investigating events using statistical sampling techniques. Starting from the premise that the conceptual and practical approach of the threshold of signi...

  17. Evaluation of calculational and material models for concrete containment structures

    International Nuclear Information System (INIS)

    Dunham, R.S.; Rashid, Y.R.; Yuan, K.A.

    1984-01-01

    A computer code utilizing an appropriate finite element, material and constitutive model has been under development as part of a comprehensive effort by the Electric Power Research Institute (EPRI) to develop and validate a realistic methodology for the ultimate load analysis of concrete containment structures. A preliminary evaluation of the reinforced and prestressed concrete modeling capabilities recently implemented in the ABAQUS-EPGEN code has been completed. This effort focuses on using a state-of-the-art calculational model to predict the behavior of large-scale reinforced concrete slabs tested under uniaxial and biaxial tension to simulate the wall of a typical concrete containment structure under internal pressure. This paper gives comparisons between calculations and experimental measurements for a uniaxially-loaded specimen. The calculated strains compare well with the measured strains in the reinforcing steel; however, the calculations gave diffuse cracking patterns that do not agree with the discrete cracking observed in the experiments. Recommendations for improvement of the calculational models are given. (orig.)

  18. Fracture statistics of brittle materials with intergranular cracks

    International Nuclear Information System (INIS)

    Batdorf, S.B.

    1975-01-01

    When brittle materials are used for structural purposes, the initial design must take their relatively large dispersion in fracture stress properly into account. This is difficult when failure probabilities must be extremely low, because empirically based statistical theories of fracture, such as that of Weibull, cannot reliably predict the stresses corresponding to failure probabilities much lower than 1/n, where n is the number of specimens tested. Recently McClintock proposed a rational method of predicting the size distribution of intergranular cracks. The method assumed that large cracks are random aggregations of cracked grain boundaries. The present paper employs this method to find the size distribution of penny-shaped cracks, and also P(f), the probability of failure of a specimen of volume V subjected to a tensile stress sigma. The present paper is a pioneering effort, which should be applicable to ceramics and related materials
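
    For contrast with the crack-statistics approach, the empirically based Weibull weakest-link model mentioned above can be written down directly. The sketch below is the generic two-parameter form, not the paper's crack-aggregation method:

```python
from math import exp, log

def weibull_failure_prob(sigma, V, sigma0, m, V0=1.0):
    """Two-parameter Weibull weakest-link failure probability for a
    volume V under uniform tensile stress sigma:
        P_f = 1 - exp(-(V/V0) * (sigma/sigma0)**m)
    with reference volume V0, scale sigma0 and Weibull modulus m."""
    return 1.0 - exp(-(V / V0) * (sigma / sigma0) ** m)

def stress_at_prob(P_f, V, sigma0, m, V0=1.0):
    """Invert the Weibull law for the stress at a target P_f."""
    return sigma0 * (-log(1.0 - P_f) * V0 / V) ** (1.0 / m)
```

    Because the stress enters through the power m, a modest error in the empirically fitted modulus shifts the predicted stress at P_f = 10^-6 substantially, which is the extrapolation problem the abstract raises.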

  19. Multi-scale calculation based on dual domain material point method combined with molecular dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Dhakal, Tilak Raj [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-02-27

    This dissertation combines the dual domain material point method (DDMP) with molecular dynamics (MD) in an attempt to create a multi-scale numerical method to simulate materials undergoing large deformations with high strain rates. In these types of problems, the material is often in a thermodynamically non-equilibrium state, and conventional constitutive relations are often not available. In this method, the closure quantities, such as stress, at each material point are calculated from an MD simulation of a group of atoms surrounding the material point. Rather than restricting the multi-scale simulation to a small spatial region, such as phase interfaces or crack tips, this multi-scale method can be used to consider non-equilibrium thermodynamic effects in a macroscopic domain. This method takes advantage of the fact that the material points only communicate with mesh nodes, not among themselves; therefore MD simulations for material points can be performed independently in parallel. First, using a one-dimensional shock problem as an example, the numerical properties of the original material point method (MPM), the generalized interpolation material point (GIMP) method, the convected particle domain interpolation (CPDI) method, and the DDMP method are investigated. Among these methods, only the DDMP method converges as the number of particles increases, but the large number of particles needed for convergence makes the method very expensive, especially in our multi-scale method where we calculate stress in each material point using MD simulation. To improve DDMP, the sub-point method is introduced in this dissertation, which provides high-quality numerical solutions with a very small number of particles. The multi-scale method based on DDMP with sub-points is successfully implemented for a one-dimensional problem of shock wave propagation in a cerium crystal. The MD simulation to calculate stress in each material point is performed in GPU using CUDA to accelerate the

  20. Predicted phototoxicities of carbon nano-material by quantum mechanical calculations

    Science.gov (United States)

    The purpose of this research is to develop a predictive model for the phototoxicity potential of carbon nanomaterials (fullerenols and single-walled carbon nanotubes). This model is based on the quantum mechanical (ab initio) calculations on these carbon-based materials and compa...

  1. STATISTICAL ANALYSIS OF RAW SUGAR MATERIAL FOR SUGAR PRODUCER COMPLEX

    OpenAIRE

    A. A. Gromkovskii; O. I. Sherstyuk

    2015-01-01

    Summary. The article examines statistical data on the development of the average weight and average sugar content of sugar beet roots. The successful solution of the problem of forecasting these raw-material indices is essential for the control of a sugar-producing complex. By calculating the autocorrelation function, the paper demonstrates that a trend component predominates in the growth of the raw-material characteristics. To construct the prediction model, it is proposed to use an autoregressive fir...
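
    The trend diagnostic described above, the sample autocorrelation function, can be sketched as follows (a minimal generic implementation, not the paper's code):

```python
import numpy as np

def autocorrelation(x, max_lag):
    """Sample autocorrelation of a series x for lags 0..max_lag.
    Values staying close to 1 over many lags indicate a dominant trend
    component, the situation the abstract reports for the raw indices."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    var = np.dot(x, x)
    return np.array([np.dot(x[:len(x) - k], x[k:]) / var
                     for k in range(max_lag + 1)])
```

    A strongly trending series, such as a pure linear ramp, keeps its lag-1 autocorrelation near 1, whereas white noise would drop toward 0 immediately.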

  2. Materials Informatics: Statistical Modeling in Material Science.

    Science.gov (United States)

    Yosipof, Abraham; Shimanovich, Klimentiy; Senderowitz, Hanoch

    2016-12-01

    Materials informatics applies informatics principles to materials science in order to assist in the discovery and development of new materials. Central to the field is the application of data mining techniques, and in particular machine learning approaches, often referred to as Quantitative Structure Activity Relationship (QSAR) modeling, to derive predictive models for a variety of materials-related "activities". Such models can accelerate the development of new materials with favorable properties and provide insight into the factors governing these properties. Here we provide a comparison between medicinal chemistry/drug design and materials-related QSAR modeling and highlight the importance of developing new, materials-specific descriptors. We survey some of the most recent QSAR models developed in materials science with focus on energetic materials and on solar cells. Finally we present new examples of material-informatic analyses of solar cell libraries produced from metal oxides using combinatorial material synthesis. Different analyses lead to interesting physical insights as well as to the design of new cells with potentially improved photovoltaic parameters. © 2016 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  3. Design a computational program to calculate the composition variations of nuclear materials in the reactor operations

    International Nuclear Information System (INIS)

    Mohmmadnia, Meysam; Pazirandeh, Ali; Sedighi, Mostafa; Bahabadi, Mohammad Hassan Jalili; Tayefi, Shima

    2013-01-01

    Highlights: ► The atomic densities of light and heavy materials are calculated. ► The solution is obtained using the Runge–Kutta–Fehlberg method. ► The material depletion is calculated for constant-flux and constant-power conditions. - Abstract: The present work investigates an appropriate way to calculate the variation of nuclide composition in the reactor core during operation. Dedicated software has been designed for this purpose using C#. The mathematical approach is based on the solution of the Bateman differential equations using a Runge–Kutta–Fehlberg method. Material depletion at constant flux or constant power can be calculated with this software. The inputs include reactor power, time step, initial and final times, order of the Taylor series used to calculate the time-dependent flux, time unit, core material composition at the initial condition (consisting of light and heavy radioactive materials), acceptable error criterion, decay constant library, cross-section database and calculation type (constant flux or constant power). The atomic densities of light and heavy fission products during reactor operation are obtained with high accuracy as the program outputs. The results from this method show good agreement with the analytical solution.
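
    For the simplest two-nuclide chain the Bateman depletion equations can be integrated numerically and checked against the analytical solution. The sketch below uses a fixed-step classical RK4 for brevity, rather than the adaptive Runge-Kutta-Fehlberg variant the software employs, and all names are illustrative:

```python
import numpy as np

def bateman_rhs(N, lam_a, lam_b):
    """Right-hand side of the simplest Bateman chain A -> B -> stable:
    dN_A/dt = -lam_a*N_A ;  dN_B/dt = lam_a*N_A - lam_b*N_B."""
    n_a, n_b = N
    return np.array([-lam_a * n_a, lam_a * n_a - lam_b * n_b])

def rk4_depletion(N0, lam_a, lam_b, t_end, steps):
    """Integrate the chain with classical fixed-step 4th-order Runge-Kutta."""
    N = np.array(N0, dtype=float)
    h = t_end / steps
    for _ in range(steps):
        k1 = bateman_rhs(N, lam_a, lam_b)
        k2 = bateman_rhs(N + h / 2 * k1, lam_a, lam_b)
        k3 = bateman_rhs(N + h / 2 * k2, lam_a, lam_b)
        k4 = bateman_rhs(N + h * k3, lam_a, lam_b)
        N = N + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return N
```

    With a sufficiently small step the numerical densities agree with the closed-form Bateman solution to high accuracy, which is the kind of comparison the abstract reports.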

  4. Calculation Software versus Illustration Software for Teaching Statistics

    DEFF Research Database (Denmark)

    Mortensen, Peter Stendahl; Boyle, Robin G.

    1999-01-01

    As personal computers have become more and more powerful, so have the software packages available to us for teaching statistics. This paper investigates what software packages are currently being used by progressive statistics instructors at university level, examines some of the deficiencies...... of such software, and indicates features that statistics instructors wish to have incorporated in software in the future. The basis of the paper is a survey of participants at ICOTS-5 (the Fifth International Conference on Teaching Statistics). These survey results, combined with the software based papers...

  5. Methodology comparison for gamma-heating calculations in material-testing reactors

    Energy Technology Data Exchange (ETDEWEB)

    Lemaire, M.; Vaglio-Gaudard, C.; Lyoussi, A. [CEA, DEN, DER, Cadarache F-13108 Saint Paul les Durance (France); Reynard-Carette, C. [Aix Marseille Universite, CNRS, Universite de Toulon, IM2NP UMR 7334, 13397, Marseille (France)

    2015-07-01

    The Jules Horowitz Reactor (JHR) is a Material-Testing Reactor (MTR) under construction in the south of France at CEA Cadarache (French Alternative Energies and Atomic Energy Commission). It will typically host about 20 simultaneous irradiation experiments in the core and in the beryllium reflector. These experiments will help us better understand the complex phenomena occurring during the accelerated ageing of materials and the irradiation of nuclear fuels. Gamma heating, i.e. photon energy deposition, is mainly responsible for temperature rise in non-fuelled zones of nuclear reactors, including JHR internal structures and irradiation devices. As temperature is a key parameter for physical models describing the behavior of material, accurate control of temperature, and hence gamma heating, is required in irradiation devices and samples in order to perform an advanced suitable analysis of future experimental results. From a broader point of view, JHR global attractiveness as a MTR depends on its ability to monitor experimental parameters with high accuracy, including gamma heating. Strict control of temperature levels is also necessary in terms of safety. As JHR structures are warmed up by gamma heating, they must be appropriately cooled down to prevent creep deformation or melting. Cooling-power sizing is based on calculated levels of gamma heating in the JHR. Due to these safety concerns, accurate calculation of gamma heating with well-controlled bias and associated uncertainty as low as possible is all the more important. There are two main kinds of calculation bias: bias coming from nuclear data on the one hand and bias coming from physical approximations assumed by computer codes and by general calculation route on the other hand. The former must be determined by comparison between calculation and experimental data; the latter by calculation comparisons between codes and between methodologies. In this presentation, we focus on this latter kind of bias. 

  6. The simulation calculation of acoustics energy transfer through the material structure

    Directory of Open Access Journals (Sweden)

    Zvolenský Peter

    2016-01-01

    Full Text Available The paper deals with a modification of the rail passenger coach floor design aimed at improving its sound reduction index. The refurbishment used a new acoustic material with a filamentary microstructure. The materials proposed in the research were compared by simulation calculation of the acoustic energy transfer through the porous microstructure of the filamentary material, and the effect of material porosity on the sound reduction index and sound absorption coefficient was observed. The proposed filamentary material can also be used in the railway bed structure. Its high noise absorption, resistance to climatic conditions and low specific mass make it possible to build a system of low anti-noise barriers with properties similar to those of standard high anti-noise walls.
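
    As a rough baseline for sound-reduction estimates, the empirical diffuse-field mass law for a single homogeneous panel is often used. This is a generic acoustics approximation, not the paper's porous-microstructure simulation:

```python
from math import log10

def mass_law_sri(surface_mass, freq):
    """Empirical diffuse-field mass law for the sound reduction index of
    a single homogeneous panel: R ~ 20*log10(m*f) - 47 dB, with surface
    mass m in kg/m^2 and frequency f in Hz.  Valid only well away from
    coincidence and resonance effects."""
    return 20.0 * log10(surface_mass * freq) - 47.0
```

    The law predicts roughly a 6 dB gain per doubling of surface mass or frequency, which makes clear why lightweight porous materials need absorption mechanisms beyond mass alone.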

  7. Forecast of Piezoelectric Properties of Crystalline Materials from First Principles Calculation

    International Nuclear Information System (INIS)

    Zheng Yanqing; Shi Erwei; Chen Jianjun; Zhang Tao; Song Lixin

    2006-01-01

    In this paper, forecasts of piezoelectric tensors are presented. Piezoelectric crystals including quartz, quartz-like crystals, and known and novel crystals of the langasite-type structure are treated with density-functional perturbation theory (DFPT) using the plane-wave pseudopotential method, within the local density approximation (LDA) to the exchange-correlation functional. Compared with experimental results, the ab initio calculation results have quantitative or semi-quantitative accuracy. It is shown that first-principles calculation opens a door to the search for and design of new piezoelectric materials. Further application of first-principles calculation to forecasting the full set of piezoelectric properties is also discussed

  8. Fishnet statistics for probabilistic strength and scaling of nacreous imbricated lamellar materials

    Science.gov (United States)

    Luo, Wen; Bažant, Zdeněk P.

    2017-12-01

    Similar to nacre (or brick masonry), imbricated (or staggered) lamellar structures are widely found in nature and man-made materials, and are of interest for biomimetics. They can achieve high defect insensitivity and fracture toughness, as demonstrated in previous studies. But the probability distribution with a realistic far-left tail is apparently unknown. Here, strictly for statistical purposes, the microstructure of nacre is approximated by a diagonally pulled fishnet with quasibrittle links representing the shear bonds between parallel lamellae (or platelets). The probability distribution of fishnet strength is calculated as a sum of a rapidly convergent series of the failure probabilities after the rupture of one, two, three, etc., links. Each of them represents a combination of joint probabilities and of additive probabilities of disjoint events, modified near the zone of failed links by the stress redistributions caused by previously failed links. Based on previous nano- and multi-scale studies at Northwestern, the strength distribution of each link, characterizing the interlamellar shear bond, is assumed to be a Gauss-Weibull graft, but with a deeper Weibull tail than in Type 1 failure of non-imbricated quasibrittle materials. The autocorrelation length is considered equal to the link length. The size of the zone of failed links at maximum load increases with the coefficient of variation (CoV) of link strength, and also with fishnet size. With an increasing width-to-length aspect ratio, a rectangular fishnet gradually transits from the weakest-link chain to the fiber bundle, as the limit cases. The fishnet strength at failure probability 10⁻⁶ grows with the width-to-length ratio. For a square fishnet boundary, the strength at 10⁻⁶ failure probability is about 11% higher, while at fixed load the failure probability is about 25 times higher than it is for the non-imbricated case.
This is a major safety advantage of the fishnet architecture over particulate
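The two limit cases named in the abstract — the weakest-link chain and the equal-load-sharing fiber bundle — can be illustrated with a short Monte Carlo sketch. This is illustrative only: the link strength is idealised here as pure Weibull rather than the Gauss-Weibull graft used in the paper, and all parameter values are invented.

```python
import math
import random

def weibull_link_strength(m, s0):
    # inverse-CDF sample of one quasibrittle shear link (Weibull
    # idealisation; m = Weibull modulus, s0 = scale parameter)
    u = random.random()
    return s0 * (-math.log(1.0 - u)) ** (1.0 / m)

def chain_strength(n, m, s0):
    # weakest-link limit: a long, narrow net fails at its weakest link
    return min(weibull_link_strength(m, s0) for _ in range(n))

def bundle_strength(n, m, s0):
    # equal-load-sharing fiber-bundle limit: as fibers fail in order of
    # strength, the peak sustainable load per fiber is max_k s_(k) * k / n
    s = sorted((weibull_link_strength(m, s0) for _ in range(n)), reverse=True)
    return max(x * (k + 1) / n for k, x in enumerate(s))
```

Averaging many samples with, e.g., m = 8 and 64 links shows the bundle limit to be markedly stronger on average than the chain limit, mirroring the chain-to-bundle transition with aspect ratio described in the abstract.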

  9. Statistical analysis on hollow and core-shell structured vanadium oxide microspheres as cathode materials for Lithium ion batteries

    Directory of Open Access Journals (Sweden)

    Xing Liang

    2018-06-01

    Full Text Available In this data article, the statistical analyses of vanadium oxide microsphere cathode materials are presented for the research article entitled “Statistical analyses on hollow and core-shell structured vanadium oxides microspheres as cathode materials for Lithium ion batteries” (Liang et al., 2017 [1]). This article shows the statistical analyses of the N2 adsorption-desorption isotherms and the morphology of vanadium oxide microspheres as cathode materials for LIBs. Keywords: Adsorption-desorption isotherm, Pore size distribution, SEM images, TEM images

  10. Calculation of Debye-Scherrer diffraction patterns from highly stressed polycrystalline materials

    Energy Technology Data Exchange (ETDEWEB)

    MacDonald, M. J., E-mail: macdonm@umich.edu [Applied Physics Program, University of Michigan, Ann Arbor, Michigan 48109 (United States); SLAC National Accelerator Laboratory, Menlo Park, California 94025 (United States); Vorberger, J. [Helmholtz Zentrum Dresden-Rossendorf, 01328 Dresden (Germany); Gamboa, E. J.; Glenzer, S. H.; Fletcher, L. B. [SLAC National Accelerator Laboratory, Menlo Park, California 94025 (United States); Drake, R. P. [Climate and Space Sciences and Engineering, Applied Physics, and Physics, University of Michigan, Ann Arbor, Michigan 48109 (United States)

    2016-06-07

    Calculations of Debye-Scherrer diffraction patterns from polycrystalline materials have typically been done in the limit of small deviatoric stresses. Although these methods are well suited for experiments conducted near hydrostatic conditions, more robust models are required to diagnose the large strain anisotropies present in dynamic compression experiments. A method to predict Debye-Scherrer diffraction patterns for arbitrary strains has been presented in the Voigt (iso-strain) limit [Higginbotham, J. Appl. Phys. 115, 174906 (2014)]. Here, we present a method to calculate Debye-Scherrer diffraction patterns from highly stressed polycrystalline samples in the Reuss (iso-stress) limit. This analysis uses elastic constants to calculate lattice strains for all initial crystallite orientations, enabling elastic anisotropy and sample texture effects to be modeled directly. The effects of probing geometry, deviatoric stresses, and sample texture are demonstrated and compared to Voigt limit predictions. An example of shock-compressed polycrystalline diamond is presented to illustrate how this model can be applied and demonstrates the importance of including material strength when interpreting diffraction in dynamic compression experiments.
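As a minimal illustration of why deviatoric stress distorts the diffraction rings, the sketch below resolves a uniaxial lattice strain onto plane normals at different orientations and applies Bragg's law. All numbers are illustrative and not taken from the paper.

```python
import math

def bragg_angle(d, wavelength):
    # Bragg's law: wavelength = 2 d sin(theta)
    return math.asin(wavelength / (2.0 * d))

def strained_spacing(d0, eps_axial, eps_trans, psi):
    # normal strain resolved onto a plane normal at angle psi from the
    # load axis (iso-strain picture):
    # eps_n = eps_axial * cos^2(psi) + eps_trans * sin^2(psi)
    eps_n = eps_axial * math.cos(psi) ** 2 + eps_trans * math.sin(psi) ** 2
    return d0 * (1.0 + eps_n)

def two_theta_deg(psi_deg, d0=2.06, lam=1.55,
                  eps_axial=-0.02, eps_trans=0.005):
    # illustrative numbers: a 2.06 A spacing probed at 1.55 A under
    # 2% axial compression with 0.5% transverse expansion
    d = strained_spacing(d0, eps_axial, eps_trans, math.radians(psi_deg))
    return 2.0 * math.degrees(bragg_angle(d, lam))
```

Planes whose normals are aligned with the compression axis have a reduced spacing and hence diffract to a larger angle than planes at 90°, which is the orientation-dependent ring distortion the full model resolves for all crystallite orientations.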

  11. Cluster model calculations of the solid state materials electron structure

    International Nuclear Information System (INIS)

    Pelikan, P.; Biskupic, S.; Banacky, P.; Zajac, A.; Svrcek, A.; Noga, J.

    1997-01-01

    Materials of the general composition ACuO2 are the parent compounds of so-called infinite-layer superconductors. In the paper presented, the electron structures of the compounds CaCuO2, SrCuO2, Ca0.86Sr0.14CuO2 and Ca0.26Sr0.74CuO2 were calculated. Cluster models consisting of 192 atoms were computed using a quasi-relativistic version of the semiempirical INDO method. The obtained results indicate strong ionicity of the Ca/Sr-O bonds and high covalency of the Cu-O bonds. The width of the energy gap at the Fermi level increases in the order Ca0.26Sr0.74CuO2 < Ca0.86Sr0.14CuO2 < CaCuO2. This order correlates with the fact that materials of the composition CaxSr1-xCuO2 have high superconducting transition temperatures (up to 110 K). Materials partially substituted by Sr2+ also have a higher density of states in the close vicinity of the Fermi level, which is an additional condition for the possibility of a superconducting transition. A strong influence of the vibrational motions on the energy gap at the Fermi level was also calculated. (authors). 1 tab., 2 figs., 10 refs

  12. A statistical characterization method for damping material properties and its application to structural-acoustic system design

    International Nuclear Information System (INIS)

    Jung, Byung C.; Lee, Doo Ho; Youn, Byeng D.; Lee, Soo Bum

    2011-01-01

    The performance of surface damping treatments may vary once the surface is exposed to a wide range of temperatures, because the performance of viscoelastic damping material is highly dependent on operational temperature. In addition, experimental data for the dynamic responses of viscoelastic material are inherently random, which makes it difficult to design a robust damping layout. In this paper a statistical modeling procedure with a statistical calibration method is suggested for the variability characterization of viscoelastic damping material in constrained-layer damping structures. First, the viscoelastic material property is decomposed into two sources: (I) a random complex modulus due to operational temperature variability, and (II) experimental/model errors in the complex modulus. Next, the variability in the damping material property is obtained using the statistical calibration method by solving an unconstrained optimization problem with a likelihood function metric. Two case studies are considered to show the influence of the material variability on the acoustic performance of structural-acoustic systems. It is shown that the variability of the damping material propagates to the acoustic performance of the systems. Finally, robust and reliable damping layout designs for the two case studies are obtained through reliability-based design optimization (RBDO) amidst severe variability in operational temperature and the damping material.
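A minimal sketch of the likelihood-based calibration step, assuming (purely for illustration, not from the paper) that the modulus scatter at a fixed temperature is lognormal — a case in which the maximum-likelihood estimates have closed form:

```python
import math
import random

def fit_lognormal_mle(samples):
    # closed-form maximum-likelihood estimates of the log-mean and
    # log-standard-deviation of a lognormal population
    logs = [math.log(x) for x in samples]
    n = len(logs)
    mu = sum(logs) / n
    sigma = math.sqrt(sum((v - mu) ** 2 for v in logs) / n)
    return mu, sigma

# synthetic 'measurements' of a storage modulus (MPa) at one temperature,
# generated with known scatter so the recovery can be checked
random.seed(0)
data = [random.lognormvariate(math.log(50.0), 0.15) for _ in range(400)]
mu_hat, sigma_hat = fit_lognormal_mle(data)
```

The actual calibration in the paper solves a general unconstrained optimization over a likelihood metric; this sketch only shows the special case where that optimum is analytic.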

  13. A calculation of internal kinetic energy and polarizability of compressed argon from the statistical atom model

    NARCIS (Netherlands)

    Seldam, C.A. ten; Groot, S.R. de

    1952-01-01

    From Jensen's and Gombás' modification of the statistical Thomas-Fermi atom model, a theory for compressed atoms is developed by changing the boundary conditions. Internal kinetic energy and polarizability of argon are calculated as functions of pressure. At 1000 atm. an internal kinetic energy of

  14. Calculations on safe storage and transportation of radioactive materials

    Energy Technology Data Exchange (ETDEWEB)

    Hathout, A M; El-Messiry, A M; Amin, E [National Center for Nuclear Safety and Radiation Control and AEA, Cairo (Egypt)

    1997-12-31

    In this work the safe storage and transportation of fresh fuel as a radioactive material is studied. Egypt has planned the ETRR-2 reactor, which is of relatively high power and would require adequate handling and transportation. Therefore, the present work was initiated to develop a procedure for safe handling and transportation of radioactive materials. The possibility of reducing the magnitude of radiation transmitted to the exterior of the packages is investigated. Neutron absorbers are used to decrease the neutron flux. Criticality calculations are carried out to ensure the achievement of subcriticality so that inherent safety can be verified. The discrete-ordinates transport code ANISN was used. The results show good agreement with other techniques. 2 figs., 2 tabs.

  15. Calculation of the Doppler broadening of the electron-positron annihilation radiation in defect-free bulk materials

    International Nuclear Information System (INIS)

    Ghosh, V. J.; Alatalo, M.; Asoka-Kumar, P.; Nielsen, B.; Lynn, K. G.; Kruseman, A. C.; Mijnarends, P. E.

    2000-01-01

    Results of a calculation of the Doppler broadening of the positron-electron annihilation radiation and positron lifetimes in a large number of elemental defect-free materials are presented. A simple scheme based on the method of superimposed atoms is used for these calculations. Calculated values of the Doppler broadening are compared with experimental data for a number of elemental materials, and qualitative agreement is obtained. These results provide a database which can be used for characterizing materials and identifying impurity-vacancy complexes. (c) 2000 The American Physical Society

  16. Calculation of intensity factors using weight function theory for a transversely isotropic piezoelectric material

    International Nuclear Information System (INIS)

    Son, In Ho; An, Deuk Man

    2012-01-01

    In fracture mechanics, the weight function can be used for calculating stress intensity factors. In this paper, a two-dimensional electroelastic analysis is performed on a transversely isotropic piezoelectric material with an open crack. A plane-strain formulation of the piezoelectric problem is solved within the Lekhnitskii formalism. Weight function theory is extended to piezoelectric materials. The stress intensity factors and the electric displacement intensity factor are calculated by weight function theory

  17. Non-parametric order statistics method applied to uncertainty propagation in fuel rod calculations

    International Nuclear Information System (INIS)

    Arimescu, V.E.; Heins, L.

    2001-01-01

    Advances in modeling fuel rod behavior and accumulations of adequate experimental data have made possible the introduction of quantitative methods to estimate the uncertainty of predictions made with best-estimate fuel rod codes. The uncertainty range of the input variables is characterized by a truncated distribution which is typically a normal, lognormal, or uniform distribution. While the distribution for fabrication parameters is defined to cover the design or fabrication tolerances, the distribution of modeling parameters is inferred from the experimental database consisting of separate effects tests and global tests. The final step of the methodology uses a Monte Carlo type of random sampling of all relevant input variables and performs best-estimate code calculations to propagate these uncertainties in order to evaluate the uncertainty range of outputs of interest for design analysis, such as internal rod pressure and fuel centerline temperature. The statistical method underlying this Monte Carlo sampling is non-parametric order statistics, which is perfectly suited to evaluate quantiles of populations with unknown distribution. The application of this method is straightforward in the case of one single fuel rod, when a 95/95 statement is applicable: 'with a probability of 95% and confidence level of 95% the values of output of interest are below a certain value'. Therefore, the 0.95-quantile is estimated for the distribution of all possible values of one fuel rod with a statistical confidence of 95%. On the other hand, a more elaborate procedure is required if all the fuel rods in the core are being analyzed. In this case, the aim is to evaluate the following global statement: with 95% confidence level, the expected number of fuel rods which are not exceeding a certain value is all the fuel rods in the core except only a few fuel rods. In both cases, the thresholds determined by the analysis should be below the safety acceptable design limit. An indirect
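The sample sizes implied by the non-parametric order-statistics argument follow directly from the binomial distribution; the sketch below reproduces the classic one-sided Wilks result for a single fuel rod (the function names are ours, not from the paper):

```python
from math import comb

def wilks_confidence(n, q, k=1):
    # confidence that the k-th largest of n random code runs bounds the
    # q-quantile of the (unknown) output distribution from above
    return 1.0 - sum(comb(n, j) * (1.0 - q) ** j * q ** (n - j)
                     for j in range(k))

def wilks_sample_size(q, beta, k=1):
    # smallest number of runs giving confidence >= beta
    n = k
    while wilks_confidence(n, q, k) < beta:
        n += 1
    return n
```

For the 95/95 statement based on the largest sampled value, `wilks_sample_size(0.95, 0.95)` gives the familiar 59 runs; basing the bound on the second-largest value (k = 2) raises the requirement to 93 runs.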

  18. Density functional theory and pseudopotentials: A panacea for calculating properties of materials

    International Nuclear Information System (INIS)

    Cohen, M.L.; Lawrence Berkeley Lab., CA

    1995-09-01

    Although the microscopic view of solids is still evolving, for a large class of materials one can construct a useful first-principles or "Standard Model" of solids which is sufficiently robust to explain and predict many physical properties. Both electronic and structural properties can be studied and the results of the first-principles calculations can be used to predict new materials, formulate empirical theories and simple formulae to compute material parameters, and explain trends. A discussion of the microscopic approach, applications, and empirical theories is given here, and some recent results on nanotubes, hard materials, and fullerenes are presented

  19. Calculation of the major material parameters of heat carriers for cryogenic heat pipes

    International Nuclear Information System (INIS)

    Molt, W.

    1976-07-01

    In order to make predictions about the efficiency of cryogenic heat pipes, the material parameters of the heat carrier, such as surface tension, viscosity, heat of evaporation and liquid density, should be known. The author therefore investigates suitable interpolation methods and equations which enable the calculation of the desired material parameter at a certain temperature from other known quantities, or which require that 1 to 3 material parameters at different temperatures be known. The calculations are limited to temperatures between the critical temperature and the triple point, since this is the only temperature region in which the heat carrier is in its liquid phase. The applicability and accuracy of the equations are tested using known experimental data on N2, O2, CH4 and partly on CF4. (orig./TK) [de
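As an example of the kind of interpolation equation discussed, the sketch below uses a Guggenheim-Katayama-type correlation for surface tension, which vanishes at the critical temperature and can be anchored at a single known data point. The 11/9 exponent is the common empirical choice for simple liquids, and the nitrogen numbers in the usage note are approximate handbook values, not taken from this report.

```python
def surface_tension(T, Tc, sigma_ref, T_ref, n=11.0 / 9.0):
    # Guggenheim-Katayama-type correlation sigma = sigma0 * (1 - T/Tc)^n,
    # anchored at one known point (T_ref, sigma_ref); valid between the
    # triple point and the critical temperature Tc
    sigma0 = sigma_ref / (1.0 - T_ref / Tc) ** n
    return sigma0 * (1.0 - T / Tc) ** n
```

For nitrogen (Tc ≈ 126.2 K), anchoring at the normal boiling point 77.4 K with sigma ≈ 8.85 mN/m lets the correlation estimate the surface tension at any intermediate temperature, decreasing monotonically to zero at Tc.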

  20. Statistical models for thermal ageing of steel materials in nuclear power plants

    International Nuclear Information System (INIS)

    Persoz, M.

    1996-01-01

    Some categories of steel materials in nuclear power plants may be subjected to thermal ageing, whose extent depends on the steel chemical composition and the ageing parameters, i.e. temperature and duration. This ageing affects the 'impact strength' of the materials, which is a mechanical property. In order to assess the residual lifetime of these components, a probabilistic study has been launched, which takes into account the scatter over the input parameters of the mechanical model. Predictive formulae for estimating the impact strength of aged materials are important input data of the model. A database has been created with impact strength results obtained from a laboratory ageing program, and statistical treatments have been undertaken. Two kinds of model have been developed, with nonlinear regression methods (PROC NLIN, available in SAS/STAT). The first one, using a hyperbolic tangent function, is partly based on physical considerations, and the second one, of an exponential type, is purely statistical. The difficulties consist in selecting the significant parameters and attributing initial values to the coefficients, which is a requirement of the NLIN procedure. This global statistical analysis has led to general models that are functions of the chemical variables and the ageing parameters. These models are as precise as (if not more precise than) local models that had been developed earlier for some specific values of ageing temperature and ageing duration. This paper describes the data and the methodology used to build the models and analyses the results given by the SAS system. (author)
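A toy version of the hyperbolic-tangent ageing model can make the fitting idea concrete. The real models also depend on chemical composition; here only a single time-shift coefficient is recovered by grid search on synthetic data, standing in for the full PROC NLIN regression, and all coefficient values are invented.

```python
import math

def impact_strength(log_t, a, b, t0, c):
    # sigmoidal drop of impact strength with the logarithm of ageing
    # time; a, b, t0, c play the role of the regression coefficients
    return a - b * math.tanh((log_t - t0) / c)

def fit_time_shift(data, a, b, c):
    # crude 1-D grid search for the time-shift coefficient t0 only,
    # holding the other coefficients fixed
    best_t0, best_sse = None, float("inf")
    for i in range(2001):
        t0 = -5.0 + 0.005 * i
        sse = sum((y - impact_strength(x, a, b, t0, c)) ** 2
                  for x, y in data)
        if sse < best_sse:
            best_t0, best_sse = t0, sse
    return best_t0

# synthetic ageing data generated with a known shift t0 = 1.2
data = [(x * 0.5 - 3.0,
         impact_strength(x * 0.5 - 3.0, 100.0, 40.0, 1.2, 1.0))
        for x in range(17)]
```

On exact synthetic data the grid search recovers the known shift to within the grid step, illustrating why good initial values matter far more once all coefficients vary jointly.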

  1. Preserved statistical learning of tonal and linguistic material in congenital amusia.

    Science.gov (United States)

    Omigie, Diana; Stewart, Lauren

    2011-01-01

    Congenital amusia is a lifelong disorder whereby individuals have pervasive difficulties in perceiving and producing music. In contrast, typical individuals display a sophisticated understanding of musical structure, even in the absence of musical training. Previous research has shown that they acquire this knowledge implicitly, through exposure to music's statistical regularities. The present study tested the hypothesis that congenital amusia may result from a failure to internalize statistical regularities - specifically, lower-order transitional probabilities. To explore the specificity of any potential deficits to the musical domain, learning was examined with both tonal and linguistic material. Participants were exposed to structured tonal and linguistic sequences and, in a subsequent test phase, were required to identify items which had been heard in the exposure phase, as distinct from foils comprising elements that had been present during exposure, but presented in a different temporal order. Amusic and control individuals showed comparable learning, for both tonal and linguistic material, even when the tonal stream included pitch intervals around one semitone. However analysis of binary confidence ratings revealed that amusic individuals have less confidence in their abilities and that their performance in learning tasks may not be contingent on explicit knowledge formation or level of awareness to the degree shown in typical individuals. The current findings suggest that the difficulties amusic individuals have with real-world music cannot be accounted for by an inability to internalize lower-order statistical regularities but may arise from other factors.

  2. Preserved Statistical Learning of Tonal and Linguistic Material in Congenital Amusia

    Directory of Open Access Journals (Sweden)

    Diana eOmigie

    2011-06-01

    Full Text Available Congenital amusia is a lifelong disorder whereby individuals have pervasive difficulties in perceiving and producing music. In contrast, typical individuals display a sophisticated understanding of musical structure, even in the absence of musical training. Previous research has shown that they acquire this knowledge implicitly, through exposure to music’s statistical regularities. The present study tested the hypothesis that congenital amusia may result from a failure to internalize statistical regularities - specifically, lower-order transitional probabilities. To explore the specificity of any potential deficits to the musical domain, learning was examined with both tonal and linguistic material. Participants were exposed to structured tonal and linguistic sequences and, in a subsequent test phase, were required to identify items which had been heard in the exposure phase, as distinct from foils comprising elements that had been present during exposure, but presented in a different temporal order. Amusic and control individuals showed comparable learning, for both tonal and linguistic material, even when the tonal stream included pitch intervals around one semitone. However analysis of binary confidence ratings revealed that amusic individuals have less confidence in their abilities and that their performance in learning tasks may not be contingent on explicit knowledge formation or level of awareness to the degree shown in typical individuals. The current findings suggest that the difficulties amusic individuals have with real-world music cannot be accounted for by an inability to internalize lower-order statistical regularities but may arise from other factors.

  3. Radiation damage calculations for the APT materials test program

    International Nuclear Information System (INIS)

    Corzine, R.K.; Wechsler, M.S.; Dudziak, D.J.; Ferguson, P.D.; James, M.R.

    1999-01-01

    A materials irradiation was performed at the Los Alamos Neutron Science Center (LANSCE) in the fall of 1996 and spring of 1997 in support of the Accelerator Production of Tritium (APT) program. Testing of the irradiated materials is underway. In the proposed APT design, materials in the target and blanket are to be exposed to protons and neutrons over a wide range of energies. The irradiation and testing program was undertaken to enlarge the very limited direct knowledge presently available of the effects of medium-energy protons (∼1 GeV) on the properties of engineering materials. APT candidate materials were placed in or near the LANSCE accelerator 800-MeV, 1-mA proton beam and received roughly the same proton current density in the center of the beam as would be the case for the APT facility. As a result, the proton fluences achieved in the irradiation were expected to approach the APT prototypic full-power-year values. To predict accurately the performance of materials in APT, radiation damage parameters for the materials experiment must be determined. By modeling the experiment, calculations of atomic displacement, helium and hydrogen cross sections and of proton and neutron fluences were done for representative samples in the 17A, 18A, and 18C areas. The LAHET code system (LCS) was used to model the irradiation program; within LCS, LAHET 2.82 transports protons >1 MeV and neutrons >20 MeV. A modified version of MCNP for use in LCS, HMCNP 4A, was employed to tally neutrons of energies <20 MeV.

  4. Statistical data for the tensile properties of natural fibre composites

    Directory of Open Access Journals (Sweden)

    J.P. Torres

    2017-06-01

    Full Text Available This article features a large statistical database on the tensile properties of natural fibre reinforced composite laminates. The data presented here correspond to a comprehensive experimental testing program of several composite systems including: different material constituents (epoxy and vinyl ester resins; flax, jute and carbon fibres), different fibre configurations (short-fibre mats, unidirectional, and plain, twill and satin woven fabrics) and different fibre orientations (0°, 90°, and [0,90] angle plies). For each material, ~50 specimens were tested under uniaxial tensile loading. Here, we provide the complete set of stress–strain curves together with the statistical distributions of their calculated elastic modulus, strength and failure strain. The data is also provided as support material for the research article: “The mechanical properties of natural fibre composite laminates: A statistical study” [1].
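Strength data of this kind are often summarised by a two-parameter Weibull fit; the sketch below shows a standard probability-plot estimate on synthetic data. This is a generic illustration, not the statistical treatment used in the cited study.

```python
import math
import random

def weibull_fit(strengths):
    # probability-plot estimate of the Weibull modulus m and scale s0:
    # ordinary least squares on the linearised CDF
    #   ln(-ln(1 - F)) = m * ln(s) - m * ln(s0)
    xs = sorted(strengths)
    n = len(xs)
    pts = [(math.log(s),
            math.log(-math.log(1.0 - (i - 0.3) / (n + 0.4))))
           for i, s in enumerate(xs, start=1)]   # median-rank positions
    mx = sum(x for x, _ in pts) / n
    my = sum(y for _, y in pts) / n
    m = (sum((x - mx) * (y - my) for x, y in pts)
         / sum((x - mx) ** 2 for x, _ in pts))
    s0 = math.exp(mx - my / m)
    return m, s0

# synthetic tensile-strength sample (MPa) from a known Weibull population
random.seed(3)
sample = [random.weibullvariate(300.0, 10.0) for _ in range(500)]
m_hat, s0_hat = weibull_fit(sample)
```

With ~50 specimens per material, as in the database, the same estimator works but with noticeably wider scatter in the recovered modulus than in this 500-sample check.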

  5. The nuclear heating calculation scheme for material testing in the future Jules Horowitz Reactor

    International Nuclear Information System (INIS)

    Huot, N.; Aggery, A.; Blanchet, D.; Courcelle, A.; Czernecki, S.; Di-Salvo, J.; Doederlein, C.; Serviere, H.; Willermoz, G.

    2004-01-01

    An innovative nuclear heating calculation scheme for materials testing carried out in the future Jules Horowitz Reactor (JHR) is described. A heterogeneous gamma source calculation is first performed at the assembly level using the deterministic code APOLLO2. This is followed by a Monte Carlo gamma transport calculation in the whole core using the TRIPOLI4 code. The calculated gamma sources at the assembly level are applied in the whole-core simulation using a weighting based on the power distribution obtained from the neutronic core calculation. (authors)

  6. A BRDF statistical model applying to space target materials modeling

    Science.gov (United States)

    Liu, Chenghao; Li, Zhi; Xu, Can; Tian, Qichen

    2017-10-01

    In order to solve the problem of the poor performance of the five-parameter semi-empirical model in fitting high-density measured BRDF data, a refined statistical BRDF model suitable for modeling multiple classes of space-target materials is proposed. The refined model improves on the Torrance-Sparrow model while retaining the modeling advantages of the five-parameter model. Compared with the existing empirical model, the model contains six simple parameters, which can approximate the roughness distribution of the material surface, the intensity of the Fresnel reflectance phenomenon, and the attenuation of the reflected light's brightness as the azimuth angle changes. The model achieves fast parameter inversion with no extra loss of accuracy. A genetic algorithm was used to invert the parameters of 11 different samples of materials commonly used on space targets, and the fitting errors for all materials were below 6%, much lower than those of the five-parameter model. The performance of the refined model is verified by comparing the fitting results of three samples at different incident zenith angles at 0° azimuth angle. Finally, three-dimensional modeling visualizations of these samples in the upper hemisphere are given, in which the strength of the optical scattering of the different materials can be clearly seen. This demonstrates the good descriptive ability of the refined model for material characterization.
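The inversion step can be sketched with a toy reflectance model and a stochastic search. Both are stand-ins: the paper's six-parameter statistical model and its genetic algorithm are not reproduced here, and all names and numbers are invented for illustration.

```python
import math
import random

def toy_brdf(theta_r, kd, ks, alpha):
    # toy diffuse + specular-lobe reflectance model (a stand-in for the
    # six-parameter statistical model of the paper)
    return kd + ks * math.cos(theta_r) ** alpha

def sse(params, data):
    # sum-of-squares fitting error, the quantity the inversion minimises
    kd, ks, alpha = params
    return sum((y - toy_brdf(t, kd, ks, alpha)) ** 2 for t, y in data)

def invert(data, seed=0, iters=400):
    # multi-scale random-mutation search: a bare-bones substitute for
    # the genetic-algorithm inversion used in the paper
    rng = random.Random(seed)
    best = [0.5, 0.5, 10.0]
    best_err = sse(best, data)
    for scale in (1.0, 0.3, 0.1, 0.03, 0.01):
        for _ in range(iters):
            cand = [max(1e-3, p + rng.gauss(0.0, scale * step))
                    for p, step in zip(best, (0.2, 0.2, 8.0))]
            err = sse(cand, data)
            if err < best_err:
                best, best_err = cand, err
    return best, best_err

# synthetic 'measurements' from known parameters kd=0.1, ks=0.8, alpha=30
angles = [math.radians(5 * i) for i in range(17)]   # 0..80 degrees
data = [(t, toy_brdf(t, 0.1, 0.8, 30.0)) for t in angles]
params, err = invert(data)
```

The coarse-to-fine mutation scale plays roughly the role of the genetic algorithm's exploration-then-exploitation balance on this smooth three-parameter landscape.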

  7. A finite element computer program for the calculation of the resonant frequencies of anisotropic materials

    International Nuclear Information System (INIS)

    Fleury, W.H.; Rosinger, H.E.; Ritchie, I.G.

    1975-09-01

    A set of computer programs for the calculation of the flexural and torsional resonant frequencies of rectangular-section bars of materials of orthotropic or higher symmetry is described. The calculations are used in the experimental determination and verification of the elastic constants of anisotropic materials. The simple finite element technique employed separates the inertial and elastic properties of the beam element into station and field transfer matrices respectively. It includes the Timoshenko beam corrections for flexure and Lekhnitskii's theory for torsion-flexure coupling. The programs also calculate the vibration shapes and surface nodal contours, or Chladni figures, of the vibration modes. (author)
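The station/field transfer-matrix idea can be shown in its simplest form on the torsional vibration of a clamped-free uniform shaft — a 2x2 analogue of the program's 4x4 flexural matrices, with unit material properties assumed for illustration:

```python
import math

def free_end_torque(omega, n=100, gj=1.0, rho_ip=1.0, length=1.0):
    # propagate the state (twist angle, torque) from the clamped end
    # through n field (elastic) and station (inertia) elements; the
    # natural frequencies are the roots of the returned free-end torque
    k = gj * n / length            # segment torsional stiffness
    j = rho_ip * length / n        # segment polar mass inertia
    theta, torque = 0.0, 1.0       # clamped end: zero twist, unit torque
    for _ in range(n):
        theta += torque / k               # field matrix: elastic twist
        torque -= omega ** 2 * j * theta  # station matrix: inertia load
    return torque

def fundamental(lo=0.1, hi=3.0, tol=1e-9):
    # bisect the first sign change of the free-end torque
    f_lo = free_end_torque(lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if free_end_torque(mid) * f_lo > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

With unit properties the exact continuum fundamental is pi/2 ≈ 1.5708; the 100-element lumped model reproduces it to about 0.5%, and refining the discretisation closes the gap.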

  8. Statistical Model Calculations for (n,γ) Reactions

    Directory of Open Access Journals (Sweden)

    Beard Mary

    2015-01-01

    Full Text Available Hauser-Feshbach (HF) cross sections are of enormous importance for a wide range of applications, from waste transmutation and nuclear technologies, to medical applications, and nuclear astrophysics. It is a well-observed result that different nuclear input models sensitively affect HF cross section calculations. Less well known, however, are the effects on calculations originating from model-specific implementation details (such as the level density parameter, matching energy, back-shift and giant dipole parameters), as well as effects from non-model aspects, such as experimental data truncation and transmission function energy binning. To investigate the effects of these various aspects, Maxwellian-averaged neutron capture cross sections have been calculated for approximately 340 nuclei. The relative effects of these model details will be discussed.
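The Maxwellian averaging itself is straightforward to sketch numerically. The code below implements the standard definition and can be checked against two textbook limits: a constant cross section gives (2/sqrt(pi))*sigma, and a 1/v cross section returns exactly sigma(kT). Units and grid choices are illustrative.

```python
import math

def macs(sigma, kT, emax_factor=40.0, n=20000):
    # Maxwellian-averaged cross section,
    #   <sigma> = (2/sqrt(pi)) * (kT)^-2 * Int sigma(E) E exp(-E/kT) dE,
    # evaluated with a plain trapezoidal rule on [0, emax_factor*kT]
    h = emax_factor * kT / n
    def f(e):
        return sigma(e) * e * math.exp(-e / kT)
    # tiny offset at E=0 avoids division by zero for 1/v-like sigma(E)
    total = 0.5 * (f(1e-12 * kT) + f(emax_factor * kT))
    for i in range(1, n):
        total += f(i * h)
    return (2.0 / math.sqrt(math.pi)) * total * h / kT ** 2
```

Discrepancies between codes in the energy binning of this integrand are exactly the kind of non-model implementation effect the abstract highlights.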

  9. Evaluation of electronic states of implanted materials by molecular orbital calculation

    International Nuclear Information System (INIS)

    Saito, Jun-ichi; Kano, Shigeki

    1997-07-01

    In order to understand the effect of implanted atoms in ceramics and metals on sodium corrosion, the electronic structures of un-implanted and implanted materials were calculated using the DV-Xα cluster method, which is one of the molecular orbital calculation methods. The calculated materials were β-Si3N4, α-SiC and β-SiC as ceramics, and f.c.c. Fe, b.c.c. Fe and b.c.c. Nb as metals. An Fe, Mo or Hf atom for the ceramics, and an N atom for the metals, were selected as implanted atoms. Consequently, the corrosion resistance of β-Si3N4 is expected to improve, because the ionic bonding is reduced by the implantation. When the implanted atom occupies an interstitial site in α-SiC and β-SiC, the ionic bonding is reduced. Hence, there is a possibility of improving the corrosion resistance of α-SiC and β-SiC. It is clear that Hf is the most effective element among the implanted atoms in this study. As the covalent bonding between the N atom and the surrounding Fe atoms increases greatly in f.c.c. Fe upon N implantation, the corrosion resistance of f.c.c. Fe is expected to improve in liquid sodium. (J.P.N.)

  10. Continuous energy Monte Carlo calculations for randomly distributed spherical fuels based on statistical geometry model

    Energy Technology Data Exchange (ETDEWEB)

    Murata, Isao [Osaka Univ., Suita (Japan); Mori, Takamasa; Nakagawa, Masayuki; Itakura, Hirofumi

    1996-03-01

    A method to calculate the neutronics parameters of a core composed of randomly distributed spherical fuels has been developed, based on a statistical geometry model with a continuous-energy Monte Carlo method. The method was implemented in the general-purpose Monte Carlo code MCNP, and a new code, MCNP-CFP, has been developed. This paper describes the model and method, how to use them, and the validation results. In the Monte Carlo calculation, the location of a spherical fuel is sampled probabilistically along the particle flight path from the spatial probability distribution of spherical fuels, called the nearest neighbor distribution (NND). This sampling method was validated through the following two comparisons: (1) calculations of the inventory of coated fuel particles (CFPs) in a fuel compact by both a track-length estimator and a direct evaluation method, and (2) criticality calculations for ordered packed geometries. The method was also confirmed by applying it to an analysis of the critical assembly experiment at VHTRC. The method established in the present study is unique in providing a probabilistic model of a geometry with a great number of randomly distributed spherical fuels. With the speed-up attainable by vector or parallel computation in the future, it is expected to be widely used in calculations of nuclear reactor cores, especially HTGR cores. (author).
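A simple surrogate for the stochastic geometry can be sketched with random sequential addition. Note this is not the flight-path NND sampling implemented in MCNP-CFP itself; it merely generates a non-overlapping random packing from which empirical nearest-neighbor distances can be inspected.

```python
import math
import random

def random_sequential_packing(n, radius, box=1.0, seed=0,
                              max_tries=200000):
    # place n equal, non-overlapping spheres uniformly in a cube by
    # rejection sampling (random sequential addition)
    rng = random.Random(seed)
    centers = []
    tries = 0
    while len(centers) < n and tries < max_tries:
        tries += 1
        c = (rng.uniform(radius, box - radius),
             rng.uniform(radius, box - radius),
             rng.uniform(radius, box - radius))
        if all(math.dist(c, o) >= 2.0 * radius for o in centers):
            centers.append(c)
    return centers

def nearest_neighbour_distances(centers):
    # empirical NND sample: distance from each sphere to its nearest
    # neighbour, the distribution the flight-path sampling draws from
    return [min(math.dist(c, o) for o in centers if o is not c)
            for c in centers]
```

At the dilute packing fractions of this example, rejection sampling succeeds quickly; dense CFP compacts require more sophisticated packing algorithms.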

  11. Ab Initio Calculation of XAFS Debye-Waller Factors for Crystalline Materials

    Science.gov (United States)

    Dimakis, Nicholas

    2007-02-01

    A direct and accurate technique for calculating the thermal X-ray absorption fine structure (XAFS) Debye-Waller factors (DWF) for materials of crystalline structure is presented. Using Density Functional Theory (DFT) under the hybrid X3LYP functional, a library of spin-optimized MnO clusters is built and their phonon spectrum properties are calculated; these properties, in the form of normal mode eigenfrequencies and eigenvectors, are in turn used to calculate the single and multiple scattering XAFS DWF. DWF obtained via this technique are temperature-dependent expressions and can be used to substantially reduce the number of fitting parameters when experimental spectra are fitted with a hypothetical structure, without any ad hoc assumptions. Due to the high computational demand, a hybrid approach is presented that mixes the DFT-calculated DWF for inner shells with the correlated Debye model for outer shells. DFT-obtained DWFs are compared with corresponding values from experimental XAFS spectra on manganosite. The effects of cluster size and the spin parameter on the DFT-calculated DWFs are discussed.

  12. Recent developments in neutron dosimetry and radiation damage calculations for fusion-materials studies

    International Nuclear Information System (INIS)

    Greenwood, L.R.

    1983-01-01

    This paper is intended as an overview of activities designed to characterize neutron irradiation facilities in terms of neutron flux and energy spectrum and to use these data to calculate atomic displacements, gas production, and transmutation during fusion materials irradiations. A new computerized data file, called DOSFILE, has recently been developed to record dosimetry and damage data from a wide variety of materials test facilities. At present, data are included from 20 different irradiations at fast and mixed-spectrum reactors, T(d,n) 14 MeV neutron sources, Be(d,n) broad-spectrum sources, and spallation neutron sources. Each file entry includes activation data, adjusted neutron flux and spectral data, and calculated atomic displacements and gas production. Such data will be used by materials experimenters to determine the exposure of their samples during specific irradiations. This data base will play an important role in correlating property changes between different facilities and, eventually, in predicting materials performance in fusion reactors. All known uncertainties and covariances are listed for each data record, and explicit references are given to nuclear decay data and cross sections.

  13. High-throughput density functional calculations to optimize properties and interfacial chemistry of piezoelectric materials

    Science.gov (United States)

    Barr, Jordan A.; Lin, Fang-Yin; Ashton, Michael; Hennig, Richard G.; Sinnott, Susan B.

    2018-02-01

    High-throughput density functional theory calculations are conducted to search through 1572 ABO3 compounds to find a potential replacement material for lead zirconate titanate (PZT) that exhibits the same excellent piezoelectric properties as PZT and lacks both its use of the toxic element lead (Pb) and the formation of secondary alloy phases with platinum (Pt) electrodes. The first screening criterion employed a search through the Materials Project database to find A-B combinations that do not form ternary compounds with Pt. The second screening criterion aimed to eliminate potential candidates through first-principles calculations of their electronic structure, in which compounds with a band gap of 0.25 eV or higher were retained. Third, thermodynamic stability calculations were used to compare the candidates in a Pt environment to compounds already calculated to be stable within the Materials Project. Formation energies below or equal to 100 meV/atom were considered to be thermodynamically stable. The fourth screening criterion employed lattice misfit to identify those candidate perovskites that have low misfit with the Pt electrode and high misfit of potential secondary phases that can be formed when Pt alloys with the different A and B components. To aid in the final analysis, dynamic stability calculations were used to determine those perovskites that have dynamic instabilities that favor the ferroelectric distortion. Analysis of the data finds three perovskites warranting further investigation: CsNbO3, RbNbO3, and CsTaO3.
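The four screening criteria can be sketched as a simple filter chain. The candidate records and field names below are hypothetical stand-ins, not the Materials Project schema, and the misfit threshold is an assumed illustrative value; only the band-gap (0.25 eV or higher) and stability (at most 100 meV/atom) cut-offs come from the abstract.

```python
# Hypothetical candidate records; field names are illustrative only.
candidates = [
    {"formula": "CsNbO3", "band_gap_eV": 2.1, "e_above_hull_meV": 40,  "misfit_pct": 1.5},
    {"formula": "PbTiO3", "band_gap_eV": 1.8, "e_above_hull_meV": 30,  "misfit_pct": 2.0},
    {"formula": "KVO3",   "band_gap_eV": 0.1, "e_above_hull_meV": 20,  "misfit_pct": 1.0},
    {"formula": "NaWO3",  "band_gap_eV": 1.2, "e_above_hull_meV": 250, "misfit_pct": 1.0},
]

def passes_screen(c, gap_min=0.25, hull_max=100.0, misfit_max=4.0, banned=("Pb",)):
    """Apply the screening cascade to one candidate record."""
    if any(el in c["formula"] for el in banned):   # toxicity screen (no lead)
        return False
    if c["band_gap_eV"] < gap_min:                 # electronic-structure screen
        return False
    if c["e_above_hull_meV"] > hull_max:           # thermodynamic stability screen
        return False
    return c["misfit_pct"] <= misfit_max           # lattice misfit with Pt electrode

survivors = [c["formula"] for c in candidates if passes_screen(c)]
print(survivors)
```

Ordering the filters from cheapest to most expensive, as the paper does (database lookup before first-principles calculations), keeps the costly steps to the smallest possible candidate set.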

  14. Nonparametric statistical inference

    CERN Document Server

    Gibbons, Jean Dickinson

    2014-01-01

    Thoroughly revised and reorganized, the fourth edition presents in-depth coverage of the theory and methods of the most widely used nonparametric procedures in statistical analysis and offers example applications appropriate for all areas of the social, behavioral, and life sciences. The book presents new material on quantiles, the calculation of exact and simulated power, multiple comparisons, additional goodness-of-fit tests, methods of analysis of count data, and modern computer applications using MINITAB, SAS, and STATXACT. It includes tabular guides for simplified application of tests and for finding P values and confidence interval estimates.

  15. Implementing advanced data analysis techniques in near-real-time materials accounting

    International Nuclear Information System (INIS)

    Markin, J.T.; Baker, A.L.; Shipley, J.P.

    1980-01-01

    Materials accounting for special nuclear material in fuel cycle facilities is implemented more efficiently by applying decision analysis methods, based on estimation and detection theory, to analyze process data for missing material. These methods are incorporated in the computer program DECANAL, which calculates sufficient statistics containing all the accounting information, sets decision thresholds, and compares these statistics to the thresholds in testing the hypothesis H0 of no missing material against the alternative H1 that material is missing. DECANAL output provides alarm charts indicating the likelihood of missing material and plots of statistics that estimate the materials loss. This program is a useful tool for aggregating and testing materials accounting data for timely detection of missing material.
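A generic sequential test of H0 (no missing material) against H1 illustrates the kind of decision-threshold logic described above. This is a textbook one-sided Page CUSUM on standardized balance data, not the DECANAL statistic itself; the reference value k and decision threshold h are assumed illustrative values.

```python
def cusum_alarm(muf_sequence, sigma, k=0.5, h=5.0):
    """One-sided Page CUSUM on standardized MUF values.

    A generic sequential test for a protracted loss: the statistic
    accumulates standardized MUF in excess of the reference value k
    and raises an alarm when it crosses the threshold h (both in
    units of sigma).
    """
    s, history = 0.0, []
    for step, muf in enumerate(muf_sequence):
        s = max(0.0, s + muf / sigma - k)
        history.append((step, s, s > h))
    return history

# Balance periods with a small protracted loss starting at period 5.
data = [0.1, -0.2, 0.0, 0.3, -0.1] + [1.4] * 8
result = cusum_alarm(data, sigma=1.0)
alarm_steps = [step for step, s, alarm in result if alarm]
print(alarm_steps)
```

The alarm fires only once the accumulated evidence crosses the threshold, which is the trade-off between false-alarm rate and detection delay that the decision thresholds in such programs are tuned for.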

  16. Self-assembled peptide nanotubes as electronic materials: An evaluation from first-principles calculations

    International Nuclear Information System (INIS)

    Akdim, Brahim; Pachter, Ruth; Naik, Rajesh R.

    2015-01-01

    In this letter, we report on the evaluation of diphenylalanine (FF), dityrosine (YY), and phenylalanine-tryptophan (FW) self-assembled peptide nanotube structures for electronics and photonics applications. Realistic bulk peptide nanotube material models were used in density functional theory calculations to mimic the well-ordered tubular nanostructures. Importantly, validated functionals were applied, specifically by using a London dispersion correction to model intertube interactions and a range-separated hybrid functional for accurate bandgap calculations. Bandgaps were found to be consistent with available experimental data for FF, and also corroborate the higher conductance reported for FW in comparison with FF peptide nanotubes. Interestingly, the predicted bandgap for the YY tubular nanostructure was found to be slightly higher than that of FW, suggesting higher conductance as well. In addition, the band structure calculations along the high-symmetry line of the nanotube axis revealed a direct bandgap for FF. The results enhance our understanding of the electronic properties of these material systems and will pave the way to their application in devices.

  17. Teaching Statistics Online Using "Excel"

    Science.gov (United States)

    Jerome, Lawrence

    2011-01-01

    As anyone who has taught or taken a statistics course knows, statistical calculations can be tedious and error-prone, with the details of a calculation sometimes distracting students from understanding the larger concepts. Traditional statistics courses typically use scientific calculators, which can relieve some of the tedium and errors but…

  18. A Statistics-Based Material Property Analysis to Support TPS Characterization

    Science.gov (United States)

    Copeland, Sean R.; Cozmuta, Ioana; Alonso, Juan J.

    2012-01-01

    Accurate characterization of entry capsule heat shield material properties is a critical component in modeling and simulating Thermal Protection System (TPS) response in a prescribed aerothermal environment. The thermal decomposition of the TPS material during the pyrolysis and charring processes is poorly characterized and typically results in large uncertainties in the material properties used as inputs for ablation models. These material property uncertainties contribute to large design margins on flight systems and cloud reconstruction efforts for data collected during flight and ground testing, making revision of existing models for entry systems more challenging. The analysis presented in this work quantifies how material property uncertainties propagate through an ablation model and guides an experimental test regimen aimed at reducing these uncertainties and characterizing the dependencies between properties in the virgin and charred states for a Phenolic Impregnated Carbon Ablator (PICA) based TPS. A sensitivity analysis identifies how the high-fidelity model behaves in the expected flight environment, while a Monte Carlo based uncertainty propagation strategy is used to quantify the expected spread in the in-depth temperature response of the TPS. An examination of how perturbations to the input probability density functions affect the output temperature statistics is accomplished using a Kriging response surface of the high-fidelity model. Simulations are based on the capsule configuration and aerothermal environments expected during the Mars Science Laboratory (MSL) entry sequence. We identify and rank the primary sources of uncertainty from material properties in a flight-relevant environment, show how those uncertainty contributors depend on spatial orientation and in-depth location, and quantify how sensitive the expected results are.
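A minimal Monte Carlo uncertainty-propagation loop of the kind described might look as follows. The one-line surrogate model and all parameter values are invented for illustration; the actual analysis propagates through a high-fidelity ablation model and a Kriging response surface.

```python
import random
import statistics

def indepth_temperature(conductivity, heat_capacity, heat_flux):
    """Toy surrogate for an in-depth TPS temperature response (K).
    The functional form is illustrative only and stands in for the
    high-fidelity ablation model."""
    return 300.0 + heat_flux / (conductivity * heat_capacity)

random.seed(42)
samples = []
for _ in range(5000):
    k = random.gauss(0.5, 0.05)        # W/(m K), assumed 10% uncertainty
    cp = random.gauss(1500.0, 150.0)   # J/(kg K), assumed 10% uncertainty
    samples.append(indepth_temperature(k, cp, heat_flux=3.0e5))

mean_T = statistics.mean(samples)
spread = statistics.stdev(samples)
print(mean_T, spread)
```

The spread of the output distribution is what quantifies the expected variability in the in-depth temperature response, and re-running the loop with perturbed input distributions mimics the sensitivity study on the input probability density functions.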

  19. Surface regulated arsenenes as Dirac materials: From density functional calculations

    International Nuclear Information System (INIS)

    Yuan, Junhui; Xie, Qingxing; Yu, Niannian; Wang, Jiafu

    2017-01-01

    Highlights: • The presence of Dirac cones in chemically decorated buckled arsenene AsX (X = CN, NC, NCO, NCS, and NCSe) has been revealed. • First-principles calculations show that all these chemically decorated arsenenes are kinetically stable against thermal fluctuations at room temperature. - Abstract: Using first-principles calculations based on density functional theory (DFT), we have systematically investigated the structural stability and electronic properties of the chemically decorated arsenenes AsX (X = CN, NC, NCO, NCS and NCSe). Phonon dispersion and formation energy analysis reveal that all five chemically decorated buckled arsenenes are energetically favorable and could be synthesized. Our study shows that wide-bandgap arsenene turns into a Dirac material when functionalized by -X (X = CN, NC, NCO, NCS and NCSe) groups, offering new promise for next-generation high-performance electronic devices.

  20. Methodology of external exposure calculation for reuse of conditional released materials from decommissioning - 59138

    International Nuclear Information System (INIS)

    Ondra, Frantisek; Vasko, Marek; Necas, Vladimir

    2012-01-01

    The article presents a methodology of external exposure calculation for the reuse of conditionally released materials from decommissioning, using the VISIPLAN 3D ALARA planning tool. The production of rails is used as an example application of the proposed methodology within the CONRELMAT project. The methodology determines the radiological, material, organizational and other conditions under which conditionally released materials can be reused so that worker and public exposure does not breach the exposure limits during the scenario's life cycle (preparation, construction and operation). It proposes the following conditions with respect to worker and public exposure: - limiting radionuclide concentrations in conditionally released materials for specific scenarios and nuclide vectors, - specific deployment of conditionally released materials and, where necessary, of shielding materials, workers and the public during the scenario's life cycle, - organizational measures concerning the time workers or the public spend in the vicinity of conditionally released materials for the individual scenarios and nuclide vectors. These steps have been applied within the CONRELMAT project, and the exposure evaluation of workers during rail production is introduced in the article as an example. Exposure calculations with the VISIPLAN 3D ALARA planning tool were performed for several models, and the most exposed profession in the scenario was identified. On the basis of this result, an increase of the radionuclide concentration in the conditionally released material of more than a factor of two, to 681 Bq/kg, was proposed with no additional safety or organizational measures being applied. After application of the proposed safety and organizational measures (additional shielding, geometry changes and limitation of work duration), the radionuclide concentration in the conditionally released material can be increased more than tenfold, to 3092 Bq/kg.

  1. Algorithm for calculating an availability factor for the inhalation of radioactive and chemical materials

    International Nuclear Information System (INIS)

    1984-02-01

    This report presents a method of calculating the availability of buried radioactive and nonradioactive materials via an inhalation pathway. Availability is the relationship between the concentration of a substance in the soil and the dose rate to a human receptor. The algorithms presented for calculating the availability of elemental inorganic substances are based on atmospheric enrichment factors; those presented for calculating the availability of organic substances are based on vapor pressures. The basis, use, and limitations of the developed equations are discussed. 32 references, 5 tables
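The availability concept, i.e. the link from soil concentration to receptor dose rate, can be sketched as a chain of unit conversions. All parameter values below are hypothetical; the report's algorithms derive the availability factor itself from enrichment factors or vapor pressures.

```python
def inhalation_dose_rate(soil_conc, availability, breathing_rate, dose_factor):
    """Dose rate to a receptor from inhalation of resuspended material.

    soil_conc      : concentration in soil (e.g. Bq/kg)
    availability   : air concentration per unit soil concentration (kg/m^3),
                     the quantity the report's algorithms produce
    breathing_rate : receptor breathing rate (m^3/h)
    dose_factor    : dose per unit intake (e.g. Sv/Bq)
    """
    air_conc = soil_conc * availability        # Bq/m^3 in the breathing zone
    intake_rate = air_conc * breathing_rate    # Bq/h inhaled
    return intake_rate * dose_factor           # Sv/h

rate = inhalation_dose_rate(soil_conc=1.0e4, availability=1.0e-9,
                            breathing_rate=1.2, dose_factor=5.0e-8)
print(rate)
```

Keeping the availability factor as a single multiplicative term is what lets the same dose-rate chain serve both the enrichment-factor and the vapor-pressure algorithms.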

  2. Changes of the calculation equation for σMUF

    International Nuclear Information System (INIS)

    Yoshida, Hideki; Niiyama, Toshitaka; Sonobe, Kentaro

    2002-01-01

    The error variance σ²MUF of the material balance is used in evaluating the MUF of both conventional material accountancy and near-real-time material accountancy (NRTA). σ²MUF is calculated by error propagation using the material accounting data and the measurement errors. The error propagation equation for σ²MUF is given in 'The statistical concepts and techniques for IAEA safeguards' (IAEA/SG/SCT5), where several assumptions are made to simplify the equation. These assumptions are acceptable when assessing a facility design; however, when σ²MUF for an actual MUF is calculated, some of the assumptions must be dropped and the adopted equation modified. Furthermore, because the material balance is taken more frequently for NRTA, the complete inventory cannot always be re-measured at each balance time, and the error propagation equation has to be modified to handle this as well. For a reprocessing plant, which holds material in solution, the equation has been improved to be more exact. In this paper we present the changes to the error propagation for σ²MUF and explain their features. (author)
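The simplified, all-errors-independent form of the propagation can be sketched as follows; the full IAEA/SG/SCT5 treatment, and the modifications discussed in the paper, add systematic-error covariance terms that are deliberately dropped here. The data values are invented.

```python
import math

def muf_and_sigma(begin_inv, receipts, shipments, end_inv):
    """MUF and its standard deviation under the simplifying assumption
    that all measurement errors are independent (random errors only).

    Each argument is a list of (measured value, sigma) pairs.
    MUF = beginning inventory + receipts - shipments - ending inventory,
    and with independent errors the variances simply add.
    """
    def total(terms):
        return sum(v for v, _ in terms), sum(s * s for _, s in terms)
    (b, vb), (r, vr), (s, vs), (e, ve) = map(total, (begin_inv, receipts, shipments, end_inv))
    muf = b + r - s - e
    return muf, math.sqrt(vb + vr + vs + ve)

muf, sigma = muf_and_sigma(
    begin_inv=[(100.0, 0.5)],
    receipts=[(40.0, 0.3), (35.0, 0.3)],
    shipments=[(60.0, 0.4)],
    end_inv=[(114.0, 0.5)],
)
print(muf, sigma)
```

The paper's point is precisely that this additive form breaks down in practice: shared calibration (systematic) errors and inventory items that are not re-measured between NRTA balances introduce covariance terms between balance periods.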

  3. Multivariate statistical analysis of electron energy-loss spectroscopy in anisotropic materials

    International Nuclear Information System (INIS)

    Hu Xuerang; Sun Yuekui; Yuan Jun

    2008-01-01

    Recently, an expression has been developed to take into account the complex dependence of the fine structure in core-level electron energy-loss spectroscopy (EELS) in anisotropic materials on specimen orientation and spectral collection conditions [Y. Sun, J. Yuan, Phys. Rev. B 71 (2005) 125109]. One application of this expression is the development of a phenomenological theory of magic-angle electron energy-loss spectroscopy (MAEELS), which can be used to extract the isotropically averaged spectral information for materials with arbitrary anisotropy. Here we use this expression to extract not only the isotropically averaged spectral information, but also the anisotropic spectral components, without the restrictions of MAEELS. The application is based on a multivariate statistical analysis of core-level EELS for anisotropic materials. To demonstrate the applicability of this approach, we have conducted a study on a set of carbon K-edge spectra of a multi-wall carbon nanotube (MWCNT) acquired with the energy-loss spectroscopic profiling (ELSP) technique, and successfully extracted both the averaged and dichroic spectral components of the wrapped graphite-like sheets. Our result shows that this can be a practical alternative to MAEELS for the study of the electronic structure of anisotropic materials, in particular for nanostructures made of layered materials.

  4. Calculation of radiation exposures from patients to whom radioactive materials have been administered

    Science.gov (United States)

    Cormack, John; Shearer, Jane

    1998-03-01

    Spreadsheet templates which calculate cumulative exposures to other persons from patients to whom radioactive materials have been administered have been developed by the authors. Calculations can be based on any specified single-, bi- or tri-exponential whole-body clearance rate and a diurnal (or any other periodic) contact pattern. The time (post-administration) during which close contact should be avoided in order to constrain the radiation exposure and exposure rate to selected limits is calculated using an iterative technique (Newton's method), and the residual activity at the time when contact can resume is also calculated. These templates find particular application in calculating exposures to persons who are in contact with patients who have received radioactive materials for therapeutic purposes. The effects of changing dose limits and contact patterns, and of using individually derived clearance rates, may be readily modelled.
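The Newton iteration for the restriction time can be sketched for a bi-exponential clearance. The clearance parameters and dose limit below are hypothetical, and the closed-form remaining-dose expression is a simplification of what the spreadsheet templates model (it ignores the periodic contact pattern).

```python
import math

def restriction_time(components, dose_limit, t0=0.0, tol=1e-10):
    """Time after which the cumulative dose to a contact stays below dose_limit.

    components : list of (A_i, lam_i) pairs; the dose rate at time t is
                 sum A_i * exp(-lam_i * t)  (illustrative units).
    The dose remaining from t onward is sum A_i * exp(-lam_i * t) / lam_i,
    and the root of remaining(t) - dose_limit is found by Newton's method,
    the same iterative scheme the spreadsheets use.
    """
    def remaining(t):
        return sum(a * math.exp(-lam * t) / lam for a, lam in components)
    def rate(t):
        return sum(a * math.exp(-lam * t) for a, lam in components)
    t = t0
    for _ in range(100):
        f = remaining(t) - dose_limit
        t_new = t + f / rate(t)   # f'(t) = -rate(t), so t - f/f' = t + f/rate
        if abs(t_new - t) < tol:
            return t_new
        t = t_new
    return t

# Bi-exponential clearance with hypothetical parameters (mSv/day, 1/day).
comps = [(0.05, 0.3), (0.02, 0.05)]
t_ok = restriction_time(comps, dose_limit=0.1)
print(t_ok)
```

Because the remaining-dose curve is convex and decreasing, the Newton iterates approach the root monotonically from below, which makes the iteration robust enough for a spreadsheet implementation.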

  5. Calculation of radiation exposures from patients to whom radioactive materials have been administered

    International Nuclear Information System (INIS)

    McCormack, J.; Shearer, J.

    1998-01-01

    Spreadsheet templates have been developed by the authors that allow radiation exposures to others from patients to whom radioactive materials have been administered (or, indeed, from any source of radiation exposure) to be readily calculated. The time during which contact should be avoided, along with the residual activity at resumption of contact, is also calculated using an iterative technique. These spreadsheets allow a great deal of flexibility in the specification of clearance rates and close contact patterns for individual patients. Estimates of doses, restriction times and residual activities for 131I thyrotoxic therapy, for various contact patterns and groups of patients, were calculated. The spreadsheets are implemented in Microsoft EXCEL for both PC and Macintosh computers, and are readily available from the authors.

  6. Expected performance properties of the ASDEX upgrade toroidal field magnet derived from calculations and materials investigations

    International Nuclear Information System (INIS)

    Streibl, B.; Mukherjee, S.

    1989-11-01

    This is a summary of the TF-magnet calculation results, including supplements (also considering disturbances), for the 1984 phase-II proposal on the performance of ASDEX Upgrade. Calculation results are only as reliable as the assumptions incorporated, so investigations of materials and design components were always used to complement the calculations. (orig.) [de

  7. Specification of Materials Data for Fire Safety Calculations based on ENV 1992-1-2

    DEFF Research Database (Denmark)

    Hertz, Kristian Dahl

    1997-01-01

    Part 1-2 of the Eurocode on Concrete deals with Structural Fire Design. In chapter 3, which is partly written by the author of this paper, some data are given for the development of a few material parameters at high temperatures. These data are intended to represent the worst possible concrete, according to experience from tests on structural specimens based on German siliceous concrete subjected to Standard fire exposure until the time of maximum gas temperature. Chapter 4.3, which is written by the author of this paper, provides a simplified calculation method by means of which the load bearing capacity of constructions of any concrete exposed to any time of any fire exposure can be calculated. Chapter 4.4 provides information on what should be observed if more general calculation methods are used. Annex A provides some additional information on materials data; this chapter is not a part of the code.

  8. The Dental Trauma Internet Calculator

    DEFF Research Database (Denmark)

    Gerds, Thomas Alexander; Lauridsen, Eva Fejerskov; Christensen, Søren Steno Ahrensburg

    2012-01-01

    Background/Aim: Prediction tools are increasingly used to inform patients about future dental health outcomes. Advanced statistical methods are required to arrive at unbiased predictions based on follow-up studies. Material and Methods: The Internet risk calculator at the Dental Trauma Guide provides prognoses for teeth with traumatic injuries based on the Copenhagen trauma database: http://www.dentaltraumaguide.org. The database includes 2191 traumatized permanent teeth from 1282 patients that were treated at the dental trauma unit at the University Hospital in Copenhagen (Denmark...

  9. Ab-initio, Predictive Calculations for Optoelectronic and Advanced Materials Research

    Science.gov (United States)

    Bagayoko, Diola

    2010-10-01

    Most density functional theory (DFT) calculations find band gaps that are 30-50 percent smaller than the experimental ones. Some explanations of this serious underestimation by theory include self-interaction and the derivative discontinuity of the exchange-correlation energy. Several approaches have been developed in the search for a solution to this problem. Most of them entail some modification of DFT potentials. The Green function and screened Coulomb approximation (GWA) is a non-DFT formalism that has led to some improvements. Despite these efforts, the underestimation problem has mostly persisted in the literature. Using the Rayleigh theorem, we describe a basis set and variational effect inherently associated with calculations that employ a linear combination of atomic orbitals (LCAO) in a variational approach of the Rayleigh-Ritz type. This description concomitantly shows a source of large underestimation errors in calculated band gaps, i.e., an often dramatic lowering of some unoccupied energies on account of the Rayleigh theorem as opposed to a physical interaction. We present the Bagayoko, Zhao, and Williams (BZW) method [Phys. Rev. B 60, 1563 (1999); PRB 74, 245214 (2006); and J. Appl. Phys. 103, 096101 (2008)] that systematically avoids this effect and leads (a) to DFT and LDA calculated band gaps of semiconductors in agreement with experiment and (b) to theoretical predictions of band gaps that are confirmed by experiment. Unlike most calculations, BZW computations solve, self-consistently, a system of two coupled equations. DFT-BZW calculated effective masses and optical properties (dielectric functions) also agree with measurements. We illustrate ten years of success of the BZW method with its results for GaN, C, Si, 3C-SiC, 4H-SiC, ZnO, AlAs, Ge, ZnSe, w-InN, c-InN, InAs, CdS, AlN and nanostructures. We conclude with potential applications of the BZW method in optoelectronic and advanced materials research.

  10. User manual of FUNF code for fissile material data calculation

    International Nuclear Information System (INIS)

    Zhang, Jingshang

    2006-03-01

    The FUNF code (2005 version) is used to calculate fast neutron reaction data for fissile materials at incident energies from about 1 keV up to 20 MeV. The first version of the FUNF code was completed in 1994. The code has been developed continually since then and has often been used as an evaluation tool for setting up CENDL and for analyzing measurements of fissile materials; over these years many improvements have been made. In this manual, the formats of the input parameter files and the output files, as well as the functions of the flags used in the FUNF code, are introduced in detail, and examples of the input parameter file format are given. The FUNF code consists of the spherical optical model, the Hauser-Feshbach model, and the unified Hauser-Feshbach and exciton model. (authors)

  11. Statistical decay of giant resonances

    International Nuclear Information System (INIS)

    Dias, H.; Teruya, N.; Wolynec, E.

    1986-01-01

    Statistical calculations to predict the neutron spectrum resulting from the decay of Giant Resonances are discussed. The dependence of the results on the optical potential parametrization and on the level density of the residual nucleus is assessed. A Hauser-Feshbach calculation is performed for the decay of the monopole giant resonance in 208 Pb using the experimental levels of 207 Pb from a recent compilation. The calculated statistical decay is in excellent agreement with recent experimental data, showing that the decay of this resonance is dominantly statistical, as predicted by continuum RPA calculations. (Author) [pt

  12. Statistical decay of giant resonances

    International Nuclear Information System (INIS)

    Dias, H.; Teruya, N.; Wolynec, E.

    1986-02-01

    Statistical calculations to predict the neutron spectrum resulting from the decay of Giant Resonances are discussed. The dependence of the results on the optical potential parametrization and on the level density of the residual nucleus is assessed. A Hauser-Feshbach calculation is performed for the decay of the monopole giant resonance in 208 Pb using the experimental levels of 207 Pb from a recent compilation. The calculated statistical decay is in excellent agreement with recent experimental data, showing that decay of this resonance is dominantly statistical, as predicted by continuum RPA calculations. (Author) [pt

  13. Calculation of sound propagation in fibrous materials

    DEFF Research Database (Denmark)

    Tarnow, Viggo

    1996-01-01

    Calculations of attenuation and velocity of audible sound waves in glass wools are presented. The calculations use only the diameters of fibres and the mass density of glass wools as parameters. The calculations are compared with measurements.

  14. A New Thermodynamic Calculation Method for Binary Alloys: Part I: Statistical Calculation of Excess Functions

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    An improved formula for calculating the activities of the components in binary liquid and solid alloys has been derived, based on the free volume theory (taking excess entropy into account) and on Miedema's model for calculating the formation heat of binary alloys. A method for calculating the excess thermodynamic functions of binary alloys has been developed, including formulas for the integral molar excess properties and the partial molar excess properties of ordered and disordered solid binary alloys. The calculated results are in good agreement with the experimental values.
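For orientation, a one-parameter regular-solution model shows the general shape of such an excess-function calculation; it is a textbook stand-in, not the Miedema-based formulas derived in the paper, and the interaction parameter is an assumed value.

```python
import math

R = 8.314  # gas constant, J/(mol K)

def activity_regular_solution(x_a, omega, temperature):
    """Component activities in an A-B binary under the one-parameter
    regular-solution model: G_ex = omega * x_A * x_B, which gives the
    partial molar excess relations RT ln(gamma_A) = omega * x_B**2 and
    RT ln(gamma_B) = omega * x_A**2."""
    x_b = 1.0 - x_a
    gamma_a = math.exp(omega * x_b**2 / (R * temperature))
    gamma_b = math.exp(omega * x_a**2 / (R * temperature))
    return x_a * gamma_a, x_b * gamma_b

# Negative omega (attractive A-B interaction) gives negative deviations
# from Raoult's law: activities below the mole fractions.
a_a, a_b = activity_regular_solution(x_a=0.4, omega=-8000.0, temperature=1200.0)
print(a_a, a_b)
```

The same split into an integral excess function and its partial molar derivatives is what the paper's formulas provide, with the single omega replaced by the Miedema-based composition-dependent terms.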

  15. 40 CFR 1065.602 - Statistics.

    Science.gov (United States)

    2010-07-01

    § 1065.602 Statistics. (a) Overview. This section contains equations and example calculations for statistics that are specified in this part. In this section we use...

  16. Statistical theory of turbulent incompressible multimaterial flow

    International Nuclear Information System (INIS)

    Kashiwa, B.

    1987-10-01

    Interpenetrating motion of incompressible materials is considered. ''Turbulence'' is defined as any deviation from the mean motion. Accordingly a nominally stationary fluid will exhibit turbulent fluctuations due to a single, slowly moving sphere. Mean conservation equations for interpenetrating materials in arbitrary proportions are derived using an ensemble averaging procedure, beginning with the exact equations of motion. The result is a set of conservation equations for the mean mass, momentum and fluctuational kinetic energy of each material. The equation system is at first unclosed due to integral terms involving unknown one-point and two-point probability distribution functions. In the mean momentum equation, the unclosed terms are clearly identified as representing two physical processes. One is transport of momentum by multimaterial Reynolds stresses, and the other is momentum exchange due to pressure fluctuations and viscous stress at material interfaces. Closure is approached by combining careful examination of multipoint statistical correlations with the traditional physical technique of κ-ε modeling for single-material turbulence. This involves representing the multimaterial Reynolds stress for each material as a turbulent viscosity times the rate of strain based on the mean velocity of that material. The multimaterial turbulent viscosity is related to the fluctuational kinetic energy κ and the rate of fluctuational energy dissipation ε for each material. Hence a set of κ and ε equations must be solved, together with mean mass and momentum conservation equations, for each material. Both κ and the turbulent viscosities enter into the momentum exchange force. The theory is applied to (a) calculation of the drag force on a sphere fixed in a uniform flow, (b) calculation of the settling rate in a suspension and (c) calculation of velocity profiles in the pneumatic transport of solid particles in a pipe.

  17. Brief note on the statistical calculation of final continuum reaction cross sections of light nuclides

    International Nuclear Information System (INIS)

    Murata, Toru

    2003-01-01

    The level density parameters are determined to reproduce the level structure and/or the resonance level spacing of the nucleus. In the statistical compound nucleus model, the cross sections to discrete levels decrease abruptly, and the continuum level cross section increases strongly, above the energy at which the continuum levels are switched on. In the present study, for nuclei whose level schemes are well determined up to excitation energies of more than 10 MeV, the discrete level cross sections were calculated, summed, and compared with the cross section to an assumed continuum level corresponding to the discrete levels above several MeV excitation energy. Calculations of the (n, n') cross sections were made with the CASTHY code, using the Moldauer model option and level density parameters determined with the former method. It is shown that the assumed continuum cross section is considerably larger than the summed discrete cross section. Origins of the discrepancy are discussed. (J.P.N.)

  18. The statistical model calculation of prompt neutron spectra from spontaneous fission of {sup 244}Cm and {sup 246}Cm

    Energy Technology Data Exchange (ETDEWEB)

    Gerasimenko, B.F. [V.G. Khlopin Radium Inst., Saint Petersburg (Russian Federation)]

    1997-03-01

    Calculations of the integral spectra of prompt neutrons from the spontaneous fission of {sup 244}Cm and {sup 246}Cm were carried out with the Statistical Computer Code Complex SCOFIN, applying the Hauser-Feshbach method to the description of the de-excitation of excited fission fragments by neutron emission. The emission of dipole gamma-quanta from these fragments was considered as a competing process. The average excitation energy of a fragment was calculated with a two-spheroidal model of tangent fragments, and the density of levels in an excited fragment with the Fermi-gas model. Quite satisfactory agreement was reached between the theoretical results and the experimental results obtained in the framework of the Project measurements. The calculated average neutron multiplicities were 2.746 for {sup 244}Cm and 2.927 for {sup 246}Cm, in good accordance with published experimental figures. (author)

  19. Possibilities to improve the adaptation quality of calculated material substitutes

    Energy Technology Data Exchange (ETDEWEB)

    Geske, G.

    1981-04-01

    In calculating the composition of material substitutes by a system of simultaneous equations it is possible, by using a so-called quality index, to find, out of the generally existing set of solutions, the one with the best adaptation quality. Further improvement is often possible by describing coherent scattering and photoelectric interaction each by its own material parameter. The exact formulation of these quantities as energy-independent functions is, however, impossible. Using a set of attenuation coefficients at suitably chosen energies as coefficients for the system of equations, the best substitutes are found. The solutions for the investigated example are identical with the original with respect to chemical composition. Such solutions may be of use in connection with neutrons, protons, heavy ions, and negative pions, provided, of course, that the components taken into consideration permit such solutions. These facts are discussed in detail by means of two examples.

  20. Repulsive energy and the Grueneisen parameter of alkali halides calculated on the basis of a quantum-statistical ab initio theory

    International Nuclear Information System (INIS)

    Kucharczyk, M.; Olszewski, S.

    1982-01-01

    The Grueneisen parameter of alkali halides is calculated by an ab initio quantum-statistical method and then compared with the experimental data. The crystal model applied assumes the crystal ions to be compressible but impenetrable spheres. The ions are described with the aid of a modified Thomas-Fermi theory with exchange. At the next step it is possible to calculate the energy needed to transform the system of the non-interacting ions into the ionic system represented by the crystal lattice. This calculation allows for an ab initio estimate of the parameters entering the Born, or the Born-Mayer, repulsive part of the crystal energy. The parameters are then used in the calculation of the Grueneisen parameter and its dependence on the crystal compression. (author)

  1. Statistical Analysis of Clinical Data on a Pocket Calculator, Part 2 Statistics on a Pocket Calculator, Part 2

    CERN Document Server

    Cleophas, Ton J

    2012-01-01

    The first part of this title contained all statistical tests relevant to starting clinical investigations, and included tests for continuous and binary data, power, sample size, multiple testing, variability, confounding, interaction, and reliability. The current part 2 of this title reviews methods for handling missing data, manipulated data, multiple confounders, predictions beyond observation, uncertainty of diagnostic tests, and the problems of outliers. Also robust tests, non-linear modeling, goodness of fit testing, Bhatacharya models, item response modeling, superiority testing, variab

  2. Calculations of radiation damage in target, container and window materials for spallation neutron sources

    International Nuclear Information System (INIS)

    Wechsler, M.S.; Mansur, L.K.

    1996-01-01

    Radiation damage in target, container, and window materials for spallation neutron sources is an important factor in the design of target stations for accelerator-driven transmutation technologies. Calculations are described that use the LAHET and SPECTER codes to obtain displacement and helium production rates in tungsten, 316 stainless steel, and Inconel 718, which are major target, container, and window materials, respectively. Results are compared for the three materials, based on neutron spectra for NSNS and ATW spallation neutron sources, where the neutron fluxes are normalized to give the same flux of neutrons of all energies.

  3. Definition of airflow rate induced by polifractional materials

    Science.gov (United States)

    Popov, E. N.

    2018-03-01

    This paper deals with further analysis of a probabilistic and statistical approach to determine the aerodynamic drag coefficient of particles in a flow of free-falling polifractional material. It also describes the experimental assembly enabling one to determine airflow rate induced by polifractional material and provides comparison of analytical calculations with experimental data.

  4. Heterogeneous Rock Simulation Using DIP-Micromechanics-Statistical Methods

    Directory of Open Access Journals (Sweden)

    H. Molladavoodi

    2018-01-01

    Rock as a natural material is heterogeneous. Rock material consists of minerals, crystals, cement, grains, and microcracks. Each component of rock has a different mechanical behavior under an applied loading condition. Therefore, the distribution of rock components has an important effect on rock mechanical behavior, especially in the post-peak region. In this paper, the rock sample was studied by digital image processing (DIP), micromechanics, and statistical methods. Using image processing, the volume fractions of the minerals composing the rock sample were evaluated precisely. The mechanical properties of the rock matrix were determined based on upscaling micromechanics. In order to consider the effect of rock heterogeneities on mechanical behavior, the heterogeneity index was calculated in the framework of a statistical method. A Weibull distribution function was fitted to the Young's modulus distribution of the minerals. Finally, statistical and Mohr–Coulomb strain-softening models were used simultaneously as a constitutive model in a DEM code. The acoustic emission, strain energy release, and the effect of rock heterogeneities on the post-peak behavior were investigated. The numerical results are in good agreement with experimental data.
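
    Fitting a Weibull distribution to a measured modulus distribution, as done above, can be illustrated with a short script. This is a generic sketch, not the paper's procedure: it draws hypothetical Young's moduli from a Weibull distribution and recovers the Weibull modulus by median-rank linear regression, a common way to quantify the spread (a lower modulus means a more heterogeneous material).

```python
import math
import random

random.seed(42)

# Hypothetical mineral stiffness data: Young's moduli (GPa) drawn from a
# Weibull distribution with shape m = 5 and scale 80 GPa. We then recover
# the Weibull modulus from the data alone by median-rank regression.
true_shape, scale = 5.0, 80.0
data = sorted(random.weibullvariate(scale, true_shape) for _ in range(2000))

n = len(data)
xs = [math.log(e) for e in data]
# Median-rank estimate of the CDF at each sorted point.
ys = [math.log(-math.log(1.0 - (i + 0.5) / n)) for i in range(n)]

# Ordinary least squares: the slope of ln(-ln(1-F)) vs ln(E) is the
# Weibull modulus m.
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
    (x - mx) ** 2 for x in xs
)
print(f"estimated Weibull modulus: {slope:.2f}")
```

    With 2000 samples the regression estimate falls close to the true shape parameter of 5.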

  5. Transient finite element magnetic field calculation method in the anisotropic magnetic material based on the measured magnetization curves

    International Nuclear Information System (INIS)

    Jesenik, M.; Gorican, V.; Trlep, M.; Hamler, A.; Stumberger, B.

    2006-01-01

    Many magnetic materials are anisotropic. In the 3D finite element method calculation, the anisotropy of the material is taken into account: the anisotropic magnetic material is described with magnetization curves for different magnetization directions. A 3D transient calculation of the rotational magnetic field in the circular sample of a round rotational single sheet tester, considering eddy currents, is made and compared with measurement to verify the correctness of the method and to analyze the magnetic field in the sample.

  6. Statistical considerations of graphite strength for assessing design allowable stresses

    International Nuclear Information System (INIS)

    Ishihara, M.; Mogi, H.; Ioka, I.; Arai, T.; Oku, T.

    1987-01-01

    Several aspects of statistics need to be considered to determine design allowable stresses for graphite structures. These include: 1) statistical variation of graphite material strength; 2) uncertainty of calculated stress; 3) reliability (survival probability) required from the operational and safety performance of graphite structures. This paper deals with some statistical considerations of structural graphite for assessing design allowable stress. Firstly, probability distribution functions of tensile and compressive strengths are investigated for candidate graphites of the experimental Very High Temperature Reactor. Normal, logarithmic normal, and Weibull distribution functions are compared in terms of the coefficient of correlation to the measured strength data. This leads to the adoption of the normal distribution function. Then, the relation between the factor of safety and the fracture probability is discussed on the following items: 1) as graphite strength is more variable than that of metallic materials, the effect of strength variation on the fracture probability is evaluated; 2) fracture probability corresponding to a survival probability of 99 ∼ 99.9 (%) with a confidence level of 90 ∼ 95 (%) is discussed; 3) as the material properties used in the design analysis are usually the mean values of their variation, the additional effect of these variations on the fracture probability is discussed. Finally, the way to assure the minimum ultimate strength with the required survival probability and confidence level is discussed in view of the statistical treatment of strength data from varying sample numbers in a material acceptance test. (author)
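
    The link between factor of safety and fracture probability for a normally distributed strength can be computed directly from the normal CDF. The snippet below is a generic illustration with assumed numbers, not the paper's graphite data.

```python
import math

def fracture_probability(mean_strength: float, std_strength: float,
                         stress: float) -> float:
    """P(strength < stress) for a normal strength distribution,
    using the standard normal CDF written with math.erf."""
    z = (stress - mean_strength) / std_strength
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Hypothetical graphite: mean tensile strength 25 MPa with a 10%
# coefficient of variation. Sweep the factor of safety
# (mean strength / applied stress).
mean_s, std_s = 25.0, 2.5
probs = {sf: fracture_probability(mean_s, std_s, mean_s / sf)
         for sf in (1.0, 1.5, 2.0)}
print(probs)
```

    A factor of safety of 1 (applied stress equal to mean strength) gives a fracture probability of exactly 0.5; increasing the factor of safety drives it down rapidly, which is why the strength variability of graphite matters so much more than it does for metals.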

  7. Decay heat measurement on fusion reactor materials and validation of calculation code system

    Energy Technology Data Exchange (ETDEWEB)

    Maekawa, Fujio; Ikeda, Yujiro; Wada, Masayuki [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment]

    1998-03-01

    Decay heat rates for 32 fusion reactor relevant materials irradiated with 14-MeV neutrons were measured for cooling times between 1 minute and 400 days. Using this experimental database, the validity of decay heat calculation systems for fusion reactors was investigated. (author)

  8. Cross section for calculating the helium formation rate in construction materials irradiated by nucleons at energies to 800 MeV

    International Nuclear Information System (INIS)

    Konobeev, A.Yu.; Korovin, Yu.A.

    1992-01-01

    Recently, effects related to the formation of helium in irradiated construction materials have been studied extensively. Data on the nuclear cross sections for producing helium in these materials form the initial information necessary for such investigations. If the spectrum of the incoming particles is known, the helium production cross section makes it possible to calculate the helium generation rate. In recent years, planned and simulated experiments on irradiating materials with high-energy particles have made it necessary to determine the helium production cross sections in construction materials irradiated by protons and neutrons with energies up to 800 MeV. Helium-formation cross sections have been calculated at these energies; however, a correct description of the experimental data for various construction materials does not yet exist. For example, the calculated helium-formation cross sections turned out to overestimate the experimental data in some cases and to underestimate them in others. The objective here is to calculate the helium-formation cross sections for various construction materials irradiated by protons and neutrons with energies from 20 to 800 MeV, and to analyze the probable causes of the deviations between the experimental and earlier calculated cross sections

  9. Two-group k-eigenvalue benchmark calculations for planar geometry transport in a binary stochastic medium

    International Nuclear Information System (INIS)

    Davis, I.M.; Palmer, T.S.

    2005-01-01

    Benchmark calculations are performed for neutron transport in a two material (binary) stochastic multiplying medium. Spatial, angular, and energy dependence are included. The problem considered is based on a fuel assembly of a common pressurized water reactor. The mean chord length through the assembly is determined and used as the planar geometry system length. According to assumed or calculated material distributions, this system length is populated with alternating fuel and moderator segments of random size. Neutron flux distributions are numerically computed using a discretized form of the Boltzmann transport equation employing diffusion synthetic acceleration. Average quantities (group fluxes and k-eigenvalue) and variances are calculated from an ensemble of realizations of the mixing statistics. The effects of varying two parameters in the fuel, two different boundary conditions, and three different sets of mixing statistics are assessed. A probability distribution function (PDF) of the k-eigenvalue is generated and compared with previous research. Atomic mix solutions are compared with these benchmark ensemble average flux and k-eigenvalue solutions. Mixing statistics with large standard deviations give the most widely varying ensemble solutions of the flux and k-eigenvalue. The shape of the k-eigenvalue PDF qualitatively agrees with previous work. Its overall shape is independent of variations in fuel cross-sections for the problems considered, but its width is impacted by these variations. Statistical distributions with smaller standard deviations alter the shape of this PDF toward a normal distribution. The atomic mix approximation yields large over-predictions of the ensemble average k-eigenvalue and under-predictions of the flux. Qualitatively correct flux shapes are obtained in some cases. These benchmark calculations indicate that a model which includes higher statistical moments of the mixing statistics is needed for accurate predictions of binary
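
    The mixing statistics described above can be mimicked with a toy generator. The following is a hedged sketch, not the benchmark itself: it fills a planar system with alternating fuel/moderator segments whose lengths are drawn from exponential chord-length distributions, then computes ensemble statistics of the fuel volume fraction. The chord lengths and system length are assumed illustrative values.

```python
import random

random.seed(7)

# System length and mean chord lengths (arbitrary units, assumed).
L, mean_fuel, mean_mod = 50.0, 1.0, 2.0

def realization() -> float:
    """Fuel volume fraction of one random binary-medium realization."""
    pos, fuel_len = 0.0, 0.0
    is_fuel = random.random() < 0.5  # random starting material
    while pos < L:
        mean = mean_fuel if is_fuel else mean_mod
        seg = min(random.expovariate(1.0 / mean), L - pos)
        if is_fuel:
            fuel_len += seg
        pos += seg
        is_fuel = not is_fuel
    return fuel_len / L

fracs = [realization() for _ in range(2000)]
mean_frac = sum(fracs) / len(fracs)
var_frac = sum((f - mean_frac) ** 2 for f in fracs) / (len(fracs) - 1)
print(f"fuel fraction: mean {mean_frac:.3f}, variance {var_frac:.5f}")
```

    The ensemble mean approaches the atomic-mix value mean_fuel/(mean_fuel + mean_mod) = 1/3, but individual realizations scatter around it, which is exactly the spread that drives the variance of the flux and k-eigenvalue in the benchmark.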

  10. The role of ab initio electronic structure calculations in studies of the strength of materials

    International Nuclear Information System (INIS)

    Sob, M.; Friak, M.; Legut, D.; Fiala, J.; Vitek, V.

    2004-01-01

    In this paper we give an account of applications of quantum-mechanical (first-principles) electronic structure calculations to the problem of theoretical tensile strength in metals and intermetallics. First, we review previous as well as ongoing research on this subject. We then describe briefly the electronic structure calculational methods and the simulation of the tensile test. This approach is then illustrated by calculations of the theoretical tensile strength in iron and in the intermetallic compound Ni3Al. The anisotropy of the calculated tensile strength is explained in terms of higher-symmetry structures encountered along the deformation paths studied. A table summarizing the values of theoretical tensile strengths calculated up to now is presented, and the role of ab initio electronic structure calculations in contemporary studies of the strength of materials is discussed.

  11. The GNASH preequilibrium-statistical nuclear model code

    International Nuclear Information System (INIS)

    Arthur, E. D.

    1988-01-01

    The following report is based on materials presented in a series of lectures at the International Center for Theoretical Physics, Trieste, which were designed to describe the GNASH preequilibrium statistical model code and its use. An overview of the code is provided, with emphasis upon the code's calculational capabilities and the theoretical models that have been implemented in it. Two sample problems are discussed: the first deals with neutron reactions on 58Ni; the second illustrates the fission model capabilities implemented in the code and involves n + 235U reactions. Finally, a description is provided of current theoretical model and code development underway. Examples of calculated results using these new capabilities are also given. 19 refs., 17 figs., 3 tabs

  12. Ab initio density-functional calculations in materials science: from quasicrystals over microporous catalysts to spintronics.

    Science.gov (United States)

    Hafner, Jürgen

    2010-09-29

    During the last 20 years computer simulations based on a quantum-mechanical description of the interactions between electrons and atomic nuclei have developed an increasingly important impact on materials science, not only in promoting a deeper understanding of the fundamental physical phenomena, but also enabling the computer-assisted design of materials for future technologies. The backbone of atomic-scale computational materials science is density-functional theory (DFT) which allows us to cast the intractable complexity of electron-electron interactions into the form of an effective single-particle equation determined by the exchange-correlation functional. Progress in DFT-based calculations of the properties of materials and of simulations of processes in materials depends on: (1) the development of improved exchange-correlation functionals and advanced post-DFT methods and their implementation in highly efficient computer codes, (2) the development of methods allowing us to bridge the gaps in the temperature, pressure, time and length scales between the ab initio calculations and real-world experiments and (3) the extension of the functionality of these codes, permitting us to treat additional properties and new processes. In this paper we discuss the current status of techniques for performing quantum-based simulations on materials and present some illustrative examples of applications to complex quasiperiodic alloys, cluster-support interactions in microporous acid catalysts and magnetic nanostructures.

  13. Statistical estimation of fast-reactor fuel-element lifetime

    International Nuclear Information System (INIS)

    Proshkin, A.A.; Likhachev, Yu.I.; Tuzov, A.N.; Zabud'ko, L.M.

    1980-01-01

    On the basis of a statistical analysis, the main parameters having a significant influence on the theoretical determination of fuel-element lifetimes in the operation of fast power reactors under steady power conditions are isolated. These include the creep and swelling of the fuel and cladding materials, prolonged-plasticity lag, cladding-material corrosion, gap contact conductivity, and the strain diagrams of the cladding and fuel materials obtained for irradiated materials at the corresponding strain rates. Through deeper investigation of these material properties, it is possible to increase significantly the reliability of fuel-element lifetime predictions in designing fast reactors and to optimize the structure of fuel elements more correctly. The results of such calculations must obviously be taken into account in the cost-benefit analysis of projected new reactors and in choosing the optimal fuel burnup. 9 refs

  14. Statistical and low dose response

    International Nuclear Information System (INIS)

    Thorson, M.R.; Endres, G.W.R.

    1981-01-01

    The low dose response and the lower limit of detection of the Hanford dosimeter depend upon many factors, including the energy of the radiation, whether the exposure is to a single type of radiation or to mixed fields, annealing cycles, environmental factors, and how well various batches of TLD materials are matched in the system. A careful statistical study and sensitivity analysis were performed to determine how these factors influence the response of the dosimeter system. Estimates of the standard deviation of the calculated dose for various mixed-field exposures from 0 to 1000 mrem have been included in this study
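
    When the contributing error sources are independent, the standard deviation of a calculated dose is obtained by combining the components in quadrature. The component values below are illustrative assumptions, not Hanford system data.

```python
import math

# Combine independent uncertainty components (mrem, assumed values) in
# quadrature to estimate the standard deviation of a calculated
# mixed-field dose.
components = {
    "gamma_response": 8.0,
    "neutron_response": 12.0,
    "annealing": 5.0,
    "batch_matching": 4.0,
}

total_sigma = math.sqrt(sum(s ** 2 for s in components.values()))
print(f"combined standard deviation: {total_sigma:.1f} mrem")
```

    The quadrature sum is dominated by the largest component, so matching TLD batches well pays off only once the larger response uncertainties are under control.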

  15. Utilization of a statistical procedure for DNBR calculation and in the survey of reactor protection limits

    International Nuclear Information System (INIS)

    Pontedeiro, A.C.; Camargo, C.T.M.; Galetti, M.R. da Silva.

    1987-01-01

    A new procedure, related to DNBR calculations and considering the design parameters statistically, is applied to the Angra 1 NPP: the Improved Thermal Design Procedure (ITDP). The ITDP application leads to the determination of the uncertainties in the input parameters, the sensitivity factors on DNBR, the DNBR limit, and new reactor protection limits. This was done for Angra 1 with the subchannel code COBRA-IIIP. The analysis of the limiting accident in terms of DNB confirmed a gain in DNBR margin and greater operational flexibility of the plant, decreasing unnecessary reactor trips. (author) [pt
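
    The statistical combination at the heart of ITDP-style procedures can be sketched as a root-sum-square of sensitivity-weighted parameter uncertainties. All sensitivities, uncertainties, and the correlation limit below are illustrative assumptions, not Angra 1 values.

```python
import math

# Each input parameter i has a relative uncertainty u_i and a sensitivity
# factor s_i = (dDNBR/DNBR) / (dp_i/p_i). Treating the parameters as
# independent, the relative DNBR uncertainty is the root-sum-square of
# s_i * u_i.
params = [
    # (sensitivity, relative uncertainty) -- illustrative values
    (1.2, 0.02),   # e.g. core power
    (0.8, 0.03),   # e.g. inlet temperature
    (0.5, 0.04),   # e.g. system pressure
    (1.0, 0.025),  # e.g. flow rate
]

rel_dnbr_unc = math.sqrt(sum((s * u) ** 2 for s, u in params))

# A 95/95-style DNBR limit then inflates an assumed correlation limit of
# 1.17 by the combined uncertainty (1.645 one-sided normal factor).
dnbr_limit = 1.17 * (1.0 + 1.645 * rel_dnbr_unc)
print(f"relative DNBR uncertainty: {rel_dnbr_unc:.4f}, limit: {dnbr_limit:.3f}")
```

    Folding the uncertainties into the DNBR limit once, statistically, is what frees the nominal calculation from stacking worst-case parameter values, and is the source of the margin gain the abstract reports.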

  16. Understanding advanced statistical methods

    CERN Document Server

    Westfall, Peter

    2013-01-01

    Introduction: Probability, Statistics, and Science; Reality, Nature, Science, and Models; Statistical Processes: Nature, Design and Measurement, and Data; Models; Deterministic Models; Variability; Parameters; Purely Probabilistic Statistical Models; Statistical Models with Both Deterministic and Probabilistic Components; Statistical Inference; Good and Bad Models; Uses of Probability Models. Random Variables and Their Probability Distributions: Introduction; Types of Random Variables: Nominal, Ordinal, and Continuous; Discrete Probability Distribution Functions; Continuous Probability Distribution Functions; Some Calculus: Derivatives and Least Squares; More Calculus: Integrals and Cumulative Distribution Functions. Probability Calculation and Simulation: Introduction; Analytic Calculations, Discrete and Continuous Cases; Simulation-Based Approximation; Generating Random Numbers. Identifying Distributions: Introduction; Identifying Distributions from Theory Alone; Using Data: Estimating Distributions via the Histogram; Quantiles: Theoretical and Data-Based Estimate...

  17. A theoretical and practical clarification on the calculation of reflection loss for microwave absorbing materials

    Science.gov (United States)

    Liu, Ying; Zhao, Kun; Drew, Michael G. B.; Liu, Yue

    2018-01-01

    Reflection loss is usually calculated and reported as a function of the thickness of a microwave absorption material. However, misleading results are often obtained, since the principles embedded in the popular methods contradict the fundamental facts that electromagnetic waves cannot be reflected within a uniform material except at an interface, and that there are important differences between the concepts of characteristic impedance and input impedance. In this paper, these inconsistencies are analyzed theoretically and corrections provided. The problems with the calculations indicate a gap between the background knowledge of materials scientists and microwave engineers, and for that reason a concise review of transmission line theory is provided, along with the mathematical background needed for a deeper understanding of the theory of reflection loss. The expressions for the gradient, divergence, Laplacian, and curl operators in a general orthogonal coordinate system are presented, including the concept of reciprocal vectors. Gauss's and Stokes's theorems are related to Green's theorem in a novel way.
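
    For reference, the popular metal-backed single-layer formula that such papers scrutinize is easy to state: the layer's input impedance is Z_in = Z_0 √(μ_r/ε_r) tanh(jωd√(μ_rε_r)/c), and RL = 20 log₁₀ |(Z_in − Z_0)/(Z_in + Z_0)|. The material parameters below are assumed illustrative values, and this sketch is the conventional calculation, not the paper's corrected treatment.

```python
import cmath
import math

C = 299792458.0  # speed of light, m/s

def reflection_loss_db(f_hz: float, d_m: float,
                       mu_r: complex, eps_r: complex) -> float:
    """Conventional metal-backed single-layer reflection loss in dB."""
    gamma_d = 1j * 2 * math.pi * f_hz / C * cmath.sqrt(mu_r * eps_r) * d_m
    z_in_norm = cmath.sqrt(mu_r / eps_r) * cmath.tanh(gamma_d)  # Z_in / Z_0
    refl = (z_in_norm - 1) / (z_in_norm + 1)
    return 20.0 * math.log10(abs(refl))

# Illustrative lossy dielectric: eps_r = 10 - 2j (e^{jwt} convention),
# 2 mm layer at 10 GHz.
rl = reflection_loss_db(10e9, 2e-3, 1.0 + 0j, 10.0 - 2.0j)
print(f"RL = {rl:.2f} dB")
```

    For a passive lossy layer the reflection coefficient magnitude is below unity, so RL is negative; the paper's point is that interpreting this quantity as a bulk material property, rather than as a property of the metal-backed layer with its interface, is what leads to the misleading thickness dependence.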

  18. Methods for Melting Temperature Calculation

    Science.gov (United States)

    Hong, Qi-Jun

    Melting temperature calculation has important applications in the theoretical study of phase diagrams and computational materials screenings. In this thesis, we present two new methods, i.e., the improved Widom's particle insertion method and the small-cell coexistence method, which we developed in order to capture melting temperatures both accurately and quickly. We propose a scheme that drastically improves the efficiency of Widom's particle insertion method by efficiently sampling cavities while calculating the integrals providing the chemical potentials of a physical system. This idea enables us to calculate chemical potentials of liquids directly from first-principles without the help of any reference system, which is necessary in the commonly used thermodynamic integration method. As an example, we apply our scheme, combined with the density functional formalism, to the calculation of the chemical potential of liquid copper. The calculated chemical potential is further used to locate the melting temperature. The calculated results closely agree with experiments. We propose the small-cell coexistence method based on the statistical analysis of small-size coexistence MD simulations. It eliminates the risk of a metastable superheated solid in the fast-heating method, while also significantly reducing the computer cost relative to the traditional large-scale coexistence method. Using empirical potentials, we validate the method and systematically study the finite-size effect on the calculated melting points. The method converges to the exact result in the limit of a large system size. An accuracy within 100 K in melting temperature is usually achieved when the simulation contains more than 100 atoms. DFT examples of Tantalum, high-pressure Sodium, and ionic material NaCl are shown to demonstrate the accuracy and flexibility of the method in its practical applications. 
The method serves as a promising approach for large-scale automated material screening in which
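
    The Widom particle-insertion idea behind the first method is simple to demonstrate on a toy system with a known answer. This is a generic illustration with assumed parameters, not the thesis's cavity-sampling scheme: the excess chemical potential is μ_ex = −kT ln⟨exp(−ΔU/kT)⟩, averaged over random trial insertions.

```python
import math
import random

random.seed(1)

# Toy system: a 1D "box" [0, 1) with a step external potential
# U(x) = eps for x < 0.5 and 0 otherwise. Inserting a test particle at a
# uniform random x gives mu_ex = -kT ln <exp(-U/kT)>, with the exact
# answer -kT ln((1 + exp(-eps/kT)) / 2).
kT, eps = 1.0, 1.0

samples = 200_000
acc = 0.0
for _ in range(samples):
    x = random.random()          # trial insertion position
    u = eps if x < 0.5 else 0.0  # energy change of the insertion
    acc += math.exp(-u / kT)

mu_ex_widom = -kT * math.log(acc / samples)
mu_ex_exact = -kT * math.log((1.0 + math.exp(-eps / kT)) / 2.0)
print(f"Widom: {mu_ex_widom:.4f}  exact: {mu_ex_exact:.4f}")
```

    In a dense liquid most insertions land inside repulsive cores and contribute almost nothing to the average, which is why biasing the sampling toward cavities, as the improved method does, is so effective.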

  19. Statistical investigations into the erosion of material from the tool in micro-electrical discharge machining

    DEFF Research Database (Denmark)

    Puthumana, Govindan

    2018-01-01

    This paper presents a statistical study of the erosion of material from the tool electrode in a micro-electrical discharge machining process. The work involves analysis of variance and analysis of means approaches applied to the tool electrode wear rate results obtained from a design of experiments. The discharge current (Id) and discharge frequency (fd) control the erosion of material from the tool electrode. The material erosion from the tool electrode (Me) increases linearly with the discharge frequency. As the current index increases from 20 to 35, Me decreases linearly by 29%, and then increases by 36%. The current index of 35 gives the minimum material erosion from the tool. It is observed that none of the two-factor interactions are significant in controlling the erosion of material from the tool.

  20. Statistical calculation of complete events in medium-energy nuclear collisions

    International Nuclear Information System (INIS)

    Randrup, J.

    1984-01-01

    Several heavy-ion accelerators throughout the world are presently able to deliver beams of heavy nuclei with kinetic energies in the range from tens to hundreds of MeV per nucleon, the so-called medium or intermediate energy range. At such energies a large number of final channels are open, each consisting of many nuclear fragments. The disassembly of the collision system is expected to be a very complicated process, and a detailed dynamical description is beyond our present capability. However, by virtue of the complexity of the process, statistical considerations may be useful. A statistical description of the disassembly yields the least biased expectations about the outcome of a collision process and provides a meaningful reference against which more specific dynamical models, as well as the data, can be discussed. This lecture presents the essential tools for formulating a statistical model of the nuclear disassembly process. The author considers the quick disassembly (explosion) of a hot nuclear system, a so-called source, into multifragment final states, which compete according to their statistical weights. First some useful notation is introduced. Then the expressions for exclusive and inclusive distributions are given, and the factorization of an exclusive distribution into inclusive ones is carried out. In turn, the grand canonical approximation for one-fragment inclusive distributions is introduced. Finally, it is outlined how to generate a statistical sample of complete final states. On this basis, a model for statistical simulation of complete events in medium-energy nuclear collisions has been developed
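
    The last step, generating a statistical sample of complete final states, can be caricatured in a few lines. This is a hedged sketch only: fragment multiplicities are drawn independently, grand-canonical style, from Poisson distributions, and an event is accepted only if it conserves the total mass number. The mean multiplicities are arbitrary illustrative weights, not a physical statistical-weight calculation.

```python
import math
import random

random.seed(3)

A_TOTAL = 20
MEAN_MULT = {1: 4.0, 2: 2.0, 4: 1.5, 6: 0.8}  # fragment mass -> mean count

def poisson(lam: float) -> int:
    """Knuth's multiplication algorithm; adequate for small means."""
    limit, k, prod = math.exp(-lam), 0, random.random()
    while prod > limit:
        k += 1
        prod *= random.random()
    return k

def sample_event() -> dict:
    """Draw multiplicities until the event conserves total mass."""
    while True:
        counts = {a: poisson(lam) for a, lam in MEAN_MULT.items()}
        if sum(a * n for a, n in counts.items()) == A_TOTAL:
            return counts

events = [sample_event() for _ in range(200)]
print(events[0])
```

    The rejection step plays the role of restoring the conservation laws that the factorized, grand-canonical one-fragment distributions ignore.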

  1. Calculated and experimental definition of neutron-physical and temperature conditions of material testing in the SM reactor

    International Nuclear Information System (INIS)

    Toporova, V.G.; Pimenov, V.V.

    2004-01-01

    Full text: Reactor material science is one of the main scientific directions of RIAR activities. In particular, a wide range of materials and products is tested under irradiation in the SM reactor facility (RF SM). To meet the goals specified in the technical specification for an experiment, the test conditions are chosen beforehand. At a minimum, the space-energy distribution of neutrons and the heating rate in the materials under test are important, as well as the temperature conditions of irradiation. Up-to-date software and nuclear data libraries allow modeling of neutron-material interaction processes in considerable detail, and also obtaining a true neutron distribution by calculation methods. As a result of a great scope of verification work, a calculation model developed on the basis of the MCU applied software package (option MCU-4/SM22) and the analogue Monte Carlo method is widely used at RIAR. The MCU geometric module makes it possible to model the SM core and reflector in three-dimensional geometry with sufficient accuracy and to describe all elements of the channel structure and of the irradiation device with specimens. The calculation model of RF SM is tested using the results of activation experiments performed in its critical assembly, whose geometric parameters and structural materials correspond completely to the prototype. The difference between the calculated and experimental values is less than 2.5%. The possibilities of calculated estimation of the operating temperature conditions of absorbing elements under irradiation should be considered separately. As the conducted calculations and their analysis show, to define the fuel column temperature correctly one needs reliable data on the thermal-physical parameters of the materials, especially ceramic ones such as titanium, dysprosium, or boron carbide. This is very important for boron carbide absorbing elements for practically all their operation parameters (such as: gas release, swelling

  2. Biases and statistical errors in Monte Carlo burnup calculations: an unbiased stochastic scheme to solve Boltzmann/Bateman coupled equations

    International Nuclear Information System (INIS)

    Dumonteil, E.; Diop, C.M.

    2011-01-01

    External linking scripts between Monte Carlo transport codes and burnup codes, and complete integration of burnup capability into Monte Carlo transport codes, have been or are currently being developed. Monte Carlo linked burnup methodologies may serve as an excellent benchmark for new deterministic burnup codes used for advanced systems; however, there are some instances where deterministic methodologies break down (i.e., heavily angularly biased systems containing exotic materials without a proper group structure) and Monte Carlo burnup may serve as an actual design tool. Therefore, researchers are also developing these capabilities in order to examine complex, three-dimensional exotic material systems that do not contain benchmark data. Providing a reference scheme implies being able to associate statistical errors with any neutronic value of interest like k(eff), reaction rates, fluxes, etc. Usually in Monte Carlo, standard deviations are associated with a particular value by performing different independent and identical simulations (also referred to as 'cycles', 'batches', or 'replicas'), but this is only valid if the calculation itself is not biased. And, as will be shown in this paper, there is a bias in the methodology that consists of coupling transport and depletion codes, because the Bateman equations are not linear functions of the fluxes or of the reaction rates (those quantities always being measured with an uncertainty). Therefore, we have to quantify and correct this bias. This will be achieved by deriving an unbiased minimum variance estimator of a matrix exponential function of a normal mean. The result is then used to propose a reference scheme to solve the Boltzmann/Bateman coupled equations, thanks to Monte Carlo transport codes. Numerical tests will be performed with an ad hoc Monte Carlo code on a very simple depletion case and will be compared to the theoretical results obtained with the reference scheme. Finally, the statistical error propagation
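
    The source of the bias is ordinary Jensen's inequality: depletion is a nonlinear (exponential) function of the flux, so plugging noisy but unbiased flux estimates into the exponential gives a biased mean. The one-isotope demonstration below uses assumed illustrative numbers, not the paper's matrix-exponential estimator.

```python
import math
import random

random.seed(0)

# One-nuclide Bateman solution N(t)/N0 = exp(-sigma * phi * t). The
# exponent at the true mean flux is 1.0; each Monte Carlo "cycle" sees the
# exponent with Gaussian statistical noise of std 0.5 (assumed values).
true_exponent = 1.0
noise_sigma = 0.5

n = 20_000
depleted = [math.exp(-(true_exponent + random.gauss(0.0, noise_sigma)))
            for _ in range(n)]
naive_mean = sum(depleted) / n

unbiased = math.exp(-true_exponent)                       # exact mean flux
jensen = math.exp(-true_exponent + noise_sigma ** 2 / 2)  # analytic biased mean
print(f"naive {naive_mean:.4f}  true {unbiased:.4f}  predicted {jensen:.4f}")
```

    Averaging the noisy depletion results overestimates the surviving density by the lognormal factor exp(σ²/2), even though each flux estimate is unbiased; this is exactly the effect the unbiased estimator of the matrix exponential is built to remove.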

  3. Calculation of coal power plant cost on agricultural and material building impact of emission

    International Nuclear Information System (INIS)

    Mochamad Nasrullah; Wiku Lulus Widodo

    2016-01-01

    Calculation of the external cost of a Coal Power Plant (CPP) is very important. This paper focuses on the impact of SO2 emitted by a CPP on agricultural crops and building materials. The AGRIMAT model from the International Atomic Energy Agency is used to account for environmental damage caused by airborne SO2 emissions. The analysis follows the Impact Pathways Assessment method: characterizing the source, applying Exposure-Response Functions (ERF), estimating impacts and damage costs, and assigning monetary unit costs. The calculations show that when the concentration of SO2 emitted by the CPP reaches 19.3 μg/m³, the damage cost becomes positive, indicating that the land around the CPP loses fertility, to the detriment of agricultural crops. For building materials, SO2 also produces a damage cost: as humidity increases, the damage cost to building materials increases, but as the SO2 concentration itself increases, the marginal damage cost to building materials decreases. These results could be extended with the external cost of the health impacts of CPPs. External cost assessments have so far been performed in developed countries; if performed in Indonesia, fossil-fuel generation costs would become more expensive, with implications for greenhouse gas reduction commitments. On the other side, renewable energy, as well as alternative energy such as nuclear, would gain opportunity in the national energy mix. (author)

  4. Calculating Confidence, Uncertainty, and Numbers of Samples When Using Statistical Sampling Approaches to Characterize and Clear Contaminated Areas

    Energy Technology Data Exchange (ETDEWEB)

    Piepel, Gregory F.; Matzke, Brett D.; Sego, Landon H.; Amidan, Brett G.

    2013-04-27

    This report discusses the methodology, formulas, and inputs needed to make characterization and clearance decisions for Bacillus anthracis-contaminated and uncontaminated (or decontaminated) areas using a statistical sampling approach. Specifically, the report includes the methods and formulas for calculating (1) the number of samples required to achieve a specified confidence in characterization and clearance decisions, and (2) the confidence in making characterization and clearance decisions for a specified number of samples, for two common statistically based environmental sampling approaches. In particular, the report addresses an issue raised by the Government Accountability Office by providing methods and formulas to calculate the confidence that a decision area is uncontaminated (or successfully decontaminated) if all samples collected according to a statistical sampling approach have negative results. Key to addressing this topic is the probability that an individual sample result is a false negative, which is commonly referred to as the false negative rate (FNR). The two statistical sampling approaches currently discussed in this report are 1) hotspot sampling to detect small isolated contaminated locations during the characterization phase, and 2) combined judgment and random (CJR) sampling during the clearance phase. Typically, if contamination is widely distributed in a decision area, it will be detectable via judgment sampling during the characterization phase. Hotspot sampling is appropriate for characterization situations where contamination is not widely distributed and may not be detected by judgment sampling. CJR sampling is appropriate during the clearance phase when it is desired to augment judgment samples with statistical (random) samples. The hotspot and CJR statistical sampling approaches are discussed in the report for four situations: 1. qualitative data (detect and non-detect) when the FNR = 0 or when using statistical sampling methods that account
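    A widely used acceptance-sampling formula (a generic result, not necessarily the report's exact hotspot or CJR formula) shows how the FNR enters such calculations: with n all-negative samples, one is 100·C% confident that less than a fraction p of the decision area is contaminated when (1 − p(1 − FNR))^n ≤ 1 − C:

```python
import math

def num_samples(confidence, p_hot, fnr=0.0):
    """Number of samples, all with negative results, needed to conclude with
    the given confidence that less than a fraction p_hot of the area is
    contaminated. Each sample detects contamination it hits with
    probability (1 - fnr)."""
    p_detect = p_hot * (1.0 - fnr)
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - p_detect))

# 95% confidence that <1% of the area is contaminated, perfect assay
print(num_samples(0.95, 0.01))        # 299
# The same target with a 10% false negative rate requires more samples
print(num_samples(0.95, 0.01, 0.10))  # 332
```

    The jump from 299 to 332 samples shows quantitatively why the FNR cannot be ignored when computing clearance confidence.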

  5. EMPIRE-II statistical model code for nuclear reaction calculations

    Energy Technology Data Exchange (ETDEWEB)

    Herman, M [International Atomic Energy Agency, Vienna (Austria)

    2001-12-15

    EMPIRE II is a nuclear reaction code, comprising various nuclear models, and designed for calculations over a broad range of energies and incident particles. The projectile can be any nucleon or Heavy Ion. The energy range starts just above the resonance region, in the case of a neutron projectile, and extends up to a few hundred MeV for Heavy Ion induced reactions. The code accounts for the major nuclear reaction mechanisms, such as the optical model (SCATB), Multistep Direct (ORION + TRISTAN), NVWY Multistep Compound, and the full featured Hauser-Feshbach model. Heavy Ion fusion cross sections can be calculated within the simplified coupled channels approach (CCFUS). A comprehensive library of input parameters covers nuclear masses, optical model parameters, ground state deformations, discrete levels and decay schemes, level densities, fission barriers (BARFIT), moments of inertia (MOMFIT), and {gamma}-ray strength functions. Effects of the dynamic deformation of a fast rotating nucleus can be taken into account in the calculations. The results can be converted into the ENDF-VI format using the accompanying code EMPEND. The package contains the full EXFOR library of experimental data. Relevant EXFOR entries are automatically retrieved during the calculations. Plots comparing experimental results with the calculated ones can be produced using the X4TOC4 and PLOTC4 codes, linked to the rest of the system through bash-shell (UNIX) scripts. A graphical user interface written in Tcl/Tk is provided. (author)

  6. OFFSITE RADIOLOGICAL CONSEQUENCE CALCULATION FOR THE BOUNDING MIXING OF INCOMPATIBLE MATERIALS ACCIDENT

    International Nuclear Information System (INIS)

    SANDGREN, K.R.

    2006-01-01

    This document quantifies the offsite radiological consequence of the bounding mixing of incompatible materials accident for comparison with the 25 rem Evaluation Guideline established in Appendix A of DOE-STD-3009. The bounding accident is an inadvertent addition of acid to a waste tank. The calculated offsite dose does not challenge the Evaluation Guideline. Revision 4 updates the analysis to consider bulk chemical additions to single shell tanks (SSTs)

  7. Statistically rigorous calculations do not support common input and long-term synchronization of motor-unit firings

    Science.gov (United States)

    Kline, Joshua C.

    2014-01-01

    Over the past four decades, various methods have been implemented to measure synchronization of motor-unit firings. In this work, we provide evidence that prior reports of the existence of universal common inputs to all motoneurons and the presence of long-term synchronization are misleading, because they did not use sufficiently rigorous statistical tests to detect synchronization. We developed a statistically based method (SigMax) for computing synchronization and tested it with data from 17,736 motor-unit pairs containing 1,035,225 firing instances from the first dorsal interosseous and vastus lateralis muscles—a data set one order of magnitude greater than that reported in previous studies. Only firing data, obtained from surface electromyographic signal decomposition with >95% accuracy, were used in the study. The data were not subjectively selected in any manner. Because of the size of our data set and the statistical rigor inherent to SigMax, we have confidence that the synchronization values that we calculated provide an improved estimate of physiologically driven synchronization. Compared with three other commonly used techniques, ours revealed three types of discrepancies that result from failing to use statistical tests rigorous enough to detect synchronization. 1) On average, the z-score method falsely detected synchronization at 16 separate latencies in each motor-unit pair. 2) The cumulative sum method missed one out of every four synchronization identifications found by SigMax. 3) The common input assumption method identified synchronization from 100% of motor-unit pairs studied. SigMax revealed that only 50% of motor-unit pairs actually manifested synchronization. PMID:25210152
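    The multiple-comparison pitfall the authors address can be seen with elementary probability: testing many latencies against an uncorrected z-score threshold makes spurious detections nearly certain. A small illustration (the latency count and threshold are hypothetical, not figures from the paper):

```python
alpha = 0.05          # per-latency false positive rate at |z| > 1.96
n_latencies = 100     # hypothetical number of latencies tested per pair

# Probability that at least one latency spuriously exceeds the threshold,
# assuming the tests are independent under the no-synchronization hypothesis
p_any_false = 1 - (1 - alpha) ** n_latencies

# Expected number of spurious detections per motor-unit pair
expected_false = alpha * n_latencies

print(round(p_any_false, 3), round(expected_false, 1))  # 0.994 5.0
```

    With no correction, essentially every motor-unit pair will show "synchronization" at some latency, which is the pattern the paper attributes to insufficiently rigorous tests.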

  8. Neutron data error estimate of criticality calculations for lattice in shielding containers with metal fissionable materials

    International Nuclear Information System (INIS)

    Vasil'ev, A.P.; Krepkij, A.S.; Lukin, A.V.; Mikhal'kova, A.G.; Orlov, A.I.; Perezhogin, V.D.; Samojlova, L.Yu.; Sokolov, Yu.A.; Terekhin, V.A.; Chernukhin, Yu.I.

    1991-01-01

    Critical mass experiments were performed using assemblies that simulated a one-dimensional lattice of shielding containers with metal fissile materials. Criticality calculations for these assemblies were carried out using the KLAN program with the BAS neutron constants. Errors in the criticality calculations for one-, two-, and three-dimensional lattices are estimated. 3 refs.; 1 tab

  9. Statistics For Dummies

    CERN Document Server

    Rumsey, Deborah

    2011-01-01

    The fun and easy way to get down to business with statistics Stymied by statistics? No fear: this friendly guide offers clear, practical explanations of statistical ideas, techniques, formulas, and calculations, with lots of examples that show you how these concepts apply to your everyday life. Statistics For Dummies shows you how to interpret and critique graphs and charts, determine the odds with probability, guesstimate with confidence using confidence intervals, set up and carry out a hypothesis test, compute statistical formulas, and more.Tracks to a typical first semester statistics cou

  10. Augmented Automated Material Accounting Statistics System (AMASS)

    International Nuclear Information System (INIS)

    Lumb, R.F.; Messinger, M.; Tingey, F.H.

    1983-01-01

    This paper describes an extension of the AMASS methodology which was previously presented at the 1981 INMM annual meeting. The main thrust of the current effort is to develop procedures and a computer program for estimating the variance of an Inventory Difference when many sources of variability, other than measurement error, are admitted in the model. Procedures also are included for the estimation of the variances associated with measurement error estimates and their effect on the estimated limit of error of the inventory difference (LEID). The algorithm for the LEID measurement component uncertainty involves the propagated component measurement variance estimates as well as their associated degrees of freedom. The methodology and supporting computer software is referred to as the augmented Automated Material Accounting Statistics System (AMASS). Specifically, AMASS accommodates five source effects. These are: (1) measurement errors (2) known but unmeasured effects (3) measurement adjustment effects (4) unmeasured process hold-up effects (5) residual process variation A major result of this effort is a procedure for determining the effect of bias correction on LEID, properly taking into account all the covariances that exist. This paper briefly describes the basic models that are assumed, some of the estimation procedures consistent with the model, and the data requirements, emphasizing availability and other practical considerations; discusses implications for bias corrections; and concludes by briefly describing the supporting computer program
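    The central quantity, the limit of error of the inventory difference (LEID), is conventionally twice the propagated standard deviation of the inventory difference. A minimal sketch of the propagation step (a textbook measurement-error-only model with independent strata, far simpler than the five-source AMASS model; all numbers are illustrative):

```python
import math

# Illustrative strata (kg of nuclear material, absolute 1-sigma measurement
# uncertainty). A real system such as AMASS would separate random and
# systematic error components and carry their covariances.
beginning_inventory = (120.0, 0.8)
additions           = (240.0, 1.1)
removals            = (235.0, 1.0)
ending_inventory    = (121.0, 0.9)

# Inventory difference: ID = BI + A - R - EI
inv_diff = (beginning_inventory[0] + additions[0]
            - removals[0] - ending_inventory[0])

# Independent-error propagation: variances add regardless of sign
var_id = sum(s**2 for _, s in (beginning_inventory, additions,
                               removals, ending_inventory))
leid = 2.0 * math.sqrt(var_id)   # ~95% limit of error

print(round(inv_diff, 2), round(leid, 2))
```

    Here the ID of 4.0 kg lies outside the LEID of about 3.83 kg, so this toy balance would warrant investigation; the AMASS extension refines exactly this comparison by admitting non-measurement variance sources.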

  11. A theoretical and practical clarification on the calculation of reflection loss for microwave absorbing materials

    Directory of Open Access Journals (Sweden)

    Ying Liu

    2018-01-01

    Reflection loss is usually calculated and reported as a function of the thickness of a microwave absorption material. However, misleading results are often obtained, since the principles embedded in the popular methods contradict the fundamental facts that electromagnetic waves cannot be reflected within a uniform material except where there is an interface, and that there are important differences between the concepts of characteristic impedance and input impedance. In this paper, these inconsistencies have been analyzed theoretically and corrections provided. The problems with the calculations indicate a gap between the background knowledge of materials scientists and microwave engineers, and for that reason a concise review of transmission line theory is provided along with the mathematical background needed for a deeper understanding of the theory of reflection loss. The expressions of the gradient, divergence, Laplacian, and curl operators in a general orthogonal coordinate system have been presented, including the concept of reciprocal vectors. Gauss’s and Stokes’s theorems have been related to Green’s theorem in a novel way.
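    The commonly used metal-backed single-layer expression makes the distinction concrete: the layer's input impedance, not its characteristic impedance, determines reflection at the air interface. A sketch in standard form (the material parameters are illustrative, not taken from the paper):

```python
import cmath
import math

def reflection_loss_db(mu_r, eps_r, thickness_m, freq_hz):
    """Metal-backed single-layer absorber, impedances normalized to Z0 = 1.
    The input impedance z_in differs from the characteristic impedance
    sqrt(mu_r/eps_r) through the tanh term, which encodes the thickness."""
    c = 2.998e8                                   # speed of light, m/s
    z_char = cmath.sqrt(mu_r / eps_r)
    gamma_d = (2j * math.pi * freq_hz / c) * cmath.sqrt(mu_r * eps_r) * thickness_m
    z_in = z_char * cmath.tanh(gamma_d)
    return 20 * math.log10(abs((z_in - 1) / (z_in + 1)))

# Illustrative lossy layer at 10 GHz, 2 mm thick
rl = reflection_loss_db(mu_r=1.0 - 0.1j, eps_r=12.0 - 3.0j,
                        thickness_m=2e-3, freq_hz=10e9)
print(round(rl, 1))
```

    Replacing z_in with z_char in the last line is exactly the kind of conceptual shortcut the paper warns against: it discards the thickness dependence entirely.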

  12. A statistic sensitive to deviations from the zero-loss condition in a sequence of material balances

    International Nuclear Information System (INIS)

    Sellinschegg, D.

    1982-01-01

    The CUMUFR (cumulative sum of standardized MUF residuals) statistic is proposed for examining materials balance data for deviations from the zero-loss condition. The time series of MUF residuals is shown to be a linear transformation of the MUF time series. The MUF residuals can be obtained directly by applying the transformation, or they can be obtained, approximately, by applying a Kalman filter to estimate the true state of MUF. A modified sequential test with power one is formulated for testing the CUMUFR statistic. The detection capability of the proposed examination procedure is demonstrated by an example, based on Monte Carlo simulations, in which the materials balance of the chemical separation process in a reference reprocessing facility is considered. It is shown that abrupt as well as protracted loss patterns are detected with rather high probability when they occur after a zero-loss period
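    The flavor of the procedure can be conveyed by a toy version: standardize each balance period's MUF by its expected standard deviation, accumulate the sum, and flag when the renormalized sum drifts past a control limit. (This sketch omits the Kalman-filter transformation and the power-one sequential test of the paper; the series, sigma, and limit are all illustrative.)

```python
import math

def cusum_flags(muf_values, sigma, limit=3.0):
    """Cumulative sum of standardized MUF residuals. The k-th cumulative sum
    is renormalized by sqrt(k) so that it stays ~N(0,1) under the zero-loss
    hypothesis (independent residuals assumed)."""
    cum, flags = 0.0, []
    for k, muf in enumerate(muf_values, start=1):
        cum += muf / sigma
        flags.append(cum / math.sqrt(k) > limit)
    return flags

# Four zero-loss periods followed by a protracted loss of ~1.5*sigma/period
series = [0.1, -0.3, 0.2, 0.0, 1.5, 1.8, 1.4, 1.6, 1.5, 1.7]
print(cusum_flags(series, sigma=1.0, limit=2.0))
```

    Each individual loss period is unremarkable on its own, but the cumulative statistic crosses the limit from the eighth period on, which is how a protracted loss pattern becomes detectable.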

  13. A simple method to estimate the optimum iodine concentration of contrast material through microcatheters: hydrodynamic calculation with spreadsheet software

    International Nuclear Information System (INIS)

    Yamauchi, Teiyu; Hayashi, Toshihiko; Yamada, Takeshi; Futami, Choichiro; Tsukiyama, Yumiko; Harada, Motoko; Furui, Shigeru; Suzuki, Shigeru; Mimura, Kohshiro

    2008-01-01

    It is important to increase the iodine delivery rate (I), i.e., the iodine concentration of the contrast material (C) × the flow rate of the contrast material (Q), through microcatheters to obtain arteriograms of the highest contrast. It is known that C is an important factor that influences I. The purpose of this study is to establish a method of hydrodynamic calculation of the optimum iodine concentration (i.e., the iodine concentration at which I becomes maximum) of the contrast material and its flow rate through commercially available microcatheters. Iopamidol, ioversol and iohexol of ten iodine concentrations were used. Iodine delivery rates (I_meas) of each contrast material through ten microcatheters were measured. The calculated iodine delivery rate (I_cal) and calculated optimum iodine concentration (calculated C_opt) were obtained with spreadsheet software. The agreement between I_cal and I_meas was studied by correlation and logarithmic Bland-Altman analyses. The value of the calculated C_opt was within the optimum range of iodine concentrations (i.e., the range of iodine concentrations at which I_meas becomes 90% or more of the maximum) in all cases. A good correlation between I_cal and I_meas (I_cal = 1.08 I_meas, r = 0.99) was observed. Logarithmic Bland-Altman analysis showed that the 95% confidence interval of I_cal/I_meas was between 0.82 and 1.29. In conclusion, hydrodynamic calculation with spreadsheet software is an accurate, generally applicable and cost-saving method to estimate the value of the optimum iodine concentration and its flow rate through microcatheters
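    The hydrodynamic reasoning can be sketched directly: for a fixed injection pressure, Hagen-Poiseuille flow gives Q inversely proportional to viscosity, and since viscosity grows roughly exponentially with iodine concentration, I = C·Q has an interior maximum. (The catheter dimensions, pressure, and viscosity-concentration law below are illustrative assumptions, not the paper's data.)

```python
import math

# Hagen-Poiseuille laminar flow: Q = pi * r^4 * dP / (8 * eta * L)
def flow_rate(eta_pa_s, r_m=2.65e-4, length_m=1.5, dp_pa=8.0e5):
    return math.pi * r_m**4 * dp_pa / (8.0 * eta_pa_s * length_m)

# Assumed exponential viscosity-concentration law (illustrative fit only)
def viscosity(c_mgi_per_ml, eta0=1.0e-3, k=3.3e-3):
    return eta0 * math.exp(k * c_mgi_per_ml)

def iodine_delivery_rate(c):
    return c * flow_rate(viscosity(c))   # proportional to mgI/s

# Scan candidate concentrations; analytically the optimum is C_opt = 1/k
c_opt = max(range(100, 401, 10), key=iodine_delivery_rate)
print(c_opt, round(1.0 / 3.3e-3))  # 300 303
```

    The flat maximum around C_opt is why the paper reports an optimum *range* of concentrations rather than a single sharp value.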

  14. Calculation of releases of radioactive materials in gaseous and liquid effluents from boiling water reactors (BWR-GALE Code)

    International Nuclear Information System (INIS)

    Bangart, R.L.; Bell, L.G.; Boegli, J.S.; Burke, W.C.; Lee, J.Y.; Minns, J.L.; Stoddart, P.G.; Weller, R.A.; Collins, J.T.

    1978-12-01

    The calculational procedures described in the report reflect current NRC staff practice. The methods described will be used in the evaluation of applications for construction permits and operating licenses docketed after January 1, 1979, until this NUREG is revised as a result of additional staff review. The BWR-GALE (Boiling Water Reactor Gaseous and Liquid Effluents) Code is a computerized mathematical model for calculating the release of radioactive material in gaseous and liquid effluents from boiling water reactors (BWRs). The calculations are based on data generated from operating reactors, field tests, laboratory tests, and plant-specific design considerations incorporated to reduce the quantity of radioactive materials that may be released to the environment

  15. Statistical evaluation of low cycle loading curves parameters for structural materials by mechanical characteristics

    International Nuclear Information System (INIS)

    Daunys, Mykolas; Sniuolis, Raimondas

    2006-01-01

    About 300 welded joint materials used in the nuclear power industry were tested under monotonic tension and low cycle loading at Kaunas University of Technology together with the St. Petersburg Central Research Institute of Structural Materials in 1970-2000. The main mechanical, low cycle loading and fracture characteristics of the base metals, weld metals and some heat-affected zones of welded joints were determined during these experiments. Analytical dependences of low cycle fatigue parameters on the mechanical characteristics of structural materials were proposed on the basis of this large body of experimental data, obtained with the same methods and testing equipment. When these dependences are used, expensive low cycle fatigue tests may be omitted, and the low cycle loading curve parameters and lifetime of structural materials can be computed from the main mechanical characteristics given in technical manuals. Dependences of the low cycle loading curve parameters on the mechanical characteristics of several groups of structural materials used in the Russian nuclear power industry are obtained by statistical methods and proposed in this paper

  16. Calculation of the collision stopping power of simple and composed materials for fast electrons considering the density effect with the aid of effective material parameters

    International Nuclear Information System (INIS)

    Geske, G.

    1979-01-01

    With the aid of two effective material parameters, a simple expression of the Bethe-Bloch formula is presented for calculating the collision stopping power of materials for fast electrons. The formula has been modified to include the density effect. The derivation was accomplished in connection with a formalism given by Kim. It was shown that the material dependence of the collision stopping power is entirely captured by the density and the two effective material parameters. Thus a simple criterion is given for comparing materials with respect to their collision stopping power

  17. Calculation of shipboard fire conditions for radioactive materials packages with the methods of computational fluid dynamics

    International Nuclear Information System (INIS)

    Koski, J.A.; Wix, S.D.; Cole, J.K.

    1997-09-01

    Shipboard fires both in the same ship hold and in an adjacent hold aboard a break-bulk cargo ship are simulated with a commercial finite-volume computational fluid mechanics code. The fire models and modeling techniques are described and discussed. Temperatures and heat fluxes to a simulated materials package are calculated and compared to experimental values. The overall accuracy of the calculations is assessed

  18. Calculating the optical properties of defects and surfaces in wide band gap materials

    Science.gov (United States)

    Deák, Peter

    2018-04-01

    The optical properties of a material critically depend on its defects, and understanding that requires substantial and accurate input from theory. This paper describes recent developments in the electronic structure theory of defects in wide band gap materials, where the standard local or semi-local approximations of density functional theory fail. The success of the HSE06 screened hybrid functional is analyzed in the case of Group-IV semiconductors and TiO2, and it is shown to be the consequence of error compensation between semi-local and non-local exchange, resulting in a proper derivative discontinuity (reproduction of the band gap) and a total energy which is a linear function of the fractional occupation numbers (removing most of the electron self-interaction). This allows the calculation of electronic transitions with accuracy unseen before, as demonstrated on the single-photon emitter NV(-) center in diamond and on polaronic states in TiO2. Having a reliable tool for electronic structure calculations, theory can contribute to the understanding of complicated cases of light-matter interaction. Two examples are considered here: surface termination effects on the blinking and bleaching of the light-emission of the NV(-) center in diamond, and on the efficiency of photocatalytic water-splitting by TiO2. Finally, an outlook is presented for the application of hybrid functionals in other materials, as, e.g., ZnO, Ga2O3 or CuGaS2.

  19. Statistical aspects of forensic genetics

    DEFF Research Database (Denmark)

    Tvedebrink, Torben

    This PhD thesis deals with statistical models intended for forensic genetics, which is the part of forensic medicine concerned with analysis of DNA evidence from criminal cases together with calculation of alleged paternity and affinity in family reunification cases. The main focus of the thesis … is on crime cases as these differ from the other types of cases since the biological material often is used for person identification contrary to affinity. Common to all cases, however, is that the DNA is used as evidence in order to assess the probability of observing the biological material given different … of the DNA evidence under competing hypotheses the biological evidence may be used in the court’s deliberation and trial on equal footing with other evidence and expert statements. These probabilities are based on population genetic models whose assumptions must be validated. The thesis’s first two articles …

  20. Effect of the embolization material in the dose calculation for stereotactic radiosurgery of arteriovenous malformations

    International Nuclear Information System (INIS)

    Galván de la Cruz, Olga Olinca; Lárraga-Gutiérrez, José Manuel; Moreno-Jiménez, Sergio; García-Garduño, Olivia Amanda; Celis, Miguel Angel

    2013-01-01

    It is reported in the literature that the material used in an embolization of an arteriovenous malformation (AVM) can attenuate the radiation beams used in stereotactic radiosurgery (SRS) up to 10% to 15%. The purpose of this work is to assess the dosimetric impact of this attenuating material in the SRS treatment of embolized AVMs, using Monte Carlo simulations assuming clinical conditions. A commercial Monte Carlo dose calculation engine was used to recalculate the dose distribution of 20 AVMs previously planned with a pencil beam dose calculation algorithm. Dose distributions were compared using the following metrics: average, minimal and maximum dose of AVM, and 2D gamma index. The effect in the obliteration rate was investigated using radiobiological models. It was found that the dosimetric impact of the embolization material is less than 1.0 Gy in the prescription dose to the AVM for the 20 cases studied. The impact in the obliteration rate is less than 4.0%. There is reported evidence in the literature that embolized AVMs treated with SRS have low obliteration rates. This work shows that there are dosimetric implications that should be considered in the final treatment decisions for embolized AVMs
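    The 2D gamma index used to compare the dose distributions combines a dose-difference criterion with a distance-to-agreement criterion. A minimal 1D-profile version conveys the idea (3%/3 mm are the common criteria; the profiles below are synthetic, not patient data):

```python
import math

def gamma_1d(dose_ref, dose_eval, spacing_mm, dd_frac=0.03, dta_mm=3.0):
    """Global gamma index: for each evaluated point, the minimum over all
    reference points of sqrt((dx/DTA)^2 + (dDose/(dd*Dmax))^2).
    A point passes the comparison when its gamma value is <= 1."""
    d_max = max(dose_ref)
    gammas = []
    for i, de in enumerate(dose_eval):
        best = min(
            math.hypot((j - i) * spacing_mm / dta_mm,
                       (de - dr) / (dd_frac * d_max))
            for j, dr in enumerate(dose_ref))
        gammas.append(best)
    return gammas

ref = [0.2, 0.5, 1.0, 1.0, 0.5, 0.2]
ev  = [0.2, 0.52, 1.02, 0.99, 0.5, 0.2]   # slightly perturbed profile
gamma = gamma_1d(ref, ev, spacing_mm=1.0)
print(all(g <= 1.0 for g in gamma))  # True: the perturbation passes 3%/3mm
```

    In the study, recalculated (Monte Carlo) and planned (pencil beam) distributions are compared this way point by point over 2D planes, and the pass statistics summarize the dosimetric impact of the embolization material.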

  2. First-Principles Calculations of Electronic, Optical, and Transport Properties of Materials for Energy Applications

    Science.gov (United States)

    Shi, Guangsha

    Solar electricity is a reliable and environmentally friendly method of sustainable energy production and a realistic alternative to conventional fossil fuels. Moreover, thermoelectric energy conversion is a promising technology for solid-state refrigeration and efficient waste-heat recovery. Predicting and optimizing new photovoltaic and thermoelectric materials composed of Earth-abundant elements that exceed the current state of the art, and understanding how nanoscale structuring and ordering improves their energy conversion efficiency pose a challenge for materials scientists. I approach this challenge by developing and applying predictive high-performance computing methods to guide research and development of new materials for energy-conversion applications. Advances in computer-simulation algorithms and high-performance computing resources promise to speed up the development of new compounds with desirable properties and significantly shorten the time delay between the discovery of new materials and their commercial deployment. I present my calculated results on the extraordinary properties of nanostructured semiconductor materials, including strong visible-light absorbance in nanoporous silicon and few-layer SnSe and GeSe. These findings highlight the capability of nanoscale structuring and ordering to improve the performance of Earth-abundant materials compared to their bulk counterparts for solar-cell applications. I also successfully identified the dominant mechanisms contributing to free-carrier absorption in n-type silicon. My findings help evaluate the impact of the energy loss from this absorption mechanism in doped silicon and are thus important for the design of silicon solar cells. In addition, I calculated the thermoelectric transport properties of p-type SnSe, a bulk material with a record thermoelectric figure of merit. I predicted the optimal temperatures and free-carrier concentrations for thermoelectric energy conversion, as well the

  3. Methods of calculation and determination of density and moisture of inhomogeneous materials within capacity of limited dimensions

    International Nuclear Information System (INIS)

    Mukanov, D.M.

    1996-01-01

    Determining optimal sample sizes and judging how representative an assay is are of practical interest when calculating the physical characteristics of inhomogeneous materials by the neutron method. The notion of a calculation sphere is introduced in order to define the necessary dependences: it is a region bounded by a convex surface whose center coincides with the center of the primary measuring transducer. The dimensions of the calculation sphere are determined by the physics of the interaction of neutron radiation with the measured substance and by its nuclear-physical parameters. 3 figs.

  4. Beyond quantum microcanonical statistics

    International Nuclear Information System (INIS)

    Fresch, Barbara; Moro, Giorgio J.

    2011-01-01

    Descriptions of molecular systems usually refer to two distinct theoretical frameworks. On the one hand the quantum pure state, i.e., the wavefunction, of an isolated system is determined to calculate molecular properties and their time evolution according to the unitary Schroedinger equation. On the other hand a mixed state, i.e., a statistical density matrix, is the standard formalism to account for thermal equilibrium, as postulated in the microcanonical quantum statistics. In the present paper an alternative treatment relying on a statistical analysis of the possible wavefunctions of an isolated system is presented. In analogy with the classical ergodic theory, the time evolution of the wavefunction determines the probability distribution in the phase space pertaining to an isolated system. However, this alone cannot account for a well defined thermodynamical description of the system in the macroscopic limit, unless a suitable probability distribution for the quantum constants of motion is introduced. We present a workable formalism assuring the emergence of typical values of thermodynamic functions, such as the internal energy and the entropy, in the large size limit of the system. This allows the identification of macroscopic properties independently of the specific realization of the quantum state. A description of material systems in agreement with equilibrium thermodynamics is then derived without constraints on the physical constituents and interactions of the system. Furthermore, the canonical statistics is recovered in all generality for the reduced density matrix of a subsystem.

  5. Basic HIV/AIDS Statistics

    Science.gov (United States)

    ... HIV/AIDS Basic Statistics. Interested in learning more about CDC's HIV statistics? Terms, Definitions, and Calculations Used in CDC HIV ...

  6. Monte Carlo method for array criticality calculations

    International Nuclear Information System (INIS)

    Dickinson, D.; Whitesides, G.E.

    1976-01-01

    The Monte Carlo method for solving neutron transport problems consists of mathematically tracing paths of individual neutrons collision by collision until they are lost by absorption or leakage. The fate of the neutron after each collision is determined by the probability distribution functions that are formed from the neutron cross-section data. These distributions are sampled statistically to establish the successive steps in the neutron's path. The resulting data, accumulated from following a large number of batches, are analyzed to give estimates of k(eff) and other collision-related quantities. The use of electronic computers to produce the simulated neutron histories, initiated at Los Alamos Scientific Laboratory, made the use of the Monte Carlo method practical for many applications. In analog Monte Carlo simulation, the calculation follows the physical events of neutron scattering, absorption, and leakage. To increase calculational efficiency, modifications such as the use of statistical weights are introduced. The Monte Carlo method permits the use of a three-dimensional geometry description and a detailed cross-section representation. Some of the problems in using the method are the selection of the spatial distribution for the initial batch, the preparation of the geometry description for complex units, and the calculation of error estimates for region-dependent quantities such as fluxes. The Monte Carlo method is especially appropriate for criticality safety calculations since it permits an accurate representation of interacting units of fissile material. Dissimilar units, units of complex shape, moderators between units, and reflected arrays may be calculated. Monte Carlo results must be correlated with relevant experimental data, and caution must be used to ensure that a representative set of neutron histories is produced
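    The batch procedure described above can be condensed into a toy analog Monte Carlo estimate of k(eff) for an infinite homogeneous one-group medium (the cross sections and fission yield are fabricated for illustration; a real criticality code tracks geometry, energy, and leakage):

```python
import random

random.seed(1)

# One-group, infinite-medium data (illustrative). At each collision the
# neutron is absorbed with probability sig_a/sig_t; an absorption is a
# fission with probability sig_f/sig_a, releasing nu neutrons on average.
# Analytically, k_inf = nu * sig_f / sig_a.
sig_t, sig_a, sig_f, nu = 1.0, 0.4, 0.16, 2.5
k_analytic = nu * sig_f / sig_a   # = 1.0 for these numbers

def run_batch(n_source):
    """Follow n_source histories collision by collision; return the average
    number of fission neutrons produced per source neutron."""
    produced = 0.0
    for _ in range(n_source):
        while True:
            if random.random() < sig_a / sig_t:       # absorbed...
                if random.random() < sig_f / sig_a:   # ...by fission
                    produced += nu
                break
            # otherwise scattered: the history continues
    return produced / n_source

batches = [run_batch(10000) for _ in range(20)]
k_est = sum(batches) / len(batches)
print(round(k_analytic, 3), round(k_est, 3))
```

    Spreading the histories over batches is what makes an error estimate possible: the batch-to-batch scatter of the 20 values gives the standard deviation of k_est.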

  7. Magnetic materials at finite temperatures: thermodynamics and combined spin and molecular dynamics derived from first principles calculations

    International Nuclear Information System (INIS)

    Eisenbach, Markus; Perera, Meewanage Dilina N.; Landau, David P; Nicholson, Don M.; Yin, Junqi; Brown, Greg

    2015-01-01

    We present a unified approach to describing the combined behavior of the atomic and magnetic degrees of freedom in magnetic materials. Using Monte Carlo (MC) simulations directly combined with first-principles calculations, the Curie temperature can be obtained ab initio in good agreement with experimental values. Large-scale constrained first-principles calculations have been used to construct effective potentials for both the atomic and magnetic degrees of freedom that allow a unified study of the influence of phonon-magnon coupling on the thermodynamics and dynamics of magnetic systems. The MC calculations predict the specific heat of iron in near-perfect agreement with experimental results from 300 K to above Tc and allow identification of the importance of the magnon-phonon interaction at the phase transition. Further molecular dynamics and spin dynamics calculations elucidate the dynamics of this coupling and open the potential for quantitative and predictive descriptions of dynamic structure factors in magnetic materials using first-principles-derived simulations.

  8. Evaluation of radiation shielding performance in sea transport of radioactive material by using simple calculation method

    International Nuclear Information System (INIS)

    Odano, N.; Ohnishi, S.; Sawamura, H.; Tanaka, Y.; Nishimura, K.

    2004-01-01

    A modified code system based on the point kernel method was developed for use in evaluating the shielding performance of maritime transport of radioactive material. To evaluate shielding performance accurately in the case of an accident, it is necessary to precisely model the structure of the transport casks and the shipping vessel, and the source term. To achieve accurate modelling of the geometry and the source term, we aimed to build the code system on the same structural and source-term information used by the Monte Carlo calculation code MCNP. The code system was therefore developed by adding a point kernel option to the existing Monte Carlo code MCNP4C. To verify the developed code system, dose rate distributions in a dedicated shipping vessel for transporting low-level radioactive wastes were calculated with the developed code, and the results were compared with measurements and Monte Carlo calculations. It was confirmed that the developed simple calculation method obtains results very quickly, with sufficient accuracy compared with the Monte Carlo calculation code MCNP4C
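The point kernel idea the code system builds on can be illustrated directly: the uncollided flux from an isotropic point source, attenuated through the shield and corrected by a buildup factor. The linear buildup model and all numerical values below are hypothetical placeholders, not the modified MCNP4C option the authors describe.

```python
import math

# Point kernel estimate of the photon flux at distance r from an isotropic
# point source behind a shield: uncollided attenuation times a buildup factor.
# The linear buildup model and all values are illustrative placeholders.
def point_kernel_flux(source_strength, mu, r, buildup_coeff=1.0):
    """Photon flux (photons/cm^2/s) at r cm through a medium with
    attenuation coefficient mu (1/cm)."""
    mfp = mu * r                                  # shield thickness in mean free paths
    buildup = 1.0 + buildup_coeff * mfp           # crude linear buildup factor
    return source_strength * buildup * math.exp(-mfp) / (4.0 * math.pi * r * r)

flux = point_kernel_flux(source_strength=1.0e9, mu=0.06, r=100.0)
```

Because each detector point is a closed-form sum over source points, such kernels run orders of magnitude faster than a full Monte Carlo transport calculation, which is the speed/accuracy trade the abstract reports.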

  9. Calculations on displacement damage and its related parameters for heavy ion bombardment in reactor materials

    International Nuclear Information System (INIS)

    Sone, Kazuho; Shiraishi, Kensuke

    1975-04-01

    The depth distribution of displacement damage expressed in displacements per atom (DPA) in reactor materials such as Mo, Nb, V, Fe and Ni bombarded by energetic nitrogen, argon and self ions with incident energy below 2 MeV was calculated following the theory developed by Lindhard and co-workers for the partition of energy as an energetic ion slows down. In this calculation, energy loss due to electron excitation was taken into account for the atomic collision cascade after the primary knock-on process. Some parameters indispensable for the calculation, such as energy loss rate, damage efficiency, projected range and its straggling, were tabulated as a function of incident ion energy from 20 keV to 2 MeV. The damage and parameters were also calculated for 2 MeV nickel ions bombarding Fe targets. In this case, the DPA value is overestimated by 40-75% in a calculation that disregards electronic energy loss for primary knock-on atoms. The formula proposed in this report is useful for calculations of displacement damage produced by heavy ion bombardment as a simulation of high-fluence fast-neutron damage. (auth.)
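The displacement count that such DPA calculations reduce to, once the Lindhard partition has removed the electronic-excitation losses, is compactly expressed by the standard NRT-style formula. The threshold E_d = 40 eV below is a typical value for iron, used here purely for illustration.

```python
# NRT (Norgett-Robinson-Torrens) displacement estimate per primary knock-on.
# The input is the *damage* energy, i.e. the recoil energy left after the
# Lindhard partition removes electronic-excitation losses; E_d = 40 eV is a
# typical (illustrative) displacement threshold for iron.
def nrt_displacements(damage_energy_eV, E_d=40.0):
    if damage_energy_eV < E_d:
        return 0.0                       # below threshold: no stable displacement
    if damage_energy_eV < 2.0 * E_d / 0.8:
        return 1.0                       # single Frenkel pair
    return 0.8 * damage_energy_eV / (2.0 * E_d)

n_d = nrt_displacements(10_000.0)        # a 10 keV damage-energy cascade
```

Feeding the *total* recoil energy into this formula instead of the damage energy is precisely the overestimate (40-75% for 2 MeV Ni on Fe) that the abstract warns against.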

  10. Calculations on displacement damage and its related parameters for heavy ion bombardment in reactor materials

    Energy Technology Data Exchange (ETDEWEB)

    Sone, K; Shiraishi, K

    1975-04-01

    The depth distribution of displacement damage expressed in displacements per atom (DPA) in reactor materials such as Mo, Nb, V, Fe and Ni bombarded by energetic nitrogen, argon and self ions with incident energy below 2 MeV was calculated following the theory developed by Lindhard and co-workers for the partition of energy as an energetic ion slows down. In this calculation, energy loss due to electron excitation was taken into account for the atomic collision cascade after the primary knock-on process. Some parameters indispensable for the calculation, such as energy loss rate, damage efficiency, projected range and its straggling, were tabulated as a function of incident ion energy from 20 keV to 2 MeV. The damage and parameters were also calculated for 2 MeV nickel ions bombarding Fe targets. In this case, the DPA value is overestimated by 40-75% in a calculation that disregards electronic energy loss for primary knock-on atoms. The formula proposed in this report is useful for calculations of displacement damage produced by heavy ion bombardment as a simulation of high-fluence fast-neutron damage.

  11. Compilation of streamflow statistics calculated from daily mean streamflow data collected during water years 1901–2015 for selected U.S. Geological Survey streamgages

    Science.gov (United States)

    Granato, Gregory E.; Ries, Kernell G.; Steeves, Peter A.

    2017-10-16

    Streamflow statistics are needed by decision makers for many planning, management, and design activities. The U.S. Geological Survey (USGS) StreamStats Web application provides convenient access to streamflow statistics for many streamgages by accessing the underlying StreamStatsDB database. In 2016, non-interpretive streamflow statistics were compiled for streamgages located throughout the Nation and stored in StreamStatsDB for use with StreamStats and other applications. Two previously published USGS computer programs that were designed to help calculate streamflow statistics were updated to better support StreamStats as part of this effort. These programs are named “GNWISQ” (Get National Water Information System Streamflow (Q) files), updated to version 1.1.1, and “QSTATS” (Streamflow (Q) Statistics), updated to version 1.1.2. Statistics for 20,438 streamgages that had 1 or more complete years of record during water years 1901 through 2015 were calculated from daily mean streamflow data; 19,415 of these streamgages were within the conterminous United States. About 89 percent of the 20,438 streamgages had 3 or more years of record, and about 65 percent had 10 or more years of record. Drainage areas of the 20,438 streamgages ranged from 0.01 to 1,144,500 square miles. The magnitude of annual average streamflow yields (streamflow per square mile) for these streamgages varied by almost six orders of magnitude, from 0.000029 to 34 cubic feet per second per square mile. About 64 percent of these streamgages did not have any zero-flow days during their available period of record. The 18,122 streamgages with 3 or more years of record were included in the StreamStatsDB compilation so they would be available via the StreamStats interface for user-selected streamgages. All the statistics are available in a USGS ScienceBase data release.
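The per-streamgage statistics in the compilation (annual mean flow, yield per square mile, zero-flow days) reduce to simple arithmetic over the daily mean series. The five synthetic values below stand in for a water year of record and are not USGS data or output of the GNWISQ/QSTATS programs.

```python
# Streamflow statistics from a daily mean series (synthetic stand-in data).
daily_mean_cfs = [120.0, 95.0, 200.0, 340.0, 150.0]   # daily mean flows (ft^3/s)
drainage_area_mi2 = 52.0                              # drainage area (mi^2)

annual_mean_cfs = sum(daily_mean_cfs) / len(daily_mean_cfs)
yield_cfs_per_mi2 = annual_mean_cfs / drainage_area_mi2   # streamflow per square mile
zero_flow_days = sum(1 for q in daily_mean_cfs if q == 0.0)
```

Dividing by drainage area is what makes yields comparable across basins spanning 0.01 to over a million square miles, which is why the compilation reports yields rather than raw flows.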

  12. Transient anisotropic magnetic field calculation

    International Nuclear Information System (INIS)

    Jesenik, Marko; Gorican, Viktor; Trlep, Mladen; Hamler, Anton; Stumberger, Bojan

    2006-01-01

    For anisotropic magnetic material, nonlinear magnetic characteristics of the material are described with magnetization curves for different magnetization directions. The paper presents transient finite element calculation of the magnetic field in the anisotropic magnetic material based on the measured magnetization curves for different magnetization directions. For the verification of the calculation method some results of the calculation are compared with the measurement

  13. Business statistics for dummies

    CERN Document Server

    Anderson, Alan

    2013-01-01

    Score higher in your business statistics course? Easy. Business statistics is a common course for business majors and MBA candidates. It examines common data sets and the proper way to use such information when conducting research and producing informational reports such as profit and loss statements, customer satisfaction surveys, and peer comparisons. Business Statistics For Dummies tracks to a typical business statistics course offered at the undergraduate and graduate levels and provides clear, practical explanations of business statistical ideas, techniques, formulas, and calculations, w

  14. Statistical analysis of fatigue crack propagation data of materials from ancient portuguese metallic bridges

    Directory of Open Access Journals (Sweden)

    J A F O. Correia

    2017-10-01

    In Portugal there are a number of old metallic riveted railway and highway bridges that were erected at the end of the 19th century and the beginning of the 20th century and are still in operation, requiring inspections and remediation measures to overcome fatigue damage. Residual fatigue life predictions should be based on actual fatigue data from bridge materials, which is scarce due to the material specificities. Fatigue crack propagation data of materials from representative Portuguese riveted bridges, namely the Pinhão and Luiz I road bridges, the Viana road/railway bridge, the Fão road bridge and the Trezói railway bridge, were considered in this study. The fatigue crack growth rates were correlated using the Paris law. Also, a statistical analysis of the pure mode I fatigue crack growth (FCG) data available for the materials from the ancient riveted metallic bridges is presented. Based on this analysis, design FCG curves are proposed and compared with the BS7910 standard proposal for the Paris region, which is an important fatigue regime concerning the application of Fracture Mechanics approaches to predict the remnant fatigue life of structural details

  15. Application of an efficient materials perturbation technique to Monte Carlo photon transport calculations in borehole logging

    International Nuclear Information System (INIS)

    Picton, D.J.; Harris, R.G.; Randle, K.; Weaver, D.R.

    1995-01-01

    This paper describes a simple, accurate and efficient technique for the calculation of materials perturbation effects in Monte Carlo photon transport calculations. It is particularly suited to the application for which it was developed, namely the modelling of a dual detector density tool as used in borehole logging. However, the method would be appropriate to any photon transport calculation in the energy range 0.1 to 2 MeV, in which the predominant processes are Compton scattering and photoelectric absorption. The method enables a single set of particle histories to provide results for an array of configurations in which material densities or compositions vary. It can calculate the effects of small perturbations very accurately, but is by no means restricted to such cases. For the borehole logging application described here the method has been found to be efficient for a moderate range of variation in the bulk density (of the order of ±30% from a reference value) or even larger changes to a limited portion of the system (e.g. a low density mudcake of the order of a few tens of mm in thickness). The effective speed enhancement over an equivalent set of individual calculations is in the region of an order of magnitude or more. Examples of calculations on a dual detector density tool are given. It is demonstrated that the method predicts, to a high degree of accuracy, the variation of detector count rates with formation density, and that good results are also obtained for the effects of mudcake layers. An interesting feature of the results is that relative count rates (the ratios of count rates obtained with different configurations) can usually be determined more accurately than the absolute values of the count rates. (orig.)
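One common way to get an array of material configurations out of a single set of histories, in the spirit of the technique described here (though not necessarily the authors' exact formulation), is correlated sampling: re-weight each stored track by the likelihood ratio of the perturbed cross-sections to the reference ones.

```python
import math

# Correlated-sampling re-weighting of a stored photon track for a perturbed
# attenuation coefficient (a generic sketch, not the authors' exact scheme).
def perturbed_weight(path_lengths_cm, n_collisions, mu_ref, mu_new):
    """Likelihood ratio of the same track under mu_new instead of mu_ref:
    one factor mu_new/mu_ref per sampled collision, times the change in
    exponential attenuation along the total path length."""
    total_path = sum(path_lengths_cm)
    return (mu_new / mu_ref) ** n_collisions * math.exp(-(mu_new - mu_ref) * total_path)

# A track scored with weight w under mu_ref contributes w * perturbed_weight(...)
# to the tally of the perturbed configuration, e.g. a +10% density change:
w = perturbed_weight([4.0, 6.0], n_collisions=2, mu_ref=0.10, mu_new=0.11)
```

Because the same histories feed every configuration, statistical noise largely cancels in *ratios* of tallies, consistent with the abstract's observation that relative count rates come out more accurately than absolute ones.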

  16. Official Statistics and Statistics Education: Bridging the Gap

    Directory of Open Access Journals (Sweden)

    Gal Iddo

    2017-03-01

    This article aims to challenge official statistics providers and statistics educators to ponder how to help non-specialist adult users of statistics develop those aspects of statistical literacy that pertain to official statistics. We first document the gap in the literature in terms of the conceptual basis and educational materials needed for such an undertaking. We then review skills and competencies that may help adults to make sense of statistical information in areas of importance to society. Based on this review, we identify six elements related to official statistics about which non-specialist adult users should possess knowledge in order to be considered literate in official statistics: (1) the system of official statistics and its work principles; (2) the nature of statistics about society; (3) indicators; (4) statistical techniques and big ideas; (5) research methods and data sources; and (6) awareness and skills for citizens’ access to statistical reports. Based on this ad hoc typology, we discuss directions that official statistics providers, in cooperation with statistics educators, could take in order to (1) advance the conceptualization of skills needed to understand official statistics, and (2) expand educational activities and services, specifically by developing a collaborative digital textbook and a modular online course, to improve public capacity for understanding of official statistics.

  17. Grey radiative transfer in binary statistical media with material temperature coupling: asymptotic limits

    International Nuclear Information System (INIS)

    Prinja, A.K.; Olson, G.L.

    2005-01-01

    Simplified models for the unconditional ensemble-averaged radiation intensity and material energy are developed for radiative transfer in binary statistical media. Asymptotic analysis is used to construct an effective transport model with homogenized opacities in two limits. In the first, the material properties are assumed to have low contrast on average, and is shown to correctly reproduce the well-known atomic mix model in both time-dependent and equilibrium situations. Our analysis successfully resolves an inconsistency previously noted in the literature with the application of the standard definition of the atomic mix limit to radiative transfer in participating random media. In the second limit considered, the materials are assumed to have highly contrasting opacities, yielding a reduced transport model with effective scattering. The existence of these limits requires the mean chunk sizes to be independent of the photon direction and this creates an ambiguity in the interpretation of the models when the underlying stochastic geometry is comprised of alternating one-dimensional slabs. A consistent one-dimensional setting is defined and the asymptotic models are numerically validated over a broad range of physical parameter values
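In the low-contrast limit the analysis recovers the atomic mix model, in which the ensemble-averaged opacity is simply the volume-fraction-weighted mean of the two materials' opacities. The values below are illustrative, not parameters from the study.

```python
# Atomic mix opacity for a binary statistical medium: the volume-fraction
# weighted mean of the two materials' opacities (illustrative values).
def atomic_mix_opacity(p1, sigma1, sigma2):
    """p1 is the volume fraction of material 1; the rest is material 2."""
    return p1 * sigma1 + (1.0 - p1) * sigma2

sigma_am = atomic_mix_opacity(p1=0.3, sigma1=5.0, sigma2=0.5)
```

The high-contrast limit does not reduce to such a simple average; there the analysis instead yields a transport model with an effective scattering term.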

  18. Pedagogical Utilization and Assessment of the Statistic Online Computational Resource in Introductory Probability and Statistics Courses.

    Science.gov (United States)

    Dinov, Ivo D; Sanchez, Juana; Christou, Nicolas

    2008-01-01

    Technology-based instruction represents a recent pedagogical paradigm that is rooted in the realization that new generations are much more comfortable with, and excited about, new technologies. The rapid technological advancement over the past decade has fueled an enormous demand for the integration of modern networking, informational and computational tools with classical pedagogical instruments. Consequently, teaching with technology typically involves utilizing a variety of IT and multimedia resources for online learning, course management, electronic course materials, and novel tools of communication, engagement, experimentation, critical thinking and assessment. The NSF-funded Statistics Online Computational Resource (SOCR) provides a number of interactive tools for enhancing instruction in various undergraduate and graduate courses in probability and statistics. These resources include online instructional materials, statistical calculators, interactive graphical user interfaces, computational and simulation applets, and tools for data analysis and visualization. The tools provided as part of SOCR include conceptual simulations and statistical computing interfaces, which are designed to bridge between the introductory and the more advanced computational and applied probability and statistics courses. In this manuscript, we describe our designs for utilizing SOCR technology in instruction in a recent study. In addition, we present results on the effectiveness of using SOCR tools at two different course-intensity levels on three outcome measures: exam scores, student satisfaction, and choice of technology to complete assignments. A learning-styles assessment was completed at baseline. We used three very different designs for three different undergraduate classes. Each course included a treatment group, using the SOCR resources, and a control group, using classical instruction techniques. Our findings include marginal effects of the SOCR treatment per individual

  19. Equilibrium statistical mechanics

    CERN Document Server

    Mayer, J E

    1968-01-01

    The International Encyclopedia of Physical Chemistry and Chemical Physics, Volume 1: Equilibrium Statistical Mechanics covers the fundamental principles and the development of theoretical aspects of equilibrium statistical mechanics. Statistical mechanics is the study of the connection between the macroscopic behavior of bulk matter and the microscopic properties of its constituent atoms and molecules. This book contains eight chapters, and begins with a presentation of the master equation used for the calculation of the fundamental thermodynamic functions. The succeeding chapters highlight t

  20. Calculation of DPA in the Reactor Internal Structural Materials of Nuclear Power Plant

    International Nuclear Information System (INIS)

    Kim, Yong Deong; Lee, Hwan Soo

    2014-01-01

    The embrittlement is mainly caused by atomic displacement damage due to irradiation with neutrons, especially fast neutrons. The integrity of the reactor internal structural materials, threatened by irradiation-induced displacement damage, has to be ensured over the reactor lifetime. Accurate modeling and prediction of the displacement damage is a first step toward evaluating the integrity of the reactor internal structural materials. Traditional approaches to analyzing displacement damage in materials have relied on models developed initially for simple metals: the Kinchin-Pease (K-P) model and its standard formulation by Norgett et al., often referred to as the 'NRT' model. An alternative and complementary strategy for calculating the displacement damage is to use the MCNP code, which uses detailed physics and continuous-energy cross-section data in its simulations. In this paper, we have evaluated the displacement damage of the reactor internal structural materials in Kori NPP unit 1 using detailed Monte Carlo modeling and compared the results with displacement damage predictions from the classical NRT model. The maximum value of the DPA rate occurred at the baffle side of the reactor internals, where the node has the maximum neutron flux

  1. A mathematical model and an approximate method for calculating the fracture characteristics of nonmetallic materials during laser cutting

    Energy Technology Data Exchange (ETDEWEB)

    Smorodin, F.K.; Druzhinin, G.V.

    1991-01-01

    A mathematical model is proposed which describes the fracture behavior of amorphous materials during laser cutting. The model, which is based on boundary layer equations, is reduced to ordinary differential equations with the corresponding boundary conditions. The reduced model is used to develop an approximate method for calculating the fracture characteristics of nonmetallic materials.

  2. First principles calculation of material properties of group IV elements and III-V compounds

    Science.gov (United States)

    Malone, Brad Dean

    This thesis presents first-principles calculations of the properties of group IV elements and group III-V compounds. It includes investigations into which structure a material is likely to form in and, given that structure, what its electronic, optical, and lattice dynamical properties are, as well as the properties of defects that might be introduced into the sample. The thesis is divided as follows: • Chapter 1 contains some of the conceptual foundations used in the present work. These involve the major approximations which allow us to approach the problem of systems with huge numbers of interacting electrons and atomic cores. • Then, in Chapter 2, we discuss one of the major limitations of the DFT formalism introduced in Chapter 1, namely its inability to predict the quasiparticle spectra of materials, and in particular the band gap of a semiconductor. We introduce a Green's function approach to the electron self-energy Sigma, known as the GW approximation, and use it to compute the quasiparticle band structures of a number of group IV and III-V semiconductors. • In Chapter 3 we present a first-principles study of a number of high-pressure metastable phases of Si with tetrahedral bonding. The phases studied include all experimentally determined phases that result from decompression from the metallic beta-Sn phase, specifically the BC8 (Si-III), hexagonal diamond (Si-IV), and R8 (Si-XII) phases. In addition to these, we also study the hypothetical ST12 structure found upon decompression from beta-Sn in germanium. • Our attention then turns to first-principles calculations of optical properties in Chapter 4. The Bethe-Salpeter equation is solved to obtain the optical spectrum including electron-hole interactions, and the calculated spectrum is compared with experimental data for other forms of silicon commonly used in photovoltaic devices, namely the cubic, polycrystalline, and amorphous forms. • In Chapter 5 we present

  3. GW Calculations of Materials on the Intel Xeon-Phi Architecture

    Science.gov (United States)

    Deslippe, Jack; da Jornada, Felipe H.; Vigil-Fowler, Derek; Biller, Ariel; Chelikowsky, James R.; Louie, Steven G.

    Intel Xeon-Phi processors are expected to power a large number of High-Performance Computing (HPC) systems around the United States and the world in the near future. We evaluate the ability of GW and prerequisite Density Functional Theory (DFT) calculations for materials to utilize the Xeon-Phi architecture. We describe the optimization process and the performance improvements achieved. We find that the GW method, like other higher-level many-body methods beyond standard local/semilocal approximations to Kohn-Sham DFT, is particularly well suited for many-core architectures due to the ability to exploit a large amount of parallelism over plane-waves, band-pairs and frequencies. Support provided by the SCIDAC program, Department of Energy, Office of Science, Advanced Scientific Computing Research and Basic Energy Sciences. Grant Numbers DE-SC0008877 (Austin) and DE-AC02-05CH11231 (LBNL).

  4. The DC electrical conductivity calculation purely from the dissipative component of the AC conductivity III. statistical ensemble inherent to state with DC current

    International Nuclear Information System (INIS)

    Milinski, N.; Milinski, E.

    2002-01-01

    Amorphous conductors such as liquid metals and alloys are the subject of the dc conductivity σ calculation here. The principal aim is to explore the impact on σ of the constitutive equation α* = 1, formulated and developed in the preceding papers. The nearly-free-electron (NFE) model has been applied. Alkali metals are assumed to fit this model well, and sodium the best. Consequently, the results on these metals have been assumed reliable and relevant for drawing conclusions. The conclusion we reach is: instead of the Fermi radius k/sub f/ proper to the statistical ensemble in the state of thermodynamic equilibrium, a new number k'/sub f/ needs to be introduced into the linear response formula when calculating σ and α*. This k'/sub f/ is the length of the corresponding axis of the ellipsoid describing the statistical ensemble in the state with dc current. In the traditional interpretation of the linear response formula (Kubo formula) this conversion has been overlooked. Parameters of the mentioned ellipsoids are determined in this paper for a number of liquid metals of valency 1, 2, 3 and 4, in addition to a selection of binary and ternary conducting alloys. It is up to experimental measurements to decide how real this concept of restructuring the statistical ensemble is. (Authors)

  5. International report to validate criticality safety calculations for fissile material transport

    International Nuclear Information System (INIS)

    Whitesides, G.E.

    1984-01-01

    During the past three years a Working Group established by the Organization for Economic Co-operation and Development's Nuclear Energy Agency (OECD-NEA) in Paris, France, has been studying the validity and applicability of a variety of criticality safety computer programs and their associated nuclear data for the computation of the neutron multiplication factor, k/sub eff/, for various transport packages used in the fuel cycle. The principal objective of this work has been to provide an internationally acceptable basis for the licensing authorities in a country to honor licensing approvals granted by other participating countries. Eleven countries participated in the initial study, which consisted of examining criticality safety calculations for packages designed for spent light water reactor fuel transport. This paper presents a summary of this study, which has been completed and reported in OECD-NEA Report No. CSNI-71. The basic goal of this study was to outline a satisfactory validation procedure for this particular application. First, a set of actual critical experiments was chosen which contained the various material and geometric properties present in typical LWR transport containers. Secondly, calculations were made by each of the methods in order to determine how accurately each method reproduced the experimental values. This successful effort in developing a benchmark procedure for validating criticality calculations for spent LWR transport packages, along with the successful intercomparison of a number of methods, should give licensing authorities increased confidence in the use of these methods in this area of application. 4 references, 2 figures

  6. Standard Practice for Calculation of Photometric Transmittance and Reflectance of Materials to Solar Radiation

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    1988-01-01

    1.1 This practice describes the calculation of luminous (photometric) transmittance and reflectance of materials from spectral radiant transmittance and reflectance data obtained from Test Method E 903. 1.2 Determination of luminous transmittance by this practice is preferred over measurement of photometric transmittance by methods using the sun as a source and a photometer as detector except for transmitting sheet materials that are inhomogeneous, patterned, or corrugated. 1.3 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.
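The weighted-ordinate form of the calculation this practice describes looks like the following: spectral transmittance weighted by the solar spectral irradiance and the photopic response of the eye. The three-wavelength grid and the weights are illustrative stand-ins, not the tabulated weighting factors of the standard.

```python
# Luminous (photometric) transmittance from spectral transmittance data:
# spectral values weighted by solar irradiance S and the photopic response V.
# The grid and weights below are illustrative, not the standard's tables.
def luminous_transmittance(tau, S, V):
    num = sum(t * s * v for t, s, v in zip(tau, S, V))
    den = sum(s * v for s, v in zip(S, V))
    return num / den

tau_v = luminous_transmittance(
    tau=[0.80, 0.85, 0.90],   # spectral transmittance at three wavelengths
    S=[1.2, 1.5, 1.3],        # relative solar spectral irradiance
    V=[0.1, 1.0, 0.6],        # photopic luminous efficiency
)
```

Reflectance is handled the same way, with spectral reflectance in place of transmittance; in practice the summation runs over the full visible-wavelength table rather than three points.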

  7. Statistical equilibrium calculations for silicon in early-type model stellar atmospheres

    International Nuclear Information System (INIS)

    Kamp, L.W.

    1976-02-01

    Line profiles of 36 multiplets of silicon (Si) II, III, and IV were computed for a grid of model atmospheres covering the range from 15,000 to 35,000 K in effective temperature and 2.5 to 4.5 in log (gravity). The computations involved simultaneous solution of the steady-state statistical equilibrium equations for the populations and of the equation of radiative transfer in the lines. The variables were linearized, and successive corrections were computed until a minimal accuracy of 1/1000 in the line intensities was reached. The common assumption of local thermodynamic equilibrium (LTE) was dropped. The model atmospheres used also were computed by non-LTE methods. Some effects that were incorporated into the calculations were the depression of the continuum by free electrons, hydrogen and ionized helium line blocking, and auto-ionization and dielectronic recombination, which later were found to be insignificant. Use of radiation damping and detailed electron (quadratic Stark) damping constants had small but significant effects on the strong resonance lines of Si III and IV. For weak and intermediate-strength lines, large differences with respect to LTE computations, the results of which are also presented, were found in line shapes and strengths. For the strong lines the differences are generally small, except for the models at the hot, low-gravity extreme of the range. These computations should be useful in the interpretation of the spectra of stars in the spectral range B0--B5, luminosity classes III, IV, and V

  8. High-Throughput Screening of Sulfide Thermoelectric Materials Using Electron Transport Calculations with OpenMX and BoltzTraP

    Science.gov (United States)

    Miyata, Masanobu; Ozaki, Taisuke; Takeuchi, Tsunehiro; Nishino, Shunsuke; Inukai, Manabu; Koyano, Mikio

    2018-06-01

    The electron transport properties of 809 sulfides have been investigated using density functional theory (DFT) calculations in the relaxation time approximation, and a material design rule established for high-performance sulfide thermoelectric (TE) materials. Benchmark electron transport calculations were performed for Cu12Sb4S13 and Cu26V2Ge6S32, revealing that the ratio of the scattering probability of electrons and phonons (κ_lat τ_el^-1) was constant at about 2 × 10^14 W K^-1 m^-1 s^-1. The calculated dependence of the theoretical dimensionless figure of merit ZT_DFT of the 809 sulfides on the thermopower S showed a maximum at 140 μV K^-1 to 170 μV K^-1. Under the assumption of a constant κ_lat τ_el^-1 of 2 × 10^14 W K^-1 m^-1 s^-1 and a constant group velocity v of electrons, a slope of the density of states of 8.6 states eV^-2 to 10 states eV^-2 is suitable for high-ZT sulfide TE materials. The Lorenz number L dependence of ZT_DFT for the 809 sulfides showed a maximum at L of approximately 2.45 × 10^-8 V^2 K^-2. This result demonstrates that the potential of high-ZT sulfide materials is highest when the electron thermal conductivity κ_el of the symmetric band is equal to that of the asymmetric band.
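The dimensionless figure of merit screened in this study combines the quantities named in the abstract as ZT = S²σT/κ, with the thermal conductivity κ split into lattice and electronic parts. The values below are illustrative, not results for any of the 809 sulfides.

```python
# Thermoelectric figure of merit ZT = S^2 * sigma * T / (kappa_lat + kappa_el).
# Illustrative values, not results for any of the 809 screened sulfides.
def figure_of_merit(S, sigma, T, kappa_lat, kappa_el):
    """S in V/K, sigma in S/m, T in K, kappa terms in W/(m K)."""
    return S * S * sigma * T / (kappa_lat + kappa_el)

zt = figure_of_merit(S=150e-6, sigma=5.0e4, T=600.0, kappa_lat=1.0, kappa_el=0.5)
```

The maximum of ZT at intermediate S reflects this trade-off: raising S (via a steeper density of states) usually lowers σ, so ZT peaks in the 140-170 μV K^-1 window the screening identifies.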

  9. Nuclear medicine statistics

    International Nuclear Information System (INIS)

    Martin, P.M.

    1977-01-01

    Numerical description of medical and biologic phenomena is proliferating. Laboratory studies on patients now yield measurements of at least a dozen indices, each with its own normal limits. Within nuclear medicine, numerical analysis as well as numerical measurement and the use of computers are becoming more common. While the digital computer has proved to be a valuable tool for measurement and analysis of imaging and radioimmunoassay data, it has created more work in that users now ask for more detailed calculations and for indices that measure the reliability of quantified observations. The following material is presented with the intention of providing a straightforward methodology to determine values for some useful parameters and to estimate the errors involved. The process used is that of asking relevant questions and then providing answers by illustrations. It is hoped that this will help the reader avoid an error of the third kind, that is, the error of statistical misrepresentation or inadvertent deception. This occurs most frequently in cases where the right answer is found to the wrong question. The purposes of this chapter are: (1) to provide some relevant statistical theory, using a terminology suitable for the nuclear medicine field; (2) to demonstrate the application of a number of statistical methods to the kinds of data commonly encountered in nuclear medicine; (3) to provide a framework to assist the experimenter in choosing the method and the questions most suitable for the experiment at hand; and (4) to present a simple approach for a quantitative quality control program for scintillation cameras and other radiation detectors

  10. Head First Statistics

    CERN Document Server

    Griffiths, Dawn

    2009-01-01

    Wouldn't it be great if there were a statistics book that made histograms, probability distributions, and chi square analysis more enjoyable than going to the dentist? Head First Statistics brings this typically dry subject to life, teaching you everything you want and need to know about statistics through engaging, interactive, and thought-provoking material, full of puzzles, stories, quizzes, visual aids, and real-world examples. Whether you're a student, a professional, or just curious about statistical analysis, Head First's brain-friendly formula helps you get a firm grasp of statistics.

  11. ``Phantom'' Modes in Ab Initio Tunneling Calculations: Implications for Theoretical Materials Optimization, Tunneling, and Transport

    Science.gov (United States)

    Barabash, Sergey V.; Pramanik, Dipankar

    2015-03-01

    Development of low-leakage dielectrics for the semiconductor industry, together with many other areas of academic and industrial research, increasingly relies upon ab initio tunneling and transport calculations. Complex band structure (CBS) is a powerful formalism to establish the nature of tunneling modes, providing both a deeper understanding and a guided optimization of materials, with practical applications ranging from screening candidate dielectrics for lowest ``ultimate leakage'' to identifying charge-neutrality levels and Fermi level pinning. We demonstrate that CBS is prone to a particular type of spurious ``phantom'' solution, previously deemed true but irrelevant because of a very fast decay. We demonstrate that (i) in complex materials, phantom modes may exhibit very slow decay (appearing as leading tunneling terms implying qualitative and huge quantitative errors), (ii) the phantom modes are spurious, (iii) unlike the pseudopotential ``ghost'' states, phantoms are an apparently unavoidable artifact of large numerical basis sets, (iv) a presumed increase in computational accuracy increases the number of phantoms, effectively corrupting the CBS results despite the higher accuracy achieved in resolving the true CBS modes and the real band structure, and (v) the phantom modes cannot be easily separated from the true CBS modes. We discuss implications for direct transport calculations. The strategy for dealing with the phantom states is discussed in the context of optimizing high-quality high-κ dielectric materials for decreased tunneling leakage.

  12. A statistical method for predicting sound absorbing property of porous metal materials by using quartet structure generation set

    International Nuclear Information System (INIS)

    Guan, Dong; Wu, Jiu Hui; Jing, Li

    2015-01-01

    Highlights: • A random internal morphology and structure generation-growth method, termed the quartet structure generation set (QSGS), has been utilized, based on stochastic cluster growth theory, for numerically generating various microstructures of porous metal materials. • The effects of different parameters, such as thickness and porosity, on the sound absorption performance of the generated structures are studied by the present method, and the obtained results are validated against an empirical model as well. • This method can be utilized to guide the design and fabrication of sound-absorbing porous metal materials. - Abstract: In this paper, a statistical method for predicting the sound absorption properties of porous metal materials is presented. To reflect the stochastic distribution characteristics of porous metal materials, a random internal morphology and structure generation-growth method, termed the quartet structure generation set (QSGS), has been utilized, based on stochastic cluster growth theory, for numerically generating various microstructures of porous metal materials. Then, using the transfer-function approach along with the QSGS tool, we investigate the sound-absorbing performance of porous metal materials with complex stochastic geometries. The statistical method has been validated by the good agreement among the numerical results from this method for metal rubber, a previous empirical model, and the corresponding experimental data. Furthermore, the effects of different parameters, such as thickness and porosity, on the sound absorption performance of the generated structures are studied by the present method, and the obtained results are validated against an empirical model as well. Therefore, the present method is a reliable and robust method for predicting the sound absorption performance of porous metal materials, and can be utilized to guide the design and fabrication of sound-absorbing porous metal materials.
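The generation-growth idea can be sketched as follows, assuming the common two-phase QSGS variant: seed solid cells with a small probability, then repeatedly grow them into neighbouring cells until a target solid fraction is reached. The grid size, seed probability, and growth probability below are arbitrary choices, not the paper's parameters:

```python
# Sketch of a QSGS-style generation-growth process on a 2D periodic grid,
# assuming the common two-phase variant: seed solid cells with probability
# seed_prob, then grow into the four neighbours with probability grow_prob
# until a target solid fraction is reached. All parameters are illustrative.
import random

def qsgs(nx, ny, seed_prob, target_fraction, grow_prob=0.3, seed=0):
    rng = random.Random(seed)
    grid = [[rng.random() < seed_prob for _ in range(nx)] for _ in range(ny)]

    def fraction(g):
        return sum(map(sum, g)) / (nx * ny)

    while fraction(grid) < target_fraction:
        grown = [row[:] for row in grid]
        for y in range(ny):
            for x in range(nx):
                if grid[y][x]:
                    for dy, dx in ((0, 1), (0, -1), (1, 0), (-1, 0)):
                        if rng.random() < grow_prob:
                            grown[(y + dy) % ny][(x + dx) % nx] = True
        grid = grown
    return grid

g = qsgs(40, 40, seed_prob=0.05, target_fraction=0.30)
solid = sum(map(sum, g)) / (40 * 40)
print(round(solid, 2))  # at or just above the 0.30 target
```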

  13. The Risoe model for calculating the consequences of the release of radioactive material to the atmosphere

    International Nuclear Information System (INIS)

    Thykier-Nielsen, S.

    1980-07-01

    A brief description is given of the model used at Risoe for calculating the consequences of releases of radioactive material to the atmosphere. The model is based on the Gaussian plume model, and it provides possibilities for calculation of: doses to individuals, collective doses, contamination of the ground, probability distribution of doses, and the consequences of doses for given dose-risk relationships. The model is implemented as a computer program PLUCON2, written in ALGOL for the Burroughs B6700 computer at Risoe. A short description of PLUCON2 is given. (author)
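The Gaussian plume model underlying a code like PLUCON2 can be sketched for the simplest case: the ground-level, centre-line concentration downwind of a continuous elevated point release. The dispersion parameters σ_y and σ_z are passed in as plain numbers here; in a real code they come from stability-class correlations. All values are illustrative:

```python
# Sketch: ground-level centre-line Gaussian plume concentration,
# chi = Q / (pi * u * sigma_y * sigma_z) * exp(-H^2 / (2 * sigma_z^2)).
# sigma_y, sigma_z are given directly here; in practice they depend on
# downwind distance and atmospheric stability class. Numbers are made-up.
import math

def plume_concentration(q, u, sigma_y, sigma_z, stack_height):
    """q: source strength (Bq/s), u: wind speed (m/s), lengths in m."""
    return (q / (math.pi * u * sigma_y * sigma_z)
            * math.exp(-stack_height ** 2 / (2.0 * sigma_z ** 2)))

# 1e9 Bq/s release, 5 m/s wind, sigma_y = 100 m, sigma_z = 50 m, 50 m stack
print(plume_concentration(1e9, 5.0, 100.0, 50.0, 50.0))  # Bq/m^3
```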

  14. Semiconductor statistics

    CERN Document Server

    Blakemore, J S

    1962-01-01

    Semiconductor Statistics presents statistics aimed at complementing existing books on the relationships between carrier densities and transport effects. The book is divided into two parts. Part I provides introductory material on the electron theory of solids, and then discusses carrier statistics for semiconductors in thermal equilibrium. Of course a solid cannot be in true thermodynamic equilibrium if any electrical current is passed; but when currents are reasonably small the distribution function is but little perturbed, and the carrier distribution for such a "quasi-equilibrium" co…

  15. Second Language Experience Facilitates Statistical Learning of Novel Linguistic Materials.

    Science.gov (United States)

    Potter, Christine E; Wang, Tianlin; Saffran, Jenny R

    2017-04-01

    Recent research has begun to explore individual differences in statistical learning, and how those differences may be related to other cognitive abilities, particularly their effects on language learning. In this research, we explored a different type of relationship between language learning and statistical learning: the possibility that learning a new language may also influence statistical learning by changing the regularities to which learners are sensitive. We tested two groups of participants, Mandarin Learners and Naïve Controls, at two time points, 6 months apart. At each time point, participants performed two different statistical learning tasks: an artificial tonal language statistical learning task and a visual statistical learning task. Only the Mandarin-learning group showed significant improvement on the linguistic task, whereas both groups improved equally on the visual task. These results support the view that there are multiple influences on statistical learning. Domain-relevant experiences may affect the regularities that learners can discover when presented with novel stimuli. Copyright © 2016 Cognitive Science Society, Inc.

  16. Risks of transport of radioactive materials on the road; some exploring calculations performed with the INTERTRAN-model

    International Nuclear Information System (INIS)

    1987-04-01

    Under the auspices of the IAEA a computer code, named INTERTRAN, has been developed to calculate the risks of the transport of radioactive materials. This code requires further testing. For the Dutch situation a number of calculations have been performed for more or less realistic cases, in which four transport streams were investigated. Two transport routes were chosen. The risks thus obtained are compared quantitatively with the risks of LPG transports. 4 refs.; 9 figs

  17. Average wind statistics for SRP area meteorological towers

    International Nuclear Information System (INIS)

    Laurinat, J.E.

    1987-01-01

    A quality-assured set of average wind statistics for the seven SRP area meteorological towers has been calculated for the five-year period 1982-1986 at the request of DOE/SR. A similar set of statistics was previously compiled for the years 1975-1979. The updated wind statistics will replace the old statistics as the meteorological input for calculating atmospheric radionuclide doses from stack releases, and will be used in the annual environmental report. This report details the methods used to average the wind statistics and to screen out bad measurements, and presents wind roses generated by the averaged statistics
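Averaging wind measurements into a joint frequency distribution typically starts by binning directions into sixteen 22.5° sectors, which is what a wind rose displays. A minimal sketch with made-up observations (the tower data format here is an assumption):

```python
# Sketch: binning wind observations into the 16-sector frequency distribution
# behind a wind rose. Observations are (direction_deg, speed_m_s) tuples and
# are made-up; sector 0 is centred on north.
from collections import Counter

def sector(direction_deg):
    """Map a wind direction in degrees to one of 16 22.5-degree sectors (0 = N)."""
    return int((direction_deg + 11.25) // 22.5) % 16

observations = [(350.0, 2.1), (10.0, 3.4), (95.0, 5.0), (182.0, 1.2)]
freq = Counter(sector(d) for d, _ in observations)
print(freq)  # sector 0 (N) twice, sector 4 (E) once, sector 8 (S) once
```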

  18. A statistical calculation of the β- strength function

    International Nuclear Information System (INIS)

    Arvieu, R.; Haq, R.U.; Touchard, J.

    1976-01-01

    A microscopic calculation of the Gamow-Teller strength between the 0+ ground state of 208Pb and the 1+ particle-hole states of 208Bi, assuming the particle-hole matrix elements to be random numbers with some specified distribution, is described. Under certain conditions on the two-body matrix elements, a G.T. resonance occurs. The stability of this collective state, along with the accompanying low-energy β−-strength tail, is studied for various samples of p-h matrix elements [fr

  19. Statistical elements in calculation procedures for air quality control; Elementi di statistica nelle procedure di calcolo per il controllo della qualita' dell'aria

    Energy Technology Data Exchange (ETDEWEB)

    Mura, M.C. [Istituto Superiore di Sanita' , Laboratorio di Igiene Ambientale, Rome (Italy)

    2001-07-01

    The statistical processing of data resulting from the monitoring of chemical atmospheric pollution, aimed at air quality control, is presented in the form of procedural models, which offer a practical instrument to operators in the sector. The procedural models are modular and can be easily integrated with other models. They include elementary calculation procedures and mathematical methods for statistical analysis. The calculation elements have been developed by probabilistic induction so as to relate them to the statistical models, which are the basis of the methods used for the study and forecasting of atmospheric pollution. This report is part of the updating and training activity that the Istituto Superiore di Sanita' has been carrying on for over twenty years, addressed to operators in the environmental field.

  20. Applied statistics in ecology: common pitfalls and simple solutions

    Science.gov (United States)

    E. Ashley Steel; Maureen C. Kennedy; Patrick G. Cunningham; John S. Stanovick

    2013-01-01

    The most common statistical pitfalls in ecological research are those associated with data exploration, the logic of sampling and design, and the interpretation of statistical results. Although one can find published errors in calculations, the majority of statistical pitfalls result from incorrect logic or interpretation despite correct numerical calculations. There...

  1. Comment on ''Walker diffusion method for calculation of transport properties of composite materials''

    International Nuclear Information System (INIS)

    Kim, In Chan; Cule, Dinko; Torquato, Salvatore

    2000-01-01

    In a recent paper [C. DeW. Van Siclen, Phys. Rev. E 59, 2804 (1999)], a random-walk algorithm was proposed as the best method to calculate transport properties of composite materials. It was claimed that the method is applicable both to discrete and continuum systems. The limitations of the proposed algorithm are analyzed. We show that the algorithm does not capture the peculiarities of continuum systems (e.g., ``necks'' or ``choke points'') and we argue that it is the stochastic analog of the finite-difference method. (c) 2000 The American Physical Society

  2. Statistical Analysis Of Tank 19F Floor Sample Results

    International Nuclear Information System (INIS)

    Harris, S.

    2010-01-01

    Representative sampling has been completed for characterization of the residual material on the floor of Tank 19F as per the statistical sampling plan developed by Harris and Shine. Samples from eight locations have been obtained from the tank floor and two of the samples were archived as a contingency. Six samples, referred to in this report as the current scrape samples, have been submitted to and analyzed by SRNL. This report contains the statistical analysis of the floor sample analytical results to determine if further data are needed to reduce uncertainty. Included are comparisons with the prior Mantis samples results to determine if they can be pooled with the current scrape samples to estimate the upper 95% confidence limits (UCL95%) for concentration. Statistical analysis revealed that the Mantis and current scrape sample results are not compatible. Therefore, the Mantis sample results were not used to support the quantification of analytes in the residual material. Significant spatial variability among the current scrape sample results was not found. Constituent concentrations were similar between the North and South hemispheres as well as between the inner and outer regions of the tank floor. The current scrape sample results from all six samples fall within their 3-sigma limits. In view of the results from numerous statistical tests, the data were pooled from all six current scrape samples. As such, an adequate sample size was provided for quantification of the residual material on the floor of Tank 19F. The uncertainty is quantified in this report by an UCL95% on each analyte concentration. The uncertainty in analyte concentration was calculated as a function of the number of samples, the average, and the standard deviation of the analytical results. The UCL95% was based entirely on the six current scrape sample results (each averaged across three analytical determinations).
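The UCL95% described above, computed as mean + t(0.95, n-1)·s/√n, can be sketched for six sample results. The concentrations and the hard-coded t quantile for 5 degrees of freedom are illustrative; the report's actual inputs are not reproduced here:

```python
# Sketch: one-sided upper 95% confidence limit on a mean concentration,
# UCL95% = mean + t(0.95, n-1) * s / sqrt(n). The six results and the
# hard-coded t quantile (df = 5) are illustrative; scipy.stats.t.ppf would
# supply the quantile in general.
import math
import statistics

def ucl95(values, t_quantile):
    n = len(values)
    return statistics.mean(values) + t_quantile * statistics.stdev(values) / math.sqrt(n)

samples = [1.2, 1.5, 1.1, 1.4, 1.3, 1.6]  # illustrative analyte concentrations
T_095_DF5 = 2.015                         # one-sided 95% t quantile, 5 dof
print(round(ucl95(samples, T_095_DF5), 3))  # ~1.504
```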

  3. Costs for insurance of civil responsibility for nuclear damage during transportation of nuclear materials

    International Nuclear Information System (INIS)

    Amelina, M.E.; Arsent'ev, S.V.; Molchanov, A.S.

    2009-01-01

    The article considers a method for calculating rates for insurance of civil responsibility for nuclear damage during transportation of nuclear materials, which can minimize the insurer's costs for this type of insurance in situations where no statistics are available and the insurance rate cannot be calculated by the traditional means using probability theory

  4. Calculation of neutron cross sections on iron up to 40 MeV

    International Nuclear Information System (INIS)

    Arthur, E.D.; Young, P.G.

    1980-01-01

    The development of high-energy d + Li neutron sources for fusion materials radiation damage studies will require neutron cross sections up to 40 MeV. Experimental data above 15 MeV are generally sparse or nonexistent, and reliance must be placed upon nuclear-model calculations to produce the needed cross sections. To satisfy such requirements for the Fusion Materials Irradiation Test Facility (FMIT), neutron cross sections have been calculated for 54,56Fe between 3 and 40 MeV. These results were joined to the existing ENDF/B-V evaluation below 3 MeV. In this energy range, most neutron reactions can be described using the Hauser-Feshbach statistical model with corrections for preequilibrium and direct-reaction effects. To properly use these models to obtain realistic cross sections, emphasis must be placed upon the determination of suitable input parameters (optical model sets, gamma-ray strength functions, level densities) valid over the energy range of the calculation. To do this, several types of independent data were used to arrive at consistent parameter sets as described

  5. A simple method for calculation of the hydrogen diffusion in composite materials

    International Nuclear Information System (INIS)

    Paraschiv, M.C.; Paraschiv, A.; Grecu, V. V.

    2008-01-01

    A method is proposed for calculating the diffusion of various chemical species in composite materials whose components cannot be described as a function of the position coordinate at every point. The method can be applied only to systems in which a quasi-continuous presence of every component can be defined in every arbitrary region. Since the completely random distribution of the boundaries between the components influences the diffusion process, the continuity equation associated with the diffusion problem was extended to arbitrary volumes that keep the volume concentration of every component of the alloy the same as in the entire material volume. Its consistency with Fick's second law was also proved. To visualize the differences in hydrogen migration in a thermal gradient inside TRIGA fuels arising from increasing the uranium content from ∼10 wt.% U to ∼45 wt.% U in the TRIGA U-ZrHδ alloy, the method has been applied for the two concentrations of uranium. To this aim, the assumption that the rate-controlling parameter of hydrogen diffusion is the dissociation equilibrium pressure of hydrogen in zirconium hydride has been used. The results show significant differences in both the hydrogen distribution and the kinetics of hydrogen migration in a thermal gradient for the two cases analysed. (authors)
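Although the paper's composite-material method is more general, its starting point, Fick's second law ∂C/∂t = D ∂²C/∂x², can be sketched with a simple explicit finite-difference scheme for uniform D. All numbers are made-up; stability requires D·Δt/Δx² ≤ 1/2:

```python
# Sketch: explicit finite-difference solution of Fick's second law in 1D with
# uniform diffusivity D and fixed-concentration boundaries. Stability needs
# D * dt / dx**2 <= 0.5 (here it is 0.2). All numbers are made-up.

def diffuse(c, d_coef, dx, dt, steps):
    lam = d_coef * dt / dx ** 2
    for _ in range(steps):
        c = [c[0]] + [
            c[i] + lam * (c[i + 1] - 2 * c[i] + c[i - 1])
            for i in range(1, len(c) - 1)
        ] + [c[-1]]
    return c

profile = [1.0] + [0.0] * 9          # step profile; C fixed at 1 (left) and 0 (right)
out = diffuse(profile, d_coef=1e-9, dx=1e-4, dt=2.0, steps=500)
print([round(v, 3) for v in out])    # smooth monotone profile between 1 and 0
```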

  6. Thermodynamic Calculations of Ternary Polyalcohol and Amine Phase Diagrams for Thermal Energy Storage Materials

    Science.gov (United States)

    Shi, Renhai

    Organic polyalcohol and amine globular molecular crystal materials as phase change materials (PCMs), such as pentaglycerine (PG, (CH3)C(CH2OH)3), tris(hydroxymethyl)aminomethane (TRIS, (NH2)C(CH2OH)3), 2-amino-2-methyl-1,3-propanediol (AMPL, (NH2)(CH3)C(CH2OH)2), and neopentylglycol (NPG, (CH3)2C(CH2OH)2), can be considered potential candidates for thermal energy storage (TES) applications such as waste heat recovery, solar energy utilization, energy saving in buildings, and electronic device management during heating or cooling processes, in which latent heat and sensible heat can be reversibly stored or released through solid-state phase transitions over a range of temperatures. In order to understand the polymorphism of the phase transitions of these organic materials and provide more choices for materials design for TES, binary systems have been studied to lower the temperature of the solid-state phase transition for specific applications. To our best knowledge, the study of ternary systems of these organic materials is limited. Based on this motivation, four ternary systems, PG-TRIS-AMPL, PG-TRIS-NPG, PG-AMPL-NPG, and TRIS-AMPL-NPG, are proposed in this dissertation. First, thermodynamic assessment with the CALPHAD method is used to construct Gibbs energy functions in a thermodynamic database for these four systems, based on available experimental results from X-ray diffraction (XRD) and differential scanning calorimetry (DSC). The phase stability and thermodynamic characteristics calculated from the present thermodynamic database with the CALPHAD method match well the present experimental results from XRD and DSC. Second, the six related binary phase diagrams, PG-TRIS, PG-AMPL, PG-NPG, TRIS-AMPL, TRIS-NPG, and AMPL-NPG, are optimized with the CALPHAD method in the Thermo-Calc software based on available experimental results, in which the substitutional model is used and the excess Gibbs energy is expressed with the Redlich-Kister formalism.

  7. Influence of mild hyperglycemia on cerebral FDG distribution patterns calculated by statistical parametric mapping

    International Nuclear Information System (INIS)

    Kawasaki, Keiichi; Ishii, Kenji; Saito, Yoko; Oda, Keiichi; Kimura, Yuichi; Ishiwata, Kiichi

    2008-01-01

    In clinical cerebral 2-[18F]fluoro-2-deoxy-D-glucose positron emission tomography (FDG-PET) studies, we sometimes encounter hyperglycemic patients with diabetes mellitus or patients who have not adhered to the fasting requirement. The objective of this study was to investigate the influence of mild hyperglycemia (plasma glucose range 110-160 mg/dl) on the cerebral FDG distribution patterns calculated by statistical parametric mapping (SPM). We studied 19 healthy subjects (mean age 66.2 years). First, all the subjects underwent FDG-PET scans in the fasting condition. Then, 9 of the 19 subjects (mean age 64.3 years) underwent the second FDG-PET scans in the mild hyperglycemic condition. The alterations in the FDG-PET scans were investigated using SPM- and region of interest (ROI)-based analyses. We used three reference regions: SPM global brain (SPMgb) used for SPM global mean calculation, the gray and white matter region computed from magnetic resonance image (MRIgw), and the cerebellar cortex (Cbll). The FDG uptake calculated as the standardized uptake value (average) in SPMgb, MRIgw, and Cbll regions in the mild hyperglycemic condition was 42.7%, 41.3%, and 40.0%, respectively, of that observed in the fasting condition. In SPM analysis, the mild hyperglycemia was found to affect the cerebral distribution patterns of FDG. The FDG uptake was relatively decreased in the gray matter, mainly in the frontal, temporal, and parietal association cortices, posterior cingulate, and precuneus in both SPMgb- and MRIgw-reference-based analyses. When Cbll was adopted as the reference region, those decrease patterns disappeared. The FDG uptake was relatively increased in the white matter, mainly in the centrum semiovale in all the reference-based analyses. It is noteworthy that the FDG distribution patterns were altered under mild hyperglycemia in SPM analysis. 
The decreased uptake patterns in SPMgb- (SPM default) and MRIgw-reference-based analyses resembled those observed in

  8. Statistical mechanics

    CERN Document Server

    Davidson, Norman

    2003-01-01

    Clear and readable, this fine text assists students in achieving a grasp of the techniques and limitations of statistical mechanics. The treatment follows a logical progression from elementary to advanced theories, with careful attention to detail and mathematical development, and is sufficiently rigorous for introductory or intermediate graduate courses.Beginning with a study of the statistical mechanics of ideal gases and other systems of non-interacting particles, the text develops the theory in detail and applies it to the study of chemical equilibrium and the calculation of the thermody

  9. Moderating ratio parameter evaluation for different materials by means of Monte Carlo calculations and reactivity direct measurements

    International Nuclear Information System (INIS)

    Borio, A.; Cagnazzo, M.; Marchetti, F.; Pappalardo, P.; Salvini, A.

    2004-01-01

    The aim of this work is to determine the moderating properties of different materials (water, graphite, perfluoropolyethers), in particular the slowing-down power (SDP) and the moderating ratio (MR), defined as SDP = ξΣ_s and MR = ξΣ_s/Σ_a, where Σ_s and Σ_a represent the macroscopic scattering and absorption cross sections, respectively, and ξ is the average logarithmic energy loss per collision. The slowing-down power indicates how rapidly a neutron slows down in the material, but it does not fully capture the effectiveness of the material as a moderator: a material can slow down neutrons efficiently because of its large Σ_s and still be a poor moderator because it also absorbs neutrons with high probability. Thus, the most complete measure of the effectiveness of a moderator is the moderating ratio, which also takes absorption into account: the larger the moderating ratio, the more effectively the material performs as a moderator. The first part of the work consisted of a comparison between the SDP and MR evaluated for different materials by means of Monte Carlo simulations and by means of calculations based on their defining formulas (developed from knowledge of the material composition and of the microscopic cross sections σ_i taken from the literature). This comparison showed good agreement, with errors of less than 10%. The Monte Carlo code therefore appears to be a good support for the calculation of the moderating parameters, particularly useful when the materials are compounds of many elements. The second part of the work was dedicated to correlating the materials' MR values with the measured variation of reactivity induced by the insertion of the materials in the core of the TRIGA Mark II reactor of the University of Pavia. This is made possible by the definition of a new measurement parameter, named S, which depends on the total weight of the sample inserted in the reactor core
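The two defining formulas can be evaluated directly. The thermal-neutron values used below for light water (ξ, Σ_s, Σ_a) are textbook-style illustrative numbers, not the paper's measured results:

```python
# Sketch: slowing-down power and moderating ratio from their definitions,
# SDP = xi * Sigma_s and MR = xi * Sigma_s / Sigma_a. The thermal-neutron
# values for light water below are textbook-style illustrative numbers.

def moderating_parameters(xi, sigma_s, sigma_a):
    """xi: mean log energy loss per collision; Sigma_s, Sigma_a in 1/cm."""
    sdp = xi * sigma_s
    return sdp, sdp / sigma_a

sdp, mr = moderating_parameters(xi=0.92, sigma_s=1.49, sigma_a=0.022)
print(round(sdp, 3), round(mr, 1))  # light water: SDP ~1.37 1/cm, MR ~62
```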

  10. Assessment of natural radioactivity and radiological hazards in building materials used in the Tiruvannamalai District, Tamilnadu, India, using a statistical approach

    Directory of Open Access Journals (Sweden)

    Y. Raghu

    2017-07-01

    One hundred fifty-one samples of six types of building materials were collected from different locations in the Tiruvannamalai District, Tamilnadu, and were analyzed using a gamma-ray spectroscopy system. The highest specific activities of 226Ra, 232Th and 40K were 116.1 Bq kg−1 (soil), 106.67 Bq kg−1 (sand) and 527.533 Bq kg−1 (tiles), respectively, while the lowest values for the same radionuclides, 35.73, 37.75 and 159.83 Bq kg−1, were all observed for cement. The potential radiological hazards were assessed by calculating the radium equivalent activity (Raeq), the indoor absorbed gamma dose rate (DR), the annual effective dose rate (HR), the activity utilization index (I), the alpha index (Iα), the gamma index (Iγ), and the external hazard (Hex) and internal hazard (Hin) indices. The estimated mean absorbed dose rate of 148.35 nGy h−1 is slightly higher than the world average value of 84 nGy h−1, and the annual effective dose in the studied samples is 0.1824 mSv y−1, which is lower than the recommended limit. Multivariate statistical methods are applied to determine the relationships between radionuclides and radiological health hazard parameters and to identify the radionuclide contributing most to the radioactivity. The values of the hazard indices were below the recommended levels; therefore, it is concluded that buildings constructed from such materials are safe for the inhabitants. The findings from this research will be useful for assessing the radiation hazards of building materials to humans.
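Two of the hazard parameters named above have simple standard definitions: Raeq = C_Ra + 1.43·C_Th + 0.077·C_K and Hex = C_Ra/370 + C_Th/259 + C_K/4810 (activities in Bq/kg; Hex should stay below 1). The sketch below feeds in the maximum activities quoted above purely for illustration; the study computes the indices per material:

```python
# Sketch: two standard radiological hazard indices from specific activities
# (Bq/kg) of 226Ra, 232Th and 40K. The inputs reuse the maxima quoted above
# purely for illustration; the study evaluates each material separately.

def radium_equivalent(c_ra, c_th, c_k):
    return c_ra + 1.43 * c_th + 0.077 * c_k            # Raeq in Bq/kg

def external_hazard(c_ra, c_th, c_k):
    return c_ra / 370.0 + c_th / 259.0 + c_k / 4810.0  # Hex, should be < 1

ra_eq = radium_equivalent(116.1, 106.67, 527.533)
h_ex = external_hazard(116.1, 106.67, 527.533)
print(round(ra_eq, 1), round(h_ex, 3))  # ~309.3 Bq/kg and ~0.835
```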

  11. In vivo Comet assay--statistical analysis and power calculations of mice testicular cells.

    Science.gov (United States)

    Hansen, Merete Kjær; Sharma, Anoop Kumar; Dybdahl, Marianne; Boberg, Julie; Kulahci, Murat

    2014-11-01

    The in vivo Comet assay is a sensitive method for evaluating DNA damage. A recurrent concern is how to analyze the data appropriately and efficiently. A popular approach is to summarize the raw data into a summary statistic prior to the statistical analysis. However, consensus on which summary statistic to use has yet to be reached. Another important consideration concerns the assessment of proper sample sizes in the design of Comet assay studies. This study aims to identify a statistic suitably summarizing the % tail DNA of mice testicular samples in Comet assay studies. A second aim is to provide curves for this statistic outlining the number of animals and gels to use. The current study was based on 11 compounds administered via oral gavage in three doses to male mice: CAS no. 110-26-9, CAS no. 512-56-1, CAS no. 111873-33-7, CAS no. 79-94-7, CAS no. 115-96-8, CAS no. 598-55-0, CAS no. 636-97-5, CAS no. 85-28-9, CAS no. 13674-87-8, CAS no. 43100-38-5 and CAS no. 60965-26-6. Testicular cells were examined using the alkaline version of the Comet assay and the DNA damage was quantified as % tail DNA using a fully automatic scoring system. From the raw data 23 summary statistics were examined. A linear mixed-effects model was fitted to the summarized data and the estimated variance components were used to generate power curves as a function of sample size. The statistic that most appropriately summarized the within-sample distributions was the median of the log-transformed data, as it most consistently conformed to the assumptions of the statistical model. Power curves for 1.5-, 2-, and 2.5-fold changes of the highest dose group compared to the control group when 50 and 100 cells were scored per gel are provided to aid in the design of future Comet assay studies on testicular cells. Copyright © 2014 Elsevier B.V. All rights reserved.
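The recommended summary statistic, the median of the log-transformed % tail DNA per sample, is straightforward to compute. The +1 offset guarding against log(0) and the cell scores below are illustrative assumptions, not part of the study's protocol:

```python
# Sketch: summarizing one Comet assay sample as the median of log-transformed
# % tail DNA scores. The +1 offset (guarding against log(0)) and the scores
# are illustrative assumptions.
import math
import statistics

def summarize_sample(tail_dna_percent, offset=1.0):
    return statistics.median(math.log(x + offset) for x in tail_dna_percent)

cells = [0.0, 2.5, 5.0, 1.0, 12.0]   # % tail DNA for five scored cells
print(round(summarize_sample(cells), 3))  # median of the five log scores
```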

  12. About the use of approximations, which ensure materials mass balance conservation by spatial meshes, in Sn full core calculations

    International Nuclear Information System (INIS)

    Voloshchenko, A.M.; Russkov, A.A.; Gurevich, M.I.; Olejnik, D.S.

    2008-01-01

    The possibility of using geometry approximations that conserve the local mass balance of materials in every mesh, by introducing mixtures in meshes containing several feed materials, is analyzed for kinetic calculations of the reactor-core neutron fields. The 3D geometry of the reactor core is specified using the combinatorial geometry methods implemented in the MCI program, which solves the diffusion equations by the Monte Carlo method; the ray-tracing method is used to convert the combinatorial description of the geometry into a mesh representation. According to calculations of the WWER-1000 reactor core and simulations of the spent-fuel storage facility, the described procedure compares favorably with the conventional geometry approximations [ru

  13. Investigation of metal/carbon-related materials for fuel cell applications by electronic structure calculations

    Energy Technology Data Exchange (ETDEWEB)

    Kong, Ki-jeong (E-mail: kong@krict.re.kr); Choi, Youngmin; Ryu, Beyong-Hwan; Lee, Jeong-O; Chang, Hyunju [all: Korea Research Institute of Chemical Technology, P.O. Box 107, Yuseong, Daejeon 305-600 (Korea, Republic of)]

    2006-07-15

    The potential of metal catalysts supported on carbon-related materials, such as carbon nanotubes (CNTs) and graphite nanofibers (GNFs), as fuel cell electrodes was investigated using first-principles electronic structure calculations. The stable binding geometries and energies of the metal catalysts were determined on the CNT surface and at the GNF edge. The catalyst metal binds more tightly to the GNF edge than to the CNT surface because of the active dangling bonds of the edge carbon atoms. The diffusion barriers of metal atoms on the surface and at the edge were also obtained. From these results, we find that high dispersity is achievable on GNFs owing to the high barrier against metal-atom diffusion, whereas CNTs appear less suitable. A GNF with a large edge-to-wall ratio is thus better suited for a high-performance electrode than perfect crystalline graphite or CNTs.

  14. Investigation of metal/carbon-related materials for fuel cell applications by electronic structure calculations

    International Nuclear Information System (INIS)

    Kong, Ki-jeong; Choi, Youngmin; Ryu, Beyong-Hwan; Lee, Jeong-O; Chang, Hyunju

    2006-01-01

    The potential of metal catalysts supported on carbon-related materials, such as carbon nanotubes (CNTs) and graphite nanofibers (GNFs), as fuel cell electrodes was investigated using first-principles electronic structure calculations. The stable binding geometries and energies of the metal catalysts were determined on the CNT surface and at the GNF edge. The catalyst metal binds more tightly to the GNF edge than to the CNT surface because of the active dangling bonds of the edge carbon atoms. The diffusion barriers of metal atoms on the surface and at the edge were also obtained. From these results, we find that high dispersity is achievable on GNFs owing to the high barrier against metal-atom diffusion, whereas CNTs appear less suitable. A GNF with a large edge-to-wall ratio is thus better suited for a high-performance electrode than perfect crystalline graphite or CNTs.

  15. Innovations in Statistical Observations of Consumer Prices

    Directory of Open Access Journals (Sweden)

    Olga Stepanovna Oleynik

    2016-10-01

    Full Text Available This article analyzes the innovative changes in the methodology of statistical surveys of consumer prices. These changes are reflected in the “Official statistical methodology for the organization of statistical observation of consumer prices for goods and services and the calculation of the consumer price index”, approved by order no. 734 of the Federal State Statistics Service of December 30, 2014. The essence of the innovation is the use of mathematical methods in determining the range of trade and service objects to be studied and in calculating a sufficient number of observed price quotes, based on price dispersion, the share of the observed product or service representative in consumer spending, and an indicator of the complexity of price registration. The authors analyze the mathematical calculations of the required number of quotations for observation in the Volgograd region in 2016 and compare the results with the number of quotes actually included in the monitoring. The authors believe that the implementation of these mathematical models has substantially reduced the influence of subjective factors in the organization of consumer price monitoring, and has therefore increased the objectivity of the resulting statistics on consumer prices and inflation. At the same time, the proposed methodology needs further improvement with regard to goods and services that account for only a minor share of consumer expenditure.

  16. Explanation of the methods employed in the statistical evaluation of SALE program data

    International Nuclear Information System (INIS)

    Bracey, J.T.; Soriano, M.

    1981-01-01

    The analysis of Safeguards Analytical Laboratory Evaluation (SALE) bimonthly data is described. Statistical procedures are discussed in Section A, followed by the descriptions of tabular and graphic values in Section B. Calculation formulae for the various statistics in the reports are presented in Section C. SALE data reported to New Brunswick Laboratory (NBL) are entered into a computerized system through routine data processing procedures. Bimonthly and annual reports are generated from this data system. In the bimonthly data analysis, data from the six most recent reporting periods of each laboratory-material-analytical method combination are utilized. Analysis results in the bimonthly reports are only presented for those participants who have reported data at least once during the last 12-month period. Reported values are transformed to relative percent difference values calculated by [(reported value - reference value)/reference value] x 100. Analysis of data is performed on these transformed values. Accordingly, the results given in the bimonthly report are (relative) percent differences (% DIFF). Suspect, large variations are verified with individual participants to eliminate errors in the transcription process. Statistical extreme values are not excluded from bimonthly analysis; all data are used
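    The transformation applied to each reported value before analysis is stated explicitly above; a minimal sketch:

```python
def relative_percent_difference(reported, reference):
    """SALE transformation: [(reported - reference) / reference] x 100,
    giving the relative percent difference (% DIFF) analyzed in the reports."""
    return (reported - reference) / reference * 100.0

# A laboratory reporting 10.15 against a reference value of 10.00
# differs by +1.5%.
diff = relative_percent_difference(10.15, 10.00)
```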

  17. Statistical methods for quality assurance

    International Nuclear Information System (INIS)

    Rinne, H.; Mittag, H.J.

    1989-01-01

    This is the first German-language textbook on quality assurance and the fundamental statistical methods that is suitable for private study. The material for this book has been developed from a course of Hagen Open University and is characterized by a particularly careful didactical design which is achieved and supported by numerous illustrations and photographs, more than 100 exercises with complete problem solutions, many fully displayed calculation examples, surveys fostering a comprehensive approach, bibliography with comments. The textbook has an eye to practice and applications, and great care has been taken by the authors to avoid abstraction wherever appropriate, to explain the proper conditions of application of the testing methods described, and to give guidance for suitable interpretation of results. The testing methods explained also include latest developments and research results in order to foster their adoption in practice. (orig.) [de

  18. Cluster Statistics of BTW Automata

    International Nuclear Information System (INIS)

    Ajanta Bhowal Acharyya

    2011-01-01

    The cluster statistics of BTW automata in the SOC states are obtained by extensive computer simulation. Various moments of the clusters are calculated, and a few results are compared with earlier numerical estimates and exact results; reasonably good agreement is observed. An extended statistical analysis has been made. (author)

  19. The Statistics of Health and Longevity

    DEFF Research Database (Denmark)

    Zarulli, Virginia

    Increases in human longevity have made it critical to distinguish healthy longevity from longevity without regard to health. We present a new method for calculating the statistics of healthy longevity which extends, in several directions, current calculations of health expectancy (HE) and disability-adjusted life years (DALYs), from data on prevalence of health conditions. Current methods focus on binary conditions (e.g., disabled or not disabled) or on categorical classifications (e.g. in good, poor, or very bad health) and report only expectations. Our method, based on Markov chain theory...

  20. The development of mini project interactive media on junior statistical materials (developmental research in junior high school)

    Science.gov (United States)

    Fauziah, D.; Mardiyana; Saputro, D. R. S.

    2018-05-01

    Assessment is an integral part of the learning process, and the process and the result should be aligned with respect to measuring learners' abilities. Authentic assessment refers to a form of assessment that measures competence in attitudes, knowledge, and skills. In practice, many teachers, including mathematics teachers who have implemented curriculum-2013-based teaching, feel confused by and have difficulty mastering the use of authentic assessment instruments. Therefore, it is necessary to design an authentic assessment instrument with interactive mini project media that teachers can adopt in their assessment. This is developmental research following the 4D development model, which consists of four stages: define, design, develop and disseminate. The purpose of the research is to create valid interactive mini project media on statistical materials in junior high school. The instrument was judged valid by experts, with scores of 3.1 for the construction aspect, 3.2 for the presentation aspect, 3.25 for the contents aspect, and 2.9 for the didactic aspect. The research produced interactive mini project media on statistical materials built with Adobe Flash, which can help teachers and students achieve the learning objectives.

  1. Invert Effective Thermal Conductivity Calculation

    International Nuclear Information System (INIS)

    M.J. Anderson; H.M. Wade; T.L. Mitchell

    2000-01-01

    The objective of this calculation is to evaluate the temperature-dependent effective thermal conductivities of a repository-emplaced invert steel set and surrounding ballast material. The scope of this calculation analyzes a ballast-material thermal conductivity range of 0.10 to 0.70 W/m · K, a transverse beam spacing range of 0.75 to 1.50 meters, and beam compositions of A 516 carbon steel and plain carbon steel. Results from this calculation are intended to support calculations that identify waste package and repository thermal characteristics for Site Recommendation (SR). This calculation was developed by Waste Package Department (WPD) under Office of Civilian Radioactive Waste Management (OCRWM) procedure AP-3.12Q, Revision 1, ICN 0, Calculations

  2. Solenopsis ant magnetic material: statistical and seasonal studies

    International Nuclear Information System (INIS)

    Abraçado, Leida G; Esquivel, Darci M S; Wajnberg, Eliane

    2009-01-01

    In this paper, we quantify the magnetic material amount in Solenopsis ants using ferromagnetic resonance (FMR) at room temperature. We sampled S. interrupta workers from several morphologically indistinguishable castes. Twenty-five oriented samples of each body part of S. interrupta (20 units each) showed that FMR line shapes are reproducible. The relative magnetic material amount was 31 ± 12% (mean ± SD) in the antennae, 27 ± 13% in the head, 21 ± 12% in the thorax and 20 ± 10% in the abdomen. In order to measure variation in the magnetic material from late summer to early winter, ants were collected each month between March and July. The amount of magnetic material was greatest in all four body parts in March and least in all four body parts in June. In addition, S. richteri majors presented more magnetic material than minor workers. Extending these findings to the genus Solenopsis, the reduction in magnetic material found in winter could be explained by our sampling fewer foraging major ants

  3. Statistical nuclear reactions

    International Nuclear Information System (INIS)

    Hilaire, S.

    2001-01-01

    A review of the statistical model of nuclear reactions is presented. The main relations are described, together with the ingredients necessary to perform practical calculations. In addition, a substantial overview of the width fluctuation correction factor is given. (author)

  4. Statistical Pattern Recognition

    CERN Document Server

    Webb, Andrew R

    2011-01-01

    Statistical pattern recognition relates to the use of statistical techniques for analysing data measurements in order to extract information and make justified decisions.  It is a very active area of study and research, which has seen many advances in recent years. Applications such as data mining, web searching, multimedia data retrieval, face recognition, and cursive handwriting recognition, all require robust and efficient pattern recognition techniques. This third edition provides an introduction to statistical pattern theory and techniques, with material drawn from a wide range of fields,

  5. Statistics Using Just One Formula

    Science.gov (United States)

    Rosenthal, Jeffrey S.

    2018-01-01

    This article advocates that introductory statistics be taught by basing all calculations on a single simple margin-of-error formula and deriving all of the standard introductory statistical concepts (confidence intervals, significance tests, comparisons of means and proportions, etc) from that one formula. It is argued that this approach will…
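    The abstract does not state which single formula the article builds on; a widely taught candidate is the conservative 95% margin of error for a sample proportion, 1/√n, shown here purely as a plausible stand-in:

```python
import math

def margin_of_error(n):
    """Conservative 95% margin of error for a sample proportion, 1/sqrt(n).
    This is an assumption for illustration; the article's actual formula
    is not given in the abstract."""
    return 1.0 / math.sqrt(n)

# A poll of 400 people carries a margin of error of about +/- 5 percentage points.
moe = margin_of_error(400)
```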

  6. Using robust statistics to improve neutron activation analysis results

    International Nuclear Information System (INIS)

    Zahn, Guilherme S.; Genezini, Frederico A.; Ticianelli, Regina B.; Figueiredo, Ana Maria G.

    2011-01-01

    Neutron activation analysis (NAA) is an analytical technique where an unknown sample is submitted to a neutron flux in a nuclear reactor, and its elemental composition is calculated by measuring the induced activity produced. By using the relative NAA method, one or more well-characterized samples (usually certified reference materials - CRMs) are irradiated together with the unknown ones, and the concentration of each element is then calculated by comparing the areas of the gamma ray peaks related to that element. When two or more CRMs are used as reference, the concentration of each element can be determined by several different ways, either using more than one gamma ray peak for that element (when available), or using the results obtained in the comparison with each CRM. Therefore, determining the best estimate for the concentration of each element in the sample can be a delicate issue. In this work, samples from three CRMs were irradiated together and the elemental concentration in one of them was calculated using the other two as reference. Two sets of peaks were analyzed for each element: a smaller set containing only the literature-recommended gamma-ray peaks and a larger one containing all peaks related to that element that could be quantified in the gamma-ray spectra; the most recommended transition was also used as a benchmark. The resulting data for each element was then reduced using up to five different statistical approaches: the usual (and not robust) unweighted and weighted means, together with three robust means: the Limitation of Relative Statistical Weight, Normalized Residuals and Rajeval. The resulting concentration values were then compared to the certified value for each element, allowing for discussion on both the performance of each statistical tool and on the best choice of peaks for each element. (author)
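    Of the five estimators compared above, the two non-robust ones are standard; a minimal sketch of the unweighted and inverse-variance weighted means (the three robust procedures named, which additionally down-weight or re-inflate outlying results, are not reproduced here):

```python
def unweighted_mean(values):
    """Plain arithmetic mean, ignoring the reported uncertainties."""
    return sum(values) / len(values)

def weighted_mean(values, uncertainties):
    """Inverse-variance weighted mean and its standard uncertainty."""
    weights = [1.0 / u ** 2 for u in uncertainties]
    total = sum(weights)
    mean = sum(w * v for w, v in zip(weights, values)) / total
    return mean, total ** -0.5

# Four illustrative concentration results (ppm) for one element; the last,
# carrying a large uncertainty, barely moves the weighted mean.
mean, unc = weighted_mean([10.2, 10.4, 10.1, 12.0], [0.2, 0.2, 0.2, 1.0])
```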

  7. Investigation of thermodynamic and mechanical properties of AlyIn1-yP alloys by statistical moment method

    Science.gov (United States)

    Ha, Vu Thi Thanh; Hung, Vu Van; Hanh, Pham Thi Minh; Tuyen, Nguyen Viet; Hai, Tran Thi; Hieu, Ho Khac

    2018-03-01

    The thermodynamic and mechanical properties of III-V zinc-blende AlP and InP semiconductors and their alloys have been studied in detail with the statistical moment method, taking into account the anharmonicity of the lattice vibrations. The nearest-neighbor distance, thermal expansion coefficient, bulk moduli, and specific heats at constant volume and constant pressure of zinc-blende AlP, InP and AlyIn1-yP alloys are calculated as functions of temperature. The statistical moment method calculations are performed using the many-body Stillinger-Weber potential. The concentration dependence of the thermodynamic quantities of zinc-blende AlyIn1-yP crystals is also discussed and compared with experimental results. Our results are in reasonable agreement with earlier density functional theory calculations and can provide useful qualitative information for future experiments. The moment method can then be extended to study the atomistic structure and thermodynamic properties of nanoscale materials as well.

  8. Methods of statistical calculation of fast reactor core with account of influence of fuel assembly form change in process of campaign and other factors

    International Nuclear Information System (INIS)

    Sorokin, G.A.; Zhukov, A.V.; Bogoslovskaya, G.P.; Sorokin, A.P.

    2000-01-01

    A method is described for calculating the temperature field in a fast reactor core using the criterion of equal thermo-technical reliability of subassemblies in the various throttling zones, taking into account the changes in the thermohydraulic characteristics of the subassemblies during the campaign caused by changes in core form, redistribution of heat generation, and random deviations of various parameters. The statistical characteristics of the temperature field in the subassemblies are calculated with a subchannel method that accounts for interchannel exchange and for the effect of deformation on the temperature field, using the Monte Carlo method. The results show that deformation can significantly affect the temperature regime of the core. A thermohydraulic analysis of the core over the campaign should therefore be performed at the preliminary design stage of fast reactor projects. (author)

  9. Computed tomography assessment of the efficiency of different techniques for removal of root canal filling material

    International Nuclear Information System (INIS)

    Dall'agnol, Cristina; Barletta, Fernando Branco; Hartmann, Mateus Silveira Martins

    2008-01-01

    This study evaluated the efficiency of different techniques for removal of filling material from root canals, using computed tomography (CT). Sixty mesial roots from extracted human mandibular molars were used. Root canals were filled and, after 6 months, the teeth were randomly assigned to 3 groups, according to the root-filling removal technique: Group A - hand instrumentation with K-type files; Group B - reciprocating instrumentation with engine-driven K-type files; and Group C rotary instrumentation with engine-driven ProTaper system. CT scans were used to assess the volume of filling material inside the root canals before and after the removal procedure. In both moments, the area of filling material was outlined by an experienced radiologist and the volume of filling material was automatically calculated by the CT software program. Based on the volume of initial and residual filling material of each specimen, the percentage of filling material removed from the root canals by the different techniques was calculated. Data were analyzed statistically by ANOVA and chi-square test for linear trend (α=0.05). No statistically significant difference (p=0.36) was found among the groups regarding the percent means of removed filling material. The analysis of the association between the percentage of filling material removal (high or low) and the proposed techniques by chi-square test showed statistically significant difference (p=0.015), as most cases in group B (reciprocating technique) presented less than 50% of filling material removed (low percent removal). In conclusion, none of the techniques evaluated in this study was effective in providing complete removal of filling material from the root canals. (author)
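    The per-specimen quantity analyzed above is computed directly from the initial and residual CT volumes; a one-line sketch:

```python
def percent_removed(initial_volume, residual_volume):
    """Percentage of root-canal filling material removed, from CT volumes
    (same units, e.g. mm^3) measured before and after the procedure."""
    return (initial_volume - residual_volume) / initial_volume * 100.0

# A canal with 12.0 mm^3 of filling before and 3.0 mm^3 after: 75% removed.
removed = percent_removed(12.0, 3.0)
```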

  10. Computed tomography assessment of the efficiency of different techniques for removal of root canal filling material

    Energy Technology Data Exchange (ETDEWEB)

    Dall' agnol, Cristina; Barletta, Fernando Branco [Lutheran University of Brazil, Canoas, RS (Brazil). Dental School. Dept. of Dentistry and Endodontics]. E-mail: fbarletta@terra.com.br; Hartmann, Mateus Silveira Martins [Uninga Dental School, Passo Fundo, RS (Brazil). Postgraduate Program in Dentistry

    2008-07-01

    This study evaluated the efficiency of different techniques for removal of filling material from root canals, using computed tomography (CT). Sixty mesial roots from extracted human mandibular molars were used. Root canals were filled and, after 6 months, the teeth were randomly assigned to 3 groups, according to the root-filling removal technique: Group A - hand instrumentation with K-type files; Group B - reciprocating instrumentation with engine-driven K-type files; and Group C rotary instrumentation with engine-driven ProTaper system. CT scans were used to assess the volume of filling material inside the root canals before and after the removal procedure. In both moments, the area of filling material was outlined by an experienced radiologist and the volume of filling material was automatically calculated by the CT software program. Based on the volume of initial and residual filling material of each specimen, the percentage of filling material removed from the root canals by the different techniques was calculated. Data were analyzed statistically by ANOVA and chi-square test for linear trend ({alpha}=0.05). No statistically significant difference (p=0.36) was found among the groups regarding the percent means of removed filling material. The analysis of the association between the percentage of filling material removal (high or low) and the proposed techniques by chi-square test showed statistically significant difference (p=0.015), as most cases in group B (reciprocating technique) presented less than 50% of filling material removed (low percent removal). In conclusion, none of the techniques evaluated in this study was effective in providing complete removal of filling material from the root canals. (author)

  11. Calculated neutron-activation cross sections for Eₙ ≤ 100 MeV for a range of accelerator materials

    International Nuclear Information System (INIS)

    Bozoian, M.; Arthur, E.D.; Perry, R.T.; Wilson, W.B.; Young, P.G.

    1988-01-01

    Activation problems at particle accelerators are commonly dominated by reactions of secondary neutrons produced when beam particles interact with accelerator or beam-stop materials. Measured neutron-activation cross sections above a few MeV are sparse. Calculations with the GNASH code have been made for neutrons incident on all stable nuclides of a range of elements common in accelerator materials: B, C, N, O, Ne, Mg, Al, Si, P, S, Ar, K, Ca, Cr, Mn, Fe, Co, Ni, Cu, Zn, Zr, Mo, Nd, and Sm. Calculations were made on a grid of incident neutron energies extending to 100 MeV. Cross sections for the direct production of as many as 87 activation products for each of 84 target nuclides were tabulated on this energy grid, each beginning at the threshold for the product nuclide's formation. Multigroup values of these cross sections have been calculated and are being integrated into the cross-section library of the REAC-2 neutron activation code. Illustrative cross sections are presented. 20 refs., 6 figs., 1 tab

  12. Development of free statistical software enabling researchers to calculate confidence levels, clinical significance curves and risk-benefit contours

    International Nuclear Information System (INIS)

    Shakespeare, T.P.; Mukherjee, R.K.; Gebski, V.J.

    2003-01-01

    Confidence levels, clinical significance curves, and risk-benefit contours are tools that improve the analysis of clinical studies and minimize misinterpretation of published results; however, no software has been available for their calculation. The objective was to develop software to help clinicians utilize these tools. Excel 2000 spreadsheets were designed using only built-in functions, without macros. The workbook was protected and encrypted so that users can modify only input cells. The workbook has 4 spreadsheets for use in studies comparing two patient groups. Sheet 1 comprises instructions and graphic examples for use. Sheet 2 allows the user to input the main study results (e.g. survival rates) into a 2-by-2 table. Confidence intervals (95%), the p-value and the confidence level for Treatment A being better than Treatment B are automatically generated. An additional input cell allows the user to determine the confidence associated with a specified level of benefit. For example, if the user wishes to know the confidence that Treatment A is at least 10% better than B, 10% is entered. Sheet 2 automatically displays clinical significance curves, graphically illustrating confidence levels for all possible benefits of one treatment over the other. Sheet 3 allows input of toxicity data, and calculates the confidence that one treatment is more toxic than the other. It also determines the confidence that the relative toxicity of the most effective arm does not exceed user-defined tolerability. Sheet 4 automatically calculates risk-benefit contours, displaying the confidence associated with a specified scenario of minimum benefit and maximum risk of one treatment arm over the other. The spreadsheet is freely downloadable at www.ontumor.com/professional/statistics.htm. A simple, self-explanatory, freely available spreadsheet calculator was developed using Excel 2000. The incorporated decision-making tools can be used for data analysis and improve the reporting of results of any
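    The abstract does not spell out how the confidence levels are computed; a plausible sketch, assuming a one-sided normal approximation for the difference of two proportions from the 2-by-2 table (this is an assumption, not necessarily the spreadsheet's exact method):

```python
import math

def confidence_a_better(events_a, n_a, events_b, n_b, margin=0.0):
    """Approximate confidence that the true rate in arm A exceeds that in
    arm B by at least `margin` (e.g. margin=0.10 for 'at least 10% better').
    Uses a normal approximation for the difference of two proportions."""
    pa, pb = events_a / n_a, events_b / n_b
    se = math.sqrt(pa * (1 - pa) / n_a + pb * (1 - pb) / n_b)
    z = (pa - pb - margin) / se
    # Standard normal CDF expressed via the error function.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# 60/100 survivors on Treatment A vs 50/100 on Treatment B.
conf = confidence_a_better(60, 100, 50, 100)
```

Varying `margin` over a range of benefits traces out a clinical significance curve of the kind the spreadsheet plots.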

  13. Statistical analysis using the Bayesian nonparametric method for irradiation embrittlement of reactor pressure vessels

    Energy Technology Data Exchange (ETDEWEB)

    Takamizawa, Hisashi, E-mail: takamizawa.hisashi@jaea.go.jp; Itoh, Hiroto, E-mail: ito.hiroto@jaea.go.jp; Nishiyama, Yutaka, E-mail: nishiyama.yutaka93@jaea.go.jp

    2016-10-15

    In order to understand neutron irradiation embrittlement in high fluence regions, statistical analysis using the Bayesian nonparametric (BNP) method was performed for the Japanese surveillance and material test reactor irradiation database. The BNP method is essentially expressed as an infinite summation of normal distributions, with input data being subdivided into clusters with identical statistical parameters, such as mean and standard deviation, for each cluster to estimate shifts in ductile-to-brittle transition temperature (DBTT). The clusters typically depend on chemical compositions, irradiation conditions, and the irradiation embrittlement. Specific variables contributing to the irradiation embrittlement include the content of Cu, Ni, P, Si, and Mn in the pressure vessel steels, neutron flux, neutron fluence, and irradiation temperatures. It was found that the measured shifts of DBTT correlated well with the calculated ones. Data associated with the same materials were subdivided into the same clusters even if neutron fluences were increased.

  14. Tandem mass spectrometry of human tryptic blood peptides calculated by a statistical algorithm and captured by a relational database with exploration by a general statistical analysis system.

    Science.gov (United States)

    Bowden, Peter; Beavis, Ron; Marshall, John

    2009-11-02

    A goodness of fit test may be used to assign tandem mass spectra of peptides to amino acid sequences and to directly calculate the expected probability of mis-identification. The product of the peptide expectation values directly yields the probability that the parent protein has been mis-identified. A relational database could capture the mass spectral data, the best fit results, and permit subsequent calculations by a general statistical analysis system. The many files of the Hupo blood protein data correlated by X!TANDEM against the proteins of ENSEMBL were collected into a relational database. A redundant set of 247,077 proteins and peptides were correlated by X!TANDEM, and that was collapsed to a set of 34,956 peptides from 13,379 distinct proteins. About 6875 distinct proteins were only represented by a single distinct peptide, 2866 proteins showed 2 distinct peptides, and 3454 proteins showed at least three distinct peptides by X!TANDEM. More than 99% of the peptides were associated with proteins that had cumulative expectation values, i.e. probability of false positive identification, of one in one hundred or less. The distribution of peptides per protein from X!TANDEM was significantly different than those expected from random assignment of peptides.
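    The protein-level probability described above is simply the product of the individual peptide expectation values; a minimal sketch:

```python
def protein_misidentification_probability(peptide_expectations):
    """Product of the peptide expectation values, taken as the expected
    probability that the parent protein has been mis-identified."""
    p = 1.0
    for e in peptide_expectations:
        p *= e
    return p

# Three peptides with expectation values 0.01, 0.05 and 0.1 give a
# protein-level probability of about 5e-5.
prob = protein_misidentification_probability([0.01, 0.05, 0.1])
```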

  15. Proton-rich nuclear statistical equilibrium

    International Nuclear Information System (INIS)

    Seitenzahl, I.R.; Timmes, F.X.; Marin-Lafleche, A.; Brown, E.; Magkotsios, G.; Truran, J.

    2008-01-01

    Proton-rich material in a state of nuclear statistical equilibrium (NSE) is one of the least studied regimes of nucleosynthesis. One reason for this is that after hydrogen burning, stellar evolution proceeds at conditions of an equal number of neutrons and protons or at a slight degree of neutron-richness. Proton-rich nucleosynthesis in stars tends to occur only when hydrogen-rich material that accretes onto a white dwarf or a neutron star explodes, or when neutrino interactions in the winds from a nascent proto-neutron star or collapsar disk drive the matter proton-rich prior to or during the nucleosynthesis. In this Letter we solve the NSE equations for a range of proton-rich thermodynamic conditions. We show that cold proton-rich NSE is qualitatively different from neutron-rich NSE. Instead of being dominated by the Fe-peak nuclei with the largest binding energy per nucleon that have a proton-to-nucleon ratio close to the prescribed electron fraction, NSE for proton-rich material near freezeout temperature is mainly composed of 56Ni and free protons. Previous results of nuclear reaction network calculations rely on this nonintuitive high-proton abundance, which this Letter explains. We show how the differences and especially the large fraction of free protons arises from the minimization of the free energy as a result of a delicate competition between the entropy and nuclear binding energy.

  16. Statistical mechanical calculations of molecular pair correlation functions and scattering intensities

    International Nuclear Information System (INIS)

    Bertagnolli, H.

    1978-01-01

    For the case of special molecular models representing the acetonitrile molecule, the expansion coefficients of the molecular pair distribution function are calculated by use of perturbation theory. These results are used to obtain theoretical access to scattering intensities within several approximations. The first model describes the molecule by three hard spheres and uses a hard-sphere liquid as reference. In the second case the calculations are based on an anisotropic Lennard-Jones potential, applying a model of overlapping ellipsoids and using a Lennard-Jones liquid as the reference system. In the third model, dipolar attractive forces are taken into account with an anisotropic hard-sphere liquid as a reference. Finally, all the calculations with the different intermolecular potentials are compared with neutron scattering experiments. (orig.) [de

  17. Book Trade Research and Statistics. Prices of U.S. and Foreign Published Materials; Book Title Output and Average Prices: 2000 Final and 2001 Preliminary Figures; Book Sales Statistics, 2001: AAP Preliminary Estimates; U.S. Book Exports and Imports: 2001; Number of Book Outlets in the United States and Canada; Review Media Statistics.

    Science.gov (United States)

    Sullivan, Sharon G.; Barr, Catherine; Grabois, Andrew

    2002-01-01

    Includes six articles that report on prices of U.S. and foreign published materials; book title output and average prices; book sales statistics; book exports and imports; book outlets in the U.S. and Canada; and review media statistics. (LRW)

  18. Statistical analysis of magnetically soft particles in magnetorheological elastomers

    Science.gov (United States)

    Gundermann, T.; Cremer, P.; Löwen, H.; Menzel, A. M.; Odenbach, S.

    2017-04-01

    The physical properties of magnetorheological elastomers (MRE) are a complex issue and can be influenced and controlled in many ways, e.g. by applying a magnetic field, by external mechanical stimuli, or by an electric potential. In general, the response of MRE materials to these stimuli is crucially dependent on the distribution of the magnetic particles inside the elastomer. Specific knowledge of the interactions between particles or particle clusters is of high relevance for understanding the macroscopic rheological properties and provides an important input for theoretical calculations. In order to gain a better insight into the correlation between the macroscopic effects and the microstructure, and to generate a database for theoretical analysis, x-ray micro-computed tomography (X-μCT) investigations were carried out as a basis for a statistical analysis of the particle configurations. Different MREs with quantities of 2-15 wt% (0.27-2.3 vol%) of iron powder and different arrangements of the particles inside the matrix were prepared. The X-μCT results were processed with image-analysis software to extract the geometrical properties of the particles with and without the influence of an external magnetic field. Pair correlation functions for the positions of the particles inside the elastomer were calculated to statistically characterize the distributions of the particles in the samples.
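A pair correlation function of the kind computed in this study can be estimated directly from particle centre coordinates. The sketch below is a generic, minimal version (not the authors' code): it assumes a cubic sample with periodic minimum-image distances, whereas an analysis of a finite tomogram would need explicit edge corrections; all names and parameter values are illustrative.

```python
import numpy as np

def pair_correlation(positions, box_length, dr=0.5, r_max=8.0):
    """Radial pair correlation function g(r) for particle centres in a
    cubic volume of side box_length, using minimum-image distances."""
    positions = np.asarray(positions, dtype=float)
    n = len(positions)
    density = n / box_length**3
    edges = np.arange(0.0, r_max + dr, dr)
    counts = np.zeros(len(edges) - 1)
    for i in range(n - 1):
        d = positions[i + 1:] - positions[i]
        d -= box_length * np.round(d / box_length)   # minimum-image convention
        counts += np.histogram(np.linalg.norm(d, axis=1), bins=edges)[0]
    r = 0.5 * (edges[:-1] + edges[1:])
    shell_volume = 4.0 * np.pi * r**2 * dr
    # factor 2: each unordered pair was counted only once above
    g = 2.0 * counts / (n * density * shell_volume)
    return r, g
```

For an ideal (uncorrelated) particle distribution g(r) fluctuates around 1; clustering of particles or chain formation under a magnetic field shows up as peaks.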

  19. Combining density functional theory calculations, supercomputing, and data-driven methods to design new materials (Conference Presentation)

    Science.gov (United States)

    Jain, Anubhav

    2017-04-01

    Density functional theory (DFT) simulations solve for the electronic structure of materials starting from the Schrödinger equation. Many case studies have now demonstrated that researchers can often use DFT to design new compounds in the computer (e.g., for batteries, catalysts, and hydrogen storage) before synthesis and characterization in the lab. In this talk, I will focus on how DFT calculations can be executed on large supercomputing resources in order to generate very large data sets on new materials for functional applications. First, I will briefly describe the Materials Project, an effort at LBNL that has virtually characterized over 60,000 materials using DFT and has shared the results with over 17,000 registered users. Next, I will talk about how such data can help discover new materials, describing how preliminary computational screening led to the identification and confirmation of a new family of bulk AMX2 thermoelectric compounds with measured zT reaching 0.8. I will outline future plans for how such data-driven methods can be used to better understand the factors that control thermoelectric behavior, e.g., for the rational design of electronic band structures, in ways that are different from conventional approaches.

  20. The Euclid Statistical Matrix Tool

    Directory of Open Access Journals (Sweden)

    Curtis Tilves

    2017-06-01

    Full Text Available Stataphobia, a term used to describe the fear of statistics and research methods, can result from improper training in statistical methods. Poor statistical methods training can have an effect on health policy decision making and may play a role in the low research productivity seen in developing countries. One way to reduce Stataphobia is to intervene in the teaching of statistics in the classroom; however, such an intervention must tackle several obstacles, including student interest in the material, multiple ways of learning materials, and language barriers. We present here the Euclid Statistical Matrix, a tool for combatting Stataphobia on a global scale. This free tool is composed of popular statistical YouTube channels and web sources that teach and demonstrate statistical concepts in a variety of presentation methods. Working with international teams in Iran, Japan, Egypt, Russia, and the United States, we have also developed the Statistical Matrix in multiple languages to address language barriers to learning statistics. By utilizing already-established large networks, we are able to disseminate our tool to thousands of Farsi-speaking university faculty and students in Iran and the United States. Future dissemination of the Euclid Statistical Matrix throughout Central Asia and support from local universities may help to combat low research productivity in this region.

  1. Analysis of uncertainties of thermal hydraulic calculations

    International Nuclear Information System (INIS)

    Macek, J.; Vavrin, J.

    2002-12-01

    In 1993-1997 it was proposed, within OECD projects, that a common program should be set up for uncertainty analysis by a probabilistic method based on a non-parametric statistical approach for system computer codes such as RELAP, ATHLET and CATHARE, and that a method should be developed for statistical analysis of experimental databases for the preparation of the input deck and statistical analysis of the output calculation results. Software for such statistical analyses would then have to be developed as individual tools, independent of the computer codes used for the thermal hydraulic analysis and of the programs for uncertainty analysis. In this context, a method for estimating the accuracy of a thermal hydraulic calculation is outlined, and selected methods of statistical analysis of uncertainties are described, including methods for prediction accuracy assessment based on the discrete Fourier transformation principle. (author)
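Non-parametric uncertainty methods of the kind referred to here typically size the number of code runs with Wilks' tolerance-limit formula, so that the largest observed output bounds a chosen quantile with a chosen confidence regardless of the output distribution. A minimal sketch of the first-order, one-sided case (the function name is ours):

```python
def wilks_sample_size(coverage=0.95, confidence=0.95):
    """Minimum number of code runs so that the largest observed output
    bounds the `coverage` quantile of the output distribution with
    probability `confidence` (first-order, one-sided Wilks formula:
    smallest n with 1 - coverage**n >= confidence)."""
    n = 1
    while 1.0 - coverage**n < confidence:
        n += 1
    return n
```

For the common 95%/95% criterion this gives the familiar 59 runs.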

  2. Intuitive introductory statistics

    CERN Document Server

    Wolfe, Douglas A

    2017-01-01

    This textbook is designed to give an engaging introduction to statistics and the art of data analysis. The unique scope includes, but also goes beyond, classical methodology associated with the normal distribution. What if the normal model is not valid for a particular data set? This cutting-edge approach provides the alternatives. It is an introduction to the world and possibilities of statistics that uses exercises, computer analyses, and simulations throughout the core lessons. These elementary statistical methods are intuitive. Counting and ranking features prominently in the text. Nonparametric methods, for instance, are often based on counts and ranks and are very easy to integrate into an introductory course. The ease of computation with advanced calculators and statistical software, both of which factor into this text, allows important techniques to be introduced earlier in the study of statistics. This book's novel scope also includes measuring symmetry with Walsh averages, finding a nonp...
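The Walsh averages mentioned in this blurb are the pairwise means (x_i + x_j)/2 for i ≤ j; their median is the Hodges-Lehmann location estimate, a standard rank-based alternative to the sample mean. A minimal sketch (our illustration, not code from the book):

```python
import statistics
from itertools import combinations_with_replacement

def hodges_lehmann(x):
    """One-sample Hodges-Lehmann location estimate: the median of all
    Walsh averages (x_i + x_j) / 2 with i <= j."""
    walsh = [(a + b) / 2 for a, b in combinations_with_replacement(x, 2)]
    return statistics.median(walsh)
```

Unlike the mean, the estimate is barely moved by a single extreme observation, which is why such counting-and-ranking methods suit data where the normal model is not valid.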

  3. Calculation of releases of radioactive materials in gaseous effluents from nuclear-powered merchant ships (NMS-GEFF code)

    International Nuclear Information System (INIS)

    Cardile, F.P.; Bangart, R.L.; Collins, J.T.

    1978-06-01

    The Intergovernmental Maritime Consultative Organization (IMCO) is currently preparing guidelines concerning the safety of nuclear-powered merchant ships. An important aspect of these guidelines is the determination of the releases of radioactive material in effluents from these ships and the control exercised by the ships over these releases. To provide a method for the determination of these releases, the NRC staff has developed a computerized model, the NMS-GEFF Code, which is described in the following chapters. The NMS-GEFF Code calculates releases of radioactive material in gaseous effluents for nuclear-powered merchant ships using pressurized water reactors.

  4. Statistical comparisons of Savannah River anemometer data applied to quality control of instrument networks

    International Nuclear Information System (INIS)

    Porch, W.M.; Dickerson, M.H.

    1976-08-01

    Continuous monitoring of extensive meteorological instrument arrays is a requirement in the study of important mesoscale atmospheric phenomena. The phenomena include pollution transport prediction from continuous area sources, or one time releases of toxic materials and wind energy prospecting in areas of topographic enhancement of the wind. Quality control techniques that can be applied to these data to determine if the instruments are operating within their prescribed tolerances were investigated. Savannah River Plant data were analyzed with both independent and comparative statistical techniques. The independent techniques calculate the mean, standard deviation, moments about the mean, kurtosis, skewness, probability density distribution, cumulative probability and power spectra. The comparative techniques include covariance, cross-spectral analysis and two dimensional probability density. At present the calculating and plotting routines for these statistical techniques do not reside in a single code so it is difficult to ascribe independent memory size and computation time accurately. However, given the flexibility of a data system which includes simple and fast running statistics at the instrument end of the data network (ASF) and more sophisticated techniques at the computational end (ACF) a proper balance will be attained. These techniques are described in detail and preliminary results are presented
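The independent statistics listed (mean, standard deviation, moments about the mean, skewness, kurtosis) are simple to compute per instrument channel. A minimal sketch with illustrative names, using the conventional normalized third and fourth moments:

```python
import numpy as np

def independent_stats(x):
    """Independent single-channel statistics of a record:
    mean, standard deviation, skewness, and excess kurtosis."""
    x = np.asarray(x, dtype=float)
    m = x.mean()
    s = x.std()                 # population standard deviation
    z = (x - m) / s             # standardized values
    return {"mean": m,
            "std": s,
            "skewness": (z**3).mean(),
            "kurtosis": (z**4).mean() - 3.0}   # 0 for a Gaussian
```

In a quality-control setting, a channel whose skewness or kurtosis drifts far from its climatological values would be flagged for inspection.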

  5. Statistical considerations on safety analysis

    International Nuclear Information System (INIS)

    Pal, L.; Makai, M.

    2004-01-01

    The authors have investigated the statistical methods applied to safety analysis of nuclear reactors and arrived at alarming conclusions: a series of calculations with the generally appreciated safety code ATHLET were carried out to ascertain the stability of the results against input uncertainties in a simple experimental situation. Scrutinizing those calculations, we came to the conclusion that the ATHLET results may exhibit chaotic behavior. A further conclusion is that the technological limits are incorrectly set when the output variables are correlated. Another formerly unnoticed conclusion of the previous ATHLET calculations is that certain innocent-looking parameters (like the wall roughness factor, the number of bubbles per unit volume, or the number of droplets per unit volume) can influence considerably such output parameters as water levels. The authors are concerned with the statistical foundation of present-day safety analysis practices and can only hope that their own misjudgment will be dispelled. Until then, the authors suggest applying correct statistical methods in safety analysis even if it makes the analysis more expensive. It would be desirable to continue exploring the role of internal parameters (wall roughness factor, steam-water surface in thermal hydraulics codes, homogenization methods in neutronics codes) in system safety codes and to study their effects on the analysis. In the validation and verification process of a code one carries out a series of computations. The input data are not precisely determined because measured data have an error, and calculated data are often obtained from a more or less accurate model. Some users of large codes are content with comparing the nominal output obtained from the nominal input, whereas all the possible inputs should be taken into account when judging safety. At the same time, any statement concerning safety must be aleatory, and its merit can be judged only when the probability is known with which the

  6. STATISTICAL ANALYSIS OF TANK 18F FLOOR SAMPLE RESULTS

    Energy Technology Data Exchange (ETDEWEB)

    Harris, S.

    2010-09-02

    Representative sampling has been completed for characterization of the residual material on the floor of Tank 18F as per the statistical sampling plan developed by Shine [1]. Samples from eight locations have been obtained from the tank floor and two of the samples were archived as a contingency. Six samples, referred to in this report as the current scrape samples, have been submitted to and analyzed by SRNL [2]. This report contains the statistical analysis of the floor sample analytical results to determine if further data are needed to reduce uncertainty. Included are comparisons with the prior Mantis samples results [3] to determine if they can be pooled with the current scrape samples to estimate the upper 95% confidence limits (UCL{sub 95%}) for concentration. Statistical analysis revealed that the Mantis and current scrape sample results are not compatible. Therefore, the Mantis sample results were not used to support the quantification of analytes in the residual material. Significant spatial variability among the current sample results was not found. Constituent concentrations were similar between the North and South hemispheres as well as between the inner and outer regions of the tank floor. The current scrape sample results from all six samples fall within their 3-sigma limits. In view of the results from numerous statistical tests, the data were pooled from all six current scrape samples. As such, an adequate sample size was provided for quantification of the residual material on the floor of Tank 18F. The uncertainty is quantified in this report by an upper 95% confidence limit (UCL{sub 95%}) on each analyte concentration. The uncertainty in analyte concentration was calculated as a function of the number of samples, the average, and the standard deviation of the analytical results. The UCL{sub 95%} was based entirely on the six current scrape sample results (each averaged across three analytical determinations).
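The UCL95 described here is a function of the number of samples, the average, and the standard deviation of the analytical results; for approximately normal data it is the mean plus a one-sided Student-t multiplier on the standard error. A minimal sketch (the t-values are standard one-sided 95% table entries; the function name and the small-n table are ours, not the report's implementation):

```python
import math
from statistics import mean, stdev

# One-sided Student-t 0.95 quantiles, keyed by degrees of freedom (n - 1)
T95 = {2: 2.920, 3: 2.353, 4: 2.132, 5: 2.015, 6: 1.943}

def ucl95(results):
    """Upper 95% confidence limit on the mean analyte concentration
    from n replicate sample results (n between 3 and 7 here)."""
    n = len(results)
    return mean(results) + T95[n - 1] * stdev(results) / math.sqrt(n)
```

With the six pooled scrape samples of the report, n = 6 and the multiplier is t(0.95, 5) = 2.015.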

  7. Core calculations of JMTR

    Energy Technology Data Exchange (ETDEWEB)

    Nagao, Yoshiharu [Japan Atomic Energy Research Inst., Oarai, Ibaraki (Japan). Oarai Research Establishment

    1998-03-01

    In material testing reactors like the JMTR (Japan Material Testing Reactor) of 50 MW at the Japan Atomic Energy Research Institute, the neutron flux and neutron energy spectra of irradiated samples show complex distributions. It is necessary to assess the neutron flux and neutron energy spectra of an irradiation field by carrying out a nuclear calculation of the core for every operation cycle. In order to advance core calculation in the JMTR, the application of MCNP to the assessment of core reactivity and of neutron flux and spectra has been investigated. In this study, in order to reduce computation time and variance, the results of calculations using the K code and a fixed source were compared, and the use of the Weight Window technique was investigated. As to the calculation method, the modeling of the whole JMTR core, the conditions for calculation and the adopted variance reduction technique are explained, and the results of the calculations are shown. No significant difference was observed in the calculated neutron flux between the different models of the fuel region in the K-code and fixed-source calculations. The method of assessing the results of the neutron flux calculation is described. (K.I.)

  8. Calculating Student Grades.

    Science.gov (United States)

    Allswang, John M.

    1986-01-01

    This article provides two short microcomputer gradebook programs. The programs, written in BASIC for the IBM-PC and Apple II, provide statistical information about class performance and calculate grades either on a normal distribution or based on teacher-defined break points. (JDH)
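A break-point grading scheme of the kind these gradebook programs implement can be sketched in a few lines (Python here rather than the original BASIC; the cutoffs are illustrative teacher-defined values, not those from the article):

```python
def letter_grade(score, breaks=((90, "A"), (80, "B"), (70, "C"), (60, "D"))):
    """Assign a letter grade from teacher-defined break points, listed
    highest cutoff first; scores below the last cutoff fail."""
    for cutoff, letter in breaks:
        if score >= cutoff:
            return letter
    return "F"
```

Grading on a normal distribution would instead rank students by standardized score and assign letters by percentile bands.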

  9. Statistical data analysis using SAS intermediate statistical methods

    CERN Document Server

    Marasinghe, Mervyn G

    2018-01-01

    The aim of this textbook (previously titled SAS for Data Analytics) is to teach the use of SAS for statistical analysis of data for advanced undergraduate and graduate students in statistics, data science, and disciplines involving analyzing data. The book begins with an introduction beyond the basics of SAS, illustrated with non-trivial, real-world, worked examples. It proceeds to SAS programming and applications, SAS graphics, statistical analysis of regression models, analysis of variance models, analysis of variance with random and mixed effects models, and then takes the discussion beyond regression and analysis of variance to conclude. Pedagogically, the authors introduce theory and methodological basis topic by topic, present a problem as an application, followed by a SAS analysis of the data provided and a discussion of results. The text focuses on applied statistical problems and methods. Key features include: end of chapter exercises, downloadable SAS code and data sets, and advanced material suitab...

  10. Radiation counting statistics

    Energy Technology Data Exchange (ETDEWEB)

    Suh, M. Y.; Jee, K. Y.; Park, K. K.; Park, Y. J.; Kim, W. H

    1999-08-01

    This report is intended to describe the statistical methods necessary to design and conduct radiation counting experiments and evaluate the data from the experiment. The methods are described for the evaluation of the stability of a counting system and the estimation of the precision of counting data by application of probability distribution models. The methods for the determination of the uncertainty of the results calculated from the number of counts, as well as various statistical methods for the reduction of counting error are also described. (Author). 11 refs., 8 tabs., 8 figs.
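The uncertainty evaluation described follows from the Poisson model: the standard deviation of N recorded counts is √N, and the uncertainties of gross and background counts add in quadrature. A minimal sketch assuming gross and background are counted over the same live time (names are illustrative):

```python
import math

def counting_result(gross_counts, background_counts, live_time):
    """Net count rate and its 1-sigma uncertainty, assuming Poisson
    statistics for gross and background counts taken over the same
    live time (seconds)."""
    net_rate = (gross_counts - background_counts) / live_time
    # sqrt(N) per channel; independent uncertainties add in quadrature
    sigma = math.sqrt(gross_counts + background_counts) / live_time
    return net_rate, sigma
```

Counting longer, or reducing background, lowers the relative error, which is the basis of the error-reduction methods the report describes.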

  11. Radiation counting statistics

    Energy Technology Data Exchange (ETDEWEB)

    Suh, M. Y.; Jee, K. Y.; Park, K. K. [Korea Atomic Energy Research Institute, Taejon (Korea)

    1999-08-01

    This report is intended to describe the statistical methods necessary to design and conduct radiation counting experiments and evaluate the data from the experiments. The methods are described for the evaluation of the stability of a counting system and the estimation of the precision of counting data by application of probability distribution models. The methods for the determination of the uncertainty of the results calculated from the number of counts, as well as various statistical methods for the reduction of counting error are also described. 11 refs., 6 figs., 8 tabs. (Author)

  12. Radiation counting statistics

    International Nuclear Information System (INIS)

    Suh, M. Y.; Jee, K. Y.; Park, K. K.; Park, Y. J.; Kim, W. H.

    1999-08-01

    This report is intended to describe the statistical methods necessary to design and conduct radiation counting experiments and evaluate the data from the experiment. The methods are described for the evaluation of the stability of a counting system and the estimation of the precision of counting data by application of probability distribution models. The methods for the determination of the uncertainty of the results calculated from the number of counts, as well as various statistical methods for the reduction of counting error are also described. (Author). 11 refs., 8 tabs., 8 figs

  13. Statistical mechanics of stochastic neural networks: Relationship between the self-consistent signal-to-noise analysis, Thouless-Anderson-Palmer equation, and replica symmetric calculation approaches

    International Nuclear Information System (INIS)

    Shiino, Masatoshi; Yamana, Michiko

    2004-01-01

    We study the statistical mechanical aspects of stochastic analog neural network models for associative memory with correlation type learning. We take three approaches to derive the set of the order parameter equations for investigating statistical properties of retrieval states: the self-consistent signal-to-noise analysis (SCSNA), the Thouless-Anderson-Palmer (TAP) equation, and the replica symmetric calculation. On the basis of the cavity method the SCSNA can be generalized to deal with stochastic networks. We establish the close connection between the TAP equation and the SCSNA to elucidate the relationship between the Onsager reaction term of the TAP equation and the output proportional term of the SCSNA that appear in the expressions for the local fields

  14. Exact-exchange-based quasiparticle calculations

    International Nuclear Information System (INIS)

    Aulbur, Wilfried G.; Staedele, Martin; Goerling, Andreas

    2000-01-01

    One-particle wave functions and energies from Kohn-Sham calculations with the exact local Kohn-Sham exchange and the local density approximation (LDA) correlation potential [EXX(c)] are used as input for quasiparticle calculations in the GW approximation (GWA) for eight semiconductors. Quasiparticle corrections to EXX(c) band gaps are small when EXX(c) band gaps are close to experiment. In the case of diamond, quasiparticle calculations are essential to remedy a 0.7 eV underestimate of the experimental band gap within EXX(c). The accuracy of EXX(c)-based GWA calculations for the determination of band gaps is as good as the accuracy of LDA-based GWA calculations. For the lowest valence band width a qualitatively different behavior is observed for medium- and wide-gap materials. The valence band width of medium- (wide-) gap materials is reduced (increased) in EXX(c) compared to the LDA. Quasiparticle corrections lead to a further reduction (increase). As a consequence, EXX(c)-based quasiparticle calculations give valence band widths that are generally 1-2 eV smaller (larger) than experiment for medium- (wide-) gap materials. (c) 2000 The American Physical Society

  15. Effect of error propagation of nuclide number densities on Monte Carlo burn-up calculations

    International Nuclear Information System (INIS)

    Tohjoh, Masayuki; Endo, Tomohiro; Watanabe, Masato; Yamamoto, Akio

    2006-01-01

    As a result of improvements in computer technology, the continuous energy Monte Carlo burn-up calculation has received attention as a good candidate for an assembly calculation method. However, the results of Monte Carlo calculations contain the statistical errors. The results of Monte Carlo burn-up calculations, in particular, include propagated statistical errors through the variance of the nuclide number densities. Therefore, if statistical error alone is evaluated, the errors in Monte Carlo burn-up calculations may be underestimated. To make clear this effect of error propagation on Monte Carlo burn-up calculations, we here proposed an equation that can predict the variance of nuclide number densities after burn-up calculations, and we verified this equation using enormous numbers of the Monte Carlo burn-up calculations by changing only the initial random numbers. We also verified the effect of the number of burn-up calculation points on Monte Carlo burn-up calculations. From these verifications, we estimated the errors in Monte Carlo burn-up calculations including both statistical and propagated errors. Finally, we made clear the effects of error propagation on Monte Carlo burn-up calculations by comparing statistical errors alone versus both statistical and propagated errors. The results revealed that the effects of error propagation on the Monte Carlo burn-up calculations of 8 x 8 BWR fuel assembly are low up to 60 GWd/t
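The effect quantified in this paper can be illustrated with a toy one-nuclide depletion step: the statistical error of a Monte Carlo-estimated reaction rate propagates into the nuclide number density at the end of the step, and a first-order formula predicts the spread observed across replicate runs that differ only in the random seed. Every number below (rate, step length, error magnitude) is an illustrative assumption, not data from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

true_rate = 0.02     # assumed one-group depletion rate (1/day)
dt = 100.0           # assumed burn-up step length (days)
n0 = 1.0             # initial number density (normalised)
mc_sigma = 0.001     # assumed statistical error of the MC-estimated rate

# Replicate "burn-up calculations" differing only in the random seed:
rates = rng.normal(true_rate, mc_sigma, size=10000)
n_final = n0 * np.exp(-rates * dt)

# First-order error propagation predicts the spread of the final density:
# Var(N') ~ (dN'/d(rate))^2 Var(rate)
predicted = n0 * np.exp(-true_rate * dt) * dt * mc_sigma
```

Evaluating only the statistical error of a single final run would miss this propagated component, which is the underestimate the authors warn about.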

  16. WIMS-IST/DRAGON-IST side-step calculation of reactivity device and structural material incremental cross sections for Wolsong NPP Unit 1

    International Nuclear Information System (INIS)

    Dahmani, M.; McArthur, R.; Kim, B.G.; Kim, S.M.; Seo, H.-B.

    2008-01-01

    This paper describes the calculation of two-group incremental cross sections for all of the reactivity devices and incore structural materials for an RFSP-IST full-core model of Wolsong NPP Unit 1, in support of the conversion of the reference plant model to two energy groups. This is of particular interest since the calculation used the new standard 'side-step' approach, which is a three-dimensional supercell method that employs the Industry Standard Toolset (IST) codes DRAGON-IST and WIMS-IST with the ENDF/B-VI nuclear data library. In this technique, the macroscopic cross sections for the fuel regions and the device material specifications are first generated using the lattice code WIMS-IST with 89 energy groups. DRAGON-IST then uses this data with a standard supercell modelling approach for the three-dimensional calculations. Incremental cross sections are calculated for the stainless-steel adjuster rods (SS-ADJ), the liquid zone control units (LZCU), the shutoff rods (SOR), the mechanical control absorbers (MCA) and various structural materials, such as guide tubes, springs, locators, brackets, adjuster cables and support bars and the moderator inlet nozzle deflectors. Isotopic compositions of the Zircaloy-2, stainless steel and Inconel X-750 alloys in these items are derived from Wolsong NPP Unit 1 history dockets. Their geometrical layouts are based on applicable design drawings. Mid-burnup fuel with no moderator poison was assumed. The incremental cross sections and key aspects of the modelling are summarized in this paper. (author)

  17. Guidelines for calculating radiation doses to the public from a release of airborne radioactive material under hypothetical accident conditions in nuclear reactors

    International Nuclear Information System (INIS)

    1991-04-01

    This standard provides guidelines and a methodology for calculating effective doses and thyroid doses to people (either individually or collectively) in the path of airborne radioactive material released from a nuclear facility following a hypothetical accident. The radionuclides considered are those associated with substances having the greatest potential for becoming airborne in reactor accidents: tritium (HTO), noble gases and their daughters, radioiodines, and certain radioactive particulates (Cs, Ru, Sr, Te). The standard focuses on the calculation of radiation doses for external exposures from radioactive material in the cloud; internal exposures from inhalation of radioactive material in the cloud and skin penetration of tritium; and external exposures from radionuclides deposited on the ground. It uses a modified Gaussian plume model to evaluate the time-integrated concentration downwind. (52 refs., 12 tabs., 21 figs.)
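The time-integrated concentration from a Gaussian plume takes a familiar closed form; the sketch below is the generic textbook expression for a ground-level receptor with ground reflection, not the specific modified model of this standard, and the dispersion parameters are taken as inputs rather than derived from stability classes:

```python
import math

def time_integrated_concentration(q_total, u, sigma_y, sigma_z, y=0.0, h=0.0):
    """Ground-level time-integrated air concentration (e.g. Bq*s/m^3)
    for a total release q_total from a point source at effective height
    h (m), with wind speed u (m/s) and Gaussian dispersion parameters
    sigma_y, sigma_z (m) evaluated at the downwind distance of interest;
    y is the crosswind offset (m)."""
    lateral = math.exp(-y**2 / (2.0 * sigma_y**2))
    # ground reflection doubles the contribution at z = 0
    vertical = 2.0 * math.exp(-h**2 / (2.0 * sigma_z**2))
    return q_total * lateral * vertical / (2.0 * math.pi * u * sigma_y * sigma_z)
```

Inhalation dose then follows by multiplying by a breathing rate and a dose conversion factor for each radionuclide.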

  18. Statistics in a Nutshell

    CERN Document Server

    Boslaugh, Sarah

    2008-01-01

    Need to learn statistics as part of your job, or want some help passing a statistics course? Statistics in a Nutshell is a clear and concise introduction and reference that's perfect for anyone with no previous background in the subject. This book gives you a solid understanding of statistics without being too simple, yet without the numbing complexity of most college texts. You get a firm grasp of the fundamentals and a hands-on understanding of how to apply them before moving on to the more advanced material that follows. Each chapter presents you with easy-to-follow descriptions illustrat

  19. Statistical analyses of variability/reproducibility of environmentally assisted cyclic crack growth rate data utilizing JAERI Material Performance Database (JMPD)

    International Nuclear Information System (INIS)

    Tsuji, Hirokazu; Yokoyama, Norio; Nakajima, Hajime; Kondo, Tatsuo

    1993-05-01

    Statistical analyses were conducted by using the cyclic crack growth rate data for pressure vessel steels stored in the JAERI Material Performance Database (JMPD), and comparisons were made on variability and/or reproducibility of the data between obtained by ΔK-increasing and by ΔK-constant type tests. Based on the results of the statistical analyses, it was concluded that ΔK-constant type tests are generally superior to the commonly used ΔK-increasing type ones from the viewpoint of variability and/or reproducibility of the data. Such a tendency was more pronounced in the tests conducted in simulated LWR primary coolants than those in air. (author)
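Variability of cyclic crack growth rate data of this kind is commonly summarized by fitting the Paris relation da/dN = C ΔK^m in log-log space and examining the residual scatter. A minimal generic sketch (our illustration, not the JMPD analysis procedure):

```python
import numpy as np

def fit_paris_law(delta_k, dadn):
    """Least-squares fit of the Paris law da/dN = C * dK**m in log-log
    space; returns (C, m) and the standard deviation of the log10
    residuals as a simple measure of data variability."""
    logk = np.log10(delta_k)
    logr = np.log10(dadn)
    m, logc = np.polyfit(logk, logr, 1)        # slope m, intercept log10(C)
    resid = logr - (m * logk + logc)
    return 10.0**logc, m, resid.std()
```

Comparing the residual scatter between data sets (e.g. ΔK-increasing versus ΔK-constant tests) gives a concrete variability metric of the kind the paper discusses.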

  20. A computer program for calculation of reliable pair distribution functions of non-crystalline materials from limited diffraction data. III

    International Nuclear Information System (INIS)

    Hansen, F.Y.

    1978-01-01

    This program calculates the final pair distribution functions of non-crystalline materials on the basis of the experimental structure factor as calculated in part I and the parameters of the small distance part of the pair distribution function as calculated in part II. In this way, truncation error may be eliminated from the final pair distribution function. The calculations with this program depend on the results of calculations with the programs described in parts I and II. The final pair distribution function is calculated by a Fourier transform of a combination of an experimental structure factor and a model structure factor. The storage requirement depends on the number of data points in the structure factor, the number of data points in the final pair distribution function and the number of peaks necessary to resolve the small distance part of the pair distribution function. In the present set-up a storage requirement is set to 8860 words which is estimated to be satisfactory for a large number of cases. (Auth.)
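The transform such a program performs is, in standard form, g(r) = 1 + (1/2π²ρr) ∫ Q[S(Q) − 1] sin(Qr) dQ; truncating the integral at the largest measured Q is what produces the termination ripples that the model structure factor is used to suppress. A minimal numerical sketch of the truncated transform (illustrative only, with a uniform Q grid assumed):

```python
import numpy as np

def pdf_from_structure_factor(q, s_of_q, r, density):
    """Pair distribution function g(r) from a structure factor S(Q)
    sampled on a uniform Q grid, via the sine Fourier transform
    g(r) = 1 + (1/(2 pi^2 rho r)) * integral of Q (S - 1) sin(Q r) dQ.
    Truncation at max(q) introduces termination ripples."""
    q = np.asarray(q, dtype=float)
    r = np.asarray(r, dtype=float)
    dq = q[1] - q[0]                       # uniform grid assumed
    kernel = np.sin(np.outer(r, q))        # sin(Q r), one row per r value
    integral = (kernel * q * (s_of_q - 1.0)).sum(axis=1) * dq
    return 1.0 + integral / (2.0 * np.pi**2 * density * r)
```

Replacing the low-distance part of g(r) by a fitted peak model and transforming back, as the program series does, removes the unphysical ripples below the first coordination shell.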

  1. Statistical probability tables CALENDF program

    International Nuclear Information System (INIS)

    Ribon, P.

    1989-01-01

    The purpose of the probability tables is: - to obtain dense data representation - to calculate integrals by quadratures. They are mainly used in the USA for calculations by Monte Carlo and in the USSR and Europe for self-shielding calculations by the sub-group method. The moment probability tables, in addition to providing a more substantial mathematical basis and calculation methods, are adapted for condensation and mixture calculations, which are the crucial operations for reactor physics specialists. However, their extension is limited by the statistical hypothesis they imply. Efforts are being made to remove this obstacle, at the cost, it must be said, of greater complexity

  2. Statistical studies on the light output and energy resolution of small LSO single crystals with different surface treatments combined with various reflector materials

    CERN Document Server

    Heinrichs, U; Bussmann, N; Engels, R; Kemmerling, G; Weber, S; Ziemons, K

    2002-01-01

    The optimization of light output and energy resolution of scintillators is of special interest for the development of high resolution and high sensitivity PET. The aim of this work is to obtain statistically reliable results concerning optimal surface treatment of scintillation crystals and the selection of reflector material. For this purpose, raw, mechanically polished and etched LSO crystals (size 2×2×10 mm³) were combined with various reflector materials (Teflon tape, Teflon matrix, BaSO₄) and exposed to a ²²Na source. In order to ensure the statistical reliability of the results, groups of 10 LSO crystals each were measured for all combinations of surface treatment and reflector material. Using no reflector material, the light output increased up to 551±35% by mechanically polishing the surface, compared to 100±5% for raw crystals. Etching the surface increased the light output to 441±29%. The untreated crystals had an energy resolution of 24.6±4.0%. By mechanical polishing the surfac...

  3. Statistical aspects of nuclear structure

    International Nuclear Information System (INIS)

    Parikh, J.C.

    1977-01-01

    The statistical properties of energy levels and a statistical approach to transition strengths are discussed in relation to nuclear structure studies at high excitation energies. It is shown that the calculations can be extended to the ground state domain also. The discussion is based on the study of random matrix theory of level density and level spacings, using the Gaussian Orthogonal Ensemble (GOE) concept. The short range and long range correlations are also studied statistically. The polynomial expansion method is used to obtain excitation strengths. (A.K.)
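The GOE level-spacing statistics referred to here can be sampled directly: draw real symmetric random matrices, diagonalize them, and compare nearest-neighbour spacings with the Wigner surmise p(s) = (π/2) s exp(−πs²/4). A minimal sketch (no careful spectral unfolding, just a per-matrix mean-spacing normalization of the central part of the spectrum):

```python
import numpy as np

def goe_spacings(n=100, trials=50, seed=1):
    """Nearest-neighbour eigenvalue spacings (normalized to unit mean)
    from `trials` GOE matrices of size n x n, using only the central
    half of each spectrum to avoid edge effects."""
    rng = np.random.default_rng(seed)
    out = []
    for _ in range(trials):
        a = rng.normal(size=(n, n))
        h = (a + a.T) / np.sqrt(2.0)          # real symmetric (GOE) matrix
        ev = np.linalg.eigvalsh(h)
        s = np.diff(ev[n // 4: 3 * n // 4])   # central spacings only
        out.append(s / s.mean())              # crude local unfolding
    return np.concatenate(out)
```

The hallmark of the GOE, level repulsion (p(s) → 0 as s → 0), is visible in the near-absence of very small normalized spacings.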

  4. CALCULATION OF STATISTICAL INDICATORS FOR EVALUATION OF SCIENTIFIC AND TECHNOLOGICAL ACTIVITIES IN THE FEDERAL BODIES OF EXECUTIVE POWER AND OF THE MINISTRY OF THE INTERNAL AFFAIRS OF THE RUSSIAN FEDERATION

    Directory of Open Access Journals (Sweden)

    Dmitriy V. Dianov

    2016-01-01

    Full Text Available Scientific and technical activity is part of development work, and it is very important to plan scientific and technical activities. The article discusses the types of reports, the calculation of statistical indicators, and the construction of charts and diagrams. These data will help to analyze the execution of plans. The statistical indicators can be used in the Ministry of Internal Affairs of the Russian Federation and other departments.

  5. Wind energy statistics

    International Nuclear Information System (INIS)

    Holttinen, H.; Tammelin, B.; Hyvoenen, R.

    1997-01-01

    The recording, analyzing and publishing of statistics of wind energy production has been reorganized in cooperation between VTT Energy, the Finnish Meteorological Institute (FMI Energy) and the Finnish Wind Energy Association (STY), supported by the Ministry of Trade and Industry (KTM). VTT Energy has developed a database that contains both monthly data and information on the wind turbines, sites and operators involved. The monthly production figures together with component failure statistics are collected from the operators by VTT Energy, who produces the final wind energy statistics to be published in Tuulensilmae and reported to energy statistics in Finland and abroad (Statistics Finland, Eurostat, IEA). To compare the annual and monthly wind energy production with the average wind energy climate, a production index is adopted. The index gives the expected wind energy production at various areas in Finland, calculated using real wind speed observations, air density and a power curve for a typical 500 kW wind turbine. FMI Energy has produced the average figures for four weather stations using the data from 1985-1996, and produces the monthly figures. (orig.)

  6. Statistical Methods in Psychology Journals.

    Science.gov (United States)

    Wilkinson, Leland

    1999-01-01

    Proposes guidelines for revising the American Psychological Association (APA) publication manual or other APA materials to clarify the application of statistics in research reports. The guidelines are intended to induce authors and editors to recognize the thoughtless application of statistical methods. Contains 54 references. (SLD)

  7. Book Trade Research and Statistics. Prices of U.S. and Foreign Published Materials; Book Title Output and Average Prices: 2001 Final and 2002 Preliminary Figures; Book Sales Statistics, 2002: AAP Preliminary Estimates; U.S. Book Exports and Imports:2002; Number of Book Outlets in the United States and Canada; Review Media Statistics.

    Science.gov (United States)

    Sullivan, Sharon G.; Grabois, Andrew; Greco, Albert N.

    2003-01-01

    Includes six reports related to book trade statistics, including prices of U.S. and foreign materials; book title output and average prices; book sales statistics; book exports and imports; book outlets in the U.S. and Canada; and numbers of books and other media reviewed by major reviewing publications. (LRW)

  8. CONTAIN calculations

    International Nuclear Information System (INIS)

    Scholtyssek, W.

    1995-01-01

    In the first phase of a benchmark comparison, the CONTAIN code was used to calculate an assumed EPR accident 'medium-sized leak in the cold leg', especially for the first two days after initiation of the accident. The results for global characteristics compare well with those of FIPLOC, MELCOR and WAVCO calculations, if the same materials data are used as input. However, significant differences show up for local quantities such as flows through leakages. (orig.)

  9. The Effect of Material Homogenization in Calculating the Gamma-Ray dose from Spent PWR Fuel Pins in an Air Medium

    International Nuclear Information System (INIS)

    TH Trumbull

    2005-01-01

    The effect of material homogenization on the calculated dose rate was studied for several arrangements of typical PWR spent fuel pins in an air medium using the Monte Carlo code, MCNP. The models analyzed increased in geometric complexity, beginning with a single fuel pin, progressing to "small" lattices, i.e., 3x3, 5x5, 7x7 fuel pins, and culminating with a full 17x17 pin PWR bundle analysis. The fuel pin dimensions and compositions were taken directly from a previous study and efforts were made to parallel this study by specifying identical flux-to-dose functions and gamma-ray source spectra. The analysis shows two competing components to the overall effect of material homogenization on calculated dose rate. Homogenization of pin lattices tends to lower the effect of radiation "channeling" but increase the effect of "source redistribution." Depending on the size of the lattice and location of the detectors, the net effect of material homogenization on dose rate can be insignificant or range from a 6% decrease to a 35% increase relative to the detailed geometry model.

  10. Statistical analysis of simulation calculation of sputtering for two interaction potentials

    International Nuclear Information System (INIS)

    Shao Qiyun

    1992-01-01

    The effects of two interaction potentials (the Moliere potential and the Universal potential) on computer simulation results of sputtering are presented, obtained via Monte Carlo simulation based on the binary collision approximation. By means of the Wilcoxon two-sample paired sign rank test, a statistically significant difference between the above results is established.
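    As a rough illustration of this kind of paired nonparametric comparison, the sketch below implements a Wilcoxon signed-rank test with the standard library only. The yield data are hypothetical, and the implementation is simplified: it uses the normal approximation and does not midrank ties in the absolute differences.

```python
from math import erf, sqrt

def wilcoxon_signed_rank(x, y):
    """Paired Wilcoxon signed-rank test, normal approximation.
    Simplified sketch: ties in |difference| are not midranked."""
    d = [a - b for a, b in zip(x, y) if a != b]   # drop zero differences
    n = len(d)
    order = sorted(range(n), key=lambda i: abs(d[i]))
    ranks = [0] * n
    for r, i in enumerate(order, start=1):
        ranks[i] = r
    w_plus = sum(r for r, di in zip(ranks, d) if di > 0)
    mu = n * (n + 1) / 4
    sigma = sqrt(n * (n + 1) * (2 * n + 1) / 24)
    z = (w_plus - mu) / sigma
    p = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided p-value
    return w_plus, p

# Hypothetical paired sputtering yields for the two potentials
moliere   = [2.31, 2.05, 1.98, 2.40, 2.22, 2.15, 2.08, 2.35, 2.27, 2.12]
universal = [2.21, 1.99, 1.90, 2.28, 2.16, 2.04, 2.02, 2.24, 2.20, 2.05]
w, p = wilcoxon_signed_rank(moliere, universal)
print(w, round(p, 4))
```

    Here every paired difference has the same sign, so the rank sum takes its maximum value and the approximate p-value falls well below 0.05.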

  11. Statistics for Petroleum Engineers and Geoscientists

    International Nuclear Information System (INIS)

    Jensen, J.L.; Lake, L.W.; Corbett, P.W.M.; Goggin, D.J.

    2000-01-01

    Geostatistics is a common tool in reservoir characterisation. Several texts discuss the subject; however, this book differs in its approach and audience from currently available material. Written from the basics of statistics, it covers only those topics that are needed for the two goals of the text: to exhibit the diagnostic potential of statistics and to introduce the important features of statistical modeling. This revised edition contains expanded discussions of some materials, in particular conditional probabilities, Bayes Theorem, correlation, and Kriging. The coverage of estimation, variability, and modeling applications has been updated. Seventy examples illustrate concepts and show the role of geology for providing important information for data analysis and model building. Four reservoir case studies conclude the presentation, illustrating the application and importance of the earlier material. This book can help petroleum professionals develop more accurate models, leading to lower sampling costs.

  12. RepoSTAR. A Code package for control and evaluation of statistical calculations with the program package RepoTREND; RepoSTAR. Ein Codepaket zur Steuerung und Auswertung statistischer Rechenlaeufe mit dem Programmpaket RepoTREND

    Energy Technology Data Exchange (ETDEWEB)

    Becker, Dirk-Alexander

    2016-05-15

    The program package RepoTREND for the integrated long-term safety analysis of final repositories allows, besides deterministic studies of defined problems, also statistical or probabilistic analyses. Probabilistic uncertainty and sensitivity analyses are realized in the program package RepoTREND by a specific statistical framework called RepoSTAR. The report covers the following issues: concept, sampling and data supply of single simulations, and evaluation of statistical calculations with the program RepoSUN.

  13. Low temperature rheological properties of asphalt mixtures containing different recycled asphalt materials

    Directory of Open Access Journals (Sweden)

    Ki Hoon Moon

    2017-01-01

    Full Text Available Reclaimed Asphalt Pavement (RAP) and Recycled Asphalt Shingles (RAS) are valuable materials commonly reused in asphalt mixtures due to their economic and environmental benefits. However, the aged binder contained in these materials may negatively affect the low temperature performance of asphalt mixtures. In this paper, the effect of RAP and RAS on low temperature properties of asphalt mixtures is investigated through Bending Beam Rheometer (BBR) tests and rheological modeling. First, a set of fourteen asphalt mixtures containing RAP and RAS is prepared, and creep stiffness and m-value are experimentally measured. Then, thermal stress is calculated and graphically and statistically compared. The Huet model and the Shift-Homothety-Shift in time-Shift (SHStS) transformation, developed at the École Nationale des Travaux Publics de l'État (ENTPE), are used to back-calculate the asphalt binder creep stiffness from mixture experimental data. Finally, the model predictions are compared to the creep stiffness of the asphalt binders extracted from each mixture, and the results are analyzed and discussed. It is found that an addition of RAP and RAS beyond 15% and 3%, respectively, significantly changes the low temperature properties of asphalt mixtures. Differences between back-calculated results and experimental data suggest that blending between new and old binder occurs only partially. Based on recent findings of diffusion studies, this effect may be associated with the mixing and blending processes, with the effective contact between virgin and recycled materials, and with the variation of the total virgin-recycled thickness of the binder film, which may significantly influence the diffusion process. Keywords: Reclaimed Asphalt Pavement (RAP), Recycled Asphalt Shingles (RAS), Thermal stress, Statistical comparison, Back-calculation, Binder blending

  14. Conformational energy calculations on polypeptides and proteins: use of a statistical mechanical procedure for evaluating structure and properties.

    Science.gov (United States)

    Scheraga, H A; Paine, G H

    1986-01-01

    We are using a variety of theoretical and computational techniques to study protein structure, protein folding, and higher-order structures. Our earlier work involved treatments of liquid water and aqueous solutions of nonpolar and polar solutes, computations of the stabilities of the fundamental structures of proteins and their packing arrangements, conformations of small cyclic and open-chain peptides, structures of fibrous proteins (collagen), structures of homologous globular proteins, introduction of special procedures as constraints during energy minimization of globular proteins, and structures of enzyme-substrate complexes. Recently, we presented a new methodology for predicting polypeptide structure (described here); the method is based on the calculation of the probable and average conformation of a polypeptide chain by the application of equilibrium statistical mechanics in conjunction with an adaptive, importance sampling Monte Carlo algorithm. As a test, it was applied to Met-enkephalin.

  15. Using Microsoft Excel[R] to Calculate Descriptive Statistics and Create Graphs

    Science.gov (United States)

    Carr, Nathan T.

    2008-01-01

    Descriptive statistics and appropriate visual representations of scores are important for all test developers, whether they are experienced testers working on large-scale projects, or novices working on small-scale local tests. Many teachers put in charge of testing projects do not know "why" they are important, however, and are utterly convinced…

  16. Effective Permittivity for FDTD Calculation of Plasmonic Materials

    Directory of Open Access Journals (Sweden)

    James B. Cole

    2012-03-01

    Full Text Available We present a new effective permittivity (EP) model to accurately calculate surface plasmons (SPs) using the finite-difference time-domain (FDTD) method. The computational representation of physical structures with curved interfaces causes inherent errors in FDTD calculations, especially when the numerical grid is coarse. Conventional EP models improve the errors, but they are not effective for SPs because the SP resonance condition determined by the original permittivity is changed by the interpolated EP values. We perform FDTD simulations using the proposed model for an infinitely long silver cylinder and a gold sphere, and the results are compared with Mie theory. Our model gives better accuracy than the conventional staircase and EP models for SPs.

  17. Laminated materials with plastic interfaces: modeling and calculation

    International Nuclear Information System (INIS)

    Sandino Aquino de los Ríos, Gilberto; Castañeda Balderas, Rubén; Diaz Diaz, Alberto; Duong, Van Anh; Chataigner, Sylvain; Caron, Jean-François; Ehrlacher, Alain; Foret, Gilles

    2009-01-01

    In this paper, a model of laminated plates called M4-5N and validated in a previous paper is modified in order to take into account interlaminar plasticity by means of displacement discontinuities at the interfaces. These discontinuities are calculated by adapting a 3D plasticity model. In order to compute the model, a Newton–Raphson-like method is employed. In this method, two sub-problems are considered: one is linear and the other is non-linear. In the linear problem the non-linear equations of the model are linearized and the calculations are performed by making use of a finite element software. By iterating the resolution of each sub-problem, one obtains after convergence the solution of the global problem. The model is then applied to the problem of a double lap, adhesively bonded joint subjected to a tensile load. The adhesive layer is modeled by an elastic–plastic interface. The results of the M4-5N model are compared with those of a commercial finite element software. A good agreement between the two computation techniques is obtained and validates the non-linear calculations proposed in this paper. Finally, the numerical tool and a delamination criterion are applied to predict delamination onset in composite laminates

  18. Understanding the interfacial properties of graphene-based materials/BiOI heterostructures by DFT calculations

    Energy Technology Data Exchange (ETDEWEB)

    Dai, Wen-Wu [Faculty of Materials Science and Engineering, Kunming University of Science and Technology, Kunming 650093 (China); Zhao, Zong-Yan, E-mail: zzy@kmust.edu.cn [Faculty of Materials Science and Engineering, Kunming University of Science and Technology, Kunming 650093 (China); Jiangsu Provincial Key Laboratory for Nanotechnology, Nanjing University, Nanjing 210093 (China)

    2017-06-01

    Highlights: • Heterostructure constructing is an effective way to enhance the photocatalytic performance. • Graphene-like materials and BiOI were in contact and formed van der Waals heterostructures. • Band edge positions of GO/g-C₃N₄ and BiOI changed to form a standard type-II heterojunction. • 2D materials can promote the separation of photo-generated electron-hole pairs in BiOI. - Abstract: Heterostructure constructing is a feasible and powerful strategy to enhance the performance of photocatalysts, because heterostructures can be tailored to have desirable photo-electronic properties and couple the distinct advantages of their components. As a novel layered photocatalyst, the main drawback of BiOI is the low edge position of the conduction band. To address this problem, it is meaningful to find materials that possess a suitable band gap, proper band edge position, and high carrier mobility to combine with BiOI to form a heterostructure. In this study, graphene-based materials (including graphene, graphene oxide, and g-C₃N₄) were chosen as candidates to achieve this purpose. The charge transfer, interface interaction, and band offsets are focused on and analyzed in detail by DFT calculations. Results indicated that graphene-based materials and BiOI were in contact and formed van der Waals heterostructures. The valence and conduction band edge positions of graphene oxide, g-C₃N₄ and BiOI changed with the Fermi level and formed the standard type-II heterojunction. In addition, the overall analysis of charge density difference, Mulliken population, and band offsets indicated that the internal electric field facilitates the separation of photo-generated electron-hole pairs, which means these heterostructures can enhance the photocatalytic efficiency of BiOI. Thus, BiOI combined with 2D materials to construct heterostructures not only makes use of the unique high electron mobility, but also can adjust the position of energy bands and

  19. Statistical theory of breakup reactions

    International Nuclear Information System (INIS)

    Bertulani, Carlos A.; Descouvemont, Pierre; Hussein, Mahir S.

    2014-01-01

    We propose an alternative for Coupled-Channels calculations with loosely bound exotic nuclei (CDCC), based on the Random Matrix Model of the statistical theory of nuclear reactions. The coupled channels equations are divided into two sets. The first set is described by the CDCC, and the other set is treated with RMT. The resulting theory is a Statistical CDCC (CDCC_s), able in principle to take into account many pseudo channels. (author)

  20. Nuclear material control and accounting system evaluation in uranium conversion operations

    International Nuclear Information System (INIS)

    Moreira, Jose Pontes

    1994-01-01

    The Nuclear Material Control and Accounting Systems in uranium conversion operations are described. The conversion plant uses ammonium diuranate (ADU) as starting material for the production of uranium hexafluoride. A combination of accountability and verification measurements is used to verify physical inventory quantities. Two types of inspection are used to minimize the measurement uncertainty of the Material Unaccounted For (MUF): attribute inspection and variables inspection. The mass balance equation is the basis of the evaluation of a Material Balance Area (MBA). Statistical inference is employed to facilitate rapid inventory taking and enhance the material control of safeguards. The calculation of a sampling plan for an MBA and the methodology of inspection evaluation are also described. Two kinds of errors are considered: non-detection and false alarm. (author)
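    The mass balance evaluation referred to above rests on MUF = (beginning inventory + receipts) − (shipments + ending inventory), with the measurement error variances of the four terms propagated in quadrature. A minimal sketch with entirely hypothetical figures (not data from the paper):

```python
from math import sqrt

# Hypothetical material balance figures (kg U) with 1-sigma measurement errors
beginning_inventory, sig_bi = 1250.0, 2.1
receipts,            sig_r  =  400.0, 1.2
shipments,           sig_s  =  380.0, 1.1
ending_inventory,    sig_ei = 1268.0, 2.3

# Material Unaccounted For: book inventory minus measured ending inventory
muf = beginning_inventory + receipts - shipments - ending_inventory

# Assuming independent measurement errors, variances add in quadrature
sigma_muf = sqrt(sig_bi**2 + sig_r**2 + sig_s**2 + sig_ei**2)

# A simple 3-sigma rule flags a MUF too large to blame on measurement error
significant = abs(muf) > 3 * sigma_muf
print(muf, round(sigma_muf, 2), significant)
```

    With these figures the MUF of 2 kg is well inside the 3-sigma band, so it would be attributed to measurement uncertainty rather than a loss.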

  1. A guide to TIRION 4 - a computer code for calculating the consequences of releasing radioactive material to the atmosphere

    International Nuclear Information System (INIS)

    Fryer, L.S.

    1978-12-01

    TIRION 4 is the most recent program in a series designed to calculate the consequences of releasing radioactive material to the atmosphere. A brief description of the models used in the program and full details of the various control cards necessary to run TIRION 4 are given. (author)

  2. Statistical methods in nuclear theory

    International Nuclear Information System (INIS)

    Shubin, Yu.N.

    1974-01-01

    The paper outlines statistical methods which are widely used for describing properties of excited states of nuclei and nuclear reactions. It discusses the physical assumptions lying at the basis of the known distributions of spacings between levels (Wigner, Poisson distributions) and of the widths of highly excited states (Porter-Thomas distribution), as well as assumptions used in the statistical theory of nuclear reactions and in fluctuation analysis. The author considers the random matrix method, which consists in replacing the matrix elements of a residual interaction by random variables with a simple statistical distribution. Experimental data are compared with the results of calculations using the statistical model. The superfluid nucleus model is considered with regard to superconducting-type pair correlations.

  3. Summary Statistics for Homemade "Play Dough" -- Data Acquired at LLNL

    Energy Technology Data Exchange (ETDEWEB)

    Kallman, J S; Morales, K E; Whipple, R E; Huber, R D; Martz, A; Brown, W D; Smith, J A; Schneberk, D J; Martz, Jr., H E; White, III, W T

    2010-03-11

    Using x-ray computerized tomography (CT), we have characterized the x-ray linear attenuation coefficients (LAC) of a homemade Play Dough™-like material, designated as PDA. Table 1 gives the first-order statistics for each of four CT measurements, estimated with a Gaussian kernel density estimator (KDE) analysis. The mean values of the LAC range from a high of about 2700 LMHU_D at 100 kVp to a low of about 1200 LMHU_D at 300 kVp. The standard deviation of each measurement is around 10% to 15% of the mean. The entropy covers the range from 6.0 to 7.4. Ordinarily, we would model the LAC of the material and compare the modeled values to the measured values. In this case, however, we did not have the detailed chemical composition of the material and therefore did not model the LAC. Using a method recently proposed by Lawrence Livermore National Laboratory (LLNL), we estimate the value of the effective atomic number, Z_eff, to be near 10. LLNL prepared about 50 mL of the homemade 'Play Dough' in a polypropylene vial and firmly compressed it immediately prior to the x-ray measurements. We used the computer program IMGREC to reconstruct the CT images. The values of the key parameters used in the data capture and image reconstruction are given in this report. Additional details may be found in the experimental SOP and a separate document. To characterize the statistical distribution of LAC values in each CT image, we first isolated an 80% central-core segment of volume elements ('voxels') lying completely within the specimen, away from the walls of the polypropylene vial. All of the voxels within this central core, including those comprised of voids and inclusions, are included in the statistics. We then calculated the mean value, standard deviation and entropy for (a) the four image segments and for (b) their digital gradient images. (A digital gradient image of a given image was obtained by taking the absolute value of the difference
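    The first-order statistics quoted above (mean, standard deviation, entropy from a Gaussian KDE) can be sketched as follows. The LAC values here are synthetic stand-ins for image voxel data, and the Silverman bandwidth rule and bit-based entropy are assumptions of this sketch, not the report's exact recipe:

```python
import numpy as np

rng = np.random.default_rng(1)
lac = rng.normal(2700, 350, 2000)        # synthetic LAC samples for one image

# Gaussian KDE with Silverman's rule-of-thumb bandwidth (an assumption here)
h = 1.06 * lac.std() * lac.size ** (-1 / 5)
grid = np.linspace(lac.min() - 3 * h, lac.max() + 3 * h, 500)
dx = grid[1] - grid[0]
kde = np.exp(-0.5 * ((grid[:, None] - lac[None, :]) / h) ** 2).sum(axis=1)
kde /= kde.sum() * dx                    # normalize to a probability density

mean, std = lac.mean(), lac.std(ddof=1)
p = kde * dx                             # probability mass per grid cell
entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))   # Shannon entropy in bits
print(round(mean, 1), round(std, 1), round(entropy, 2))
```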

  4. Calculating Cumulative Binomial-Distribution Probabilities

    Science.gov (United States)

    Scheuer, Ernest M.; Bowerman, Paul N.

    1989-01-01

    Cumulative-binomial computer program, CUMBIN, one of set of three programs, calculates cumulative binomial probability distributions for arbitrary inputs. CUMBIN, NEWTONP (NPO-17556), and CROSSER (NPO-17557), used independently of one another. Reliabilities and availabilities of k-out-of-n systems analyzed. Used by statisticians and users of statistical procedures, test planners, designers, and numerical analysts. Used for calculations of reliability and availability. Program written in C.
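    A CUMBIN-style cumulative binomial calculation for k-out-of-n reliability takes only a few lines; the sketch below illustrates the underlying formula and is not NASA's C implementation:

```python
from math import comb

def cum_binom(n, p, k):
    """P(X >= k) for X ~ Binomial(n, p): chance at least k of n units work."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Reliability of a 2-out-of-3 system whose units each work with p = 0.9:
# 3 * 0.9**2 * 0.1 + 0.9**3 = 0.972
print(round(cum_binom(3, 0.9, 2), 4))
```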

  5. Mathematical programmes for calculator type PTK 1072 to calculate the radioactive contamination in foods

    International Nuclear Information System (INIS)

    Varga, E.; Visi, Gy.

    1982-01-01

    Mathematical programmes are given for the calculator type PTK 1072 (Hungarian made) to simplify the lengthy calculations used in laboratory examinations for the control of radioactive materials in food. The basic considerations in writing a programme, the method, the mathematical formulae, and the variants of calculation and program control are shown by examples. Writing programmes for calculators of other types can also be facilitated by adapting the basic considerations. (author)

  6. Proteomic patterns analysis with multivariate calculations as a promising tool for prompt differentiation of early stage lung tissue with cancer and unchanged tissue material

    Directory of Open Access Journals (Sweden)

    Grodzki Tomasz

    2011-03-01

    Full Text Available Abstract Background Lung cancer diagnosis in tissue material with commonly used histological techniques is sometimes inconvenient and in a number of cases leads to ambiguous conclusions. Frequently, advanced immunostaining techniques have to be employed, yet they are both time consuming and limited. In this study a proteomic approach is presented which may help provide unambiguous pathologic diagnosis of tissue material. Methods Lung tissue material found to be pathologically changed was prepared to isolate the proteome with a fast and non-selective procedure. Isolated peptides and proteins in the range from 3.5 to 20 kDa were analysed directly using a high resolution mass spectrometer (MALDI-TOF/TOF) with sinapic acid as a matrix. The recorded complex spectra of a single run were then analyzed with multivariate statistical analysis algorithms (principal component analysis, classification methods). In the applied protocol we focused on obtaining the spectra richest in protein signals, constituting a pattern of change within the sample containing detailed information about its protein composition. Advanced statistical methods were used to indicate differences between the examined groups. Results The obtained results indicate changes in the proteome profiles of changed tissues in comparison to physiologically unchanged material (control group), which were reflected in the result of principal component analysis (PCA). Points representing spectra of the control group were located in different areas of multidimensional space and were less diffused in comparison to cancer tissues. Three different classification algorithms showed a recognition capability of 100% regarding classification of the examined material into an appropriate group. Conclusion The application of the presented protocol and method enabled finding pathological changes in tissue material regardless of localization and size of abnormalities in the sample volume. Proteomic profile as a complex, rich in signals spectrum of proteins

  7. Statistically sound evaluation of trace element depth profiles by ion beam analysis

    International Nuclear Information System (INIS)

    Schmid, K.; Toussaint, U. von

    2012-01-01

    This paper presents the underlying physics and statistical models that are used in the newly developed program NRADC for fully automated deconvolution of trace-level impurity depth profiles from ion beam data. The program applies Bayesian statistics to find the most probable depth profile given ion beam data measured at different energies and angles for a single sample. Limiting the analysis to %-level amounts of material allows one to linearize the forward calculation of ion beam data, which greatly improves the computation speed. This allows, for the first time, the maximum likelihood approach to be applied to both the fitting of the experimental data and the determination of confidence intervals of the depth profiles for real-world applications. The different steps during the automated deconvolution are exemplified by applying the program to artificial and real experimental data.

  8. Reasoning with data an introduction to traditional and Bayesian statistics using R

    CERN Document Server

    Stanton, Jeffrey M

    2017-01-01

    Engaging and accessible, this book teaches readers how to use inferential statistical thinking to check their assumptions, assess evidence about their beliefs, and avoid overinterpreting results that may look more promising than they really are. It provides step-by-step guidance for using both classical (frequentist) and Bayesian approaches to inference. Statistical techniques covered side by side from both frequentist and Bayesian approaches include hypothesis testing, replication, analysis of variance, calculation of effect sizes, regression, time series analysis, and more. Students also get a complete introduction to the open-source R programming language and its key packages. Throughout the text, simple commands in R demonstrate essential data analysis skills using real-data examples. The companion website provides annotated R code for the book's examples, in-class exercises, supplemental reading lists, and links to online videos, interactive materials, and other resources.

  9. A model development for thermohydraulic calculation of natural convection in MTR (Materials Testing Reactors)

    International Nuclear Information System (INIS)

    Abbate, P.

    1990-01-01

    The CONVEC program, developed for thermohydraulic calculation under a natural convection regime for MTR type reactors, is presented. The program is based on a stationary, one-dimensional finite-difference model that allows calculation of the temperatures of coolant, cladding and fuel, as well as the flow, for a power level specified by the user. This model has been satisfactorily validated with water (liquid phase) and air cooling systems. (Author) [es

  10. Non-Poissonian photon statistics from macroscopic photon cutting materials.

    Science.gov (United States)

    de Jong, Mathijs; Meijerink, Andries; Rabouw, Freddy T

    2017-05-24

    In optical materials energy is usually extracted only from the lowest excited state, resulting in fundamental energy-efficiency limits such as the Shockley-Queisser limit for single-junction solar cells. Photon-cutting materials provide a way around such limits by absorbing high-energy photons and 'cutting' them into multiple low-energy excitations that can subsequently be extracted. The occurrence of photon cutting or quantum cutting has been demonstrated in a variety of materials, including semiconductor quantum dots, lanthanides and organic dyes. Here we show that photon cutting results in bunched photon emission on the timescale of the excited-state lifetime, even when observing a macroscopic number of optical centres. Our theoretical derivation matches well with experimental data on NaLaF₄:Pr³⁺, a material that can cut deep-ultraviolet photons into two visible photons. This signature of photon cutting can be used to identify and characterize new photon-cutting materials unambiguously.

  11. DIFFERENTIAL EQUATION SIMULATION IN CALCULATION OF LATERAL AND TRANSVERSE-LONGITUDINAL BENDING OF FRAME STRUCTURES WITHOUT AND WITH DUE ACCOUNT OF VISCOELASTIC MATERIAL PROPERTIES

    Directory of Open Access Journals (Sweden)

    V. M. Ovsianko

    2012-01-01

    Full Text Available The paper reveals a brand-new direction in the simulation of frame and continual structures for calculating static and dynamic loads and stability. An electronic model has been synthesized for the investigated object and then analyzed, not with the help of specialized analog computing techniques, but by means of a high-performance software package for electronic circuit calculation using a personal computer. The paper contains exact algebraic equations corresponding to the differential equations for lateral bending calculation of frame structures, without and with due account of viscoelastic material properties in compliance with the Kelvin model. The exact algebraic equation for a beam on elastic supports (or an elastic Winkler foundation) has been derived from the quartic differential equation. The paper presents a number of exact algebraic equations which are equivalent to the differential equations for transverse-longitudinal bending calculation of frame structures, without and with due account of viscoelastic material properties, when lateral and longitudinal loads are applied in the form of impulses of any duration and in any order.

  12. Statistical theory of breakup reactions

    Energy Technology Data Exchange (ETDEWEB)

    Bertulani, Carlos A., E-mail: carlos.bertulani@tamuc.edu [Department of Physics and Astronomy, Texas A and M University-Commerce, Commerce, TX (United States); Descouvemont, Pierre, E-mail: pdesc@ulb.ac.be [Physique Nucleaire Theorique et Physique Mathematique, Universite Libre de Bruxelles (ULB), Brussels (Belgium); Hussein, Mahir S., E-mail: hussein@if.usp.br [Universidade de Sao Paulo (USP), Sao Paulo, SP (Brazil). Instituto de Estudos Avancados

    2014-07-01

    We propose an alternative for Coupled-Channels calculations with loosely bound exotic nuclei (CDCC), based on the Random Matrix Model of the statistical theory of nuclear reactions. The coupled channels equations are divided into two sets. The first set is described by the CDCC, and the other set is treated with RMT. The resulting theory is a Statistical CDCC (CDCC_s), able in principle to take into account many pseudo channels. (author)

  13. Multiple-shock initiation via statistical crack mechanics

    Energy Technology Data Exchange (ETDEWEB)

    Dienes, J.K.; Kershner, J.D.

    1998-12-31

    Statistical Crack Mechanics (SCRAM) is a theoretical approach to the behavior of brittle materials that accounts for the behavior of an ensemble of microcracks, including their opening, shear, growth, and coalescence. Mechanical parameters are based on measured strain-softening behavior. In applications to explosive and propellant sensitivity it is assumed that closed cracks act as hot spots, and that the heating due to interfacial friction initiates reactions which are modeled as one-dimensional heat flow with an Arrhenius source term, and computed in a subscale grid. Post-ignition behavior of hot spots is treated with the burn model of Ward, Son and Brewster. Numerical calculations using SCRAM-HYDROX are compared with the multiple-shock experiments of Mulford et al. in which the particle velocity in PBX 9501 is measured with embedded wires, and reactions are initiated and quenched.

  14. Statistical assessment of numerous Monte Carlo tallies

    International Nuclear Information System (INIS)

    Kiedrowski, Brian C.; Solomon, Clell J.

    2011-01-01

    Four tests are developed to assess the statistical reliability of collections of tallies that number in thousands or greater. To this end, the relative-variance density function is developed and its moments are studied using simplified, non-transport models. The statistical tests are performed upon the results of MCNP calculations of three different transport test problems and appear to show that the tests are appropriate indicators of global statistical quality. (author)

  15. Evaluation of forensic DNA mixture evidence: protocol for evaluation, interpretation, and statistical calculations using the combined probability of inclusion.

    Science.gov (United States)

    Bieber, Frederick R; Buckleton, John S; Budowle, Bruce; Butler, John M; Coble, Michael D

    2016-08-31

    The evaluation and interpretation of forensic DNA mixture evidence faces greater interpretational challenges due to increasingly complex mixture evidence. Such challenges include: casework involving low quantity or degraded evidence leading to allele and locus dropout; allele sharing of contributors leading to allele stacking; and differentiation of PCR stutter artifacts from true alleles. There is variation in the statistical approaches used to evaluate the strength of the evidence when inclusion of a specific known individual(s) is determined, and the approaches used must be supportable. There are concerns that methods utilized for interpretation of complex forensic DNA mixtures may not be implemented properly in some casework. Similar questions are being raised in a number of U.S. jurisdictions, leading to some confusion about mixture interpretation for current and previous casework. Key elements necessary for the interpretation and statistical evaluation of forensic DNA mixtures are described. The most common method for statistical evaluation of DNA mixtures in many parts of the world, including the USA, is the Combined Probability of Inclusion/Exclusion (CPI/CPE); exposition and elucidation of this method and a protocol for its use are the focus of this article. Formulae and other supporting materials are provided. Guidance and details of a DNA mixture interpretation protocol are provided for application of the CPI/CPE method in the analysis of more complex forensic DNA mixtures. This description, in turn, should help reduce the variability of interpretation with application of this methodology and thereby improve the quality of DNA mixture interpretation throughout the forensic community.
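The CPI arithmetic itself is compact: at each locus, sum the population frequencies of every allele observed in the mixture, square that sum, and multiply across loci; the CPE follows as the complement. A sketch with hypothetical allele frequencies (not casework values, and omitting the dropout/stutter caveats the article is about):

```python
def cpi(locus_allele_freqs):
    """Combined Probability of Inclusion: product over loci of p**2,
    where p is the summed frequency of all alleles seen at that locus."""
    result = 1.0
    for freqs in locus_allele_freqs:
        p = sum(freqs)          # total frequency of the included alleles
        result *= p * p         # chance a random person is included here
    return result

def cpe(locus_allele_freqs):
    """Combined Probability of Exclusion: chance a random person is
    excluded at one or more loci."""
    return 1.0 - cpi(locus_allele_freqs)

# hypothetical two-locus mixture
mixture = [[0.10, 0.20], [0.05, 0.25]]
```

The protocol in the article concerns when this computation is supportable (e.g. no locus dropout), not the arithmetic, which is as simple as shown.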

  16. OsB2 and RuB2, ultra-incompressible, hard materials: First-principles electronic structure calculations

    Science.gov (United States)

    Chiodo, S.; Gotsis, H. J.; Russo, N.; Sicilia, E.

    2006-07-01

    Recently it has been reported that osmium diboride has an unusually large bulk modulus combined with high hardness, and consequently is a most interesting candidate as an ultra-incompressible and hard material. The electronic and structural properties of the transition metal diborides OsB2 and RuB2 have been calculated within the local density approximation (LDA). It is shown that the high hardness is the result of covalent bonding between transition metal d states and boron p states in the orthorhombic structure.

  17. Quantifying scenarios to check statistical procedures

    International Nuclear Information System (INIS)

    Beetle, T.M.

    1976-01-01

    Ways of diverting nuclear material are presented in a form that reflects the effects of the diversions on a select set of statistical accounting procedures. Twelve statistics are examined for changes in mean values under sixty diversion scenarios. Several questions about the statistics are answered using a table of quantification results. Findings include a smallest proper subset of the set of statistics which has one or more changed mean values under each of the diversion scenarios

  18. Time Series Analysis Based on Running Mann Whitney Z Statistics

    Science.gov (United States)

    A sensitive and objective time series analysis method based on the calculation of Mann Whitney U statistics is described. This method samples data rankings over moving time windows, converts those samples to Mann-Whitney U statistics, and then normalizes the U statistics to Z statistics using Monte-...
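Although the abstract is truncated, the computation it describes can be sketched: compare the two halves of each moving window with the Mann-Whitney U statistic and normalize to Z with the standard normal approximation. The sketch below is an illustrative reconstruction (no tie correction, function names invented), not the authors' implementation:

```python
import math

def mann_whitney_z(x, y):
    """Mann-Whitney U via pairwise comparisons, normalized to Z with the
    normal approximation E[U] = n1*n2/2, Var[U] = n1*n2*(n1+n2+1)/12."""
    n1, n2 = len(x), len(y)
    u = sum((xi > yj) + 0.5 * (xi == yj) for xi in x for yj in y)
    mean_u = n1 * n2 / 2.0
    sd_u = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    return (u - mean_u) / sd_u

def running_z(series, window):
    """Z statistic comparing the first and second half of each moving
    window of the time series."""
    half = window // 2
    return [mann_whitney_z(series[i:i + half], series[i + half:i + window])
            for i in range(len(series) - window + 1)]
```

A strongly negative Z in a window signals an upward step between the two half-windows (the second half ranks higher), which is what makes the running statistic a sensitive change detector.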

  19. A graphical user interface (GUI) toolkit for the calculation of three-dimensional (3D) multi-phase biological effective dose (BED) distributions including statistical analyses.

    Science.gov (United States)

    Kauweloa, Kevin I; Gutierrez, Alonso N; Stathakis, Sotirios; Papanikolaou, Niko; Mavroidis, Panayiotis

    2016-07-01

    A toolkit has been developed for calculating the 3-dimensional biological effective dose (BED) distributions in multi-phase, external beam radiotherapy treatments such as those applied in liver stereotactic body radiation therapy (SBRT) and in multi-prescription treatments. This toolkit also provides a wide range of statistical results related to dose and BED distributions. MATLAB 2010a, version 7.10 was used to create this GUI toolkit. The input data consist of the dose distribution matrices, organ contour coordinates, and treatment planning parameters from the treatment planning system (TPS). The toolkit has the capability of calculating the multi-phase BED distributions using different formulas (denoted as true and approximate). Following the calculations of the BED distributions, the dose and BED distributions can be viewed in different projections (e.g. coronal, sagittal and transverse). The different elements of this toolkit are presented and the important steps for the execution of its calculations are illustrated. The toolkit is applied on brain, head & neck and prostate cancer patients, who received primary and boost phases in order to demonstrate its capability in calculating BED distributions, as well as measuring the inaccuracy and imprecision of the approximate BED distributions. Finally, the clinical situations in which the use of the present toolkit would have a significant clinical impact are indicated. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
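The per-voxel arithmetic behind such a toolkit is compact. A sketch of the linear-quadratic BED for one voxel and the simple phase-summation variant (the toolkit's "true" multi-phase formula is more involved; the function names and the α/β value below are illustrative):

```python
def bed_voxel(total_dose, n_fractions, alpha_beta):
    """LQ biological effective dose for one voxel:
    BED = D * (1 + d / (alpha/beta)), with dose per fraction d = D / n."""
    d = total_dose / n_fractions
    return total_dose * (1.0 + d / alpha_beta)

def multiphase_bed(phase_doses, phase_fractions, alpha_beta):
    """Approximate multi-phase BED: sum the per-phase BEDs in a voxel."""
    return sum(bed_voxel(D, n, alpha_beta)
               for D, n in zip(phase_doses, phase_fractions))
```

For example, a 50 Gy / 25 fraction primary phase plus a 10 Gy / 5 fraction boost at α/β = 10 Gy sums to BED = 60 + 12 = 72 Gy in a voxel receiving the prescription in both phases.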

  20. Efficient Finite Element Calculation of Nγ

    DEFF Research Database (Denmark)

    Clausen, Johan; Damkilde, Lars; Krabbenhøft, K.

    2007-01-01

    This paper deals with the computational aspects of the Mohr-Coulomb material model, in particular the calculation of the bearing capacity factor Nγ for a strip and a circular footing.
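For context, finite element values of Nγ in studies like this are typically benchmarked against the classical closed-form factors. A sketch of those benchmarks (the Prandtl/Reissner Nq is exact; the Nγ expression is Vesić's widely used approximation, not the paper's FE result):

```python
import math

def bearing_capacity_factors(phi_deg):
    """Strip-footing bearing capacity factors for friction angle phi:
    Nq     = exp(pi*tan(phi)) * tan(45 + phi/2)**2   (Prandtl/Reissner)
    Ngamma = 2*(Nq + 1)*tan(phi)                     (Vesic approximation)"""
    phi = math.radians(phi_deg)
    nq = math.exp(math.pi * math.tan(phi)) * math.tan(math.pi / 4 + phi / 2) ** 2
    ngamma = 2.0 * (nq + 1.0) * math.tan(phi)
    return nq, ngamma
```

At φ = 30° this gives Nq ≈ 18.4 and Nγ ≈ 22.4; the spread between such approximations for Nγ is exactly why efficient FE computation of it is of interest.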

  1. Weibull statistical analysis of Krouse type bending fatigue of nuclear materials

    Energy Technology Data Exchange (ETDEWEB)

    Haidyrah, Ahmed S., E-mail: ashdz2@mst.edu [Nuclear Engineering, Missouri University of Science & Technology, 301 W. 14th, Rolla, MO 65409 (United States); Nuclear Science Research Institute, King Abdulaziz City for Science and Technology (KACST), P.O. Box 6086, Riyadh 11442 (Saudi Arabia); Newkirk, Joseph W. [Materials Science & Engineering, Missouri University of Science & Technology, 1440 N. Bishop Ave, Rolla, MO 65409 (United States); Castaño, Carlos H. [Nuclear Engineering, Missouri University of Science & Technology, 301 W. 14th, Rolla, MO 65409 (United States)

    2016-03-15

    A bending fatigue mini-specimen (Krouse-type) was used to study the fatigue properties of nuclear materials. The objective of this paper is to study fatigue of Grade 91 ferritic-martensitic steel using a mini-specimen (Krouse-type) suitable for reactor irradiation studies. These mini-specimens are similar in design (but smaller) to those described in the ASTM B593 standard. The mini-specimen was machined by waterjet and tested as-received. The bending fatigue machine was modified to test the mini-specimen with a specially designed adapter. The cyclic bending fatigue behavior of Grade 91 was studied under constant deflection. The S–N curve was created and the mean fatigue life was analyzed. In this study, the probability of failure was predicted with the Weibull function from high stress to low stress at 563, 310 and 265 MPa. The commercial software Minitab 17 was used to calculate the distribution of fatigue life under different stress levels. Both 2- and 3-parameter Weibull analyses were used to determine the probability of failure. The plots indicated that the 3-parameter Weibull distribution fits the data well.
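The 2-parameter Weibull step described above (fit a shape and scale to the fatigue lives at one stress level, then read off a probability of failure) can be sketched in a few lines with median ranks. This mirrors the kind of analysis a package like Minitab performs, but it is an illustrative reimplementation, not the authors' procedure:

```python
import math

def weibull_fit(lives):
    """Least-squares fit of a 2-parameter Weibull on the linearized CDF
    ln(-ln(1-F)) = beta*ln(N) - beta*ln(eta), with Bernard's median
    ranks F_i = (i - 0.3)/(n + 0.4).  Returns (shape beta, scale eta)."""
    n = len(lives)
    xs, ys = [], []
    for i, life in enumerate(sorted(lives), start=1):
        f = (i - 0.3) / (n + 0.4)
        xs.append(math.log(life))
        ys.append(math.log(-math.log(1.0 - f)))
    mx, my = sum(xs) / n, sum(ys) / n
    beta = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))
    eta = math.exp(mx - my / beta)
    return beta, eta

def failure_probability(n_cycles, beta, eta):
    """Weibull CDF: probability of failure by n_cycles."""
    return 1.0 - math.exp(-((n_cycles / eta) ** beta))
```

By construction, failure_probability(eta, beta, eta) = 1 - e^-1 ≈ 0.632: the scale parameter is the characteristic life at which about 63% of specimens have failed.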

  2. Weibull statistical analysis of Krouse type bending fatigue of nuclear materials

    International Nuclear Information System (INIS)

    Haidyrah, Ahmed S.; Newkirk, Joseph W.; Castaño, Carlos H.

    2016-01-01

    A bending fatigue mini-specimen (Krouse-type) was used to study the fatigue properties of nuclear materials. The objective of this paper is to study fatigue of Grade 91 ferritic-martensitic steel using a mini-specimen (Krouse-type) suitable for reactor irradiation studies. These mini-specimens are similar in design (but smaller) to those described in the ASTM B593 standard. The mini-specimen was machined by waterjet and tested as-received. The bending fatigue machine was modified to test the mini-specimen with a specially designed adapter. The cyclic bending fatigue behavior of Grade 91 was studied under constant deflection. The S–N curve was created and the mean fatigue life was analyzed. In this study, the probability of failure was predicted with the Weibull function from high stress to low stress at 563, 310 and 265 MPa. The commercial software Minitab 17 was used to calculate the distribution of fatigue life under different stress levels. Both 2- and 3-parameter Weibull analyses were used to determine the probability of failure. The plots indicated that the 3-parameter Weibull distribution fits the data well.

  3. Fuel rod design by statistical methods for MOX fuel

    International Nuclear Information System (INIS)

    Heins, L.; Landskron, H.

    2000-01-01

    Statistical methods in fuel rod design have received increasing attention in recent years. One possible way to use statistical methods in fuel rod design can be described as follows: Monte Carlo calculations are performed using the fuel rod code CARO. For each run with CARO, the set of input data is modified: parameters describing the design of the fuel rod (geometrical data, density, etc.) and modeling parameters are randomly selected according to their individual distributions. Power histories are varied systematically so that each power history of the relevant core management calculation is represented in the Monte Carlo calculations with equal frequency. The frequency distributions of the results, such as rod internal pressure and cladding strain, which are generated by the Monte Carlo calculation are evaluated and compared with the design criteria. Up to now, this methodology has been applied to licensing calculations for PWRs and BWRs, UO2 and MOX fuel, in 3 countries. Especially for the insertion of MOX fuel, which results in power histories with relatively high linear heat generation rates at higher burnup, the statistical methodology is an appropriate approach to demonstrate compliance with licensing requirements. (author)
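The sampling loop described above can be sketched generically. Everything below is illustrative: the surrogate `rod_pressure` stands in for a fuel rod code such as CARO (it is NOT the real model), and the input distributions and design limit are invented for the example:

```python
import random
import statistics

def rod_pressure(gap_mm, density, power_kw_m):
    """Hypothetical surrogate response -- a placeholder for a fuel rod
    code such as CARO, so the sampling loop below is runnable."""
    return 2.0 + 0.5 * power_kw_m + 10.0 * (0.95 - density) + 3.0 * gap_mm

random.seed(1)
samples = []
for _ in range(10000):
    gap = random.gauss(0.08, 0.01)              # assumed as-built scatter
    density = random.gauss(0.95, 0.005)
    power = random.choice([20.0, 25.0, 30.0])   # histories, equal frequency
    samples.append(rod_pressure(gap, density, power))

design_limit = 18.0                              # hypothetical criterion
p95 = statistics.quantiles(samples, n=100)[94]   # 95th percentile
```

The output frequency distribution (here summarized by its 95th percentile) is what gets compared against the design criterion, exactly as the abstract describes for rod internal pressure and cladding strain.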

  4. Theory-inspired development of organic electro-optic materials

    Energy Technology Data Exchange (ETDEWEB)

    Dalton, Larry R., E-mail: dalton@chem.washington.ed [Department of Chemistry, Bagley Hall 202D, Box 351700, University of Washington, Seattle, Washington 98195-1700 (United States); Department of Electrical Engineering, Bagley Hall 202D, Box 351700, University of Washington, Seattle, Washington 98195-1700 (United States)

    2009-11-30

    Real-time, time-dependent density functional theory (RTTDDFT) and pseudo-atomistic Monte Carlo-molecular dynamics (PAMCMD) calculations have been used in a correlated manner to achieve quantitative definition of structure/function relationships necessary for the optimization of electro-optic activity in organic materials. Utilizing theoretical guidance, electro-optic coefficients (at telecommunication wavelengths) have been increased to 500 pm/V while keeping optical loss to less than 2 dB/cm. RTTDDFT affords the advantage of permitting explicit treatment of time-dependent electric fields, both applied fields and internal fields. This modification has permitted the quantitative simulation of the variation of linear and nonlinear optical properties of chromophores and the electro-optic activity of materials with optical frequency and dielectric permittivity. PAMCMD statistical mechanical calculations have proven an effective means of treating the full range of spatially-anisotropic intermolecular electrostatic interactions that play critical roles in defining the degree of noncentrosymmetric order that is achieved by electric field poling of organic electro-optic materials near their glass transition temperatures. New techniques have been developed for the experimental characterization of poling-induced acentric order including a modification of variable angle polarization absorption spectroscopy (VAPAS) permitting a meaningful correlation of theoretical and experimental data related to poling-induced order for a variety of complex organic electro-optic materials.

  5. Theory-inspired development of organic electro-optic materials

    International Nuclear Information System (INIS)

    Dalton, Larry R.

    2009-01-01

    Real-time, time-dependent density functional theory (RTTDDFT) and pseudo-atomistic Monte Carlo-molecular dynamics (PAMCMD) calculations have been used in a correlated manner to achieve quantitative definition of structure/function relationships necessary for the optimization of electro-optic activity in organic materials. Utilizing theoretical guidance, electro-optic coefficients (at telecommunication wavelengths) have been increased to 500 pm/V while keeping optical loss to less than 2 dB/cm. RTTDDFT affords the advantage of permitting explicit treatment of time-dependent electric fields, both applied fields and internal fields. This modification has permitted the quantitative simulation of the variation of linear and nonlinear optical properties of chromophores and the electro-optic activity of materials with optical frequency and dielectric permittivity. PAMCMD statistical mechanical calculations have proven an effective means of treating the full range of spatially-anisotropic intermolecular electrostatic interactions that play critical roles in defining the degree of noncentrosymmetric order that is achieved by electric field poling of organic electro-optic materials near their glass transition temperatures. New techniques have been developed for the experimental characterization of poling-induced acentric order including a modification of variable angle polarization absorption spectroscopy (VAPAS) permitting a meaningful correlation of theoretical and experimental data related to poling-induced order for a variety of complex organic electro-optic materials.

  6. Monte Carlo dose calculations and radiobiological modelling: analysis of the effect of the statistical noise of the dose distribution on the probability of tumour control

    International Nuclear Information System (INIS)

    Buffa, Francesca M.

    2000-01-01

    The aim of this work is to investigate the influence of the statistical fluctuations of Monte Carlo (MC) dose distributions on the dose volume histograms (DVHs) and radiobiological models, in particular the Poisson model for tumour control probability (tcp). The MC matrix is characterized by a mean dose in each scoring voxel, d, and a statistical error on the mean dose, σ_d; whilst the quantities d and σ_d depend on many statistical and physical parameters, here we consider only their dependence on the phantom voxel size and the number of histories from the radiation source. Dose distributions from high-energy photon beams have been analysed. It has been found that the DVH broadens when increasing the statistical noise of the dose distribution, and the tcp calculation systematically underestimates the real tumour control value, defined here as the value of tumour control when the statistical error of the dose distribution tends to zero. When increasing the number of energy deposition events, either by increasing the voxel dimensions or increasing the number of histories from the source, the DVH broadening decreases and tcp converges to the 'correct' value. It is shown that the underestimation of the tcp due to the noise in the dose distribution depends on the degree of heterogeneity of the radiobiological parameters over the population; in particular this error decreases with increasing the biological heterogeneity, whereas it becomes significant in the hypothesis of a radiosensitivity assay for single patients, or for subgroups of patients. It has been found, for example, that when the voxel dimension is changed from a cube with sides of 0.5 cm to a cube with sides of 0.25 cm (with a fixed number of histories of 10^8 from the source), the systematic error in the tcp calculation is about 75% in the homogeneous hypothesis, and it decreases to a minimum value of about 15% in a case of high radiobiological heterogeneity. The possibility of using the error on the
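The mechanism behind the underestimation can be illustrated directly: cell survival exp(-αD) is convex in dose, so zero-mean noise on a voxel dose inflates the expected number of surviving clonogens and depresses the Poisson tcp. A toy sketch (α, clonogen number and noise level are illustrative, and the β term of the LQ model is omitted for brevity):

```python
import math
import random

def poisson_tcp(voxel_doses, clonogens_per_voxel, alpha):
    """Poisson tcp = exp(-sum_i N_i * exp(-alpha * D_i))."""
    survivors = sum(clonogens_per_voxel * math.exp(-alpha * d)
                    for d in voxel_doses)
    return math.exp(-survivors)

random.seed(0)
alpha, n0 = 0.35, 1.0e4
exact_dose = [60.0] * 1000                    # uniform 60 Gy over 1000 voxels
# add MC-like statistical noise, sigma = 2 Gy per voxel
noisy_dose = [d + random.gauss(0.0, 2.0) for d in exact_dose]

tcp_exact = poisson_tcp(exact_dose, n0, alpha)
tcp_noisy = poisson_tcp(noisy_dose, n0, alpha)
```

Because E[exp(-α(D+ε))] = exp(-αD)·exp(α²σ²/2) > exp(-αD) for Gaussian ε, the noisy tcp is systematically below the exact one even though the noise has zero mean.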

  7. Statistical fluctuation phenomenon of early growth fission chain

    International Nuclear Information System (INIS)

    Zheng Chun; Song Lingli

    2008-01-01

    The early growth of the neutron population within a supercritical system of fissile material is of a statistical nature and may depart significantly from the average time-dependent neutron population. The probability of a source neutron sponsoring a persistent fission chain was considered for a supercritical system. Then the probability distribution in time of the neutron population reaching a preset level was deduced, based on the probability P(n,t) of n neutrons at time t. By combining the above two probabilities, the probability that at time t after the system reached critical there were no neutrons in the system was derived. The P(t) of the Godiva neutron excursion at supercritical, and the pre-burst probability of BARS, were calculated with this model and found to agree with the experimental results. (authors)

  8. Jsub(Ic)-testing of A-533 B - statistical evaluation of some different testing techniques

    International Nuclear Information System (INIS)

    Nilsson, F.

    1978-01-01

    The purpose of the present study was to compare statistically some different methods for the evaluation of the fracture toughness of the nuclear reactor material A-533 B. Since linear elastic fracture mechanics is not applicable to this material at the temperature of interest (275 °C), the so-called Jsub(Ic) testing method was employed. Two main difficulties are inherent in this type of testing. The first one is to determine the quantity J as a function of the deflection of the three-point bend specimens used. Three different techniques were used, the first two based on the experimentally observed input of energy to the specimen and the third employing finite element calculations. The second main problem is to determine the point when crack growth begins. For this, two methods were used, a direct electrical method and the indirect R-curve method. A total of forty specimens were tested at two laboratories. No statistically significant differences were found between the results from the two laboratories. The three methods of calculating J yielded somewhat different results, although the discrepancy was small. Also the two methods of determining the growth initiation point yielded consistent results. The R-curve method, however, exhibited a larger uncertainty as measured by the standard deviation. The resulting Jsub(Ic) value also agreed well with earlier presented results. The relative standard deviation was of the order of 25%, which is quite small for this type of experiment. (author)

  9. Statistics Online Computational Resource for Education

    Science.gov (United States)

    Dinov, Ivo D.; Christou, Nicolas

    2009-01-01

    The Statistics Online Computational Resource (http://www.SOCR.ucla.edu) provides one of the largest collections of free Internet-based resources for probability and statistics education. SOCR develops, validates and disseminates two core types of materials--instructional resources and computational libraries. (Contains 2 figures.)

  10. Calculation of multiplication factors regarding criticality aiming at the storage of fissile material

    International Nuclear Information System (INIS)

    Lima Barros, M. de.

    1982-04-01

    The multiplication factors of several systems with low enrichment (3.5% and 3.2% in the isotope 235U), aimed at the storage of fuel for ANGRA I and ANGRA II, were calculated by the Monte Carlo method using the computational code KENO-IV and the Hansen-Roach cross-section library with 16 energy groups. The Monte Carlo method is especially suitable for the calculation of the multiplication factor, because it is one of the most accurate solution models and allows the description of complex three-dimensional systems. Various sensitivity tests of this method have been done in order to establish the most convenient way of working with the KENO-IV code. The criticality safety of the fissile material stores of the 'Fabrica de Elementos Combustiveis' has been analyzed by the Monte Carlo method. (Author) [pt

  11. Semianalytical and Seminumerical Calculations of Optimum Material Distributions

    Energy Technology Data Exchange (ETDEWEB)

    Andersson, Gunnar

    1963-06-15

    Perturbation theory applied to the multigroup diffusion equations gives a general condition for optimum distribution of reactor materials. A certain function of the material densities and the fluxes, here called the W (eight) function, must thus be constant where the variable material density is larger than zero if changes in this density affect only the group constants where the changes occur. The weight function is, however, generally a complicated function and complete solutions have therefore previously been presented only for the special case when constant weight function implies constant thermal flux. It is demonstrated that the condition of constant weight function can be used together with well known methods for numerical solution of the multigroup diffusion equations to obtain optimum material distributions also when the thermal flux varies over the core. Solution of the minimum fuel mass problem for two reflected reactors thus shows that an effective reflector such as D{sub 2}O gives a peak in the optimum fuel distribution at the core-reflector interface, while an ineffective reflector such as a breeder blanket or a steel tank wall 'pushes' the fuel away from the strongly absorbing zone. It is also interesting to compare the effective reflector case with analytically obtained solutions corresponding to flat power density, flat thermal flux and flat fuel density.

  12. IEEE Std 101-1987: IEEE guide for the statistical analysis of thermal life test data

    International Nuclear Information System (INIS)

    Anon.

    1992-01-01

    This revision of IEEE Std 101-1972 describes statistical analyses for data from thermally accelerated aging tests. It explains the basis and use of statistical calculations for an engineer or scientist. Accelerated test procedures usually call for a number of specimens to be aged at each of several temperatures appreciably above normal operating temperatures. High temperatures are chosen to produce specimen failures (according to specified failure criteria) in typically one week to one year. The test objective is to determine the dependence of median life on temperature from the data, and to estimate, by extrapolation, the median life to be expected at service temperature. This guide presents methods for analyzing such data and for comparing test data on different materials
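The extrapolation step has a compact form: the Arrhenius model underlying such thermal life analyses is linear in reciprocal absolute temperature, ln(life) = a + b/T, so median lives measured at the accelerated temperatures are fitted by least squares and the line is evaluated at the service temperature. A sketch with invented data, not an implementation of the guide's full statistical procedure (which also covers confidence limits and comparisons between materials):

```python
import math

def arrhenius_fit(temps_c, median_lives_h):
    """Least-squares fit of ln(life) = a + b/T, with T in kelvin."""
    xs = [1.0 / (t + 273.15) for t in temps_c]
    ys = [math.log(life) for life in median_lives_h]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

def predicted_median_life(temp_c, a, b):
    """Extrapolated median life (hours) at a service temperature."""
    return math.exp(a + b / (temp_c + 273.15))
```

For example, median lives measured at 180, 200 and 220 °C can be fitted and then extrapolated down to a 120 °C service temperature.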

  13. Use of nuclear reaction models in cross section calculations

    International Nuclear Information System (INIS)

    Grimes, S.M.

    1975-03-01

    The design of fusion reactors will require information about a large number of neutron cross sections in the MeV region. Because of the obvious experimental difficulties, it is probable that not all of the cross sections of interest will be measured. Current direct and pre-equilibrium models can be used to calculate non-statistical contributions to neutron cross sections from information available from charged particle reaction studies; these are added to the calculated statistical contribution. Estimates of the reliability of such calculations can be derived from comparisons with the available data. (3 tables, 12 figures) (U.S.)

  14. Calculation of Tajima's D and other neutrality test statistics from low depth next-generation sequencing data

    DEFF Research Database (Denmark)

    Korneliussen, Thorfinn Sand; Moltke, Ida; Albrechtsen, Anders

    2013-01-01

    A number of different statistics are used for detecting natural selection using DNA sequencing data, including statistics that are summaries of the frequency spectrum, such as Tajima's D. These statistics are now often being applied in the analysis of Next Generation Sequencing (NGS) data. However, estimates of frequency spectra from NGS data are strongly affected by low sequencing coverage; the inherent technology-dependent variation in sequencing depth causes systematic differences in the value of the statistic among genomic regions.
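As a reference point for what these frequency-spectrum summaries compute, Tajima's D at full, error-free coverage is a closed-form function of the number of sequences n, the segregating sites S and the mean pairwise diversity π (variance constants from Tajima 1989); the NGS complication the abstract raises is precisely that S and π are no longer directly observable at low depth. A sketch of the classical statistic:

```python
import math

def tajimas_d(n, S, pi):
    """Tajima's D from n sequences, S segregating sites and mean pairwise
    diversity pi, using the standard variance constants."""
    a1 = sum(1.0 / i for i in range(1, n))
    a2 = sum(1.0 / i ** 2 for i in range(1, n))
    b1 = (n + 1.0) / (3.0 * (n - 1.0))
    b2 = 2.0 * (n * n + n + 3.0) / (9.0 * n * (n - 1.0))
    c1 = b1 - 1.0 / a1
    c2 = b2 - (n + 2.0) / (a1 * n) + a2 / (a1 * a1)
    e1 = c1 / a1
    e2 = c2 / (a1 * a1 + a2)
    return (pi - S / a1) / math.sqrt(e1 * S + e2 * S * (S - 1.0))
```

D is zero when π equals Watterson's estimator S/a1, negative under a deficit of pairwise diversity (e.g. after a selective sweep) and positive under an excess, which is why depth-driven biases in π or S shift the statistic systematically.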

  15. Computer calculation of neutron cross sections with Hauser-Feshbach code STAPRE incorporating the hybrid pre-compound emission model

    International Nuclear Information System (INIS)

    Ivascu, M.

    1983-10-01

    Computer codes incorporating advanced nuclear models (optical, statistical and pre-equilibrium decay nuclear reaction models) were used to calculate neutron cross sections needed for fusion reactor technology. The elastic and inelastic scattering, (n,2n), (n,p), (n,n'p), (n,d) and (n,γ) cross sections for the stable molybdenum isotopes Mo-92, 94, 95, 96, 97, 98 and 100, for incident neutron energies from about 100 keV (or threshold) to 20 MeV, were calculated using a consistent set of input parameters. The hydrogen production cross section, which determines the radiation damage in structural materials of fusion reactors, can be simply deduced from the presented results. More elaborate microscopic models of nuclear level density are required for high-accuracy calculations

  16. Statistical mechanics and field theory

    International Nuclear Information System (INIS)

    Samuel, S.A.

    1979-05-01

    Field theory methods are applied to statistical mechanics. Statistical systems are related to fermionic-like field theories through a path integral representation. Considered are the Ising model, the free-fermion model, and close-packed dimer problems on various lattices. Graphical calculational techniques are developed. They are powerful and yield a simple procedure to compute the vacuum expectation value of an arbitrary product of Ising spin variables. From a field theorist's point of view, this is the simplest most logical derivation of the Ising model partition function and correlation functions. This work promises to open a new area of physics research when the methods are used to approximate unsolved problems. By the above methods a new model named the 128 pseudo-free vertex model is solved. Statistical mechanics intuition is applied to field theories. It is shown that certain relativistic field theories are equivalent to classical interacting gases. Using this analogy many results are obtained, particularly for the Sine-Gordon field theory. Quark confinement is considered. Although not a proof of confinement, a logical, esthetic, and simple picture is presented of how confinement works. A key ingredient is the insight gained by using an analog statistical system consisting of a gas of macromolecules. This analogy allows the computation of Wilson loops in the presence of topological vortices and when symmetry breakdown occurs in the topological quantum number. Topological symmetry breakdown calculations are placed on approximately the same level of rigor as instanton calculations. The picture of confinement that emerges is similar to the dual Meissner type advocated by Mandelstam. Before topological symmetry breakdown, QCD has monopoles bound linearly together by three topological strings. Topological symmetry breakdown corresponds to a new phase where these monopoles are liberated. It is these liberated monopoles that confine quarks. 64 references

  17. Neutron spectra calculation in material in order to compute irradiation damage

    International Nuclear Information System (INIS)

    Dupont, C.; Gonnord, J.; Le Dieu de Ville, A.; Nimal, J.C.; Totth, B.

    1982-01-01

    This short presentation covers neutron spectra calculation methods used to compute the damage formation rate in irradiated structures. Three computation schemes are used at the French C.E.A.: (1) three-dimensional calculations using the line-of-sight attenuation method (MERCURE IV code), the removal cross sections being obtained from an adjustment on a one-dimensional transport calculation with the discrete-ordinates code ANISN; (2) two-dimensional calculations using the discrete-ordinates method (DOT 3.5 code), with a 20- to 30-group library obtained by collapsing the 100-group library on fluxes computed by ANISN; (3) three-dimensional calculations using the Monte Carlo method (TRIPOLI system). The cross sections, which originally came from UKNDL 73 and ENDF/B3, are now processed from ENDF/B-IV. (author)

  18. Radiation-damage calculations with NJOY

    International Nuclear Information System (INIS)

    MacFarlane, R.E.; Muir, D.W.; Mann, F.W.

    1983-01-01

    Atomic displacement, gas production, transmutation, and nuclear heating can all be calculated with the NJOY nuclear data processing system using evaluated data in ENDF/B format. Using NJOY helps assure consistency between damage cross sections and those used for transport, and NJOY provides convenient interface formats for linking data to application codes. Unique features of the damage calculation include a simple momentum balance treatment for radiative capture and a new model for (n, particle) reactions based on statistical model calculations. Sample results for iron and nickel are given and compared with the results of other methods

  19. Statistical contribution in the giant multipolar resonance decay in heavy nuclei

    International Nuclear Information System (INIS)

    Teruya, N.

    1986-01-01

    Statistical calculations are made for the decay of the electric monopole giant resonance in 208Pb and the electric dipole giant resonance in 209Bi, using the Hauser-Feshbach formalism. Calculations are done using the experimental energy levels of the corresponding residual nuclei. The particle-vibrator model is used for those experimental levels without spin and parity determination. The influence of different parametrizations of the optical potential on the statistical calculation results is also studied. (L.C.) [pt

  20. Performance of dental impression materials: Benchmarking of materials and techniques by three-dimensional analysis.

    Science.gov (United States)

    Rudolph, Heike; Graf, Michael R S; Kuhn, Katharina; Rupf-Köhler, Stephanie; Eirich, Alfred; Edelmann, Cornelia; Quaas, Sebastian; Luthardt, Ralph G

    2015-01-01

    Among other factors, the precision of dental impressions is an important and determining factor for the fit of dental restorations. The aim of this study was to examine the three-dimensional (3D) precision of gypsum dies made using a range of impression techniques and materials. Ten impressions of a steel canine were fabricated for each of the 24 material-method combinations and poured with type 4 die stone. The dies were optically digitized, aligned to the CAD model of the steel canine, and 3D differences were calculated. The results were statistically analyzed using one-way analysis of variance. Depending on material and impression technique, the mean values had a range between +10.9/-10.0 µm (SD 2.8/2.3) and +16.5/-23.5 µm (SD 11.8/18.8). Qualitative analysis using color-coded graphs showed a characteristic location of deviations for different impression techniques. Three-dimensional analysis provided a comprehensive picture of the achievable precision. Processing aspects and impression technique were of significant influence.
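The one-way analysis of variance used in the study can be sketched in a few lines. The deviation values below are hypothetical, chosen only to echo the reported range of mean 3D deviations; this is a minimal sketch, not the study's data or code.

```python
import statistics as st

def one_way_anova_F(groups):
    """F statistic for a one-way ANOVA over several groups of measurements."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    # between-group and within-group sums of squares
    ss_between = sum(len(g) * (st.mean(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - st.mean(g)) ** 2 for x in g) for g in groups)
    df_b, df_w = k - 1, n - k
    return (ss_between / df_b) / (ss_within / df_w)

# Hypothetical mean 3D deviations (µm) for three impression techniques
techniques = [
    [10.9, 11.2, 10.5, 11.0],
    [16.5, 15.8, 17.1, 16.0],
    [12.3, 12.8, 11.9, 12.5],
]
print(f"F = {one_way_anova_F(techniques):.1f}")
```

A large F relative to the F-distribution critical value for (2, 9) degrees of freedom would indicate a significant technique effect.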

  1. Statistical Hadronization and Holography

    DEFF Research Database (Denmark)

    Bechi, Jacopo

    2009-01-01

    In this paper we consider some issues about the statistical model of hadronization in a holographic approach. We introduce a Rindler-like horizon in the bulk and we understand string breaking as a tunneling event under this horizon. We calculate the hadron spectrum and we get a thermal...

  2. High-Throughput Characterization of Porous Materials Using Graphics Processing Units

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jihan; Martin, Richard L.; Rübel, Oliver; Haranczyk, Maciej; Smit, Berend

    2012-05-08

    We have developed a high-throughput graphics processing units (GPU) code that can characterize a large database of crystalline porous materials. In our algorithm, the GPU is utilized to accelerate energy grid calculations where the grid values represent interactions (i.e., Lennard-Jones + Coulomb potentials) between gas molecules (i.e., CH$_{4}$ and CO$_{2}$) and material's framework atoms. Using a parallel flood fill CPU algorithm, inaccessible regions inside the framework structures are identified and blocked based on their energy profiles. Finally, we compute the Henry coefficients and heats of adsorption through statistical Widom insertion Monte Carlo moves in the domain restricted to the accessible space. The code offers significant speedup over a single core CPU code and allows us to characterize a set of porous materials at least an order of magnitude larger than ones considered in earlier studies. For structures selected from such a prescreening algorithm, full adsorption isotherms can be calculated by conducting multiple grand canonical Monte Carlo simulations concurrently within the GPU.
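The Widom-insertion step described above can be sketched on a toy precomputed energy grid. The grid values, units, and constants here are illustrative assumptions, not the paper's GPU implementation; the Henry coefficient is proportional to the average Boltzmann factor over the accessible volume.

```python
import math
import random

K_B = 0.0019872  # Boltzmann constant, kcal/(mol*K)

def widom_on_grid(energy_grid, accessible, T=298.0, n_insert=50_000, seed=1):
    """Widom test-particle insertions restricted to accessible grid points.

    energy_grid : dict mapping grid-point id -> guest-framework energy (kcal/mol)
    accessible  : iterable of grid-point ids kept by the flood-fill blocking step
    Returns (<exp(-beta*U)>, isosteric-heat estimate in kcal/mol).
    """
    beta = 1.0 / (K_B * T)
    rng = random.Random(seed)
    points = list(accessible)
    boltz_sum = u_boltz_sum = 0.0
    for _ in range(n_insert):
        u = energy_grid[rng.choice(points)]
        w = math.exp(-beta * u)
        boltz_sum += w
        u_boltz_sum += u * w
    avg_boltz = boltz_sum / n_insert
    qst = -(u_boltz_sum / boltz_sum) + K_B * T  # positive for a binding site
    return avg_boltz, qst

# Toy grid: a uniform -5 kcal/mol pocket of 100 accessible points
grid = {i: -5.0 for i in range(100)}
avg, qst = widom_on_grid(grid, accessible=range(100))
print(f"<exp(-beta U)> = {avg:.1f}, Qst ~ {qst:.2f} kcal/mol")
```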

  3. Statistical density of nuclear excited states

    Directory of Open Access Journals (Sweden)

    V. M. Kolomietz

    2015-10-01

    A semi-classical approximation is applied to the calculations of single-particle and statistical level densities in excited nuclei. Landau's conception of quasi-particles with the nucleon effective mass m* < m is used. The approach provides a correct description of the continuum contribution to the level density for realistic finite-depth potentials. It is shown that the continuum states do not affect the thermodynamic calculations significantly for sufficiently small temperatures T ≤ 1 MeV, but strongly reduce the results for the excitation energy at high temperatures. Using the standard Woods-Saxon potential and a nucleon effective mass m* = 0.7m, the A-dependence of the statistical level density parameter K was evaluated in good qualitative agreement with experimental data.
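As an illustration of how a level-density parameter K enters such calculations, here is the standard textbook Fermi-gas level-density formula with a = A/K; this generic form is an assumption for illustration, not the semi-classical approach of the paper.

```python
import math

def fermi_gas_level_density(U, A, K=10.0):
    """Fermi-gas level density (MeV^-1), a simplified textbook form.

    U : excitation energy (MeV); A : mass number;
    a = A/K is the level-density parameter (MeV^-1).
    """
    a = A / K
    return (math.sqrt(math.pi) / 12.0) * math.exp(2.0 * math.sqrt(a * U)) \
        / (a ** 0.25 * U ** 1.25)

# The level density grows steeply with excitation energy:
for U in (5.0, 10.0, 20.0):
    print(f"U = {U:5.1f} MeV  rho = {fermi_gas_level_density(U, A=208):.3e} 1/MeV")
```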

  4. Non-Poissonian photon statistics from macroscopic photon cutting materials

    NARCIS (Netherlands)

    De Jong, Mathijs; Meijerink, A; Rabouw, Freddy T.

    2017-01-01

    In optical materials energy is usually extracted only from the lowest excited state, resulting in fundamental energy-efficiency limits such as the Shockley-Queisser limit for single-junction solar cells. Photon-cutting materials provide a way around such limits by absorbing high-energy photons and

  5. Earthquake statistics and plastic events in soft-glassy materials

    NARCIS (Netherlands)

    Benzi, Roberto; Kumar, Pinaki; Toschi, Federico; Trampert, Jeannot

    2016-01-01

    We propose a new approach for generating synthetic earthquakes based on the physics of soft glasses. The continuum approach produces yield-stress materials based on Lattice–Boltzmann simulations. We show that if the material is stimulated below yield stress, plastic events occur, which have strong

  6. Precalculus teachers' perspectives on using graphing calculators: an example from one curriculum

    Science.gov (United States)

    Karadeniz, Ilyas; Thompson, Denisse R.

    2018-01-01

    Graphing calculators are hand-held technological tools currently used in mathematics classrooms. Teachers' perspectives on using graphing calculators are important in terms of exploring what teachers think about using such technology in advanced mathematics courses, particularly precalculus courses. A descriptive intrinsic case study was conducted to analyse the perspectives of 11 teachers using graphing calculators with potential Computer Algebra System (CAS) capability while teaching Functions, Statistics, and Trigonometry, a precalculus course for 11th-grade students developed by the University of Chicago School Mathematics Project. Data were collected from multiple sources as part of a curriculum evaluation study conducted during the 2007-2008 school year. Although all teachers were using the same curriculum that integrated CAS into the instructional materials, teachers had mixed views about the technology. Graphing calculator features were used much more than CAS features, with many teachers concerned about the use of CAS because of pressures from external assessments. In addition, several teachers found it overwhelming to learn a new technology at the same time they were learning a new curriculum. The results have implications for curriculum developers and others working with teachers to update curriculum and the use of advanced technologies simultaneously.

  7. Mathematical Tools for Discovery of Nanoporous Materials for Energy Applications

    International Nuclear Information System (INIS)

    Haranczyk, M; Martin, R L

    2015-01-01

    Porous materials such as zeolites and metal organic frameworks have been of growing importance as materials for energy-related applications such as CO2 capture, hydrogen and methane storage, and catalysis. The current state-of-the-art molecular simulations allow for accurate in silico prediction of materials' properties but the computational cost of such calculations prohibits their application in the characterisation of very large sets of structures, which would be required to perform brute-force screening. Our work focuses on the development of novel methodologies to efficiently characterize and explore this complex materials space. In particular, we have been developing algorithms and tools for enumeration and characterisation of porous material databases as well as efficient screening approaches. Our methodology represents an ensemble of mathematical methods. We have used Voronoi tessellation-based techniques to enable high-throughput structure characterisation, statistical techniques to perform comparison and screening, and continuous optimisation to design materials. This article outlines our developments in material design

  8. Extrusion product defects: a statistical study

    International Nuclear Information System (INIS)

    Qamar, S.Z.; Arif, A.F.M.; Sheikh, A.K.

    2003-01-01

    In any manufacturing environment, defects resulting in rework or rejection are directly related to product cost and quality, and indirectly linked with process, tooling and product design. An analysis of product defects is therefore integral to any attempt at improving productivity, efficiency and quality. Commercial aluminum extrusion is generally a hot working process and consists of a series of different but integrated operations: billet preheating and sizing, die set and container preheating, billet loading and deformation, product sizing and stretching/roll-correction, age hardening, and painting/anodizing. Product defects can be traced back to problems in billet material and preparation, die and die set design and maintenance, process variable aberrations (ram speed, extrusion pressure, container temperature, etc.), and post-extrusion treatment (age hardening, painting/anodizing, etc.). The current paper statistically analyzes the product defects commonly encountered in a commercial hot aluminum extrusion setup. Real-world rejection data, covering a period of nine years, has been researched and collected from a local structural aluminum extrusion facility. Rejection probabilities have been calculated for all the defects studied. The nine-year rejection data have been statistically analyzed on the basis of (i) an overall breakdown of defects, (ii) year-wise rejection behavior, and (iii) a breakdown of defects in each of three cost centers: press, anodizing, and painting. (author)
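The rejection-probability bookkeeping described above can be sketched as follows; every count, defect name, and total below is hypothetical, standing in for the facility's nine-year data.

```python
from collections import Counter

# Hypothetical nine-year defect tallies (rejected pieces) per cost centre
rejections = {
    "press":     Counter({"die lines": 420, "twist": 130, "blisters": 90}),
    "anodizing": Counter({"burn marks": 75, "colour variation": 60}),
    "painting":  Counter({"orange peel": 55, "runs": 40}),
}
total_produced = 250_000  # hypothetical total pieces over the period

total_rejected = sum(sum(c.values()) for c in rejections.values())
print(f"overall rejection probability: {total_rejected / total_produced:.4f}")
for centre, counts in rejections.items():
    p = sum(counts.values()) / total_produced  # per-cost-centre probability
    worst = counts.most_common(1)[0][0]        # dominant defect in the centre
    print(f"{centre:>9}: p = {p:.4f}, dominant defect: {worst}")
```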

  9. Development of a novel algorithm and production of new nuclear data libraries for the treatment of sequential (x,n) reactions in fusion material activation calculations

    International Nuclear Information System (INIS)

    Cierjacks, S.W.; Oblozinsky, P.; Kelzenberg, S.; Rzehorz, B.

    1993-01-01

    A new algorithm and three major nuclear data libraries were developed for the kinematically complete treatment of sequential (x,n) reactions in fusion material activation calculations. The new libraries include data for virtually all isotopes with Z ≤ 84 (A ≤ 210) and half-lives exceeding 1 day, covering primary neutrons and the light charged particles x = p, d, t, 3He, and α with energies E x < 24 MeV. While production cross sections of charged particles for primary (n,x) reactions can be deduced from the European activation file, the KFKSPEC data file was created for the corresponding normalized charged-particle spectra. The second data file, KFKXN, contains cross sections for secondary (x,n) reactions. The third data file, KFKSTOP, has a complete set of differential ranges for all five aforementioned light charged particles and all elements from hydrogen to uranium. The KFKSPEC and KFKXN libraries are based essentially on nuclear model calculations using the statistical evaporation model superimposed with the pre-equilibrium contribution as implemented in the Lawrence Livermore National Laboratory ALICE code. The KFKSPEC library includes 633 isotopes, of which 55 are in their isomeric states, and contains 63,300 spectra of the (n,x) type with almost 1.5 million data points. The KFKXN library also includes 633 isotopes and contains all (x,n) and partly (x,2n) cross sections for 4431 reactions with ∼ 106,000 data points. The KFKSTOP library is considered complete and has 11,040 data points. 42 refs., 2 figs., 4 tabs

  10. Rational design of organic electro-optic materials

    CERN Document Server

    Dalton, L R

    2003-01-01

    Quantum mechanical calculations are used to optimize the molecular first hyperpolarizability of organic chromophores and statistical mechanical calculations are used to optimize the translation of molecular hyperpolarizability to macroscopic electro-optic activity (to values of greater than 100 pm V⁻¹ at telecommunications wavelengths). Macroscopic material architectures are implemented exploiting new concepts in nanoscale architectural engineering. Multi-chromophore-containing dendrimers and dendronized polymers not only permit optimization of electro-optic activity but also of auxiliary properties including optical loss (both absorption and scattering), thermal and photochemical stability and processability. New reactive ion etching and photolithographic techniques permit the fabrication of three-dimensional optical circuitry and the integration of that circuitry with semiconductor very-large-scale integration electronics and silica fibre optics. Electro-optic devices have been fabricated exploiti...

  11. Uranium tailings reference materials

    International Nuclear Information System (INIS)

    Smith, C.W.; Steger, H.F.; Bowman, W.S.

    1984-01-01

    Samples of uranium tailings from Bancroft and Elliot Lake, Ontario, and from Beaverlodge and Rabbit Lake, Saskatchewan, have been prepared as compositional reference materials at the request of the National Uranium Tailings Research Program. The four samples, UTS-1 to UTS-4, were ground to minus 104 μm, each mixed in one lot and bottled in 200-g units for UTS-1 to UTS-3 and in 100-g units for UTS-4. The materials were tested for homogeneity with respect to uranium by neutron activation analysis and to iron by an acid-decomposition atomic absorption procedure. In a free choice analytical program, 18 laboratories contributed results for one or more of total iron, titanium, aluminum, calcium, barium, uranium, thorium, total sulphur, and sulphate for all four samples, and for nickel and arsenic in UTS-4 only. Based on a statistical analysis of the data, recommended values were assigned to all elements/constituents, except for sulphate in UTS-3 and nickel in UTS-4. The radioactivity of thorium-230, radium-226, lead-210, and polonium-210 in UTS-1 to UTS-4 and of thorium-232, radium-228, and thorium-228 in UTS-1 and UTS-2 was determined in a radioanalytical program composed of eight laboratories. Recommended values for the radioactivities and associated parameters were calculated by a statistical treatment of the results

  12. Comparison of statistical evaluation of criticality calculations for reactors VENUS-F and ALFRED

    Directory of Open Access Journals (Sweden)

    Janczyszyn Jerzy

    2017-01-01

    Limitations of the correct evaluation of keff in Monte Carlo calculations, claimed in the literature, need to be addressed more thoroughly, apart from the nuclear data uncertainty. Respective doubts concern: the proper number of discarded initial cycles, the sufficient number of neutrons in a cycle, and the recognition of and dealing with the keff bias. Calculations were performed to provide more information on these points with the use of the MCB code, solely for fast cores. We present the applied methods and results, such as: calculation results for the stability of the variance, the relation between the standard deviation reported by MCNP and that obtained from the dispersion of multiple independent keff values, and second-order standard deviations obtained from different numbers of grouped results. All results obtained for numbers of discarded initial cycles from 0 to 3000 were analysed, leading to interesting conclusions.
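The grouped-results check mentioned above can be sketched by batching correlated cycle estimates and comparing the batched standard deviation of the mean with the naive one. The AR(1) noise model and all numbers are illustrative assumptions, not MCB or MCNP output; with positively correlated cycles the naive estimate is too small.

```python
import math
import random
import statistics as st

def batched_std_of_mean(samples, batch):
    """Second-order standard deviation of the mean from grouped (batched) results."""
    means = [st.mean(samples[i:i + batch])
             for i in range(0, len(samples) - batch + 1, batch)]
    return st.stdev(means) / math.sqrt(len(means))

rng = random.Random(0)
# Hypothetical correlated k_eff cycle estimates: AR(1) noise around k = 1.0
keff, x = [], 0.0
for _ in range(4000):
    x = 0.8 * x + rng.gauss(0.0, 0.001)
    keff.append(1.0 + x)

naive = st.stdev(keff) / math.sqrt(len(keff))  # treats cycles as independent
print(f"naive sigma: {naive:.2e}")
for b in (10, 50, 200):
    print(f"batch={b:4d}: {batched_std_of_mean(keff, b):.2e}")
```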

  13. Statistical Analysis of Data for Timber Strengths

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Hoffmeyer, P.

    Statistical analyses are performed for material strength parameters from approximately 6700 specimens of structural timber. Non-parametric statistical analyses and fits to the following distributions types have been investigated: Normal, Lognormal, 2 parameter Weibull and 3-parameter Weibull...

  14. Dose-Response Calculator for ArcGIS

    Science.gov (United States)

    Hanser, Steven E.; Aldridge, Cameron L.; Leu, Matthias; Nielsen, Scott E.

    2011-01-01

    The Dose-Response Calculator for ArcGIS is a tool that extends the Environmental Systems Research Institute (ESRI) ArcGIS 10 Desktop application to aid with the visualization of relationships between two raster GIS datasets. A dose-response curve is a line graph commonly used in medical research to examine the effects of different dosage rates of a drug or chemical (for example, carcinogen) on an outcome of interest (for example, cell mutations) (Russell and others, 1982). Dose-response curves have recently been used in ecological studies to examine the influence of an explanatory dose variable (for example, percentage of habitat cover, distance to disturbance) on a predicted response (for example, survival, probability of occurrence, abundance) (Aldridge and others, 2008). These dose curves have been created by calculating the predicted response value from a statistical model at different levels of the explanatory dose variable while holding values of other explanatory variables constant. Curves (plots) developed using the Dose-Response Calculator overcome the need to hold variables constant by using values extracted from the predicted response surface of a spatially explicit statistical model fit in a GIS, which include the variation of all explanatory variables, to visualize the univariate response to the dose variable. Application of the Dose-Response Calculator can be extended beyond the assessment of statistical model predictions and may be used to visualize the relationship between any two raster GIS datasets (see example in tool instructions). This tool generates tabular data for use in further exploration of dose-response relationships and a graph of the dose-response curve.
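The core operation, extracting a univariate dose-response curve from two aligned rasters, can be sketched without ArcGIS: bin the dose raster and average the co-located response values per bin. The binning scheme and the habitat-cover data below are illustrative assumptions, not the tool's implementation.

```python
def dose_response_curve(dose, response, n_bins=5):
    """Bin a 'dose' raster and average the co-located 'response' raster per bin.

    dose, response : flat lists of cell values from two aligned rasters.
    Returns a list of (bin_centre, mean_response) pairs for non-empty bins.
    """
    lo, hi = min(dose), max(dose)
    width = (hi - lo) / n_bins or 1.0
    sums = [0.0] * n_bins
    counts = [0] * n_bins
    for d, r in zip(dose, response):
        i = min(int((d - lo) / width), n_bins - 1)  # clamp max value into last bin
        sums[i] += r
        counts[i] += 1
    return [(lo + (i + 0.5) * width, sums[i] / counts[i])
            for i in range(n_bins) if counts[i]]

# Hypothetical rasters: habitat cover (%) vs. predicted occurrence probability
cover = [5, 12, 24, 33, 47, 55, 61, 78, 84, 95]
occur = [0.05, 0.08, 0.15, 0.22, 0.35, 0.41, 0.50, 0.66, 0.71, 0.80]
for centre, mean_r in dose_response_curve(cover, occur):
    print(f"cover ~ {centre:5.1f}%  mean occurrence = {mean_r:.2f}")
```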

  15. COMPARATIVE STATISTICAL ANALYSIS OF GENOTYPES’ COMBINING

    Directory of Open Access Journals (Sweden)

    V. Z. Stetsyuk

    2015-05-01

    The work describes the creation of a desktop program suite for statistical calculations on a physician's personal computer. Modern methods and tools for the development of information systems used to create the program are described.

  16. Calculated LET-Spectrum of Antiprotons

    DEFF Research Database (Denmark)

    Bassler, Niels

    -LET components resulting from the annihilation. However, the calculations of dose-averaged LET in the entry region may suggest that the RBE of antiprotons in the plateau region could differ significantly from unity. Materials and Methods: Monte Carlo simulations using FLUKA were performed for calculating...

  17. Python Materials Genomics (pymatgen): A robust, open-source python library for materials analysis

    OpenAIRE

    Ong, Shyue Ping; Richards, William Davidson; Jain, Anubhav; Hautier, Geoffroy; Kocher, Michael; Cholia, Shreyas; Gunter, Dan; Chevrier, Vincent L.; Persson, Kristin A.; Ceder, Gerbrand

    2012-01-01

    We present the Python Materials Genomics (pymatgen) library, a robust, open-source Python library for materials analysis. A key enabler in high-throughput computational materials science efforts is a robust set of software tools to perform initial setup for the calculations (e.g., generation of structures and necessary input files) and post-calculation analysis to derive useful material properties from raw calculated data. The pymatgen library aims to meet these needs by (1) defining core Pyt...

  18. Introduction to statistics using interactive MM*Stat elements

    CERN Document Server

    Härdle, Wolfgang Karl; Rönz, Bernd

    2015-01-01

    MM*Stat, together with its enhanced online version with interactive examples, offers a flexible tool that facilitates the teaching of basic statistics. It covers all the topics found in introductory descriptive statistics courses, including simple linear regression and time series analysis, the fundamentals of inferential statistics (probability theory, random sampling and estimation theory), and inferential statistics itself (confidence intervals, testing). MM*Stat is also designed to help students rework class material independently and to promote comprehension with the help of additional examples. Each chapter starts with the necessary theoretical background, which is followed by a variety of examples. The core examples are based on the content of the respective chapter, while the advanced examples, designed to deepen students’ knowledge, also draw on information and material from previous chapters. The enhanced online version helps students grasp the complexity and the practical relevance of statistical...

  19. A statistical procedure for the qualification of indoor dust

    International Nuclear Information System (INIS)

    Scapin, Valdirene O.; Scapin, Marcos A.; Ribeiro, Andreza P.; Sato, Ivone M.

    2009-01-01

    Advances in materials science have contributed to humanity. Notwithstanding, serious environmental and human health problems are often observed. Many researchers worldwide have therefore focused their work on diagnosing, assessing and monitoring several environmental systems. In this work, a statistical procedure (on a 0.05 significance level) that allows one to verify whether indoor dust samples have characteristics of soil/sediment is presented. Dust samples were collected from 69 residences using a domestic vacuum cleaner in four neighborhoods of the Sao Paulo metropolitan region, Brazil, between 2006 and 2008. The samples were sieved in the fractions of 150-75 (C), 75-63 (M) and <63 μm (F). The elemental concentrations were determined by X-ray fluorescence (WDXRF). Afterwards, the indoor sample results (group A) were compared to a group of 109 certified reference materials, which included different kinds of geological matrices, such as clay, sediment, sand and sludge (group B), and to the continental crust values (group C). Initially, the Al/Si ratio was calculated for the groups (A, B, C). The analysis of variance (ANOVA), followed by the Tukey test, was used to find out whether there was a significant difference between the concentration means of the considered groups. According to the statistical tests, group B presented results that are considered different from the others. The interquartile range (IQR) was used to detect outlier values. ANOVA was applied again and the results (p ≥ 0.05) showed equality between the ratio means of the three groups. Accordingly, the results suggest that the indoor dust samples have characteristics of soil/sediment. The statistical procedure may be used as a tool to clarify the information about contaminants in dust samples, since they have characteristics of soil and may be compared with values reported by environmental control organisms. (author)

  20. Representative volume size: A comparison of statistical continuum mechanics and statistical physics

    Energy Technology Data Exchange (ETDEWEB)

    AIDUN,JOHN B.; TRUCANO,TIMOTHY G.; LO,CHI S.; FYE,RICHARD M.

    1999-05-01

    In this combination background and position paper, the authors argue that careful work is needed to develop accurate methods for relating the results of fine-scale numerical simulations of material processes to meaningful values of macroscopic properties for use in constitutive models suitable for finite element solid mechanics simulations. To provide a definite context for this discussion, the problem is couched in terms of the lack of general objective criteria for identifying the size of the representative volume (RV) of a material. The objective of this report is to lay out at least the beginnings of an approach for applying results and methods from statistical physics to develop concepts and tools necessary for determining the RV size, as well as alternatives to RV volume-averaging for situations in which the RV is unmanageably large. The background necessary to understand the pertinent issues and statistical physics concepts is presented.

  1. Static analysis of material testing reactor cores:critical core calculations

    International Nuclear Information System (INIS)

    Nawaz, A. A.; Khan, R. F. H.; Ahmad, N.

    1999-01-01

    A methodology has been described to study the effect of the number of fuel plates per fuel element on critical cores of Material Testing Reactors (MTR). When the number of fuel plates is varied in a fuel element while keeping the fuel loading per fuel element constant, the fuel density in the fuel plates varies. Due to this variation, the water channel width needs to be recalculated. For a given number of fuel plates, the water channel width was determined by optimizing k-infinity using the transport theory lattice code WIMS-D/4. The dimensions of the fuel element and control fuel element were determined using this optimized water channel width. For the calculated dimensions, the critical cores were determined for the given number of fuel plates per fuel element by using the three-dimensional diffusion theory code CITATION. The optimization of the water channel width gives a channel width of 2.1 mm when the number of fuel plates is 23 with 290 g 235U fuel loading, which is the same as in the case of Pakistan Reactor-1 (PARR-1). Although a decrease in the number of fuel plates results in an increase in the optimal water channel width, the thickness of the standard fuel element (SFE) and control fuel element (CFE) decreases, giving rise to compact critical and equilibrium cores. The criticality studies of PARR-1 are in good agreement with the predictions

  2. Radioactive cloud dose calculations

    International Nuclear Information System (INIS)

    Healy, J.W.

    1984-01-01

    Radiological dosage principles, as well as methods for calculating external and internal dose rates, following dispersion and deposition of radioactive materials in the atmosphere are described. Emphasis has been placed on analytical solutions that are appropriate for hand calculations. In addition, the methods for calculating dose rates from ingestion are discussed. A brief description of several computer programs is included for information on radionuclides. There has been no attempt to be comprehensive, and only a sampling of programs has been selected to illustrate the variety available

  3. Frequency Calculation For Loss Coolant Accident In The Nuclear Reactor

    International Nuclear Information System (INIS)

    Sony, DT

    1996-01-01

    The LOCA initiating event is usually treated by engineering judgement, because it is a rare condition. Probability and statistical methods are therefore used to determine the LOCA frequency, which can be estimated from factors such as size, welds, age, learning curve and quality. The LOCA frequency has been calculated for a simplified piping system model, with estimates based in particular on the size and weld factors. From this calculation, the LOCA frequency is 9.82 × 10⁻⁶ per year
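A factor-based frequency estimate of this kind can be sketched as follows; the per-weld rupture rate, weld counts, and size factors below are invented for illustration and are not the paper's values.

```python
# Illustrative (hypothetical) numbers: a per-weld-year rupture rate scaled by
# weld count and a size-dependent factor gives a LOCA initiating frequency.
base_rate_per_weld_year = 1.0e-7  # hypothetical rupture rate for one weld
n_welds = {"small": 60, "medium": 25, "large": 8}          # welds per size class
size_factor = {"small": 1.0, "medium": 0.5, "large": 0.2}  # larger pipes fail less often

loca_frequency = sum(base_rate_per_weld_year * n_welds[s] * size_factor[s]
                     for s in n_welds)
print(f"estimated LOCA frequency: {loca_frequency:.2e} per year")
```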

  4. Power calculations using exact data simulation: A useful tool for genetic study designs

    NARCIS (Netherlands)

    van der Sluis, S.; Dolan, C.V.; Neale, M.C.; Posthuma, D.

    2008-01-01

    Statistical power calculations constitute an essential first step in the planning of scientific studies. If sufficient summary statistics are available, power calculations are in principle straightforward and computationally light. In designs that comprise distinct groups (e.g., MZ & DZ twins),
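A simulation-based power calculation of the general kind described can be sketched as follows; this is a generic two-group comparison under assumed effect size and group sizes, not the paper's exact-data-simulation method.

```python
import math
import random
import statistics as st

def simulated_power(effect, n_per_group, n_sim=2000, seed=7):
    """Estimate the power of a two-sample comparison by repeated simulation."""
    rng = random.Random(seed)
    crit = 1.96  # two-sided alpha = 0.05 critical value (normal approximation)
    hits = 0
    for _ in range(n_sim):
        a = [rng.gauss(0.0, 1.0) for _ in range(n_per_group)]
        b = [rng.gauss(effect, 1.0) for _ in range(n_per_group)]
        se = math.sqrt(st.variance(a) / n_per_group + st.variance(b) / n_per_group)
        if abs((st.mean(b) - st.mean(a)) / se) > crit:
            hits += 1
    return hits / n_sim

print(f"power at d=0.5, n=64/group: {simulated_power(0.5, 64):.2f}")
```

For a standardized effect of 0.5 with 64 subjects per group, the analytic normal-approximation power is about 0.80, so the simulated value should land nearby.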

  5. Isotopic fractionation of NBS oxalic acid and its influence in the calculated age of materials

    International Nuclear Information System (INIS)

    Nehmi, V.A.

    1979-10-01

    The intensity of the isotopic fractionation during the oxidation of NBS oxalic acid to carbon dioxide was checked. Thirty oxidation reactions of NBS oxalic acid with potassium permanganate were carried out. The resulting isotopic composition of the CO2 was determined with a mass spectrometer. It was concluded that the average δ13C is -18.9‰, with variation between -17.7‰ and -21.2‰. For values of δ13C equal to -22.0‰, the age calculated with isotopic correction shows the following deviations relative to the non-corrected age: 4% for materials of 1,000 years and 0.3% for 20,000 years. (Author) [pt
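The correction behind such deviations follows the conventional radiocarbon normalization of sample activity to δ13C = -25‰; the sketch below uses the standard Stuiver-Polach-style formula as an assumption, not code or exact constants from the paper.

```python
import math

MEAN_LIFE = 8033.0  # Libby mean life used in conventional 14C ages (years)

def fractionation_corrected_age(uncorrected_age, delta13c):
    """Correct a conventional 14C age for isotopic fractionation.

    Normalizes sample activity to delta13C = -25 permil; each permil away
    from -25 shifts the age by roughly 2 * 8033 / 1000 ~ 16 years.
    """
    correction = MEAN_LIFE * math.log(1.0 - 2.0 * (delta13c + 25.0) / 1000.0)
    return uncorrected_age - correction

for raw_age in (1000.0, 20000.0):
    corrected = fractionation_corrected_age(raw_age, delta13c=-22.0)
    dev = (corrected - raw_age) / raw_age * 100.0
    print(f"age {raw_age:7.0f} yr -> corrected {corrected:7.0f} yr ({dev:+.1f}%)")
```

The absolute shift (about 48 years for δ13C = -22‰) is fixed, which is why the relative deviation shrinks from a few percent at 1,000 years to a fraction of a percent at 20,000 years, matching the trend reported in the abstract.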

  6. Impact initiation of explosives and propellants via statistical crack mechanics

    Science.gov (United States)

    Dienes, J. K.; Zuo, Q. H.; Kershner, J. D.

    2006-06-01

    A statistical approach has been developed for modeling the dynamic response of brittle materials by superimposing the effects of a myriad of microcracks, including opening, shear, growth and coalescence, taking as a starting point the well-established theory of penny-shaped cracks. This paper discusses the general approach, but in particular an application to the sensitivity of explosives and propellants, which often contain brittle constituents. We examine the hypothesis that the intense heating by frictional sliding between the faces of a closed crack during unstable growth can form a hot spot, causing localized melting, ignition, and fast burn of the reactive material adjacent to the crack. Opening and growth of a closed crack due to the pressure of burned gases inside the crack and interactions of adjacent cracks can lead to violent reaction, with detonation as a possible consequence. This approach was used to model a multiple-shock experiment by Mulford et al. [1993. Initiation of preshocked high explosives PBX-9404, PBX-9502, PBX-9501, monitored with in-material magnetic gauging. In: Proceedings of the 10th International Detonation Symposium, pp. 459-467] involving initiation and subsequent quenching of chemical reactions in a slab of PBX 9501 impacted by a two-material flyer plate. We examine the effects of crack orientation and temperature dependence of viscosity of the melt on the response. Numerical results confirm our theoretical finding [Zuo, Q.H., Dienes, J.K., 2005. On the stability of penny-shaped cracks with friction: the five types of brittle behavior. Int. J. Solids Struct. 42, 1309-1326] that crack orientation has a significant effect on brittle behavior, especially under compressive loading where interfacial friction plays an important role. With a reasonable choice of crack orientation and a temperature-dependent viscosity obtained from molecular dynamics calculations, the calculated particle velocities compare well with those measured using

  7. Physics-based statistical model and simulation method of RF propagation in urban environments

    Science.gov (United States)

    Pao, Hsueh-Yuan; Dvorak, Steven L.

    2010-09-14

    A physics-based statistical model and simulation/modeling method and system of electromagnetic wave propagation (wireless communication) in urban environments. In particular, the model is a computationally efficient closed-form parametric model of RF propagation in an urban environment which is extracted from a physics-based statistical wireless channel simulation method and system. The simulation divides the complex urban environment into a network of interconnected urban canyon waveguides which can be analyzed individually; calculates spectral coefficients of modal fields in the waveguides excited by the propagation using a database of statistical impedance boundary conditions which incorporates the complexity of building walls in the propagation model; determines statistical parameters of the calculated modal fields; and determines a parametric propagation model based on the statistical parameters of the calculated modal fields from which predictions of communications capability may be made.

  8. A new statistical method for transfer coefficient calculations in the framework of the general multiple-compartment model of transport for radionuclides in biological systems.

    Science.gov (United States)

    Garcia, F; Arruda-Neto, J D; Manso, M V; Helene, O M; Vanin, V R; Rodriguez, O; Mesa, J; Likhachev, V P; Filho, J W; Deppman, A; Perez, G; Guzman, F; de Camargo, S P

    1999-10-01

    A new and simple statistical procedure (STATFLUX) for the calculation of transfer coefficients of radionuclide transport to animals and plants is proposed. The method is based on the general multiple-compartment model, which uses a system of linear equations involving geometrical volume considerations. By using experimentally available curves of radionuclide concentrations versus time, for each animal compartment (organs), flow parameters were estimated by employing a least-squares procedure, whose consistency is tested. Some numerical results are presented in order to compare the STATFLUX transfer coefficients with those from other works and experimental data.
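
The least-squares step behind a STATFLUX-style fit can be illustrated with a minimal sketch (hypothetical data and a single first-order compartment, not the STATFLUX implementation): for a donor compartment obeying dC/dt = -k*C, the transfer coefficient k is the negative slope of ln C versus t.

```python
import math

def fit_transfer_coefficient(times, concentrations):
    """Least-squares slope of ln(C) versus t; for first-order loss
    C(t) = C0*exp(-k*t) the transfer coefficient is k = -slope."""
    logs = [math.log(c) for c in concentrations]
    n = len(times)
    tbar = sum(times) / n
    ybar = sum(logs) / n
    sxy = sum((t - tbar) * (y - ybar) for t, y in zip(times, logs))
    sxx = sum((t - tbar) ** 2 for t in times)
    return -sxy / sxx

# Synthetic concentration curve for one organ with k = 0.3 per day
ts = [0.0, 1.0, 2.0, 4.0, 8.0]
cs = [10.0 * math.exp(-0.3 * t) for t in ts]
k = fit_transfer_coefficient(ts, cs)
print(round(k, 3))  # → 0.3
```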

  9. A new statistical method for transfer coefficient calculations in the framework of the general multiple-compartment model of transport for radionuclides in biological systems

    International Nuclear Information System (INIS)

    Garcia, F.; Manso, M.V.; Rodriguez, O.; Mesa, J.; Arruda-Neto, J.D.T.; Helene, O.M.; Vanin, V.R.; Likhachev, V.P.; Pereira Filho, J.W.; Deppman, A.; Perez, G.; Guzman, F.; Camargo, S.P. de

    1999-01-01

    A new and simple statistical procedure (STATFLUX) for the calculation of transfer coefficients of radionuclide transport to animals and plants is proposed. The method is based on the general multiple-compartment model, which uses a system of linear equations involving geometrical volume considerations. By using experimentally available curves of radionuclide concentrations versus time, for each animal compartment (organs), flow parameters were estimated by employing a least-squares procedure, whose consistency is tested. Some numerical results are presented in order to compare the STATFLUX transfer coefficients with those from other works and experimental data. (author)

  10. Advanced nuclear data for radiation-damage calculations

    International Nuclear Information System (INIS)

    MacFarlane, R.E.; Foster, D.G. Jr.

    1983-01-01

    Accurate calculations of atomic displacement damage in materials exposed to neutrons require detailed spectra for primary recoil nuclei. Such data are not available from direct experimental measurements. Moreover, they cannot always be computed accurately starting from evaluated nuclear data libraries such as ENDF/B-V that were developed primarily for neutron transport applications, because these libraries lack detailed energy-and-angle distributions for outgoing charged particles. Fortunately, a new generation of nuclear model codes is now available that can be used to fill in the missing spectra. One example is the preequilibrium statistical-model code GNASH. For heating and damage applications, a supplementary code called RECOIL has been developed. RECOIL uses detailed reaction data from GNASH, together with angular distributions based on Kalbach-Mann systematics to compute the energy and angle distributions of recoil nuclei. The energy-angle distributions for recoil nuclei and outgoing particles are written out in the new ENDF/B File 6 format. The result is a complete set of nuclear data that can be used to calculate displacement-energy production, heat production, gas production, transmutation, and activation. Sample results for iron are given and compared to the results of conventional damage models such as those used in NJOY
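
Displacement-damage production of the kind these recoil spectra feed into is commonly summarized with the NRT model; the sketch below implements that textbook relation (not the RECOIL code), with an assumed displacement threshold E_d = 40 eV.

```python
# NRT displacement model: nu = 0, 1, or 0.8*T_dam/(2*E_d) depending on the
# damage energy relative to the displacement threshold E_d (assumed 40 eV).
def nrt_displacements(t_damage_ev, e_d_ev=40.0):
    if t_damage_ev < e_d_ev:
        return 0.0                              # below threshold: no defect
    if t_damage_ev < 2.0 * e_d_ev / 0.8:
        return 1.0                              # single Frenkel pair
    return 0.8 * t_damage_ev / (2.0 * e_d_ev)   # cascade regime

print(nrt_displacements(10.0), nrt_displacements(50.0), nrt_displacements(10000.0))
# → 0.0 1.0 100.0
```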

  11. Assessing the reliability of calculated catalytic ammonia synthesis rates

    DEFF Research Database (Denmark)

    Medford, Andrew James; Wellendorff, Jess; Vojvodic, Aleksandra

    2014-01-01

    We introduce a general method for estimating the uncertainty in calculated materials properties based on density functional theory calculations. We illustrate the approach for a calculation of the catalytic rate of ammonia synthesis over a range of transition-metal catalysts. The correlation between errors in density functional theory calculations is shown to play an important role in reducing the predicted error on calculated rates. Uncertainties depend strongly on reaction conditions and catalyst material, and the relative rates between different catalysts are considerably better described...
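
One way to realize the correlated-error idea is ensemble propagation: sample an ensemble of barrier errors, push each through the rate expression, and read confidence bounds off the resulting rate distribution. The sketch below assumes a simple Arrhenius-like rate and a synthetic Gaussian ensemble; it is not the authors' code.

```python
import math
import random

def rate(barrier_ev, t_kelvin=700.0, kb_ev=8.617e-5):
    """Arrhenius-like rate factor for a given barrier (assumed form)."""
    return math.exp(-barrier_ev / (kb_ev * t_kelvin))

rng = random.Random(1)
# Synthetic ensemble of barriers: nominal 0.80 eV with 0.1 eV model error.
barriers = [0.80 + rng.gauss(0.0, 0.1) for _ in range(200)]
rates = sorted(rate(b) for b in barriers)
lo, hi = rates[10], rates[189]   # roughly 5th and 95th percentiles
print(lo < rate(0.80) < hi)      # → True
```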

  12. Summary of workshop 'Theory Meets Industry' - the impact of ab initio solid state calculations on industrial materials research

    International Nuclear Information System (INIS)

    Wimmer, E

    2008-01-01

    A workshop, 'Theory Meets Industry', was held on 12-14 June 2007 in Vienna, Austria, attended by a well balanced number of academic and industrial scientists from America, Europe, and Japan. The focus was on advances in ab initio solid state calculations and their practical use in industry. The theoretical papers addressed three dominant themes, namely (i) more accurate total energies and electronic excitations, (ii) more complex systems, and (iii) more diverse and accurate materials properties. Hybrid functionals give some improvements in energies, but encounter difficulties for metallic systems. Quantum Monte Carlo methods are progressing, but no clear breakthrough is on the horizon. Progress in order-N methods is steady, as is the case for efficient methods for exploring complex energy hypersurfaces and large numbers of structural configurations. The industrial applications were dominated by materials issues in energy conversion systems, the quest for hydrogen storage materials, improvements of electronic and optical properties of microelectronic and display materials, and the simulation of reactions on heterogeneous catalysts. The workshop is a clear testimony that ab initio computations have become an industrial practice with increasingly recognized impact

  13. Computational simulation of the creep-rupture process in filamentary composite materials

    Science.gov (United States)

    Slattery, Kerry T.; Hackett, Robert M.

    1991-01-01

    A computational simulation of the internal damage accumulation which causes the creep-rupture phenomenon in filamentary composite materials is developed. The creep-rupture process involves complex interactions between several damage mechanisms. A statistically-based computational simulation using a time-differencing approach is employed to model these progressive interactions. The finite element method is used to calculate the internal stresses. The fibers are modeled as a series of bar elements which are connected transversely by matrix elements. Flaws are distributed randomly throughout the elements in the model. Load is applied, and the properties of the individual elements are updated at the end of each time step as a function of the stress history. The simulation is continued until failure occurs. Several cases, with different initial flaw dispersions, are run to establish a statistical distribution of the time-to-failure. The calculations are performed on a supercomputer. The simulation results compare favorably with the results of creep-rupture experiments conducted at the Lawrence Livermore National Laboratory.
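
The time-differencing idea can be caricatured with a toy fiber-bundle model (a statistically seeded analogue, not the paper's finite element simulation): fibers share the load equally, each fiber's randomly assigned damage budget is consumed at a stress-dependent rate, and failed fibers shed load onto the survivors until the bundle ruptures.

```python
import random

def bundle_lifetime(n_fibers, total_load, dt=0.01, seed=0, power=3.0):
    """Time-step a bundle of fibers with random damage budgets until rupture."""
    rng = random.Random(seed)
    life = [rng.uniform(0.5, 1.5) for _ in range(n_fibers)]  # random flaws
    alive = list(range(n_fibers))
    t = 0.0
    while alive:
        stress = total_load / len(alive)   # equal load sharing among survivors
        damage_rate = stress ** power      # stress-history-dependent damage
        for i in alive:
            life[i] -= damage_rate * dt
        alive = [i for i in alive if life[i] > 0.0]
        t += dt
    return t

# Higher load gives a shorter time-to-failure, as in creep-rupture tests.
t_low = bundle_lifetime(50, total_load=25.0)
t_high = bundle_lifetime(50, total_load=50.0)
print(t_low > t_high > 0.0)  # → True
```

Running the same model with different seeds produces the dispersion of times-to-failure the simulation is designed to characterize.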

  14. Statistics and integral experiments in the verification of LOCA calculations models

    International Nuclear Information System (INIS)

    Margolis, S.G.

    1978-01-01

    The LOCA (loss of coolant accident) is a hypothesized, low-probability accident used as a licensing basis for nuclear power plants. Computer codes which have been under development for at least a decade have been the principal tools used to assess the consequences of the hypothesized LOCA. Models exist in two versions. In EM's (Evaluation Models) the basic engineering calculations are constrained by a detailed set of assumptions spelled out in the Code of Federal Regulations (10 CFR 50, Appendix K). In BE Models (Best Estimate Models) the calculations are based on fundamental physical laws and available empirical correlations. Evaluation models are intended to have a pessimistic bias; Best Estimate Models are intended to be unbiased. Because evaluation models play a key role in reactor licensing, they must be conservative. A long-sought objective has been to assess this conservatism by combining Best Estimate Models with statistically established error bounds, based on experiment. Within the last few years, an extensive international program of LOCA experiments has been established to provide the needed data. This program has already produced millions of measurements of temperature, density, and flow, and millions more are yet to come

  15. Introductory statistics for engineering experimentation

    CERN Document Server

    Nelson, Peter R; Coffin, Marie

    2003-01-01

    The Accreditation Board for Engineering and Technology (ABET) introduced a criterion starting with their 1992-1993 site visits that "Students must demonstrate a knowledge of the application of statistics to engineering problems." Since most engineering curricula are filled with requirements in their own discipline, they generally do not have time for a traditional two semesters of probability and statistics. Attempts to condense that material into a single semester often result in so much time being spent on probability that the statistics useful for designing and analyzing engineering/scientific experiments is never covered. This book was created to satisfy the needs of a one-semester course that introduces engineering/scientific students to the most useful statistical methods. - Provides the statistical design and analysis of engineering experiments & problems - Presents a student-friendly approach through providing statistical models for advanced learning techniques - Cove...

  16. Theory, modeling and instrumentation for materials by design: Proceedings of workshop

    Energy Technology Data Exchange (ETDEWEB)

    Allen, R.E.; Cocke, D.L.; Eberhardt, J.J.; Wilson, A. (eds.)

    1984-01-01

    The following topics are contained in this volume: how can materials theory benefit from supercomputers and vice-versa; the materials of xerography; relationship between ab initio and semiempirical theories of electronic structure and renormalization group and the statistical mechanics of polymer systems; ab initio calculations of materials properties; metals in intimate contact; lateral interaction in adsorption: revelations from phase transitions; quantum model of thermal desorption and laser stimulated desorption; extended fine structure in appearance potential spectroscopy as a probe of solid surfaces; structural aspects of band offsets at heterojunction interfaces; multiconfigurational Green's function approach to quantum chemistry; wavefunctions and charge densities for defects in solids: a success for semiempirical theory; empirical methods for predicting the phase diagrams of intermetallic alloys; theoretical considerations regarding impurities in silicon and the chemisorption of simple molecules on Ni; improved Kohn-Sham exchange potential; structural stability calculations for films and crystals; semiempirical molecular orbital modeling of catalytic reactions including promoter effects; theoretical studies of chemical reactions: hydrolysis of formaldehyde; electronic structure calculations for low coverage adlayers; present status of the many-body problem; atomic scattering as a probe of physical adsorption; and, discussion of theoretical techniques in quantum chemistry and solid state physics.

  17. Comparison of calculated integral values using measured and calculated neutron spectra for fusion neutronics analyses

    International Nuclear Information System (INIS)

    Sekimoto, H.

    1987-01-01

    The kerma heat production density, tritium production density, and dose in a lithium-fluoride pile with a deuterium-tritium neutron source were calculated with a data processing code, UFO, from the pulse height distribution of a miniature NE213 neutron spectrometer, and compared with the values calculated with a Monte Carlo code, MORSE-CV. Both the UFO and MORSE-CV values agreed within the statistical error (less than 6%) of the MORSE-CV calculations, except for the outermost point in the pile. The MORSE-CV values were slightly smaller than the UFO values for almost all cases, and this tendency increased with increasing distance from the neutron source

  18. Social indicators and other income statistics using the EUROMOD baseline: a comparison with Eurostat and National Statistics

    OpenAIRE

    Mantovani, Daniela; Sutherland, Holly

    2003-01-01

    This paper reports an exercise to validate EUROMOD output for 1998 by comparing income statistics calculated from the baseline micro-output with comparable statistics from other sources, including the European Community Household Panel. The main potential reasons for discrepancies are identified. While there are some specific national issues that arise, there are two main general points to consider in interpreting EUROMOD estimates of social indicators across EU member States: (a) the method ...

  19. Statistical approach for collaborative tests, reference material certification procedures

    International Nuclear Information System (INIS)

    Fangmeyer, H.; Haemers, L.; Larisse, J.

    1977-01-01

    The first part introduces the different aspects of organizing and executing intercomparison tests of chemical or physical quantities. This is followed by a description of a statistical procedure for handling the data collected in a circular analysis. Finally, an example demonstrates how the tool can be applied and which conclusions can be drawn from the results obtained
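
One summary step in a round-robin (circular) evaluation can be sketched as follows: per-laboratory means, their grand mean, and a between-laboratory spread. The laboratory names and values are hypothetical, and this is only one ingredient of a full certification procedure.

```python
import statistics

def round_robin_summary(results_by_lab):
    """Per-laboratory means, their grand mean, and a between-lab spread."""
    lab_means = {lab: statistics.fmean(vals)
                 for lab, vals in results_by_lab.items()}
    grand_mean = statistics.fmean(lab_means.values())
    between_sd = statistics.stdev(lab_means.values())
    return grand_mean, between_sd, lab_means

data = {"lab_A": [10.1, 10.2, 9.9],
        "lab_B": [10.5, 10.4],
        "lab_C": [9.8, 9.7, 9.9]}
gm, sd, means = round_robin_summary(data)
print(round(gm, 3), round(sd, 3))  # → 10.106 0.327
```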

  20. Uncertainty analysis with statistically correlated failure data

    International Nuclear Information System (INIS)

    Modarres, M.; Dezfuli, H.; Roush, M.L.

    1987-01-01

    Likelihood of occurrence of the top event of a fault tree or sequences of an event tree is estimated from the failure probability of components that constitute the events of the fault/event tree. Component failure probabilities are subject to statistical uncertainties. In addition, there are cases where the failure data are statistically correlated. At present most fault tree calculations are based on uncorrelated component failure data. This chapter describes a methodology for assessing the probability intervals for the top event failure probability of fault trees or frequency of occurrence of event tree sequences when event failure data are statistically correlated. To estimate mean and variance of the top event, a second-order system moment method is presented through Taylor series expansion, which provides an alternative to the normally used Monte Carlo method. For cases where component failure probabilities are statistically correlated, the Taylor expansion terms are treated properly. Moment matching technique is used to obtain the probability distribution function of the top event through fitting the Johnson S_B distribution. The computer program, CORRELATE, was developed to perform the calculations necessary for the implementation of the method developed. (author)
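
For a two-component AND gate the moment method reduces to closed form; the sketch below (hypothetical numbers, not the CORRELATE program) shows the second-order mean and first-order variance for T = p1*p2 with correlated inputs.

```python
def top_event_moments(m1, v1, m2, v2, cov12):
    """Second-order mean and first-order variance of T = p1*p2."""
    mean = m1 * m2 + cov12                      # cross term of the Taylor series
    var = m2**2 * v1 + m1**2 * v2 + 2.0 * m1 * m2 * cov12
    return mean, var

# Correlation raises both the expected top-event probability and its spread.
m_c, v_c = top_event_moments(1e-3, 1e-8, 2e-3, 4e-8, cov12=1e-8)
m_u, v_u = top_event_moments(1e-3, 1e-8, 2e-3, 4e-8, cov12=0.0)
print(m_c > m_u and v_c > v_u)  # → True
```

Ignoring a positive covariance therefore understates both the mean and the uncertainty of the top event, which is the motivation for treating the correlated terms properly.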

  1. On China's energy intensity statistics: Toward a comprehensive and transparent indicator

    International Nuclear Information System (INIS)

    Wang Xin

    2011-01-01

    A transparent and comprehensive statistical system in China would provide an important basis for enabling a better understanding of the country. This paper focuses on energy intensity (EI), which is one of China's most important indicators. It first reviews China's GDP and energy statistics, showing that China has made great improvements in recent years. The means by which EI data are released and adjusted are then explained. It shows that EI data releases do not provide complete data for calculating EI and constant GDP, which may reduce policy transparency and comprehensiveness. This paper then applies an EI calculation method that is based on official sources and that respects the data availability at different data release times. It finds that, in general, China's EI statistics can be considered as reliable because most of the results generated by the author's calculations match the figures in the official releases. However, two data biases were identified, which may necessitate supplementary information on related constant GDP values used in the official calculation of EI data. The paper concludes by proposing short- and long-term measures for improving EI statistics to provide a transparent and comprehensive EI indicator. - Highlights: → This paper examines the data release and adjustment process of the energy intensity (EI) target of China. → New insights on the comprehensiveness and transparency of EI data. → Potential data bias between the author's calculation and official data due to lack of constant GDP data. → Proposals for improving short- and long-term EI statistical work.
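
The EI arithmetic at issue is simple once constant-price GDP is available; a sketch with illustrative figures (not official Chinese statistics):

```python
def energy_intensity(energy_tce, gdp_constant):
    """EI = energy use per unit of GDP at constant prices."""
    return energy_tce / gdp_constant

def ei_change(ei_prev, ei_curr):
    """Year-on-year relative change in energy intensity."""
    return (ei_curr - ei_prev) / ei_prev

ei_2009 = energy_intensity(306_000, 340_000)  # illustrative tce, constant GDP
ei_2010 = energy_intensity(325_000, 375_000)
print(round(100 * ei_change(ei_2009, ei_2010), 2))  # → -3.7
```

The point of the paper is that without the constant-GDP denominator this calculation cannot be reproduced from the released figures.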

  2. In silico environmental chemical science: properties and processes from statistical and computational modelling

    Energy Technology Data Exchange (ETDEWEB)

    Tratnyek, P. G.; Bylaska, Eric J.; Weber, Eric J.

    2017-01-01

    Quantitative structure–activity relationships (QSARs) have long been used in the environmental sciences. More recently, molecular modeling and chemoinformatic methods have become widespread. These methods have the potential to expand and accelerate advances in environmental chemistry because they complement observational and experimental data with “in silico” results and analysis. The opportunities and challenges that arise at the intersection between statistical and theoretical in silico methods are most apparent in the context of properties that determine the environmental fate and effects of chemical contaminants (degradation rate constants, partition coefficients, toxicities, etc.). The main example of this is the calibration of QSARs using descriptor variable data calculated from molecular modeling, which can make QSARs more useful for predicting property data that are unavailable, but also can make them more powerful tools for diagnosis of fate determining pathways and mechanisms. Emerging opportunities for “in silico environmental chemical science” are to move beyond the calculation of specific chemical properties using statistical models and toward more fully in silico models, prediction of transformation pathways and products, incorporation of environmental factors into model predictions, integration of databases and predictive models into more comprehensive and efficient tools for exposure assessment, and extending the applicability of all the above from chemicals to biologicals and materials.

  3. In silico environmental chemical science: properties and processes from statistical and computational modelling.

    Science.gov (United States)

    Tratnyek, Paul G; Bylaska, Eric J; Weber, Eric J

    2017-03-22

    Quantitative structure-activity relationships (QSARs) have long been used in the environmental sciences. More recently, molecular modeling and chemoinformatic methods have become widespread. These methods have the potential to expand and accelerate advances in environmental chemistry because they complement observational and experimental data with "in silico" results and analysis. The opportunities and challenges that arise at the intersection between statistical and theoretical in silico methods are most apparent in the context of properties that determine the environmental fate and effects of chemical contaminants (degradation rate constants, partition coefficients, toxicities, etc.). The main example of this is the calibration of QSARs using descriptor variable data calculated from molecular modeling, which can make QSARs more useful for predicting property data that are unavailable, but also can make them more powerful tools for diagnosis of fate determining pathways and mechanisms. Emerging opportunities for "in silico environmental chemical science" are to move beyond the calculation of specific chemical properties using statistical models and toward more fully in silico models, prediction of transformation pathways and products, incorporation of environmental factors into model predictions, integration of databases and predictive models into more comprehensive and efficient tools for exposure assessment, and extending the applicability of all the above from chemicals to biologicals and materials.

  4. Quantum Statistical Entropy of Five-Dimensional Black Hole

    Institute of Scientific and Technical Information of China (English)

    ZHAO Ren; WU Yue-Qin; ZHANG Sheng-Li

    2006-01-01

    The generalized uncertainty relation is introduced to calculate the quantum statistical entropy of a black hole. By using the new equation of state density motivated by the generalized uncertainty relation, we discuss entropies of the Bose field and Fermi field on the background of the five-dimensional spacetime. In our calculation, we need not introduce a cutoff, and there is no divergent logarithmic term as in the original brick-wall method. We find that the quantum statistical entropy corresponding to the black hole horizon is proportional to the area of the horizon. Further, it is shown that the entropy of the black hole is the entropy of the quantum states on the surface of the horizon; the black hole's entropy is an intrinsic property of the black hole and a quantum effect. This helps people further understand quantum statistical entropy.

  5. Quantum Statistical Entropy of Five-Dimensional Black Hole

    International Nuclear Information System (INIS)

    Zhao Ren; Zhang Shengli; Wu Yueqin

    2006-01-01

    The generalized uncertainty relation is introduced to calculate the quantum statistical entropy of a black hole. By using the new equation of state density motivated by the generalized uncertainty relation, we discuss entropies of the Bose field and Fermi field on the background of the five-dimensional spacetime. In our calculation, we need not introduce a cutoff, and there is no divergent logarithmic term as in the original brick-wall method. We find that the quantum statistical entropy corresponding to the black hole horizon is proportional to the area of the horizon. Further, it is shown that the entropy of the black hole is the entropy of the quantum states on the surface of the horizon; the black hole's entropy is an intrinsic property of the black hole and a quantum effect. This helps people further understand quantum statistical entropy.

  6. A saddle-point for data verification and materials accountancy to control nuclear material

    International Nuclear Information System (INIS)

    Beedgen, R.

    1983-01-01

    Materials accountancy is one of the main elements in international safeguards to determine whether or not nuclear material has been diverted in nuclear plants. The inspector makes independent measurements to verify the plant-operator's data before closing the materials balance with the operator's data. All inspection statements are in principle probability statements because of random errors in measuring the material and verification on a random sampling basis. Statistical test procedures help the inspector to decide under this uncertainty. In this paper a statistical test procedure representing a saddle-point is presented that leads to the highest guaranteed detection probability taking all concealing strategies into account. There are arguments favoring a separate statistical evaluation of data verification and materials accountancy. Following these considerations, a bivariate test procedure is explained that evaluates verification and accountancy separately. (orig.) [de

  7. Statistical properties of the nuclear shell-model Hamiltonian

    International Nuclear Information System (INIS)

    Dias, H.; Hussein, M.S.; Oliveira, N.A. de

    1986-01-01

    The statistical properties of realistic nuclear shell-model Hamiltonian are investigated in sd-shell nuclei. The probability distribution of the basis-vector amplitude is calculated and compared with the Porter-Thomas distribution. Relevance of the results to the calculation of the giant resonance mixing parameter is pointed out. (Author) [pt

  8. In vivo Comet assay – statistical analysis and power calculations of mice testicular cells

    DEFF Research Database (Denmark)

    Hansen, Merete Kjær; Sharma, Anoop Kumar; Dybdahl, Marianne

    2014-01-01

    is to provide curves for this statistic outlining the number of animals and gels to use. The current study was based on 11 compounds administered via oral gavage in three doses to male mice: CAS no. 110-26-9, CAS no. 512-56-1, CAS no. 111873-33-7, CAS no. 79-94-7, CAS no. 115-96-8, CAS no. 598-55-0, CAS no. 636.... A linear mixed-effects model was fitted to the summarized data and the estimated variance components were used to generate power curves as a function of sample size. The statistic that most appropriately summarized the within-sample distributions was the median of the log-transformed data, as it most consistently conformed to the assumptions of the statistical model. Power curves for 1.5-, 2-, and 2.5-fold changes of the highest dose group compared to the control group when 50 and 100 cells were scored per gel are provided to aid in the design of future Comet assay studies on testicular cells.

  9. GRUCAL, a computer program for calculating macroscopic group constants

    International Nuclear Information System (INIS)

    Woll, D.

    1975-06-01

    Nuclear reactor calculations require material- and composition-dependent, energy averaged nuclear data to describe the interaction of neutrons with individual isotopes in material compositions of reactor zones. The code GRUCAL calculates these macroscopic group constants for given compositions from the material-dependent data of the group constant library GRUBA. The instructions for calculating group constants are not fixed in the program, but will be read at the actual execution time from a separate instruction file. This allows GRUCAL to be adapted to various problems or different group constant concepts. (orig.) [de
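
The core mixing step such a code performs can be illustrated with the textbook rule Sigma_g = sum_i N_i * sigma_(i,g): number densities of the composition times microscopic group cross sections from a library. The isotopes, densities, and cross sections below are illustrative, not GRUBA data.

```python
def macroscopic_xs(composition, library):
    """composition: isotope -> number density [atoms/(barn*cm)]
    library: isotope -> list of microscopic group cross sections [barns]"""
    n_groups = len(next(iter(library.values())))
    sigma = [0.0] * n_groups
    for iso, n_dens in composition.items():
        for g, micro in enumerate(library[iso]):
            sigma[g] += n_dens * micro
    return sigma  # macroscopic cross sections [1/cm], one per group

lib = {"U238": [2.0, 8.0], "O16": [1.5, 3.0]}    # barns, 2 energy groups
comp = {"U238": 0.022, "O16": 0.044}             # atoms/(barn*cm)
sig = macroscopic_xs(comp, lib)
print([round(s, 3) for s in sig])  # → [0.11, 0.308]
```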

  10. PROSA: A computer program for statistical analysis of near-real-time-accountancy (NRTA) data

    International Nuclear Information System (INIS)

    Beedgen, R.; Bicking, U.

    1987-04-01

    The computer program PROSA (Program for Statistical Analysis of NRTA Data) is a tool to decide on the basis of statistical considerations if, in a given sequence of materials balance periods, a loss of material might have occurred or not. The evaluation of the material balance data is based on statistical test procedures. In PROSA three truncated sequential tests are applied to a sequence of material balances. The manual describes the statistical background of PROSA and how to use the computer program on an IBM-PC with DOS 3.1. (orig.) [de
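
One ingredient of such an evaluation can be sketched with a one-sided Page-type CUSUM over a sequence of standardized material balances (MUF/sigma), truncated after a fixed number of periods. This is an illustrative stand-in for PROSA's calibrated sequential tests; the reference value k and threshold h below are hypothetical.

```python
def cusum_alarm(standardized_muf, k=0.5, h=3.0, truncate=10):
    """One-sided Page-type CUSUM over standardized balances MUF/sigma."""
    s = 0.0
    for period, z in enumerate(standardized_muf[:truncate], start=1):
        s = max(0.0, s + z - k)
        if s > h:
            return period    # alarm: data suggest a loss of material
    return None              # no alarm within the truncated sequence

no_loss = [0.1, -0.3, 0.2, 0.0, -0.1, 0.3]
loss = [0.2, 1.5, 1.8, 2.0, 1.7, 1.9]
print(cusum_alarm(no_loss), cusum_alarm(loss))  # → None 4
```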

  11. Statistics and Data Interpretation for Social Work

    CERN Document Server

    Rosenthal, James

    2011-01-01

    "Without question, this text will be the most authoritative source of information on statistics in the human services. From my point of view, it is a definitive work that combines a rigorous pedagogy with a down-to-earth (commonsense) exploration of the complex and difficult issues in data analysis (statistics) and interpretation. I welcome its publication." - Praise for the First Edition. Written by a social worker for social work students, this is a nuts-and-bolts guide to statistics that presents complex calculations and concepts in clear, easy-to-understand language. It includes

  12. Microbial Communities Model Parameter Calculation for TSPA/SR

    International Nuclear Information System (INIS)

    D. Jolley

    2001-01-01

    This calculation has several purposes. First the calculation reduces the information contained in ''Committed Materials in Repository Drifts'' (BSC 2001a) to useable parameters required as input to MING V1.0 (CRWMS M and O 1998, CSCI 30018 V1.0) for calculation of the effects of potential in-drift microbial communities as part of the microbial communities model. The calculation is intended to replace the parameters found in Attachment II of the current In-Drift Microbial Communities Model revision (CRWMS M and O 2000c) with the exception of Section 11-5.3. Second, this calculation provides the information necessary to supersede the following DTN: M09909SPAMING1.003 and replace it with a new qualified dataset (see Table 6.2-1). The purpose of this calculation is to create the revised qualified parameter input for MING that will allow ΔG (Gibbs Free Energy) to be corrected for long-term changes to the temperature of the near-field environment. Calculated herein are the quadratic or second order regression relationships that are used in the energy limiting calculations to potential growth of microbial communities in the in-drift geochemical environment. Third, the calculation performs an impact review of a new DTN: M00012MAJIONIS.000 that is intended to replace the currently cited DTN: GS9809083 12322.008 for water chemistry data used in the current ''In-Drift Microbial Communities Model'' revision (CRWMS M and O 2000c). Finally, the calculation updates the material lifetimes reported on Table 32 in section 6.5.2.3 of the ''In-Drift Microbial Communities'' AMR (CRWMS M and O 2000c) based on the inputs reported in BSC (2001a). Changes include adding new specified materials and updating old materials information that has changed

  13. Statistical methods in radiation physics

    CERN Document Server

    Turner, James E; Bogard, James S

    2012-01-01

    This statistics textbook, with particular emphasis on radiation protection and dosimetry, deals with statistical solutions to problems inherent in health physics measurements and decision making. The authors begin with a description of our current understanding of the statistical nature of physical processes at the atomic level, including radioactive decay and interactions of radiation with matter. Examples are taken from problems encountered in health physics, and the material is presented such that health physicists and most other nuclear professionals will more readily understand the application of statistical principles in the familiar context of the examples. Problems are presented at the end of each chapter, with solutions to selected problems provided online. In addition, numerous worked examples are included throughout the text.

  14. Establishing the traceability of a uranyl nitrate solution to a standard reference material

    International Nuclear Information System (INIS)

    Jackson, C.H.; Clark, J.P.

    1978-01-01

    A uranyl nitrate solution for use as a Working Calibration and Test Material (WCTM) was characterized, using a statistically designed procedure to document traceability to National Bureau of Standards Standard Reference Material (SRM-960). A Reference Calibration and Test Material (RCTM) was prepared from SRM-960 uranium metal to approximate the acid and uranium concentration of the WCTM. This solution was used in the characterization procedure. Details of preparing, handling, and packaging these solutions are covered. Two outside laboratories, each having measurement expertise using a different analytical method, were selected to measure both solutions according to the procedure for characterizing the WCTM. Two different methods were also used for the in-house characterization work. All analytical results were tested for statistical agreement before the WCTM concentration and limit of error values were calculated. A concentration value was determined with a relative limit of error (RLE) of approximately 0.03% which was better than the target RLE of 0.08%. The use of this working material eliminates the expense of using SRMs to fulfill traceability requirements for uranium measurements on this type material. Several years' supply of uranyl nitrate solution with NBS traceability was produced. The cost of this material was less than 10% of an equal quantity of SRM-960 uranium metal
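
The assignment arithmetic can be sketched as follows (illustrative numbers, not the actual SRM-960 characterization data, and one simple definition of the limit of error among several in use): pool the method means, take the grand mean as the assigned concentration, and quote RLE = 2 * (standard error of the grand mean) / grand mean.

```python
import statistics

def assigned_value_and_rle(method_means):
    grand = statistics.fmean(method_means)
    se = statistics.stdev(method_means) / len(method_means) ** 0.5
    return grand, 100.0 * 2.0 * se / grand   # value, RLE in percent

means = [250.12, 250.15, 250.10, 250.14]     # g U/kg, four independent methods
value, rle = assigned_value_and_rle(means)
print(round(value, 4), round(rle, 3))  # → 250.1275 0.009
```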

  15. Applied statistical thermodynamics

    CERN Document Server

    Lucas, Klaus

    1991-01-01

    The book guides the reader from the foundations of statistical thermodynamics, including the theory of intermolecular forces, to modern computer-aided applications in chemical engineering and physical chemistry. The approach is new. The foundations of quantum and statistical mechanics are presented in a simple way, and their applications to the prediction of fluid phase behavior of real systems are demonstrated. A particular effort is made to introduce the reader to explicit formulations of intermolecular interaction models and to show how these models influence the properties of fluid systems. The established methods of statistical mechanics - computer simulation, perturbation theory, and numerical integration - are discussed in a style appropriate for newcomers and are extensively applied. Numerous worked examples illustrate how practical calculations should be carried out.

  16. Health Disparities Calculator (HD*Calc) - SEER Software

    Science.gov (United States)

    Statistical software that generates summary measures to evaluate and monitor health disparities. Users can import SEER data or other population-based health data to calculate 11 disparity measurements.
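One of the simpler summary measures of the kind such software reports, the Index of Disparity, can be sketched directly. The formula (mean absolute deviation of group rates from a reference rate, as a percentage of that rate) follows the commonly cited Pearcy-Keppel definition; the rates below are invented for illustration, and nothing here is HD*Calc's actual implementation.

```python
import statistics

def index_of_disparity(group_rates, reference_rate):
    """Index of Disparity: mean absolute deviation of group rates from a
    reference rate, expressed as a percentage of that reference rate."""
    mad = statistics.fmean(abs(r - reference_rate) for r in group_rates)
    return 100.0 * mad / reference_rate

# Hypothetical incidence rates per 100,000 for four population groups,
# with the total-population rate as the reference:
idisp = index_of_disparity([45.0, 52.0, 61.0, 39.0], 48.0)
```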

  17. Final disposal room structural response calculations

    International Nuclear Information System (INIS)

    Stone, C.M.

    1997-08-01

    Finite element calculations have been performed to determine the structural response of waste-filled disposal rooms at the WIPP for a period of 10,000 years after emplacement of the waste. The calculations were performed to generate the porosity surface data for the final set of compliance calculations. The most recent reference data for the stratigraphy, waste characterization, gas generation potential, and nonlinear material response have been brought together for this final set of calculations

  18. Shielding calculation for bremsstrahlung from β-emitters

    International Nuclear Information System (INIS)

    Ichimiya, Tsutomu

    1990-01-01

    Accompanying the revision of the radiation injury prevention law, a shielding calculation method for photons corresponding to the dose equivalent was published. However, no shielding calculation method corresponding to the 1 cm dose equivalent had been reported for the electrons from β-decay nuclides and the bremsstrahlung they produce in shielding material. In this report, therefore, the β-ray spectrum is calculated and the 1 cm dose equivalent transmission rate of the bremsstrahlung is computed for three kinds of shielding material (iron, lead, concrete). Consideration shows that it is sufficient to treat the bremsstrahlung due to the negative electron emission accompanying β-decay. In β-decay, electrons are emitted in a continuous spectrum up to a maximum energy, and the shape of the spectrum differs between nuclides. The maximum β-ray energy of commonly used nuclides is mostly below 3 MeV, and the electrons themselves are easily shielded, whereas the strength of the bremsstrahlung depends on the atomic number of the shielding material and its generating mechanism is complicated. A practical shielding calculation method for bremsstrahlung is given for the most frequently used β-decay nuclides. (M.T.)
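As a rough illustration of why the shielding material's atomic number matters, a common health-physics rule of thumb estimates the fraction of β energy converted to bremsstrahlung in a thick absorber as f ≈ 3.5×10⁻⁴·Z·E_max (E_max in MeV). The constant and the example values below are textbook approximations treated here as assumptions, not figures from this report.

```python
def brems_fraction(z_shield, e_max_mev, k=3.5e-4):
    """Rough fraction of beta energy converted to bremsstrahlung in a
    thick absorber of atomic number z_shield. The empirical constant k
    is an assumed rule-of-thumb value, not a standard from this report."""
    return k * z_shield * e_max_mev

# P-32 (E_max ~ 1.71 MeV) stopped in lead (Z = 82) vs acrylic (Z_eff ~ 6.6):
f_pb = brems_fraction(82, 1.71)
f_plastic = brems_fraction(6.6, 1.71)
```

The order-of-magnitude gap between the two results is the usual argument for placing a low-Z layer nearest a pure β emitter, with any high-Z shielding outside it.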

  19. Calculation of Airborne Radioactivity Hazard from Machining Volume-Activated Materials

    International Nuclear Information System (INIS)

    E.T. Marshall; S.O. Schwahn

    1997-01-01

    When evaluating a task involving the machining of volume-activated materials, accelerator health physicists must consider more than the surface contamination levels of the equipment and containment of loose shavings, dust or filings. Machining operations such as sawing, routing, welding, and grinding conducted on volume-activated material may pose a significant airborne radioactivity hazard to the worker. This paper presents a computer spreadsheet notebook that conservatively estimates the airborne radioactivity levels generated during machining operations performed on volume-activated materials. By knowing (1) the size and type of materials, (2) the dose rate at a given distance, and (3) limited process knowledge, the Derived Air Concentration (DAC) fraction can be estimated. This tool is flexible, taking into consideration that the process knowledge available for the different materials varies. It addresses the two most common geometries: thick plane and circular cylinder. Once the DAC fraction has been estimated, controls can be implemented to mitigate the hazard to the worker
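The kind of estimate such a spreadsheet produces can be sketched as a one-line mass balance. This is a generic, conservative sketch, not the notebook described in the paper: every input value, the release fraction, and the no-ventilation assumption are illustrative only.

```python
def dac_fraction(specific_activity_bq_per_g, mass_removed_g,
                 airborne_release_fraction, room_volume_m3, dac_bq_per_m3):
    """Conservative point estimate of the DAC fraction for a machining job.

    A minimal sketch: assume all released activity mixes instantly into
    the room air with no ventilation credit, which is conservative.
    """
    airborne_activity = (specific_activity_bq_per_g * mass_removed_g
                         * airborne_release_fraction)
    concentration = airborne_activity / room_volume_m3
    return concentration / dac_bq_per_m3

# Hypothetical inputs: 50 Bq/g activation, 200 g of chips removed, 1%
# made airborne, 100 m^3 room, DAC of 2000 Bq/m^3 (all illustrative):
frac = dac_fraction(50.0, 200.0, 0.01, 100.0, 2000.0)
```

A fraction well below 1 DAC, as here, suggests engineering controls alone may suffice; a larger fraction would trigger respiratory protection or containment.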

  20. Calculation of airborne radioactivity hazard from machining volume-activated materials

    International Nuclear Information System (INIS)

    Marshall, E.T.; Schwahn, S.O.

    1996-10-01

    When evaluating a task involving the machining of volume-activated materials, accelerator health physicists must consider more than the surface contamination levels of the equipment and containment of loose shavings, dust or filings. Machining operations such as sawing, routing, welding, and grinding conducted on volume-activated material may pose a significant airborne radioactivity hazard to the worker. This paper presents a computer spreadsheet notebook that conservatively estimates the airborne radioactivity levels generated during machining operations performed on volume-activated materials. By knowing (1) the size and type of materials, (2) the dose rate at a given distance, and (3) limited process knowledge, the Derived Air Concentration (DAC) fraction can be estimated. This tool is flexible, taking into consideration that the process knowledge available for the different materials varies. It addresses the two most common geometries: thick plane and circular cylinder. Once the DAC fraction has been estimated, controls can be implemented to mitigate the hazard to the worker

  1. Predicting Statistical Distributions of Footbridge Vibrations

    DEFF Research Database (Denmark)

    Pedersen, Lars; Frier, Christian

    2009-01-01

    The paper considers vibration response of footbridges to pedestrian loading. Employing Newmark and Monte Carlo simulation methods, a statistical distribution of bridge vibration levels is calculated modelling walking parameters such as step frequency and stride length as random variables...
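The Monte Carlo side of such a study can be sketched with a toy model: draw the walking parameters as random variables and collect the distribution of a response quantity. The resonance-amplification response function, the bridge frequency, and the parameter distributions below are all invented for illustration and are not the Newmark-based model used in the paper.

```python
import random
import statistics

def simulate_response(n_samples=20000, seed=1):
    """Monte Carlo sketch: sample walking parameters, collect the
    distribution of a purely illustrative bridge response measure."""
    rng = random.Random(seed)
    f_bridge = 2.0  # assumed bridge natural frequency, Hz
    responses = []
    for _ in range(n_samples):
        f_step = rng.gauss(1.87, 0.186)   # step frequency, Hz (assumed)
        stride = rng.gauss(0.71, 0.071)   # stride length, m (assumed)
        # Toy response: amplified as pacing approaches resonance.
        a = stride / (1.0 + 25.0 * (f_step - f_bridge) ** 2)
        responses.append(a)
    responses.sort()
    # Return the mean and the 95th percentile of the response distribution.
    return statistics.fmean(responses), responses[int(0.95 * n_samples)]

mean_a, a95 = simulate_response()
```

The point of the statistical-distribution view is visible even in the toy: a high quantile of the response, not its mean, is what a serviceability check would compare against a limit.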

  2. Analysis of relationship between registration performance of point cloud statistical model and generation method of corresponding points

    International Nuclear Information System (INIS)

    Yamaoka, Naoto; Watanabe, Wataru; Hontani, Hidekata

    2010-01-01

    When constructing a statistical point cloud model, we usually need to calculate corresponding points, and the constructed statistical model differs depending on the method used to compute them. This article examines the effect on statistical models of human organs of different methods for calculating corresponding points. We validated the performance of each statistical model by registering it to an organ surface in a 3D medical image. Two methods for calculating corresponding points are compared. The first, Generalized Multi-Dimensional Scaling (GMDS), determines the corresponding points from the shapes of two curved surfaces. The second, the entropy-based particle system, chooses corresponding points by statistically evaluating a number of curved surfaces. With each method we construct a statistical model and use it for registration with the medical image. For the estimation we use non-parametric belief propagation, which estimates not only the position of the organ but also the probability density of the organ position. We evaluate how the two methods of calculating corresponding points affect the statistical model through the change in probability density at each point. (author)

  3. Statistical program for the data evaluation of a thermal ionization mass spectrometer

    Energy Technology Data Exchange (ETDEWEB)

    van Raaphorst, J. G.

    1978-12-15

    A computer program has been written to statistically analyze mass spectrometer measurements. The program tests whether the difference between signal and background intensities is statistically significant, corrects for signal drift in the measured values, and calculates ratios against the main isotope from the corrected intensities. Repeated ratio value measurements are screened for outliers using the Dixon statistical test. Means of ratios and the coefficient of variation are calculated and reported. The computer program is written in Basic and is available for anyone who is interested.
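The Dixon test used for outlier screening compares Q = gap/range for the most extreme value against a tabulated critical value. A self-contained sketch (the program itself is in Basic and is not reproduced here): the critical values are the widely tabulated 95% confidence points for the plain r10 ratio, which is usually recommended only for small sample sizes; the ratio data are invented.

```python
def dixon_q_test(values, q_crit=None):
    """Screen the most extreme value with Dixon's Q test (r10 statistic).

    Q = gap / range, where gap is the distance of the suspect point
    from its nearest neighbour. The table holds common 95% critical
    values for n = 3..10.
    """
    q_table = {3: 0.970, 4: 0.829, 5: 0.710, 6: 0.625,
               7: 0.568, 8: 0.526, 9: 0.512, 10: 0.466}
    data = sorted(values)
    rng = data[-1] - data[0]
    gap_low = data[1] - data[0]      # gap below the smallest value
    gap_high = data[-1] - data[-2]   # gap above the largest value
    if gap_high >= gap_low:
        q, suspect = gap_high / rng, data[-1]
    else:
        q, suspect = gap_low / rng, data[0]
    crit = q_crit if q_crit is not None else q_table[len(data)]
    return suspect, q, q > crit

# Repeated isotope-ratio measurements with one suspiciously high reading:
suspect, q, reject = dixon_q_test([0.7205, 0.7202, 0.7207, 0.7204, 0.7290])
```

Here the high reading is flagged for rejection, after which the mean ratio and coefficient of variation would be recomputed from the remaining values.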

  4. Energy statistics yearbook 2002

    International Nuclear Information System (INIS)

    2005-01-01

    The Energy Statistics Yearbook 2002 is a comprehensive collection of international energy statistics prepared by the United Nations Statistics Division. It is the forty-sixth in a series of annual compilations which commenced under the title World Energy Supplies in Selected Years, 1929-1950. It updates the statistical series shown in the previous issue. Supplementary series of monthly and quarterly data on production of energy may be found in the Monthly Bulletin of Statistics. The principal objective of the Yearbook is to provide a global framework of comparable data on long-term trends in the supply of mainly commercial primary and secondary forms of energy. Data for each type of fuel and aggregate data for the total mix of commercial fuels are shown for individual countries and areas and are summarized into regional and world totals. The data are compiled primarily from the annual energy questionnaire distributed by the United Nations Statistics Division and supplemented by official national statistical publications. Where official data are not available or are inconsistent, estimates are made by the Statistics Division based on governmental, professional or commercial materials. Estimates include, but are not limited to, extrapolated data based on partial year information, use of annual trends, trade data based on partner country reports, breakdowns of aggregated data as well as analysis of current energy events and activities

  5. Energy statistics yearbook 2001

    International Nuclear Information System (INIS)

    2004-01-01

    The Energy Statistics Yearbook 2001 is a comprehensive collection of international energy statistics prepared by the United Nations Statistics Division. It is the forty-fifth in a series of annual compilations which commenced under the title World Energy Supplies in Selected Years, 1929-1950. It updates the statistical series shown in the previous issue. Supplementary series of monthly and quarterly data on production of energy may be found in the Monthly Bulletin of Statistics. The principal objective of the Yearbook is to provide a global framework of comparable data on long-term trends in the supply of mainly commercial primary and secondary forms of energy. Data for each type of fuel and aggregate data for the total mix of commercial fuels are shown for individual countries and areas and are summarized into regional and world totals. The data are compiled primarily from the annual energy questionnaire distributed by the United Nations Statistics Division and supplemented by official national statistical publications. Where official data are not available or are inconsistent, estimates are made by the Statistics Division based on governmental, professional or commercial materials. Estimates include, but are not limited to, extrapolated data based on partial year information, use of annual trends, trade data based on partner country reports, breakdowns of aggregated data as well as analysis of current energy events and activities

  6. Energy statistics yearbook 2000

    International Nuclear Information System (INIS)

    2002-01-01

    The Energy Statistics Yearbook 2000 is a comprehensive collection of international energy statistics prepared by the United Nations Statistics Division. It is the forty-third in a series of annual compilations which commenced under the title World Energy Supplies in Selected Years, 1929-1950. It updates the statistical series shown in the previous issue. Supplementary series of monthly and quarterly data on production of energy may be found in the Monthly Bulletin of Statistics. The principal objective of the Yearbook is to provide a global framework of comparable data on long-term trends in the supply of mainly commercial primary and secondary forms of energy. Data for each type of fuel and aggregate data for the total mix of commercial fuels are shown for individual countries and areas and are summarized into regional and world totals. The data are compiled primarily from the annual energy questionnaire distributed by the United Nations Statistics Division and supplemented by official national statistical publications. Where official data are not available or are inconsistent, estimates are made by the Statistics Division based on governmental, professional or commercial materials. Estimates include, but are not limited to, extrapolated data based on partial year information, use of annual trends, trade data based on partner country reports, breakdowns of aggregated data as well as analysis of current energy events and activities

  7. Patient safety: numerical skills and drug calculation abilities of nursing students and registered nurses.

    Science.gov (United States)

    McMullan, Miriam; Jones, Ray; Lea, Susan

    2010-04-01

    This paper is a report of a correlational study of the relations of age, status, experience and drug calculation ability to the numerical ability of nursing students and Registered Nurses. Competent numerical and drug calculation skills are essential for nurses, as mistakes can put patients' lives at risk. A cross-sectional study was carried out in 2006 in one United Kingdom university. Validated numerical and drug calculation tests were given to 229 second-year nursing students and 44 Registered Nurses attending a non-medical prescribing programme. The numeracy test was failed by 55% of students and 45% of Registered Nurses, while 92% of students and 89% of nurses failed the drug calculation test. Independent of status or experience, older participants (≥35 years) were statistically significantly more able to perform numerical calculations. There was no statistically significant difference between nursing students and Registered Nurses in their overall drug calculation ability, but nurses were statistically significantly more able than students to perform basic numerical calculations and calculations for solids, oral liquids and injections. Both nursing students and Registered Nurses were statistically significantly more able to perform calculations for solids, oral liquids and injections than calculations for drug percentages, drip and infusion rates. To prevent deskilling, Registered Nurses should continue to practise and refresh all the different types of drug calculations as often as possible, with regular (self-)testing of their ability. Time should be set aside in curricula for nursing students to learn how to perform basic numerical and drug calculations. This learning should be reinforced through regular practice and assessment.
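The calculation types the tests cover (solids, oral liquids, injections, drip rates) rest on two standard nursing formulas, which can be illustrated directly; the prescriptions below are invented examples, not items from the study's instruments.

```python
def volume_to_administer(dose_required, stock_strength, stock_volume):
    """Classic 'what you want / what you have x the volume it comes in'."""
    return dose_required / stock_strength * stock_volume

def drip_rate_per_min(volume_ml, hours, drop_factor_per_ml):
    """Drops per minute for a gravity infusion."""
    return volume_ml * drop_factor_per_ml / (hours * 60)

# 150 mg prescribed, stock ampoules of 100 mg in 2 mL:
v = volume_to_administer(150, 100, 2)    # 3.0 mL
# 1000 mL over 8 h with a 20 drops/mL giving set:
rate = drip_rate_per_min(1000, 8, 20)    # about 41.7 drops/min
```

The second formula is of the drip-rate type that both groups in the study found hardest, which is the argument for rehearsing it regularly rather than only the "want over have" calculation.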

  8. Assessment of the impact from transporting radioactive materials in the Suez Canal

    International Nuclear Information System (INIS)

    Sabek, G.

    1987-11-01

    A study in Egypt, carried out as the subject of an IAEA research contract, has used the INTERTRAN Code to provide an assessment of doses to handlers and the collective dose to the population due to transport of radioactive material through the Suez Canal. Calculations were carried out using data appropriate to the Canal, based on actual statistics and observations, and default data built into the Code. The average collective dose per year was calculated to be 4.5 man-rem, and doses to handlers under normal transport conditions represented 97% of the total. Use of the built-in default data gave results 10⁶ times higher. 11 refs, 16 tabs

  9. Statistical methods for accurately determining criticality code bias

    International Nuclear Information System (INIS)

    Trumble, E.F.; Kimball, K.D.

    1997-01-01

    A system of statistically treating validation calculations for the purpose of determining computer code bias is provided in this paper. The following statistical treatments are described: weighted regression analysis, lower tolerance limit, lower tolerance band, and lower confidence band. These methods meet the criticality code validation requirements of ANS 8.1. 8 refs., 5 figs., 4 tabs
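The simplest of the treatments listed, the lower tolerance limit, can be sketched as mean − k·s with a one-sided 95% confidence / 95% coverage tolerance factor k. The k values below are quoted from standard tables and should be verified (e.g. against the NIST/SEMATECH tables) before use; the k_eff data are invented, and the weighted-regression and tolerance-band treatments from the paper are not shown.

```python
import statistics

# One-sided tolerance factors k for 95% confidence / 95% coverage
# (values quoted from standard tables; verify before use):
K_95_95 = {5: 4.203, 10: 2.911, 15: 2.566, 20: 2.396, 25: 2.292}

def lower_tolerance_limit(keff_values):
    """Lower tolerance limit on calculated k_eff for a set of validation
    cases: LTL = mean - k * s bounds 95% of the population with 95%
    confidence (unweighted sketch)."""
    n = len(keff_values)
    mean = statistics.fmean(keff_values)
    s = statistics.stdev(keff_values)
    return mean - K_95_95[n] * s

ltl = lower_tolerance_limit([0.998, 1.002, 0.995, 1.001, 0.999,
                             1.003, 0.997, 1.000, 0.996, 1.004])
```

The resulting limit sits below every individual validation result, which is the conservatism ANS 8.1-style bias determinations are after.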

  10. Modelling of buffer material behaviour

    International Nuclear Information System (INIS)

    Boergesson, L.

    1988-12-01

    Some material models of smectite rich buffer material suited for nuclear waste isolation are accounted for in the report. The application of these models in finite element calculations of some scenarios and performance are also shown. The rock shear scenario has been closely studied with comparisons between calculated and measured results. Sensitivity analyses of the effect of changing the density of the clay and the rate of shear have been performed as well as one calculation using a hollow steel cylinder. Material models and finite element calculations of canister settlement, thermomechanical effects and swelling are also accounted for. The report shows the present state of the work to establish material models and calculation tools which can be used at the final design of the repository. (31 illustrations)

  11. Gyrokinetic Statistical Absolute Equilibrium and Turbulence

    International Nuclear Information System (INIS)

    Zhu, Jian-Zhou; Hammett, Gregory W.

    2011-01-01

    A paradigm based on the absolute equilibrium of Galerkin-truncated inviscid systems to aid in understanding turbulence (T.-D. Lee, 'On some statistical properties of hydrodynamical and magnetohydrodynamical fields,' Q. Appl. Math. 10, 69 (1952)) is taken to study gyrokinetic plasma turbulence: A finite set of Fourier modes of the collisionless gyrokinetic equations are kept and the statistical equilibria are calculated; possible implications for plasma turbulence in various situations are discussed. For the case of two spatial and one velocity dimension, in the calculation with discretization also of velocity v with N grid points (where N + 1 quantities are conserved, corresponding to an energy invariant and N entropy-related invariants), the negative temperature states, corresponding to the condensation of the generalized energy into the lowest modes, are found. This indicates a generic feature of inverse energy cascade. Comparisons are made with some classical results, such as those of Charney-Hasegawa-Mima in the cold-ion limit. There is a universal shape for statistical equilibrium of gyrokinetics in three spatial and two velocity dimensions with just one conserved quantity. Possible physical relevance to turbulence, such as ITG zonal flows, and to a critical balance hypothesis are also discussed.

  12. Reporting and analyzing statistical uncertainties in Monte Carlo-based treatment planning

    International Nuclear Information System (INIS)

    Chetty, Indrin J.; Rosu, Mihaela; Kessler, Marc L.; Fraass, Benedick A.; Haken, Randall K. ten; Kong, Feng-Ming; McShan, Daniel L.

    2006-01-01

    Purpose: To investigate methods of reporting and analyzing statistical uncertainties in doses to targets and normal tissues in Monte Carlo (MC)-based treatment planning. Methods and Materials: Methods for quantifying statistical uncertainties in dose, such as uncertainty specification to specific dose points, or to volume-based regions, were analyzed in MC-based treatment planning for 5 lung cancer patients. The effect of statistical uncertainties on target and normal tissue dose indices was evaluated. The concept of uncertainty volume histograms for targets and organs at risk was examined, along with its utility, in conjunction with dose volume histograms, in assessing the acceptability of the statistical precision in dose distributions. The uncertainty evaluation tools were extended to four-dimensional planning for application on multiple instances of the patient geometry. All calculations were performed using the Dose Planning Method MC code. Results: For targets, generalized equivalent uniform doses and mean target doses converged at 150 million simulated histories, corresponding to relative uncertainties of less than 2% in the mean target doses. For the normal lung tissue (a volume-effect organ), mean lung dose and normal tissue complication probability converged at 150 million histories despite the large range in the relative organ uncertainty volume histograms. For 'serial' normal tissues such as the spinal cord, large fluctuations exist in point dose relative uncertainties. Conclusions: The tools presented here provide useful means for evaluating statistical precision in MC-based dose distributions. Tradeoffs between uncertainties in doses to targets, volume-effect organs, and 'serial' normal tissues must be considered carefully in determining acceptable levels of statistical precision in MC-computed dose distributions

  13. Experimental statistics for biological sciences.

    Science.gov (United States)

    Bang, Heejung; Davidian, Marie

    2010-01-01

    In this chapter, we cover basic and fundamental principles and methods in statistics - from "What are Data and Statistics?" to "ANOVA and linear regression" - which are the basis of any statistical thinking and undertaking. Readers can easily find the selected topics in most introductory statistics textbooks, but we have tried to assemble and structure them in a succinct and reader-friendly manner in a stand-alone chapter. This text has long been used in real classroom settings for both undergraduate and graduate students who do or do not major in the statistical sciences. We hope that from this chapter readers will understand the key statistical concepts and terminologies, how to design a study (experimental or observational), how to analyze the data (e.g., describe the data and/or estimate the parameter(s) and make inferences), and how to interpret the results. This text is most useful as supplemental material while readers take their own statistics courses, or as a reference text accompanying the manual for any statistical software, as a self-teaching guide.

  14. Visuanimation in statistics

    KAUST Repository

    Genton, Marc G.

    2015-04-14

    This paper explores the use of visualization through animations, coined visuanimation, in the field of statistics. In particular, it illustrates the embedding of animations in the paper itself and the storage of larger movies in the online supplemental material. We present results from statistics research projects using a variety of visuanimations, ranging from exploratory data analysis of image data sets to spatio-temporal extreme event modelling; these include a multiscale analysis of classification methods, the study of the effects of a simulated explosive volcanic eruption and an emulation of climate model output. This paper serves as an illustration of visuanimation for future publications in Stat. Copyright © 2015 John Wiley & Sons, Ltd.

  15. The choice of statistical methods for comparisons of dosimetric data in radiotherapy.

    Science.gov (United States)

    Chaikh, Abdulhamid; Giraud, Jean-Yves; Perrin, Emmanuel; Bresciani, Jean-Pierre; Balosso, Jacques

    2014-09-18

    Novel irradiation techniques are continuously introduced in radiotherapy to optimize the accuracy, the security and the clinical outcome of treatments. These changes can raise the question of discontinuity in dosimetric presentation and the subsequent need for practice adjustments in case of significant modifications. This study proposes a comprehensive approach to compare different techniques and to test whether their respective dose calculation algorithms give rise to statistically significant differences in the treatment doses for the patient. Statistical investigation principles are presented in the framework of a clinical example based on 62 fields of radiotherapy for lung cancer. The delivered doses in monitor units were calculated using three different dose calculation methods: the reference method accounts for the dose without tissue density corrections using the Pencil Beam Convolution (PBC) algorithm, whereas the new methods calculate the dose with tissue density corrections in 1D and 3D using the Modified Batho (MB) method and the Equivalent Tissue-Air Ratio (ETAR) method, respectively. The normality of the data and the homogeneity of variance between groups were tested using the Shapiro-Wilks and Levene tests, respectively, then non-parametric statistical tests were performed. Specifically, the dose means estimated by the different calculation methods were compared using Friedman's test and the Wilcoxon signed-rank test. In addition, the correlation between the doses calculated by the three methods was assessed using Spearman's and Kendall's rank tests. Friedman's test showed a significant effect of the calculation method on the delivered dose for lung cancer patients, and the Wilcoxon signed-rank test of paired comparisons indicated that the delivered dose was significantly reduced using the density-corrected methods as compared to the reference method. Spearman's and Kendall's rank tests indicated a positive correlation between the doses calculated with the different methods
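The Friedman statistic used in such a comparison has a simple closed form, χ²_F = 12/(nk(k+1))·ΣR_j² − 3n(k+1), where the R_j are within-block rank sums over k related treatments measured on n blocks. A stand-alone sketch (the monitor-unit numbers are invented, not the study's data; in practice one would use a library routine such as scipy's):

```python
def friedman_statistic(blocks):
    """Friedman chi-square statistic for k related treatments measured
    on n blocks (here: n radiotherapy fields, k calculation methods).
    blocks is a list of n rows, each holding k paired measurements."""
    n, k = len(blocks), len(blocks[0])
    rank_sums = [0.0] * k
    for row in blocks:
        order = sorted(range(k), key=lambda j: row[j])
        ranks = [0.0] * k
        i = 0
        while i < k:  # assign mid-ranks, handling ties
            j = i
            while j + 1 < k and row[order[j + 1]] == row[order[i]]:
                j += 1
            mid = (i + j) / 2 + 1  # 1-based mid-rank of the tie group
            for m in range(i, j + 1):
                ranks[order[m]] = mid
            i = j + 1
        for j in range(k):
            rank_sums[j] += ranks[j]
    return (12.0 / (n * k * (k + 1))) * sum(r * r for r in rank_sums) \
           - 3.0 * n * (k + 1)

# Hypothetical monitor units for 6 fields under three algorithms
# (reference PBC, MB 1D correction, ETAR 3D correction):
mu = [[102, 99, 98], [110, 106, 105], [98, 95, 96],
      [120, 117, 115], [105, 101, 100], [99, 97, 96]]
chi2 = friedman_statistic(mu)
```

With k − 1 = 2 degrees of freedom the 5% critical value is 5.991, so a statistic like this one would be followed by paired Wilcoxon tests to locate which methods differ.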

  16. Thermodynamics, Gibbs Method and Statistical Physics of Electron Gases Gibbs Method and Statistical Physics of Electron Gases

    CERN Document Server

    Askerov, Bahram M

    2010-01-01

    This book deals with theoretical thermodynamics and the statistical physics of electron and particle gases. While treating the laws of thermodynamics from both classical and quantum theoretical viewpoints, it posits that the basis of the statistical theory of macroscopic properties of a system is the microcanonical distribution of isolated systems, from which all canonical distributions stem. To calculate the free energy, the Gibbs method is applied to ideal and non-ideal gases, and also to a crystalline solid. Considerable attention is paid to the Fermi-Dirac and Bose-Einstein quantum statistics and its application to different quantum gases, and electron gas in both metals and semiconductors is considered in a nonequilibrium state. A separate chapter treats the statistical theory of thermodynamic properties of an electron gas in a quantizing magnetic field.

  17. Experimental statistics

    CERN Document Server

    Natrella, Mary Gibbons

    1963-01-01

    Formulated to assist scientists and engineers engaged in army ordnance research and development programs, this well-known and highly regarded handbook is a ready reference for advanced undergraduate and graduate students as well as for professionals seeking engineering information and quantitative data for designing, developing, constructing, and testing equipment. Topics include characterizing and comparing the measured performance of a material, product, or process; general considerations in planning experiments; statistical techniques for analyzing extreme-value data; use of transformations

  18. CONTAIN calculations; CONTAIN-Rechnungen

    Energy Technology Data Exchange (ETDEWEB)

    Scholtyssek, W.

    1995-08-01

    In the first phase of a benchmark comparison, the CONTAIN code was used to calculate an assumed EPR accident `medium-sized leak in the cold leg`, especially for the first two days after initiation of the accident. The results for global characteristics compare well with those of FIPLOC, MELCOR and WAVCO calculations, if the same materials data are used as input. However, significant differences show up for local quantities such as flows through leakages. (orig.)

  19. Statistical analysis of x-ray stress measurement by centroid method

    International Nuclear Information System (INIS)

    Kurita, Masanori; Amano, Jun; Sakamoto, Isao

    1982-01-01

    The X-ray technique allows a nondestructive and rapid measurement of residual stresses in metallic materials. The centroid method has an advantage over other X-ray methods in that it can determine the angular position of a diffraction line, from which the stress is calculated, even with an asymmetrical line profile. An equation was derived for the standard deviation σₚ of the angular position of a diffraction line caused by statistical fluctuation, which is a fundamental source of scatter in X-ray stress measurements. This equation shows that an increase of X-ray counts by a factor of k results in a decrease of σₚ by a factor of 1/√k. It also shows that σₚ increases rapidly as the angular range used in calculating the centroid increases. It is therefore important to calculate the centroid using the narrow angular range between the two ends of the diffraction line, where it starts to deviate from the straight background line. Using quenched structural steels JIS S35C and S45C, the residual stresses and their standard deviations were calculated by the centroid, parabola, Gaussian curve, and half-width methods, and the results were compared. The centroid of a diffraction line was affected greatly by the background line used. The standard deviation of the stress measured by the centroid method was found to be the largest among the four methods. (author)
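The centroid itself is just the first moment of the background-corrected line profile. A minimal sketch with an invented symmetric profile (not data from the paper); the 1/√k counting-statistics scaling mentioned above is why the net counts are returned alongside the angle:

```python
def centroid_angle(two_theta, intensity, background):
    """Centroid (first moment) of a diffraction line after subtracting
    a straight background, plus the total net counts used."""
    net = [i - b for i, b in zip(intensity, background)]
    total = sum(net)
    centroid = sum(t * w for t, w in zip(two_theta, net)) / total
    return centroid, total

# Toy symmetric line profile centred on 156.4 deg over a flat background:
angles = [156.0, 156.2, 156.4, 156.6, 156.8]
counts = [120, 300, 520, 300, 120]
bg = [100, 100, 100, 100, 100]
c, n = centroid_angle(angles, counts, bg)
```

Extending the angle list further into pure background would leave the centroid unchanged in expectation but inflate its statistical scatter, which is the paper's argument for keeping the angular range narrow.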

  20. Quantum statistical entropy corresponding to cosmic horizon in five-dimensional spacetime

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    The generalized uncertainty relation is introduced to calculate the quantum statistical entropy corresponding to the cosmic horizon. Using the new equation of state density motivated by the generalized uncertainty relation, we discuss the entropies of the Bose field and the Fermi field on the background of five-dimensional spacetime. In our calculation we need not introduce a cutoff, and the divergent logarithmic term of the original brick-wall method does not appear. It is found that the quantum statistical entropy corresponding to the cosmic horizon is proportional to the area of the horizon, and further that this entropy is the entropy of the quantum states on the surface of the horizon. The black hole's entropy is an intrinsic property of the black hole, and the entropy is a quantum effect. In our calculation, using the quantum statistical method, we obtain the partition functions of the Bose field and the Fermi field on the background of five-dimensional spacetime, providing a way to study the quantum statistical entropy corresponding to the cosmic horizon in higher-dimensional spacetime.

  1. Field calculations. Part I: Choice of variables and methods

    International Nuclear Information System (INIS)

    Turner, L.R.

    1981-01-01

    Magnetostatic calculations can involve (in order of increasing complexity) conductors only, material with constant or infinite permeability, or material with variable permeability. We consider here only the most general case, calculations involving ferritic material with variable permeability. Variables suitable for magnetostatic calculations are the magnetic field, the magnetic vector potential, and the magnetic scalar potential. For two-dimensional calculations the potentials, which each have only one component, have advantages over the field, which has two components. Because it is a single-valued variable, the vector potential is perhaps the best variable for two-dimensional calculations. In three dimensions, both the field and the vector potential have three components; the scalar potential, with only one component, provides a much smaller system of equations to be solved. However, the scalar potential is not single-valued. To circumvent this problem, a calculation with two scalar potentials can be performed. The scalar potential whose source is the conductors can be calculated directly by the Biot-Savart law, and the scalar potential whose source is the magnetized material is single-valued. However, in some situations the fields from the two potentials nearly cancel and numerical accuracy is lost. The 3-D magnetostatic program TOSCA employs a single total scalar potential; the program GFUN uses the magnetic field as its variable
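The direct Biot-Savart evaluation mentioned for the conductor source can be illustrated by numerical summation over wire elements. The geometry below is an invented check case (a long straight wire compared against the infinite-wire formula μ₀I/2πr), not an example from the paper:

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, T*m/A

def b_field_finite_wire(current, half_length, r, segments=20000):
    """Field magnitude at perpendicular distance r from the midpoint of
    a straight wire of length 2*half_length, by numerically summing the
    Biot-Savart law dB = mu0 I |dl x rhat| / (4 pi d^2)."""
    dz = 2.0 * half_length / segments
    b = 0.0
    for i in range(segments):
        z = -half_length + (i + 0.5) * dz  # midpoint of this element
        d2 = z * z + r * r
        # |dl x rhat| = dz * sin(angle) = dz * r / sqrt(d2)
        b += MU0 * current * dz * r / (4.0 * math.pi * d2 ** 1.5)
    return b

# 10 A, evaluated 1 m from the middle of a 200 m wire, should be very
# close to the infinite-wire result mu0*I/(2*pi*r):
b_num = b_field_finite_wire(10.0, 100.0, 1.0)
b_inf = MU0 * 10.0 / (2.0 * math.pi * 1.0)
```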

  2. Reaction Cross Section Calculations in Neutron Induced Reactions and GEANT4 Simulation of Hadronic Interactions for the Reactor Moderator Material BeO

    Directory of Open Access Journals (Sweden)

    Veli ÇAPALI

    2016-05-01

    BeO is one of the most common moderator materials for neutron moderation, owing to its high density, its neutron capture cross section, and physical-chemical properties that allow use at elevated temperatures. As is known, reaction cross-section data are required for various applications in the field of reactor design and neutron capture. In this study, the cross sections of the (n,α), (n,2n), (n,t), (n,EL) and (n,TOT) reactions for the 9Be and 16O nuclei have been calculated using the TALYS 1.6 Two-Component Exciton model and the EMPIRE 3.2 Exciton model. Hadronic interactions of low-energy neutrons and the generated isotopes and particles have been investigated for a situation in which BeO is used as a neutron moderator, using GEANT4, a powerful simulation toolkit. In addition, the energy deposition along the BeO material has been obtained. Results of the performed calculations were compared with the experimental nuclear reaction data available in EXFOR.

  3. Closure and Sealing Design Calculation

    International Nuclear Information System (INIS)

    T. Lahnalampi; J. Case

    2005-01-01

    The purpose of the ''Closure and Sealing Design Calculation'' is to illustrate closure and sealing methods for shafts and ramps, and to identify boreholes that require sealing in order to limit the potential for water infiltration. In addition, this calculation provides a description of the magma bulkhead, which can reduce the consequences of an igneous event intersecting the repository, and includes a listing of the project requirements related to closure and sealing. The scope of this calculation is to: summarize applicable project requirements and codes relating to backfilling nonemplacement openings, removal of uncommitted materials from the subsurface, installation of drip shields, and erecting monuments; compile an inventory of boreholes that are found in the area of the subsurface repository; describe the magma bulkhead feature and location; and include figures for the proposed shaft and ramp seals. The objective of this calculation is to: categorize the boreholes for sealing by depth and proximity to the subsurface repository; develop drawing figures which show the location and geometry for the magma bulkhead; include the shaft seal figures and a proposed construction sequence; and include the ramp seal figure and a proposed construction sequence. The intent of this closure and sealing calculation is to support the License Application by providing a description of the closure and sealing methods for the Safety Analysis Report. The closure and sealing calculation will also provide input for post-closure activities by describing the location of the magma bulkhead. This calculation is limited to describing the final configuration of the sealing and backfill systems for the underground area. The methods and procedures used to place the backfill and remove uncommitted materials (such as concrete) from the repository, and the detailed design of the magma bulkhead, will be the subject of separate analyses or calculations. Post-closure monitoring will not

  4. Discrimination of source reactor type by multivariate statistical analysis of uranium and plutonium isotopic concentrations in unknown irradiated nuclear fuel material.

    Science.gov (United States)

    Robel, Martin; Kristo, Michael J

    2008-11-01

    The problem of identifying the provenance of unknown nuclear material in the environment by multivariate statistical analysis of its uranium and/or plutonium isotopic composition is considered. Such material can be introduced into the environment as a result of nuclear accidents, inadvertent processing losses, illegal dumping of waste, or deliberate trafficking in nuclear materials. Various combinations of reactor type and fuel composition were analyzed using Principal Components Analysis (PCA) and Partial Least Squares Discriminant Analysis (PLSDA) of the concentrations of nine U and Pu isotopes in fuel as a function of burnup. Real-world variation in the concentrations of (234)U and (236)U in the fresh (unirradiated) fuel was incorporated. The U and Pu were also analyzed separately, with results that suggest that, even after reprocessing or environmental fractionation, Pu isotopes can be used to determine both the source reactor type and the initial fuel composition with good discrimination.
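    As an illustration of the PCA step in such an analysis (the isotope-fraction vectors below are invented, not data from this study), principal components can be computed from the SVD of the mean-centred data matrix, after which fuel from two notional reactor types separates along the first component:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical isotope-fraction vectors for two notional reactor types
# (columns might represent e.g. 235U, 236U, 239Pu, 240Pu; values invented).
type_a = rng.normal([0.030, 0.004, 0.60, 0.25], 0.002, size=(20, 4))
type_b = rng.normal([0.008, 0.006, 0.55, 0.30], 0.002, size=(20, 4))
X = np.vstack([type_a, type_b])

# PCA by SVD of the mean-centred data matrix
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt.T                 # projections onto principal components
explained = s**2 / np.sum(s**2)    # fraction of variance per component

# the two classes separate along the first principal component
pc1_a, pc1_b = scores[:20, 0], scores[20:, 0]
```

    With the between-class differences dominating the within-class scatter, essentially all of the variance ends up in the first component, which is the behaviour PCA-based discrimination exploits.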

  5. Verification of 3-D generation code package for neutronic calculations of WWERs

    International Nuclear Information System (INIS)

    Sidorenko, V.D.; Aleshin, S.S.; Bolobov, P.A.; Bolshagin, S.N.; Lazarenko, A.P.; Markov, A.V.; Morozov, V.V.; Syslov, A.A.; Tsvetkov, V.M.

    2000-01-01

    Materials on the verification of the 3rd-generation code package for WWER neutronic calculations are presented. The package includes: the spectral code TVS-M; the 2-D fine-mesh diffusion code PERMAK-A for 4- or 6-group calculations of WWER core burnup; and the 3-D coarse-mesh diffusion code BIPR-7A for 2-group calculations of quasi-stationary WWER regimes. The materials include both TVS-M verification data and verification data on the PERMAK-A and BIPR-7A codes using constant libraries generated with TVS-M. All materials relate to fuel without Gd. The TVS-M verification materials include results of comparison both with benchmark calculations obtained by other codes and with experiments carried out at the ZR-6 critical facility. The PERMAK-A verification materials contain results of comparison with TVS-M calculations and with ZR-6 experiments. The BIPR-7A materials include comparison with operation data for the Dukovany-2 and Loviisa-1 NPPs (WWER-440) and for Balakovo NPP Unit 4 (WWER-1000). The verification materials demonstrate rather good accuracy of calculations obtained with the use of the 3rd-generation code package. (Authors)

  6. Review of theoretical calculations of hydrogen storage in carbon-based materials

    Energy Technology Data Exchange (ETDEWEB)

    Meregalli, V.; Parrinello, M. [Max-Planck-Institut fuer Festkoerperforschung, Stuttgart (Germany)

    2001-02-01

    In this paper we review the existing theoretical literature on hydrogen storage in single-walled nanotubes and carbon nanofibers. The reported calculations indicate a hydrogen uptake smaller than some of the more optimistic experimental results. Furthermore the calculations suggest that a variety of complex chemical processes could accompany hydrogen storage and release. (orig.)

  7. Using Microsoft Excel to Generate Usage Statistics

    Science.gov (United States)

    Spellman, Rosemary

    2011-01-01

    At the Libraries Service Center, statistics are generated on a monthly, quarterly, and yearly basis by using four Microsoft Excel workbooks. These statistics provide information about what materials are being requested and by whom. They also give details about why certain requests may not have been filled. Utilizing Excel allows for a shallower…

  8. Summary of workshop 'Theory Meets Industry'—the impact of ab initio solid state calculations on industrial materials research

    Science.gov (United States)

    Wimmer, E.

    2008-02-01

    A workshop, 'Theory Meets Industry', was held on 12-14 June 2007 in Vienna, Austria, attended by a well balanced number of academic and industrial scientists from America, Europe, and Japan. The focus was on advances in ab initio solid state calculations and their practical use in industry. The theoretical papers addressed three dominant themes, namely (i) more accurate total energies and electronic excitations, (ii) more complex systems, and (iii) more diverse and accurate materials properties. Hybrid functionals give some improvements in energies, but encounter difficulties for metallic systems. Quantum Monte Carlo methods are progressing, but no clear breakthrough is on the horizon. Progress in order-N methods is steady, as is the case for efficient methods for exploring complex energy hypersurfaces and large numbers of structural configurations. The industrial applications were dominated by materials issues in energy conversion systems, the quest for hydrogen storage materials, improvements of electronic and optical properties of microelectronic and display materials, and the simulation of reactions on heterogeneous catalysts. The workshop is a clear testimony that ab initio computations have become an industrial practice with increasingly recognized impact.

  9. Antibacterial Properties of Calcium Fluoride-Based Composite Materials: In Vitro Study

    Science.gov (United States)

    Zarzycka, Beata; Grzegorczyk, Janina; Sokołowski, Krzysztof; Półtorak, Konrad; Sokołowski, Jerzy

    2016-01-01

    The aim of the study was to evaluate the antibacterial activity of composite materials modified with calcium fluoride against the cariogenic bacteria S. mutans and L. acidophilus. One commercially available conventional light-curing composite material containing fluoride ions (F2) and two commercially available flowable light-curing composite materials (Flow Art and X-Flow) modified with 1.5, 2.5, and 5.0 wt% anhydrous calcium fluoride were used in the study. Composite material samples were incubated in 0.95% NaCl at 35°C for 3 days; then dilution series of the S. mutans and L. acidophilus strains were made from the eluates. The bacterial dilutions were afterwards cultivated on media, and colony-forming units per 1 mL of solution (CFU/mL) were calculated. Composite materials modified with calcium fluoride significantly reduced bacterial growth in comparison with composite materials containing fluoride compounds. The greatest reduction in bacterial growth was observed for composite materials modified with 1.5 wt% CaF2. All three tested composite materials showed statistically greater antibacterial activity against L. acidophilus than against S. mutans. PMID:28053976
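    The CFU/mL figure reported here is the standard serial-dilution bookkeeping; a small sketch (colony counts, dilution, and plated volume invented for illustration):

```python
def cfu_per_ml(colony_count, dilution_factor, plated_volume_ml):
    """Colony-forming units per mL of the original eluate.

    dilution_factor: total dilution of the plated sample, e.g. 1e-4 for a
    10^-4 serial dilution; plated_volume_ml: volume spread on the plate.
    """
    return colony_count / (dilution_factor * plated_volume_ml)

# e.g. 42 colonies counted on a plate of 0.1 mL of a 10^-4 dilution
n = cfu_per_ml(42, 1e-4, 0.1)
```

    With these invented numbers the eluate would contain 4.2 million CFU/mL; comparing such figures between modified and unmodified materials is what the significance test above operates on.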

  10. Differential and integral characteristics of prompt fission neutrons in the statistical theory

    International Nuclear Information System (INIS)

    Gerasimenko, B.F.; Rubchenya, V.A.

    1989-01-01

    Hauser-Feshbach statistical theory is the most consistent approach to the calculation of both the spectra and other characteristics of prompt fission neutrons. On the basis of this approach, a statistical model for calculating differential prompt fission neutron characteristics of low-energy fission has been proposed and improved in order to take into account the anisotropy effects arising from prompt fission neutron emission from the fragments. 37 refs, 6 figs

  11. RADSHI: shielding calculation program for different geometries sources

    International Nuclear Information System (INIS)

    Gelen, A.; Alvarez, I.; Lopez, H.; Manso, M.

    1996-01-01

    A computer code written in Pascal for the IBM PC is described. The program calculates the optimum thickness of a slab shield for sources of different geometries. The point kernel method is employed, which yields the ionizing-radiation flux density; the calculation takes into account the possibility of self-absorption in the source. The air kerma rate for gamma radiation is determined, and the shield is obtained through the concept of attenuation length and the equivalent attenuation length. Scattering and exponential attenuation inside the shield material are considered in the program. The shield materials can be concrete, water, iron or lead. The program also calculates the shield for an isotropic point neutron source, using paraffin, concrete or water as shield materials. (authors). 13 refs
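    A point-kernel shield sizing of the kind RADSHI performs can be sketched as follows. This is a generic textbook sketch, not the RADSHI algorithm itself: the source strength, attenuation coefficient, target flux, and the simple linear buildup factor B = 1 + mu*t are all assumptions for illustration:

```python
import math

def point_kernel_flux(S, r_cm, mu, t_cm):
    """Flux [1/cm^2/s] from an isotropic point source of strength S [1/s]
    at distance r_cm, behind t_cm of shield with attenuation coefficient
    mu [1/cm]. A simple linear buildup factor B = 1 + mu*t is assumed."""
    buildup = 1.0 + mu * t_cm
    return S * buildup * math.exp(-mu * t_cm) / (4.0 * math.pi * r_cm**2)

def required_thickness(S, r_cm, mu, target_flux, step=0.01):
    """Smallest thickness (to `step` cm) bringing the flux under target_flux."""
    t = 0.0
    while point_kernel_flux(S, r_cm, mu, t) > target_flux:
        t += step
    return t

# invented example: 1e9 /s source, 1 m away, mu = 0.5 /cm (concrete-like)
t = required_thickness(S=1e9, r_cm=100.0, mu=0.5, target_flux=1.0)
```

    A production code would replace the crude step search with a root finder and the linear buildup with tabulated buildup factors per material, but the attenuation bookkeeping is the same.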

  12. Criticality calculation of the nuclear material warehouse of the ININ

    International Nuclear Information System (INIS)

    Garcia, T.; Angeles, A.; Flores C, J.

    2013-10-01

    In this work the nuclear safety conditions of the nuclear fuel warehouse of the TRIGA Mark III reactor of the Instituto Nacional de Investigaciones Nucleares (ININ) were determined, both under normal conditions and in the event of an accident. The warehouse contains standard fuel elements LEU-8.5/20, a control rod with a follower of standard fuel type LEU-8.5/20, fuel elements LEU-30/20, and the reactor fuel SUR-100. To check the subcritical state of the warehouse, the effective multiplication factor (keff) was calculated. The keff calculation was carried out with the code MCNPX. (Author)

  13. Evaluation of covariance in theoretical calculation of nuclear data

    International Nuclear Information System (INIS)

    Kikuchi, Yasuyuki

    1981-01-01

    Covariances of cross sections are discussed for statistical-model calculations. Two categories of covariance are considered: one caused by the model approximation and the other by errors in the model parameters. As an example, the covariances are calculated for 100Ru. (author)

  14. Earthquake statistics inferred from plastic events in soft-glassy materials

    NARCIS (Netherlands)

    Benzi, Roberto; Toschi, Federico; Trampert, Jeannot

    2016-01-01

    We propose a new approach for generating synthetic earthquake catalogues based on the physics of soft glasses. The continuum approach produces yield-stress materials based on Lattice-Boltzmann simulations. We show that, if the material is stimulated below yield stress, plastic events occur, which

  15. A kinematic measurement for ductile and brittle failure of materials using digital image correlation

    Directory of Open Access Journals (Sweden)

    M.M. Reza Mousavi

    2016-12-01

    This paper addresses material-level tests performed on quasi-brittle and ductile materials in the laboratory. The displacement-controlled experimental program comprises mortar cylinders under uniaxial compression, which show quasi-brittle behavior, and round-section aluminum specimens under uniaxial tension, which represent ductile behavior. Digital image correlation gives a full-field measurement of deformation in both the aluminum and mortar specimens. Likewise, calculating the relative displacement of two points located at the top and bottom of a virtual LVDT, placed virtually on the surface of the specimen, gives the classical measure of strain. However, the deformation distribution is not uniform over the domain of the specimens, mainly due to the imperfect nature of experiments and measurement devices. Displacement jumps in the fracture zone of the mortar specimens and strain localization in the necking area of the aluminum specimens reflect deformation values and deformation gradients different from those in other regions. Since the results are inherently scattered, it is usually non-trivial to express the stress of the material as a function of a single strain value. To address this uncertainty, statistical analysis offers a meaningful way to examine the scattered results. A large number of virtual LVDTs are placed on the surface of the specimens in order to collect statistical parameters of deformation and strain. Values of the mean strain, standard deviation and coefficient of variation for each material are calculated and correlated with the failure type of the corresponding material (either brittle or ductile). The limits on the standard deviation and coefficient of variation for brittle and ductile failure, in pre-peak and post-peak behavior, are established and presented in this paper. These limits help determine whether a failure is brittle or ductile without determining the stress level in the material.
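    The statistics collected from the virtual LVDTs reduce to a mean, a sample standard deviation, and a coefficient of variation. A small sketch with invented readings shows how a localized (brittle-like) strain field produces a far larger coefficient of variation than a nearly uniform (ductile-like) one:

```python
import numpy as np

def strain_statistics(strains):
    """Mean, sample standard deviation and coefficient of variation of
    strain readings collected from an array of virtual LVDTs."""
    strains = np.asarray(strains, dtype=float)
    mean = strains.mean()
    std = strains.std(ddof=1)        # sample standard deviation
    cov = std / abs(mean)            # coefficient of variation
    return mean, std, cov

# invented readings: a fairly uniform field vs one with a localized jump
uniform = [0.0100, 0.0102, 0.0099, 0.0101, 0.0098]
localized = [0.0020, 0.0018, 0.0150, 0.0022, 0.0019]

_, _, cov_u = strain_statistics(uniform)
_, _, cov_l = strain_statistics(localized)
```

    The coefficient of variation is the scale-free quantity that makes the brittle/ductile comparison in the paper possible across specimens with different absolute strain levels.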

  16. Optimized 3-D electromagnetic models of composite materials in microwave frequency range: application to EMC characterization of complex media by statistical means

    Directory of Open Access Journals (Sweden)

    S. Lalléchère

    2017-05-01

    The aim of this proposal is to demonstrate the ability of a tridimensional (3-D) electromagnetic modeling tool for the characterization of composite materials in the microwave frequency band. An automated procedure is proposed to generate random materials, run 3-D simulations, and compute shielding effectiveness (SE) statistics with the finite integration technique. In this context, the 3-D electromagnetic models rely on random locations of conductive inclusions; results are compared with classical electromagnetic mixing theory (EMT) approaches (e.g. the Maxwell-Garnett formalism) and a dynamic homogenization model (DHM). The article aims to demonstrate the interest of the proposed approach in various domains such as propagation and electromagnetic compatibility (EMC).
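    For reference, the Maxwell-Garnett mixing rule used as the classical EMT baseline has a simple closed form for spherical inclusions; the permittivities and fill fraction below are invented illustration values, not ones from the article:

```python
def maxwell_garnett(eps_m, eps_i, f):
    """Maxwell-Garnett effective permittivity of spherical inclusions
    (permittivity eps_i, volume fraction f) in a host matrix eps_m."""
    num = eps_i + 2 * eps_m + 2 * f * (eps_i - eps_m)
    den = eps_i + 2 * eps_m - f * (eps_i - eps_m)
    return eps_m * num / den

# invented example: lossy inclusions (complex permittivity) at 15 % fill
eps = maxwell_garnett(eps_m=2.0, eps_i=10.0 - 2.0j, f=0.15)
```

    The formula interpolates correctly between the limits f = 0 (pure host) and f = 1 (pure inclusion), which is a quick sanity check before comparing it against full-wave SE statistics.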

  17. Approach to IAEA material-balance verification at the Portsmouth Gas Centrifuge Enrichment Plant

    International Nuclear Information System (INIS)

    Gordon, D.M.; Sanborn, J.B.; Younkin, J.M.; DeVito, V.J.

    1983-01-01

    This paper describes a potential approach by which the International Atomic Energy Agency (IAEA) might verify the nuclear-material balance at the Portsmouth Gas Centrifuge Enrichment Plant (GCEP). The strategy makes use of the attributes and variables measurement verification approach, whereby the IAEA would perform independent measurements on a randomly selected subset of the items comprising the U-235 flows and inventories at the plant. In addition, the MUF-D statistic is used as the test statistic for the detection of diversion. The paper includes descriptions of the potential verification activities, as well as calculations of: (1) attributes and variables sample sizes for the various strata, (2) standard deviations of the relevant test statistics, and (3) the detection sensitivity which the IAEA might achieve by this verification strategy at GCEP
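    Attributes sample sizes of the kind mentioned above are commonly obtained from the textbook safeguards approximation n = N(1 - beta^(1/M)). The sketch below uses that generic formula with invented stratum parameters; it is not necessarily the exact sampling plan applied at GCEP:

```python
import math

def attribute_sample_size(N, M, beta):
    """Number of items to verify so that, if M of the N items in a stratum
    were falsified, at least one falsified item is sampled with probability
    1 - beta.  Standard approximation n = N * (1 - beta**(1/M))."""
    return math.ceil(N * (1.0 - beta ** (1.0 / M)))

# invented stratum: 300 items, a goal quantity spanning 20 items,
# 5 % accepted non-detection probability
n = attribute_sample_size(N=300, M=20, beta=0.05)
```

    The formula approximates the exact hypergeometric plan; it is conservative for small M and reduces to inspecting only a handful of items when M approaches N.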

  18. Statistical Thermodynamics and Microscale Thermophysics

    Science.gov (United States)

    Carey, Van P.

    1999-08-01

    Many exciting new developments in microscale engineering are based on the application of traditional principles of statistical thermodynamics. In this text Van Carey offers a modern view of thermodynamics, interweaving classical and statistical thermodynamic principles and applying them to current engineering systems. He begins with coverage of microscale energy storage mechanisms from a quantum mechanics perspective and then develops the fundamental elements of classical and statistical thermodynamics. Subsequent chapters discuss applications of equilibrium statistical thermodynamics to solid, liquid, and gas phase systems. The remainder of the book is devoted to nonequilibrium thermodynamics of transport phenomena and to nonequilibrium effects and noncontinuum behavior at the microscale. Although the text emphasizes mathematical development, Carey includes many examples and exercises to illustrate how the theoretical concepts are applied to systems of scientific and engineering interest. In the process he offers a fresh view of statistical thermodynamics for advanced undergraduate and graduate students, as well as practitioners, in mechanical, chemical, and materials engineering.

  19. Procedure for statistical analysis of one-parameter discrepant experimental data

    International Nuclear Information System (INIS)

    Badikov, Sergey A.; Chechev, Valery P.

    2012-01-01

    A new, Mandel–Paule-type procedure for the statistical processing of one-parameter discrepant experimental data is described. The procedure enables one to estimate the contribution of unrecognized experimental errors to the total experimental uncertainty and to include it in the analysis. As an accompanying result, a definition of discrepant experimental data for an arbitrary number of measurements is introduced. In the case of negligible unrecognized experimental errors, the procedure simply reduces to the calculation of the weighted average and its internal uncertainty. The procedure was applied to the statistical analysis of half-life experimental data; mean half-lives for 20 actinides were calculated and the results were compared to the ENSDF and DDEP evaluations. On the whole, the calculated half-lives are consistent with the ENSDF and DDEP evaluations. However, the uncertainties calculated in this work essentially exceed the ENSDF and DDEP values for discrepant experimental data. This effect can be explained by adequately taking into account unrecognized experimental errors. - Highlights: ► A new statistical procedure for processing one-parameter discrepant experimental data is presented. ► The procedure estimates the contribution of unrecognized errors to the total experimental uncertainty. ► The procedure was applied to processing discrepant half-life experimental data. ► Results of the calculations are compared to the ENSDF and DDEP evaluations.
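    A Mandel-Paule-type estimate can be sketched as follows: a common "unrecognized" variance s2 is added to each stated variance and tuned until the weighted sum of squared residuals equals n - 1 (reduced chi-square of one). The implementation and the four discrepant "measurements" below are illustrative, not the authors' code:

```python
import math

def mandel_paule(x, u):
    """Mandel-Paule mean of values x with stated uncertainties u.
    Returns (weighted mean, its uncertainty, extra variance s2)."""
    n = len(x)

    def chi2_and_mean(s2):
        w = [1.0 / (ui * ui + s2) for ui in u]
        mean = sum(wi * xi for wi, xi in zip(w, x)) / sum(w)
        return sum(wi * (xi - mean) ** 2 for wi, xi in zip(w, x)), mean, w

    chi2, mean, w = chi2_and_mean(0.0)
    if chi2 <= n - 1:                      # consistent data: no extra variance
        return mean, math.sqrt(1.0 / sum(w)), 0.0

    lo, hi = 0.0, 1.0
    while chi2_and_mean(hi)[0] > n - 1:    # bracket the root
        hi *= 2.0
    for _ in range(100):                   # bisection: chi2(s2) is monotone
        mid = 0.5 * (lo + hi)
        if chi2_and_mean(mid)[0] > n - 1:
            lo = mid
        else:
            hi = mid
    s2 = 0.5 * (lo + hi)
    _, mean, w = chi2_and_mean(s2)
    return mean, math.sqrt(1.0 / sum(w)), s2

# four discrepant 'half-life' values with equal stated uncertainties
mean, unc, s2 = mandel_paule([10.0, 10.2, 9.5, 10.8], [0.1] * 4)
```

    For discrepant data the returned uncertainty is much larger than the naive internal uncertainty of the weighted mean, which is exactly the effect the abstract describes relative to the ENSDF/DDEP values.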

  20. Calculations of Excitation Functions of Some Structural Fusion Materials for ( n, t) Reactions up to 50 MeV Energy

    Science.gov (United States)

    Tel, E.; Durgu, C.; Aktı, N. N.; Okuducu, Ş.

    2010-06-01

    Fusion offers an inexhaustible energy source for humankind. Although there have been significant research and development studies on inertial and magnetic fusion reactor technology, there is still a long way to go before commercial fusion reactors penetrate the energy market. Tritium self-sufficiency must be maintained for a commercial power plant: for a self-sustaining (D-T) fusion driver, the tritium breeding ratio should be greater than 1.05. The systematics of (n,t) reaction cross sections is therefore of great importance for defining the character of the excitation function for a given reaction taking place on various nuclei at different energies. In this study, (n,t) reactions for some structural fusion materials such as 27Al, 51V, 52Cr, 55Mn, and 56Fe have been investigated. New calculations of the excitation functions of the 27Al(n,t)25Mg, 51V(n,t)49Ti, 52Cr(n,t)50V, 55Mn(n,t)53Cr and 56Fe(n,t)54Mn reactions have been carried out up to an incident neutron energy of 50 MeV. In these calculations, pre-equilibrium and equilibrium effects have been investigated. The pre-equilibrium calculations involve the newly evaluated geometry-dependent hybrid model, the hybrid model and the cascade exciton model. Equilibrium effects are calculated according to the Weisskopf-Ewing model. In the present work we have also calculated (n,t) reaction cross sections using new semi-empirical formulas developed by Tel et al. at 14-15 MeV energy. The calculated results are discussed and compared with experimental data taken from the literature.

  1. Similar tests and the standardized log likelihood ratio statistic

    DEFF Research Database (Denmark)

    Jensen, Jens Ledet

    1986-01-01

    When testing an affine hypothesis in an exponential family, the 'ideal' procedure is to calculate the exact similar test, or an approximation to this, based on the conditional distribution given the minimal sufficient statistic under the null hypothesis. In contrast to this there is a 'primitive' approach, in which the marginal distribution of a test statistic is considered and any nuisance parameter appearing in the test statistic is replaced by an estimate. We show here that when using standardized likelihood ratio statistics the 'primitive' procedure is in fact an 'ideal' procedure to order O(n -3…

  2. Summary statistics for end-point conditioned continuous-time Markov chains

    DEFF Research Database (Denmark)

    Hobolth, Asger; Jensen, Jens Ledet

    Continuous-time Markov chains are a widely used modelling tool. Applications include DNA sequence evolution, ion channel gating behavior and mathematical finance. We consider the problem of calculating properties of summary statistics (e.g. mean time spent in a state, mean number of jumps between two states and the distribution of the total number of jumps) for discretely observed continuous-time Markov chains. Three alternative methods for calculating properties of summary statistics are described and the pros and cons of the methods are discussed. The methods are based on (i) an eigenvalue decomposition of the rate matrix, (ii) the uniformization method, and (iii) integrals of matrix exponentials. In particular we develop a framework that allows for analyses of rather general summary statistics using the uniformization method.
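    Methods (i) and (ii) can be sketched side by side for the transition probabilities P(t) = exp(Qt) of a small chain; the 3-state rate matrix below is invented, and both computations should agree to high accuracy:

```python
import numpy as np

# invented rate matrix (rows sum to zero) and observation interval
Q = np.array([[-0.3,  0.2,  0.1],
              [ 0.1, -0.4,  0.3],
              [ 0.2,  0.2, -0.4]])
t = 1.5

# (i) eigendecomposition of the rate matrix: P(t) = V exp(Lambda t) V^-1
vals, vecs = np.linalg.eig(Q)
P_eig = (vecs @ np.diag(np.exp(vals * t)) @ np.linalg.inv(vecs)).real

# (ii) uniformization: P(t) = sum_k Pois(k; mu*t) R^k with R = I + Q/mu
mu = max(-np.diag(Q))                  # uniformization rate
R = np.eye(3) + Q / mu                 # discrete-time transition matrix
P_uni = np.zeros_like(Q)
Rk = np.eye(3)                         # R^k, built up iteratively
w = np.exp(-mu * t)                    # Poisson weight, k = 0
for k in range(60):                    # 60 terms: ample for mu*t = 0.6
    P_uni += w * Rk
    Rk = Rk @ R
    w *= mu * t / (k + 1)
```

    Uniformization has the practical advantage that every term is non-negative, so it cannot produce the small negative probabilities that a poorly conditioned eigendecomposition sometimes does.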

  3. STARS: An ArcGIS Toolset Used to Calculate the Spatial Information Needed to Fit Spatial Statistical Models to Stream Network Data

    Directory of Open Access Journals (Sweden)

    Erin Peterson

    2014-01-01

    This paper describes the STARS ArcGIS geoprocessing toolset, which is used to calculate the spatial information needed to fit spatial statistical models to stream network data using the SSN package. The STARS toolset is designed for use with a landscape network (LSN), which is a topological data model produced by the FLoWS ArcGIS geoprocessing toolset. An overview of the FLoWS LSN structure and a few particularly useful tools is also provided so that users will have a clear understanding of the underlying data structure that the STARS toolset depends on. This document may be used as an introduction for new users. The methods used to calculate the spatial information and format the final .ssn object are also explicitly described so that users may create their own .ssn object using other data models and software.

  4. Calculation of electromagnetic parameter based on interpolation algorithm

    International Nuclear Information System (INIS)

    Zhang, Wenqiang; Yuan, Liming; Zhang, Deyuan

    2015-01-01

    Wave-absorbing material is an important functional material for electromagnetic protection. The wave-absorbing characteristics depend on the electromagnetic parameters of the mixed media. In order to accurately predict the electromagnetic parameters of mixed media and facilitate the design of wave-absorbing material, this paper studied two different interpolation methods for the electromagnetic parameters, Lagrange interpolation and Hermite interpolation, based on the electromagnetic parameters of paraffin-based mixtures of spherical and flaky carbonyl iron. The results showed that Hermite interpolation is more accurate than Lagrange interpolation, and that the reflectance calculated with the electromagnetic parameters obtained by interpolation is on the whole consistent with that obtained through experiment. - Highlights: • We use an interpolation algorithm to calculate EM parameters from limited samples. • Interpolation can predict EM parameters well with different particles added. • Hermite interpolation is more accurate than Lagrange interpolation. • Calculating RL based on interpolation is consistent with calculating RL from experiment
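    The accuracy ordering reported here can be reproduced in miniature: on the same two nodes, a cubic Hermite interpolant that also uses derivative information beats the Lagrange (here linear) interpolant. The target function exp(x) is an invented stand-in for a measured electromagnetic parameter curve:

```python
import numpy as np

def lagrange_eval(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial through (xs, ys) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        li = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                li *= (x - xj) / (xi - xj)
        total += yi * li
    return total

def hermite_eval(x0, x1, y0, y1, d0, d1, x):
    """Cubic Hermite on [x0, x1] matching values y and derivatives d."""
    h = x1 - x0
    s = (x - x0) / h
    h00 = (1 + 2 * s) * (1 - s) ** 2
    h10 = s * (1 - s) ** 2
    h01 = s * s * (3 - 2 * s)
    h11 = s * s * (s - 1)
    return h00 * y0 + h10 * h * d0 + h01 * y1 + h11 * h * d1

# compare both on f(x) = exp(x), nodes at 0 and 1 (derivatives known exactly)
f, df = np.exp, np.exp
x = 0.5
lag = lagrange_eval([0.0, 1.0], [f(0.0), f(1.0)], x)   # linear through 2 nodes
her = hermite_eval(0.0, 1.0, f(0.0), f(1.0), df(0.0), df(1.0), x)
err_lag, err_her = abs(lag - f(x)), abs(her - f(x))
```

    With measured data the derivatives would come from finite differences of neighbouring samples rather than an exact df, but the qualitative advantage of Hermite interpolation persists.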

  5. Tools for Assessing Readability of Statistics Teaching Materials

    Science.gov (United States)

    Lesser, Lawrence; Wagler, Amy

    2016-01-01

    This article provides tools and rationale for instructors in math and science to make their assessment and curriculum materials (more) readable for students. The tools discussed (MSWord, LexTutor, Coh-Metrix TEA) are readily available linguistic analysis applications that are grounded in current linguistic theory, but present output that can…

  6. Microscopic calculation of the form factors for deeply inelastic heavy-ion collisions within the statistical model

    International Nuclear Information System (INIS)

    Barrett, B.R.; Shlomo, S.; Weidenmueller, H.A.

    1978-01-01

    Agassi, Ko, and Weidenmueller have recently developed a transport theory of deeply inelastic heavy-ion collisions based on a random-matrix model. In this work it was assumed that the reduced form factors, which couple the relative motion with the intrinsic excitation of either fragment, represent a Gaussian stochastic process with zero mean and a second moment characterized by a few parameters. In the present paper, we give a justification of the statistical assumptions of Agassi, Ko, and Weidenmueller and of the form of the second moment assumed in their work, and calculate the input parameters of their model for two cases: 40Ar on 208Pb and 40Ar on 120Sn. We find values for the strength, correlation length, and angular momentum dependence of the second moment which are consistent with those estimated by Agassi, Ko, and Weidenmueller. We consider only inelastic excitations (no nucleon transfer) caused by the penetration of the single-particle potential well of the light ion into the mass distribution of the heavy one. This is combined with a random-matrix model for the high-lying excited states of the heavy ion. As a result we find formulas which relate simply to those of Agassi, Ko, and Weidenmueller, and which can be evaluated numerically, yielding the results mentioned above. Our results also indicate for which distances of closest approach the Agassi-Ko-Weidenmueller theory breaks down.

  7. Probability and statistics in particle physics

    International Nuclear Information System (INIS)

    Frodesen, A.G.; Skjeggestad, O.

    1979-01-01

    Probability theory is entered into at an elementary level and given a simple and detailed exposition. The material on statistics has been organised with an eye to the experimental physicist's practical need, which is likely to be statistical methods for estimation or decision-making. The book is intended for graduate students and research workers in experimental high energy and elementary particle physics, and numerous examples from these fields are presented. (JIW)

  8. Lecture notes on quantum statistics

    NARCIS (Netherlands)

    Gill, R.D.

    2000-01-01

    These notes are meant to form the material for an introductory course on quantum statistics at the graduate level, aimed at mathematical statisticians and probabilists. No background in physics, quantum or otherwise, is required. They are still far from complete.

  9. Quantum field theory and statistical mechanics

    International Nuclear Information System (INIS)

    Jegerlehner, F.

    1975-01-01

    At first a heuristic understanding is given of how the relation between quantum field theory and statistical mechanics near phase transitions comes about. A long-range scale-invariant theory is constructed, critical indices are calculated and the relations among them are proved, field-theoretical Kadanoff scale transformations are formulated and scaling corrections calculated. A precise meaning is given to many of Kadanoff's considerations, and a model matching Wegner's phenomenological scheme is presented. It is shown that soft parametrization is most transparent for the discussion of scaling behaviour. (BJ) [de]

  10. Calculation methods in program CCRMN

    Energy Technology Data Exchange (ETDEWEB)

    Chonghai, Cai [Nankai Univ., Tianjin (China). Dept. of Physics; Qingbiao, Shen [Chinese Nuclear Data Center, Beijing, BJ (China)

    1996-06-01

    CCRMN is a program for calculating complex reactions of a medium-heavy nucleus with six light particles. In CCRMN, the incoming particles can be neutrons, protons, 4He, deuterons, tritons and 3He. The CCRMN code is constructed within the framework of the optical model, pre-equilibrium statistical theory based on the exciton model, and the evaporation model. CCRMN is valid in the 1~ MeV energy region; it can give correct results for optical-model quantities and all kinds of reaction cross sections. This program has been applied in practical calculations and obtained reasonable results.

  11. Sink efficiency calculation of dislocations in irradiated materials by phase-field modelling

    International Nuclear Information System (INIS)

    Rouchette, Adrien

    2015-01-01

    The aim of this work is to develop a modelling technique for the diffusion of migrating crystallographic defects in irradiated metals and their absorption by sinks, in order to better predict the microstructural evolution of those materials. The phase-field technique is well suited to this problem, since it naturally takes into account the elastic effects of dislocations on point-defect diffusion in the most complex cases. The phase-field model presented in this work has been adapted to simulate the generation of defects by irradiation and their absorption by the dislocation cores by means of a new order parameter associated with the sink morphology. The method has first been validated in different reference cases by comparing the sink strengths obtained numerically with analytical solutions available in the literature. Then, the method has been applied to dislocations with different orientations in zirconium, taking into account the anisotropic properties of the crystal and of the point defects, obtained by state-of-the-art atomic calculations. The results show that the shape anisotropy of the point defects promotes vacancy absorption by basal loops, which is consistent with the experimentally observed growth of zirconium under irradiation. Finally, a rigorous investigation of the dislocation loop case proves that phase-field simulations give more accurate results than analytical solutions in realistic loop density ranges. (author)

  12. Statistical methods for physical science

    CERN Document Server

    Stanford, John L

    1994-01-01

    This volume of Methods of Experimental Physics provides an extensive introduction to probability and statistics in many areas of the physical sciences, with an emphasis on the emerging area of spatial statistics. The scope of topics covered is wide-ranging: the text discusses a variety of the most commonly used classical methods and addresses newer methods that are applicable or potentially important. The chapter authors motivate readers with their insightful discussions, augmenting their material with Key Features: * Examines basic probability, including coverage of standard distributions, time s...

  13. The power and robustness of maximum LOD score statistics.

    Science.gov (United States)

    Yoo, Y J; Mendell, N R

    2008-07-01

    The maximum LOD score statistic is extremely powerful for gene mapping when calculated using the correct genetic parameter value. When the mode of genetic transmission is unknown, the maximum of the LOD scores obtained using several genetic parameter values is reported. This latter statistic requires a higher critical value than the maximum LOD score statistic calculated from a single genetic parameter value. In this paper, we compare the power of maximum LOD scores based on three fixed sets of genetic parameter values with the power of the LOD score obtained after maximizing over the entire range of genetic parameter values. We simulate family data under nine generating models. For generating models with non-zero phenocopy rates, LOD scores maximized over the entire range of genetic parameters yielded greater power than maximum LOD scores for fixed sets of parameter values with zero phenocopy rates. No maximum LOD score was consistently more powerful than the others for generating models with a zero phenocopy rate. The power loss of the LOD score maximized over the entire range of genetic parameters, relative to the maximum LOD score calculated using the correct genetic parameter value, appeared to be robust to the generating models.
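
    A minimal sketch can make the two statistics being compared concrete. The example below is a hypothetical two-point LOD calculation for a fully informative (backcross-type) family, maximized over a grid of recombination fractions; the family design and grid are assumptions, and this is not the authors' simulation code.

```python
import math

def lod(theta, k, n):
    """Two-point LOD score for k recombinants among n informative meioses
    (fully informative, backcross-type family), comparing recombination
    fraction theta against the null of no linkage (theta = 0.5)."""
    return (k * math.log10(theta) + (n - k) * math.log10(1 - theta)
            - n * math.log10(0.5))

def max_lod(k, n):
    """LOD score maximized over a grid of recombination fractions,
    mimicking the 'maximized over the entire range' statistic."""
    grid = [i / 100 for i in range(1, 50)]  # theta in (0, 0.5)
    return max(lod(t, k, n) for t in grid)

# 4 recombinants in 20 meioses; the grid maximum lands at theta = 0.20
print(round(max_lod(4, 20), 2))  # prints 1.67
```

    Because the maximum is taken over many theta values, the null distribution of this statistic is inflated relative to a single fixed-theta LOD, which is why a higher critical value is needed.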

  14. Quality of reporting statistics in two Indian pharmacology journals

    OpenAIRE

    Jaykaran,; Yadav, Preeti

    2011-01-01

    Objective: To evaluate the reporting of statistical methods in articles published in two Indian pharmacology journals. Materials and Methods: All original articles published since 2002 were downloaded from the websites of the journals (Indian Journal of Pharmacology (IJP) and Indian Journal of Physiology and Pharmacology (IJPP)). These articles were evaluated on the basis of the appropriateness of descriptive statistics and inferential statistics. Descriptive statistics was evaluated on the basis of...

  15. MARS14 deep-penetration calculation for the ISIS target station shielding

    International Nuclear Information System (INIS)

    Nakao, Noriaki; Nunomiya, Tomoya; Iwase, Hiroshi; Nakamura, Takashi

    2004-01-01

    The calculation of neutron penetration through a thick shield was performed with a three-dimensional multi-layer technique using the MARS14(02) Monte Carlo code, for comparison with the experimental shielding data taken in 1998 at the ISIS spallation neutron source facility of the Rutherford Appleton Laboratory. In this calculation, secondary particles from a tantalum target bombarded by 800-MeV protons were transmitted through a bulk shield of approximately 3-m-thick iron and 1-m-thick concrete. To accomplish this deep-penetration calculation, a three-dimensional multi-layer technique and an energy cut-off method were used, considering a spatial statistical balance. Finally, the energy spectra of neutrons behind the very thick shield could be calculated down to thermal energy with good statistics, and the calculated results typically agree within a factor of two with the experimental data over a broad energy range. The {sup 12}C(n,2n){sup 11}C reaction rates behind the bulk shield were also calculated; these agree with the experimental data typically within 60%. These results demonstrate good calculation accuracy for deep-penetration problems.

  16. Notes on the MUF-D statistic

    International Nuclear Information System (INIS)

    Picard, R.R.

    1987-01-01

    Verification of an inventory or of a reported material unaccounted for (MUF) calls for the remeasurement of a sample of items by an inspector followed by comparison of the inspector's data to the facility's reported values. Such comparison is intended to protect against falsification of accounting data that could conceal material loss. In the international arena, the observed discrepancies between the inspector's data and the reported data are quantified using the D statistic. If data have been falsified by the facility, the standard deviations of the D and MUF-D statistics are inflated owing to the sampling distribution. Moreover, under certain conditions the distributions of those statistics can depart markedly from normality, complicating evaluation of an inspection plan's performance. Detection probabilities estimated using standard deviations appropriate for the no-falsification case in conjunction with assumed normality can be far too optimistic. Under very general conditions regarding the facility's and/or the inspector's measurement error procedures and the inspector's sampling regime, the variance of the MUF-D statistic can be broken into three components. The inspection's sensitivity against various falsification scenarios can be traced to one or more of these components. Obvious implications exist for the planning of effective inspections, particularly in the area of resource optimization

  17. Computer modelling of statistical properties of SASE FEL radiation

    International Nuclear Information System (INIS)

    Saldin, E. L.; Schneidmiller, E. A.; Yurkov, M. V.

    1997-01-01

    The paper describes an approach to computer modelling of the statistical properties of the radiation from a self-amplified spontaneous emission free electron laser (SASE FEL). The present approach allows one to calculate the following statistical properties of the SASE FEL radiation: time and spectral field correlation functions, the distribution of the fluctuations of the instantaneous radiation power, the distribution of the energy in the electron bunch, the distribution of the radiation energy after a monochromator installed at the FEL amplifier exit, and the radiation spectrum. All numerical results presented in the paper have been calculated for the 70 nm SASE FEL at the TESLA Test Facility under construction at DESY

  18. Statistical characteristics of trajectories of diamagnetic unicellular organisms in a magnetic field.

    Science.gov (United States)

    Gorobets, Yu I; Gorobets, O Yu

    2015-01-01

    A statistical model is proposed in this paper for describing the orientation of the trajectories of unicellular diamagnetic organisms in a magnetic field. A statistical parameter, the effective energy, is calculated on the basis of this model. The resulting effective energy is a statistical characteristic of the trajectories of diamagnetic microorganisms in a magnetic field connected with their metabolism. The statistical model is applicable to the case when the energy of the thermal motion of the bacteria is negligible in comparison with their energy in a magnetic field and the bacteria manifest significant "active random movement", i.e. there is randomizing motion of the bacteria of a non-thermal nature, for example, movement by means of flagella. The energy of this randomizing active self-motion of the bacteria is characterized by a new statistical parameter for biological objects. This parameter replaces the energy of randomizing thermal motion in the calculation of the statistical distribution. Copyright © 2014 Elsevier Ltd. All rights reserved.

  19. Resonance self-shielding calculation with regularized random ladders

    Energy Technology Data Exchange (ETDEWEB)

    Ribon, P.

    1986-01-01

    The straightforward method for the calculation of resonance self-shielding is to generate one or several resonance ladders and to process them as resolved resonances. The main drawback of the Monte Carlo methods used to generate the ladders is the difficulty of reducing the dispersion of data and results. Several methods are examined, and it is shown how one of them (a regularized sampling method) improves the accuracy. Analytical methods to compute the effective cross-section have recently appeared: they are basically exempt from dispersion, but are inevitably approximate. The accuracy of the most sophisticated one is checked. There is a neutron energy range which is improperly considered as statistical. An examination is presented of what happens when it is treated as statistical, and how it is possible to improve the accuracy of calculations in this range. To illustrate the results, calculations have been performed in a simple case: the nucleus {sup 238}U, at 300 K, between 4250 and 4750 eV.

  20. Proposal to Include Electrical Energy in the Industrial Return Statistics

    CERN Document Server

    2003-01-01

    At its 108th session on the 20 June 1997, the Council approved the Report of the Finance Committee Working Group on the Review of CERN Purchasing Policy and Procedures. Among other topics, the report recommended the inclusion of utility supplies in the calculation of the return statistics as soon as the relevant markets were deregulated, without reaching a consensus on the exact method of calculation. At its 296th meeting on the 18 June 2003, the Finance Committee approved a proposal to award a contract for the supply of electrical energy (CERN/FC/4693). The purpose of the proposal in this document is to clarify the way electrical energy will be included in future calculations of the return statistics. The Finance Committee is invited: 1. to agree that the full cost to CERN of electrical energy (excluding the cost of transport) be included in the Industrial Service return statistics; 2. to recommend that the Council approves the corresponding amendment to the Financial Rules set out in section 2 of this docum...

  1. The System of Indicators for the Statistical Evaluation of Market Conjuncture

    Directory of Open Access Journals (Sweden)

    Chernenko Daryna I.

    2017-04-01

    The article is aimed at systematizing and improving the system of statistical indicators for the market of laboratory health services (LHS) and at developing methods for their calculation. In forming the system of statistical indicators for the LHS market, the allocation of nine blocks has been proposed: market size; market proportionality; market demand; market supply; level and dynamics of prices; variation of the LHS; dynamics, development trends, and cycles of the market; market structure; level of competition and monopolization. The proposed system of statistical indicators, together with the methods for their calculation, should make it possible to study the trends and regularities in the formation of the market for laboratory health services in Ukraine.

  2. Statistical Signal Process in R Language in the Pharmacovigilance Programme of India.

    Science.gov (United States)

    Kumar, Aman; Ahuja, Jitin; Shrivastava, Tarani Prakash; Kumar, Vipin; Kalaiselvan, Vivekanandan

    2018-05-01

    The Ministry of Health & Family Welfare, Government of India, initiated the Pharmacovigilance Programme of India (PvPI) in July 2010. The purpose of the PvPI is to collect data on adverse reactions due to medications, analyze them, and use the findings to recommend informed regulatory intervention, besides communicating the risk to health care professionals and the public. The goal of the present study was to apply statistical tools to find the relationship between drugs and ADRs for signal detection by R programming. Four statistical parameters were proposed for quantitative signal detection: IC_025, PRR and PRR_lb, chi-square, and N_11; we calculated these 4 values using R programming. We analyzed 78,983 drug-ADR combinations, and the total count of drug-ADR combinations was 420,060. During the calculation of the statistical parameters, we used 3 variables: (1) N_11 (number of counts), (2) N_1. (drug margin), and (3) N_.1 (ADR margin). The structure and calculation of these 4 statistical parameters in the R language are easily understandable. On the basis of the IC value (IC value > 0), out of the 78,983 drug-ADR combinations, we found 8,667 combinations to be significantly associated. The calculation of statistical parameters in the R language is time saving and makes it easy to identify new signals in the Indian ICSR (Individual Case Safety Reports) database.
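
    The parameters above can be reconstructed from the margins of a 2x2 contingency table. The sketch below (in Python rather than the R used in the study) shows plausible formulas for PRR, Pearson chi-square and the IC point estimate; the programme's exact definitions (e.g. the IC_025 credibility bound and the PRR lower bound PRR_lb) involve shrinkage and confidence limits that are omitted here.

```python
import math

def disproportionality(n11, n1_, n_1, n):
    """Disproportionality measures for one drug-ADR pair, from the cell
    count n11, the drug margin n1_, the ADR margin n_1 and the grand
    total n of reports. Hypothetical reconstruction, not PvPI code."""
    n01 = n_1 - n11                         # other drugs, this ADR
    prr = (n11 / n1_) / (n01 / (n - n1_))   # proportional reporting ratio
    # Pearson chi-square of the reconstructed 2x2 table vs. independence
    observed = [[n11, n1_ - n11], [n01, n - n1_ - n01]]
    expected = [[n1_ * n_1 / n, n1_ * (n - n_1) / n],
                [(n - n1_) * n_1 / n, (n - n1_) * (n - n_1) / n]]
    chi2 = sum((observed[i][j] - expected[i][j]) ** 2 / expected[i][j]
               for i in range(2) for j in range(2))
    ic = math.log2(n11 * n / (n1_ * n_1))   # information component, point estimate
    return prr, chi2, ic
```

    With n11 = 20 reports of the pair out of n1_ = 100 drug reports, n_1 = 50 ADR reports and n = 1000 total, this yields PRR = 6.0 and IC = 2.0, which a screening rule such as IC > 0 would flag.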

  3. Additive manufacturing of a functionally graded material from Ti-6Al-4V to Invar: Experimental characterization and thermodynamic calculations

    International Nuclear Information System (INIS)

    Bobbio, Lourdes D.; Otis, Richard A.; Borgonia, John Paul; Dillon, R. Peter; Shapiro, Andrew A.; Liu, Zi-Kui; Beese, Allison M.

    2017-01-01

    In functionally graded materials (FGMs), the elemental composition, or structure, within a component varies gradually as a function of position, allowing for the gradual transition from one alloy to another, and the local tailoring of properties. One method for fabricating FGMs with varying elemental composition is through layer-by-layer directed energy deposition additive manufacturing. This work combines experimental characterization and computational analysis to investigate a material graded from Ti-6Al-4V to Invar 36 (64 wt% Fe, 36 wt% Ni). The microstructure, composition, phases, and microhardness were determined as a function of position within the FGM. During the fabrication process, detrimental phases associated with the compositional blending of the Ti-6Al-4V and Invar formed, leading to cracking in the final deposited part. Intermetallic phases (FeTi, Fe_2Ti, Ni_3Ti, and NiTi_2) were experimentally identified to occur throughout the gradient region, and were considered as the reason that the FGM cracked during fabrication. CALPHAD (CALculation of PHase Diagrams) thermodynamic calculations were used concurrently to predict phases that would form during the manufacturing process and were compared to the experimental results. The experimental-computational approach described herein for characterizing FGMs can be used to improve the understanding and design of other FGMs.

  4. Planar-channeling spatial density under statistical equilibrium

    International Nuclear Information System (INIS)

    Ellison, J.A.; Picraux, S.T.

    1978-01-01

    The phase-space density for planar channeled particles has been derived for the continuum model under statistical equilibrium. This is used to obtain the particle spatial probability density as a function of incident angle. The spatial density is shown to depend on only two parameters, a normalized incident angle and a normalized planar spacing. This normalization is used to obtain, by numerical calculation, a set of universal curves for the spatial density and also for the channeled-particle wavelength as a function of amplitude. Using these universal curves, the statistical-equilibrium spatial density and the channeled-particle wavelength can be easily obtained for any case for which the continuum model can be applied. Also, a new one-parameter analytic approximation to the spatial density is developed. This parabolic approximation is shown to give excellent agreement with the exact calculations

  5. The choice of statistical methods for comparisons of dosimetric data in radiotherapy

    International Nuclear Information System (INIS)

    Chaikh, Abdulhamid; Giraud, Jean-Yves; Perrin, Emmanuel; Bresciani, Jean-Pierre; Balosso, Jacques

    2014-01-01

    Novel irradiation techniques are continuously introduced in radiotherapy to optimize the accuracy, the security and the clinical outcome of treatments. These changes could raise the question of discontinuity in dosimetric presentation and the subsequent need for practice adjustments in case of significant modifications. This study proposes a comprehensive approach to compare different techniques and tests whether their respective dose calculation algorithms give rise to statistically significant differences in the treatment doses for the patient. Statistical investigation principles are presented in the framework of a clinical example based on 62 fields of radiotherapy for lung cancer. The delivered doses in monitor units were calculated using three different dose calculation methods: the reference method calculates the dose without tissue density corrections using the Pencil Beam Convolution (PBC) algorithm, whereas the new methods calculate the dose with tissue density corrections in 1D and 3D using the Modified Batho (MB) method and the Equivalent Tissue Air Ratio (ETAR) method, respectively. The normality of the data and the homogeneity of variance between groups were tested using the Shapiro-Wilk and Levene tests, respectively; then non-parametric statistical tests were performed. Specifically, the dose means estimated by the different calculation methods were compared using Friedman’s test and the Wilcoxon signed-rank test. In addition, the correlation between the doses calculated by the three methods was assessed using Spearman’s rank and Kendall’s rank tests. Friedman’s test showed a significant effect of the calculation method on the delivered dose for lung cancer patients (p <0.001). The density correction methods yielded lower doses than PBC, on average (−5 ± 4.4 SD) for MB and (−4.7 ± 5 SD) for ETAR. Post-hoc Wilcoxon signed-rank tests of paired comparisons indicated that the delivered dose was significantly reduced using density
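
    As an illustration of the core test, Friedman's statistic can be computed from within-field ranks of the doses given by the three algorithms. The following pure-Python sketch assumes no tied doses within a field (the tie correction is omitted) and is not the study's analysis code.

```python
def friedman_statistic(data):
    """Friedman chi-square for n subjects (rows) by k treatments
    (columns), e.g. n fields each dosed by k calculation methods.
    Assumes no ties within a row."""
    n, k = len(data), len(data[0])
    rank_sums = [0.0] * k
    for row in data:
        # rank the k treatment values within this subject (1 = smallest)
        order = sorted(range(k), key=lambda j: row[j])
        for rank, j in enumerate(order, start=1):
            rank_sums[j] += rank
    # classical Friedman formula from the column rank sums
    return 12.0 / (n * k * (k + 1)) * sum(r * r for r in rank_sums) \
        - 3.0 * n * (k + 1)
```

    When one method is consistently highest and another consistently lowest across fields, the statistic reaches its maximum n(k − 1); perfectly mixed rankings give zero.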

  6. Virtual modeling of polycrystalline structures of materials using particle packing algorithms and Laguerre cells

    Science.gov (United States)

    Morfa, Carlos Recarey; Farias, Márcio Muniz de; Morales, Irvin Pablo Pérez; Navarra, Eugenio Oñate Ibañez de; Valera, Roberto Roselló

    2018-04-01

    The influence of the microstructural heterogeneities is an important topic in the study of materials. In the context of computational mechanics, it is therefore necessary to generate virtual materials that are statistically equivalent to the microstructure under study, and to connect that geometrical description to the different numerical methods. Herein, the authors present a procedure to model continuous solid polycrystalline materials, such as rocks and metals, preserving their representative statistical grain size distribution. The first phase of the procedure consists of segmenting an image of the material into adjacent polyhedral grains representing the individual crystals. This segmentation allows estimating the grain size distribution, which is used as the input for an advancing front sphere packing algorithm. Finally, Laguerre diagrams are calculated from the obtained sphere packings. The centers of the spheres give the centers of the Laguerre cells, and their radii determine the cells' weights. The cell sizes in the obtained Laguerre diagrams have a distribution similar to that of the grains obtained from the image segmentation. That is why those diagrams are a convenient model of the original crystalline structure. The above-outlined procedure has been used to model real polycrystalline metallic materials. The main difference with previously existing methods lies in the use of a better particle packing algorithm.
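
    The last step of the procedure, building Laguerre cells from the packed spheres, rests on the power distance. A minimal sketch of the cell-membership rule follows; the function name is illustrative and the snippet is not the authors' implementation.

```python
def laguerre_cell_of(point, spheres):
    """Index of the Laguerre (power-diagram) cell containing `point`.
    Each sphere (center, radius) generates one cell; a point belongs to
    the generator minimizing the power distance |x - c|^2 - r^2."""
    def power(x, c, r):
        return sum((xi - ci) ** 2 for xi, ci in zip(x, c)) - r * r
    return min(range(len(spheres)),
               key=lambda i: power(point, spheres[i][0], spheres[i][1]))
```

    Unlike an ordinary Voronoi diagram, the radii act as weights that shift cell boundaries toward the smaller spheres, which is what lets the cell size distribution track the packed-sphere (and hence grain) size distribution.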

  7. Statistical estimation Monte Carlo for unreliability evaluation of highly reliable system

    International Nuclear Information System (INIS)

    Xiao Gang; Su Guanghui; Jia Dounan; Li Tianduo

    2000-01-01

    Based on analog Monte Carlo simulation, statistical estimation Monte Carlo methods for the unreliability evaluation of highly reliable systems are constructed, including a direct statistical estimation Monte Carlo method and a weighted statistical estimation Monte Carlo method. The basic element is given, and the statistical estimation Monte Carlo estimators are derived. The direct Monte Carlo simulation method, the bounding-sampling method, the forced-transitions Monte Carlo method, and the direct and weighted statistical estimation Monte Carlo methods are used to evaluate the unreliability of the same system. By comparison, the weighted statistical estimation Monte Carlo estimator has the smallest variance and the highest calculating efficiency
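
    Direct (analog) estimation can be illustrated on a toy system. The sketch below estimates the unreliability of a hypothetical 2-out-of-3 system by plain sampling and compares it with the exact value; the paper's weighted estimator, which reduces variance for highly reliable systems, is not reproduced here.

```python
import random

def mc_unreliability(p_fail, trials, seed=12345):
    """Analog Monte Carlo estimate of the unreliability of a 2-out-of-3
    system: the system fails when at least two of three independent
    components fail, each with probability p_fail."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(trials):
        down = sum(rng.random() < p_fail for _ in range(3))
        failures += down >= 2
    return failures / trials

# exact unreliability: P(exactly 2 fail) + P(all 3 fail)
exact = 3 * 0.1**2 * 0.9 + 0.1**3   # 0.028
print(exact, mc_unreliability(0.1, 200_000))
```

    For very small failure probabilities the analog estimator wastes almost every trial on a working system, which is exactly the regime where the weighted (variance-reduced) estimators of the paper pay off.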

  8. Consequences of discrepancies on verified material balances

    International Nuclear Information System (INIS)

    Jaech, J.L.; Hough, C.G.

    1983-01-01

    There exists a gap between the way item discrepancies that are found in an IAEA inspection are treated in practice and how they are treated in the IAEA Safeguards Technical Manual, Part F, Statistics. In the latter case, the existence of even a single item discrepancy is cause for rejection of the facility data. Probabilities of detection for given inspection plans are calculated based on this premise. In fact, although the existence of discrepancies may be so noted in inspection reports, they in no sense of the word lead to rejection of the facility data, i.e., to ''detection''. Clearly, however, discrepancies have an effect on the integrity of the material balance, and in fact, this effect may well be of dominant importance when compared to that of small measurement biases. This paper provides a quantitative evaluation of the effect of item discrepancies on the facility MUF. The Ĝ statistic is introduced. It is analogous to the familiar D̂ statistic used to quantify the effects of small biases. Thus, just as (MUF-D̂) is the facility MUF adjusted for the inspector's variables measurements, so is (MUF-D̂-Ĝ) the MUF adjusted for both the variables and attributes measurements, where it is the attributes inspection that detects item discrepancies. The distribution of (MUF-D̂-Ĝ) is approximated by a Pearson distribution after finding the first four moments. Both the number of discrepancies and their size and sign distribution are treated as random variables. Assuming, then, that ''detection'' occurs when (MUF-D̂-Ĝ) differs significantly from zero, procedures for calculating effectiveness are derived. Some generic results on effectiveness are included. These results apply either to the case where (MUF-D̂-Ĝ) is treated as the single statistic, or to the two-step procedure in which the facility's data are first examined using (D̂+Ĝ) as

  9. A simplification of the likelihood ratio test statistic for testing ...

    African Journals Online (AJOL)

    The traditional likelihood ratio test statistic for testing hypotheses about the goodness of fit of multinomial probabilities in one-, two- and multi-dimensional contingency tables was simplified. Advantageously, using the simplified version of the statistic to test the null hypothesis is easier and faster because calculating the expected ...
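
    For reference, the traditional (unsimplified) likelihood ratio statistic for multinomial goodness of fit is G = 2 * sum O_i * ln(O_i / E_i). A minimal sketch, not taken from the article:

```python
import math

def g_statistic(observed, expected):
    """Likelihood ratio (G) statistic for multinomial goodness of fit:
    G = 2 * sum O_i * ln(O_i / E_i). Cells with O_i = 0 contribute
    nothing, by the 0*ln(0) = 0 convention."""
    return 2.0 * sum(o * math.log(o / e)
                     for o, e in zip(observed, expected) if o > 0)
```

    For the counts [30, 30, 40] against a uniform expectation of 100/3 per cell, G is about 1.94, close to the Pearson chi-square of 2.0 for the same data, as expected for moderate deviations.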

  10. Tungsten Ions in Plasmas: Statistical Theory of Radiative-Collisional Processes

    Directory of Open Access Journals (Sweden)

    Alexander V. Demura

    2015-05-01

    A statistical model for calculations of the collisional-radiative processes in plasmas with a tungsten impurity was developed. The electron structure of tungsten multielectron ions is considered in terms of both the Thomas-Fermi model and the Brandt-Lundquist model of collective oscillations of atomic electron density. The excitation or ionization of atomic electrons by plasma electron impacts is represented as photo-processes under the action of a flux of equivalent photons introduced by E. Fermi. The total electron-impact single-ionization cross-sections of W{sup k+} ions, with the respective rates, have been calculated and compared with the available experimental and modeling data (e.g., CADW). Plasma radiative losses on the tungsten impurity were also calculated over a wide range of electron temperatures, 1 eV–20 keV. The numerical code TFATOM was developed for calculations of radiative-collisional processes involving tungsten ions. The computational resources needed for the TFATOM code are orders of magnitude less than for the other conventional numerical codes. The transition from the corona to the Boltzmann limit was investigated in detail. The results of the statistical approach have been tested by comparison with the vast experimental and conventional-code data for a set of W{sup k+} ions. It is shown that the accuracy of the universal statistical model for the ionization cross-sections and radiation losses is within the data scatter of significantly more complex quantum numerical codes, which use different approximations for the calculation of atomic structure and the electronic cross-sections.

  11. A study of statistical tests for near-real-time materials accountancy using field test data of Tokai reprocessing plant

    International Nuclear Information System (INIS)

    Ihara, Hitoshi; Nishimura, Hideo; Ikawa, Koji; Miura, Nobuyuki; Iwanaga, Masayuki; Kusano, Toshitsugu.

    1988-03-01

    A Near-Real-Time Materials Accountancy (NRTA) system had been developed as an advanced safeguards measure for the PNC Tokai Reprocessing Plant; a minicomputer system for NRTA data processing was designed and constructed. A full-scale field test was carried out as a JASPAS (Japan Support Program for Agency Safeguards) project with the Agency's participation, and the NRTA data processing system was used. Using these field test data, the detection power of statistical tests under real circumstances was investigated for five statistical tests, i.e., a significance test of MUF, the CUMUF test, the average loss test, the MUF residual test and Page's test on MUF residuals. The results show that the CUMUF test, the average loss test, the MUF residual test and Page's test on MUF residuals are useful for detecting a significant loss or diversion. An unmeasured inventory estimation model for the PNC reprocessing plant was developed in this study. Using this model, the field test data from the C-1 to 85-2 campaigns were re-analyzed. (author)
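
    Of the five tests, Page's test on MUF residuals is the easiest to sketch: it is a one-sided CUSUM on the standardized residual sequence. The reference value k and decision threshold h below are illustrative defaults, not the values used in the field test.

```python
def page_test(residuals, k=0.5, h=4.0):
    """One-sided Page (CUSUM) test on standardized MUF residuals:
    S_0 = 0, S_t = max(0, S_{t-1} + x_t - k); an alarm is raised the
    first time S_t exceeds the decision threshold h."""
    s, path = 0.0, []
    for t, x in enumerate(residuals, start=1):
        s = max(0.0, s + x - k)
        path.append(s)
        if s > h:
            return t, path   # alarm at balance period t
    return None, path        # no alarm over the whole sequence
```

    A sustained small loss shows up as a steady drift in the residuals, which the CUSUM accumulates until it crosses h, whereas a single-period significance test on MUF can miss it entirely.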

  12. Infinite slab-shield dose calculations

    International Nuclear Information System (INIS)

    Russell, G.J.

    1989-01-01

    I calculated neutron and gamma-ray equivalent doses leaking through a variety of infinite (laminate) slab-shields. In the shield computations, I used, as the incident neutron spectrum, the leakage spectrum (<20 MeV) calculated for the LANSCE tungsten production target at 90 degrees to the target axis. The shield thickness was fixed at 60 cm. The results of the shield calculations show a minimum in the total leakage equivalent dose if the shield is 40-45 cm of iron followed by 20-15 cm of borated (5% B) polyethylene. High-performance shields can be attained by using multiple laminations. The calculated dose at the shield surface is very dependent on the shield material. 4 refs., 4 figs., 1 tab

  13. To the analysis of the theory of mathematical model of hydrodynamics of a bulk layer of a mix of vegetative materials

    Directory of Open Access Journals (Sweden)

    S. A. Bikov

    2012-01-01

    The article presents the results of research work on finding the interdependence between the dynamic separation of the working apparatus (machine), the static separation, and the degree of filling of the apparatus (machine). The final mathematical model for calculating separation, an important hydrodynamic parameter of a layer of vegetable material while extractant is being filtered through it, is given. The authors worked out a universal method of determining the hydrodynamic characteristics of a layer of material which can be applied to any vegetable materials and their mixtures, worked up as required.

  14. Radioactivity of natural and artificial building materials - a comparative study.

    Science.gov (United States)

    Szabó, Zs; Völgyesi, P; Nagy, H É; Szabó, Cs; Kis, Z; Csorba, O

    2013-04-01

    Building materials and their additives contain radioactive isotopes, which can increase both the external and the internal radiation exposure of humans. In this study, Hungarian natural (adobe) and artificial (brick, concrete, coal slag, coal slag concrete and gas silicate) building materials were examined. We qualified 40 samples based on their radium equivalent, activity concentration, external hazard and internal hazard indices and the determined threshold values of these parameters. The absorbed dose rate and annual effective dose for inhabitants living in buildings made of these building materials were also evaluated. The calculations are based on (226)Ra, (232)Th and (40)K activity concentrations determined by gamma-ray spectrometry. The measured radionuclide concentrations, and hence the calculated indices and doses, of the artificial building materials show a rather disparate distribution compared to the adobes. Among the artificial building materials, the studied coal slag samples have elevated (226)Ra content. The natural (adobe) and also the brick samples contain a higher amount of (40)K than the other artificial building materials. Correlation coefficients among the radionuclide concentrations are consistent with the values in the literature and connected to the natural geochemical behavior of the elements U, Th and K. Seven samples (coal slag and coal slag concrete) exceed at least one of the threshold values of the calculated hazard indices; however, only three of them are considered risky to use, taking into account whether the building material was used in bulk amounts or in a restricted way. It is shown that using different indices can lead to different conclusions; hence we recommend considering several of the indices at the same time when building materials are studied. Additionally, adding two times the statistical uncertainties to the values before comparing them to thresholds should be considered, to provide a more conservative qualification.
We have defined radon hazard portion to point
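
    The indices mentioned above are commonly computed from the three activity concentrations with fixed coefficients. The sketch below uses the widely cited formulas for the radium equivalent and the external and internal hazard indices; the study's exact definitions and thresholds may differ.

```python
def hazard_indices(c_ra, c_th, c_k):
    """Screening quantities for a building material from the activity
    concentrations (Bq/kg) of 226Ra, 232Th and 40K, using the commonly
    quoted coefficients (an assumption, not the paper's exact values)."""
    ra_eq = c_ra + 1.43 * c_th + 0.077 * c_k     # radium equivalent, Bq/kg
    h_ex = c_ra / 370 + c_th / 259 + c_k / 4810  # external hazard index
    h_in = c_ra / 185 + c_th / 259 + c_k / 4810  # internal hazard index
    return ra_eq, h_ex, h_in
```

    The conventional acceptance criteria are Ra_eq below 370 Bq/kg and both hazard indices below unity; the internal index halves the allowance for 226Ra to account for radon inhaled indoors.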

  15. Thermal conductance of the AlN/Si and AlN/SiC interfaces calculated with taking into account the detailed phonon spectra of the materials and the interface conditions

    Energy Technology Data Exchange (ETDEWEB)

    Kazan, M. [LNIO, ICD, CNRS (FRE2848), Universite de Technologie de Troyes, 10010-Troyes (France); Pereira, S.; Correia, M.R. [CICECO and I3N, University of Aveiro, Aveiro-3810-193 (Portugal); Masri, P. [GES, CNRS-UMR 5650, Universite de Montpellier II, Montpellier-34095 (France)

    2010-01-15

    We present a calculation of the thermal conductance (TC) of the interface between aluminium nitride (AlN) and silicon (Si), and of that between AlN and silicon carbide (SiC), taking into account the detailed phonon spectra of the materials, as obtained from first-principles calculations, and the interface conditions. On the basis of the results obtained, we discuss the relation between the interface TC, the interface conditions, and the mismatches between the acoustic wave velocities and the phonon densities of states of the materials in contact. Our calculation method is expected to provide a reliable tool for thermal management strategies, independently of the substrate choice (copyright 2010 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) (orig.)

  16. Fisher statistics for analysis of diffusion tensor directional information.

    Science.gov (United States)

    Hutchinson, Elizabeth B; Rutecki, Paul A; Alexander, Andrew L; Sutula, Thomas P

    2012-04-30

    A statistical approach is presented for the quantitative analysis of diffusion tensor imaging (DTI) directional information using Fisher statistics, which were originally developed for the analysis of vectors in the field of paleomagnetism. In this framework, descriptive and inferential statistics have been formulated based on the Fisher probability density function, a spherical analogue of the normal distribution. The Fisher approach was evaluated for the investigation of rat brain DTI maps to characterize tissue orientation in the corpus callosum, fornix, and hilus of the dorsal hippocampal dentate gyrus, and to compare directional properties in these regions following status epilepticus (SE) or traumatic brain injury (TBI) with values in healthy brains. Direction vectors were determined for each region of interest (ROI) for each brain sample, and Fisher statistics were applied to calculate the mean direction vector and variance parameters in the corpus callosum, fornix, and dentate gyrus of normal rats and rats that experienced TBI or SE. Hypothesis testing was performed by calculation of Watson's F-statistic and the associated p-value, giving the likelihood that grouped observations were from the same directional distribution. In the fornix and midline corpus callosum, no directional differences were detected between groups; however, in the hilus, significant (p ...) directional differences were detected, demonstrating the potential of this approach for statistical comparison of tissue structural orientation. Copyright © 2012 Elsevier B.V. All rights reserved.
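
    The core Fisher quantities, the mean direction, the resultant length R and the concentration estimate, reduce to a vector sum over unit direction vectors. A minimal sketch, assuming three-component unit vectors and the usual approximation kappa ~ (N - 1) / (N - R); this is not the paper's code.

```python
import math

def fisher_mean_direction(vectors):
    """Fisher descriptive statistics for N unit direction vectors:
    mean direction, resultant length R, and the standard concentration
    estimate kappa ~ (N - 1) / (N - R)."""
    n = len(vectors)
    sx = sum(v[0] for v in vectors)
    sy = sum(v[1] for v in vectors)
    sz = sum(v[2] for v in vectors)
    r = math.sqrt(sx * sx + sy * sy + sz * sz)   # resultant length
    mean = (sx / r, sy / r, sz / r)              # unit mean direction
    kappa = (n - 1) / (n - r) if n > r else float("inf")
    return mean, r, kappa
```

    Tightly clustered directions give R close to N and hence a large kappa; widely scattered directions give R near zero and kappa near one.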

  17. 1992 Energy statistics Yearbook

    International Nuclear Information System (INIS)

    1994-01-01

    The principal objective of the Yearbook is to provide a global framework of comparable data on long-term trends in the supply of mainly commercial primary and secondary forms of energy. Data for each type of fuel and aggregate data for the total mix of commercial fuels are shown for individual countries and areas and are summarized into regional and world totals. The data are compiled primarily from annual questionnaires distributed by the United Nations Statistical Division and supplemented by official national statistical publications. Where official data are not available or are inconsistent, estimates are made by the Statistical Division based on governmental, professional or commercial materials. Estimates include, but are not limited to, extrapolated data based on partial year information, use of annual trends, trade data based on partner country reports, breakdowns of aggregated data as well as analysis of current energy events and activities

  18. To P or Not to P: Backing Bayesian Statistics.

    Science.gov (United States)

    Buchinsky, Farrel J; Chadha, Neil K

    2017-12-01

    In biomedical research, it is imperative to differentiate chance variation from truth before we generalize what we see in a sample of subjects to the wider population. For decades, we have relied on null hypothesis significance testing, where we calculate P values for our data to decide whether to reject a null hypothesis. This methodology is subject to substantial misinterpretation and errant conclusions. Instead of working backward by calculating the probability of our data if the null hypothesis were true, Bayesian statistics allow us to work forward, calculating the probability of our hypothesis given the available data. This methodology gives us a mathematical means of incorporating our "prior probabilities" from previous study data (if any) to produce new "posterior probabilities." Bayesian statistics tell us how confidently we should believe what we believe. It is time to embrace and encourage their use in our otolaryngology research.
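    The prior-to-posterior updating the abstract describes can be sketched with the simplest conjugate case, a Beta-Binomial model. The numbers below are a hypothetical example, not data from the paper.

    ```python
    # Hypothetical example: updating belief about a treatment's response rate.
    # Prior Beta(a, b) + Binomial data -> posterior Beta(a + successes, b + failures).

    # Weakly informative prior: Beta(2, 2), centered on 0.5
    a, b = 2.0, 2.0

    # New data: 14 responders out of 20 patients
    successes, failures = 14, 6

    # Conjugate update
    a_post, b_post = a + successes, b + failures
    posterior_mean = a_post / (a_post + b_post)
    print(f"posterior mean response rate: {posterior_mean:.3f}")  # -> 0.667
    ```

    Unlike a P value, the posterior directly answers "how probable is this response rate given the data," and a prior from earlier studies can be folded in simply by changing `a` and `b`.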

  19. GRUCAL: a program system for the calculation of macroscopic group constants

    International Nuclear Information System (INIS)

    Woll, D.

    1984-01-01

    Nuclear reactor calculations require material- and composition-dependent, energy-averaged neutron physical data in order to describe the interaction between neutrons and isotopes. The multigroup cross section code GRUCAL calculates these macroscopic group constants for given material compositions from the material-dependent data of the group constant library GRUBA. The instructions for calculating group constants are not fixed in the program, but are read in from an instruction file. This makes it possible to adapt GRUCAL to various problems or different group constant concepts
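    The basic operation such a code performs is the number-density-weighted sum of microscopic group cross sections, Σ_g = Σ_i N_i σ_{i,g}. A minimal sketch (not GRUCAL's actual algorithm; the compositions and cross sections below are made-up illustrative numbers):

    ```python
    # Macroscopic group constants from a material composition:
    # Sigma_g = sum_i N_i * sigma_{i,g}, with N_i in atoms/cm^3 and
    # sigma_{i,g} the microscopic group cross section in barns.

    BARN = 1.0e-24  # cm^2 per barn

    def macroscopic_xs(composition, micro_xs, n_groups):
        """composition: {isotope: number density in atoms/cm^3}
        micro_xs: {isotope: [sigma per energy group, in barns]}
        Returns macroscopic cross sections per group in 1/cm."""
        return [
            sum(n * micro_xs[iso][g] * BARN for iso, n in composition.items())
            for g in range(n_groups)
        ]

    # Illustrative two-group data (assumed values, not library data)
    composition = {"U238": 2.2e22, "O16": 4.4e22}
    micro = {"U238": [10.0, 2.0], "O16": [3.8, 0.2]}
    sigma = macroscopic_xs(composition, micro, 2)
    ```

    GRUCAL's instruction-file design means that which reactions are summed, and how, is data rather than code; the sketch above hard-codes the simplest case.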

  20. Ab Initio Prediction of Piezoelectricity in Two-Dimensional Materials.

    Science.gov (United States)

    Blonsky, Michael N; Zhuang, Houlong L; Singh, Arunima K; Hennig, Richard G

    2015-10-27

    Two-dimensional (2D) materials present many unique materials concepts, including material properties that sometimes differ dramatically from those of their bulk counterparts. One of these properties, piezoelectricity, is important for micro- and nanoelectromechanical systems applications. Using symmetry analysis, we determine the independent piezoelectric coefficients for four groups of predicted and synthesized 2D materials. We calculate with density-functional perturbation theory the stiffness and piezoelectric tensors of these materials. We determine the in-plane piezoelectric coefficient d11 for 37 materials within the families of 2D metal dichalcogenides, metal oxides, and III-V semiconductor materials. A majority of the structures, including CrSe2, CrTe2, CaO, CdO, ZnO, and InN, have d11 coefficients greater than 5 pm/V, a typical value for bulk piezoelectric materials. Our symmetry analysis shows that buckled 2D materials exhibit an out-of-plane coefficient d31. We find that d31 for 8 III-V semiconductors ranges from 0.02 to 0.6 pm/V. From statistical analysis, we identify correlations between the piezoelectric coefficients and the electronic and structural properties of the 2D materials that elucidate the origin of the piezoelectricity. Among the 37 2D materials, CdO, ZnO, and CrTe2 stand out for their combination of large piezoelectric coefficient and low formation energy and are recommended for experimental exploration.
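    For 2D hexagonal crystals, the in-plane coefficient reported in the abstract follows from the piezoelectric and stiffness tensors as d11 = e11 / (C11 − C12). A minimal sketch of this conversion; the input values are assumed, roughly in the range reported for 2D MoS2, not taken from the paper:

    ```python
    # In-plane piezoelectric coefficient for a 2D hexagonal crystal:
    # d11 = e11 / (C11 - C12), converted to the pm/V units used in the abstract.

    def d11(e11, c11, c12):
        """e11 in units of 1e-10 C/m, Cij in N/m -> d11 in pm/V.
        (1e-10 C/m) / (N/m) = 1e-10 m/V = 100 pm/V, hence the factor."""
        return 1.0e2 * e11 / (c11 - c12)

    # Assumed illustrative values: e11 ~ 3.6e-10 C/m, C11 ~ 130 N/m, C12 ~ 32 N/m
    print(f"d11 ~ {d11(3.6, 130.0, 32.0):.2f} pm/V")
    ```

    Coefficients above roughly 5 pm/V, the bulk-typical benchmark the abstract cites, mark the most promising candidates.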