WorldWideScience

Sample records for improved national calculation

  1. Science in Action: National Stormwater Calculator (SWC) ...

    Science.gov (United States)

    Stormwater discharges continue to cause impairment of our Nation’s waterbodies. Regulations that require the retention and/or treatment of the frequent, small storms that dominate runoff volumes and pollutant loads are becoming more common. EPA has developed the National Stormwater Calculator (SWC) to help support local, state, and national stormwater management objectives to reduce runoff through infiltration and retention using green infrastructure practices as low impact development (LID) controls. The aim is to inform the public about what the Stormwater Calculator is used for.

  2. Improved core protection calculator system algorithm

    International Nuclear Information System (INIS)

    Yoon, Tae Young; Park, Young Ho; In, Wang Kee; Bae, Jong Sik; Baeg, Seung Yeob

    2009-01-01

    Core Protection Calculator System (CPCS) is a digitized core protection system which provides core protection functions based on two reactor core operation parameters, Departure from Nucleate Boiling Ratio (DNBR) and Local Power Density (LPD). It generates a reactor trip signal when the core condition exceeds the DNBR or LPD design limit. It consists of four independent channels which adopt a two-out-of-four trip logic. CPCS algorithm improvements for the newly designed core protection calculator system, RCOPS (Reactor COre Protection System), are described in this paper. New features include the improvement of the DNBR algorithm for thermal margin, the addition of pre-trip alarm generation for the auxiliary trip function, VOPT (Variable Over Power Trip) prevention during RPCS (Reactor Power Cutback System) actuation, and the improvement of the CEA (Control Element Assembly) signal checking algorithm. To verify the improved CPCS algorithm, CPCS algorithm verification tests, 'Module Test' and 'Unit Test', will be performed on the RCOPS single-channel facility. It is expected that the improved CPCS algorithm will increase the DNBR margin and enhance plant availability by reducing unnecessary reactor trips.

  3. Improvements for Monte Carlo burnup calculation

    Energy Technology Data Exchange (ETDEWEB)

    Shenglong, Q.; Dong, Y.; Danrong, S.; Wei, L., E-mail: qiangshenglong@tsinghua.org.cn, E-mail: d.yao@npic.ac.cn, E-mail: songdr@npic.ac.cn, E-mail: luwei@npic.ac.cn [Nuclear Power Inst. of China, Cheng Du, Si Chuan (China)

    2015-07-01

    Monte Carlo burnup calculation is a development trend in reactor physics, and much work remains to be done for engineering applications. Based on the Monte Carlo burnup code MOI, non-fuel burnup calculation methods and critical search suggestions are presented in this paper. For non-fuel burnup, a mixed burnup mode improves the accuracy and efficiency of the burnup calculation. For critical search of the control rod position, a new method called ABN, based on the ABA method used by MC21, is proposed for the first time in this paper. (author)

  4. NPP Krsko core calculations to improve operational safety

    International Nuclear Information System (INIS)

    Ivekovic, I.; Grgic, D.; Nemec, T.

    2007-01-01

    Calculation tools and methodology used to perform independent calculations of cumulative influence of different changes related to fuel and core operation of NPP Krsko were described. Some examples of steady state and transient results are used to illustrate potential improvements to understanding and reviewing plant safety. (author)

  5. Verification, validation, and field testing the USEPA National Stormwater Calculator

    Data.gov (United States)

    U.S. Environmental Protection Agency — We used this dataset to verify and validate functions in the USEPA National Stormwater Calculator, and then applied field data and commonly-available datasets to...

  6. NATIONAL STORMWATER CALCULATOR USER'S GUIDE ...

    Science.gov (United States)

    The National Stormwater Calculator is a simple-to-use tool for computing small site hydrology for any location within the US. It estimates the amount of stormwater runoff generated from a site under different development and control scenarios over a long-term period of historical rainfall. The analysis takes into account local soil conditions, slope, land cover and meteorology. Different types of low impact development (LID) practices (also known as green infrastructure) can be employed to help capture and retain rainfall on-site. Future climate change scenarios taken from internationally recognized climate change projections can also be considered. The calculator provides planning-level estimates of capital and maintenance costs which will allow planners and managers to evaluate and compare the effectiveness and costs of LID controls. The calculator’s primary focus is informing site developers and property owners on how well they can meet a desired stormwater retention target. It can be used to answer such questions as:
    • What is the largest daily rainfall amount that can be captured by a site in either its pre-development, current, or post-development condition?
    • To what degree will storms of different magnitudes be captured on site?
    • What mix of LID controls can be deployed to meet a given stormwater retention target?
    • How well will LID controls perform under future meteorological projections made by global climate change models?
    • What are the relativ...

  7. Preparation of functions of computer code GENGTC and improvement for two-dimensional heat transfer calculations for irradiation capsules

    International Nuclear Information System (INIS)

    Nomura, Yasushi; Someya, Hiroyuki; Ito, Haruhiko.

    1992-11-01

    Capsules for irradiation tests in the JMTR (Japan Materials Testing Reactor) consist of irradiation specimens surrounded by a cladding tube, holders, an inner tube and a container tube (from 30 mm to 65 mm in diameter). The annular gaps between these structural materials in the capsule are filled with liquids or gases. Cooling of the capsule is provided by reactor primary coolant flowing down outside the capsule. Most of the heat generated by fission in fuel specimens and gamma absorption in structural materials is directed radially to the outer surface of the capsule container. In thermal performance calculations for capsule design, a one-dimensional (r) heat transfer computer code, GENGTC (Generalized Gap Temperature Calculation), originally developed at Oak Ridge National Laboratory, U.S.A., has been frequently used. In designing a capsule, many parametric calculations are needed with respect to changes in materials and gap sizes, and in some cases two-dimensional (r,z) heat transfer calculations are needed for irradiation test capsules with short fuel rods. Recently the authors improved the original one-dimensional code GENGTC (1) to simplify preparation of input data and (2) to perform automatic parametric survey calculations based on design temperatures, etc. Moreover, the computer code has been improved to perform r-z two-dimensional heat transfer calculations. This report describes the preparation of the one-dimensional code GENGTC and the improvement to the two-dimensional code GENGTC-2, together with their code manuals. (author)

  8. Improved g-level calculations for coil planet centrifuges.

    Science.gov (United States)

    van den Heuvel, Remco N A M; König, Carola S

    2011-09-09

    Calculation of the g-level is often used to compare CCC centrifuges, either against each other or to allow for comparison with other centrifugal techniques. This study shows the limitations of calculating the g-level in the traditional way. Traditional g-level calculations produce a constant value which does not accurately reflect the dynamics of the coil planet centrifuge. This work has led to a new equation which can be used to determine the improved non-dimensional values. The new equations describe the fluctuating radial and tangential g-levels associated with CCC centrifuges and the mean radial g-level value. The latter has been found to be significantly different from that determined by the traditional equation. This new equation will give a better understanding of the forces experienced by sample components and allows for more accurate comparison between centrifuges. Although the new equation is far better than the traditional equation for comparing different types of centrifuges, other factors such as the mixing regime may need to be considered to improve the comparison further. Copyright © 2011 Elsevier B.V. All rights reserved.
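
    As a rough numerical illustration of the point above (not the equations derived in the paper), the sketch below contrasts the traditional constant g-level, omega^2*R/g, with the fluctuating g-level felt by a point on the coil. A synchronous (1:1) type-J planetary motion and the radii and speed used are assumptions chosen for illustration only.

    ```python
    import math

    # Numerical contrast between the traditional constant g-level (omega^2 * R / g)
    # and the fluctuating g-level felt by a point on the coil. A synchronous (1:1)
    # type-J planetary motion is ASSUMED here for illustration; radii and speed are
    # arbitrary and are not taken from the paper.

    R = 0.10          # revolution radius (m), assumed
    r = 0.04          # coil radius (m), assumed
    rpm = 800.0       # rotational speed, assumed
    omega = 2.0 * math.pi * rpm / 60.0
    g = 9.81

    print("traditional g-level:", round(omega**2 * R / g, 1))

    def position(t):
        # Lab-frame coordinates of a point on the coil for synchronous type-J
        # planetary motion: the holder centre revolves at omega while the point
        # turns at 2*omega about the central axis direction (assumed kinematics).
        return (R * math.cos(omega * t) + r * math.cos(2.0 * omega * t),
                R * math.sin(omega * t) + r * math.sin(2.0 * omega * t))

    def g_level(t, dt=1e-5):
        # acceleration magnitude from a central finite difference of the position
        (x0, y0), (x1, y1), (x2, y2) = position(t - dt), position(t), position(t + dt)
        ax = (x0 - 2.0 * x1 + x2) / dt**2
        ay = (y0 - 2.0 * y1 + y2) / dt**2
        return math.hypot(ax, ay) / g

    samples = [g_level(i * 1e-4) for i in range(1000)]   # one tenth of a second
    print("fluctuating g-level: min", round(min(samples), 1),
          "max", round(max(samples), 1))
    ```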

  9. Calculating a Continuous Metabolic Syndrome Score Using Nationally Representative Reference Values.

    Science.gov (United States)

    Guseman, Emily Hill; Eisenmann, Joey C; Laurson, Kelly R; Cook, Stephen R; Stratbucker, William

    2018-02-26

    The prevalence of metabolic syndrome in youth varies on the basis of the classification system used, prompting implementation of continuous scores; however, the use of these scores is limited to the sample from which they were derived. We sought to describe the derivation of a continuous metabolic syndrome score using nationally representative reference values in a sample of obese adolescents and a national sample obtained from the National Health and Nutrition Examination Survey (NHANES) 2011-2012. Clinical data were collected from 50 adolescents seeking obesity treatment at a stage 3 weight management center. A second analysis relied on data from adolescents included in NHANES 2011-2012, performed for illustrative purposes. The continuous metabolic syndrome score was calculated by regressing individual values onto nationally representative age- and sex-specific standards (NHANES III). The resultant z scores were summed to create a total score. The final sample included 42 obese adolescents (15 male and 35 female subjects; mean age, 14.8 ± 1.9 years) and an additional 445 participants from NHANES 2011-2012. Among the clinical sample, the mean continuous metabolic syndrome score was 4.16 ± 4.30, while the NHANES sample mean was considerably lower, at -0.24 ± 2.8. We provide a method to calculate the continuous metabolic syndrome score by comparing individual risk factor values to age- and sex-specific percentiles from a nationally representative sample. Copyright © 2018 Academic Pediatric Association. Published by Elsevier Inc. All rights reserved.
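
    The scoring approach described above can be illustrated with a minimal sketch: each risk factor is standardized against reference values and the z scores are summed. The reference means and standard deviations below are placeholders, and a simple mean/SD standardization stands in for the regression-based NHANES III standards used in the study.

    ```python
    # Illustrative continuous metabolic syndrome score: each component is turned
    # into a z score against reference values and the z scores are summed.
    # The reference means/SDs are PLACEHOLDERS, not the NHANES III standards,
    # and simple standardization stands in for the study's regression approach.

    REFERENCE = {                      # hypothetical age/sex-specific (mean, SD)
        "waist_cm":      (75.0, 10.0),
        "triglycerides": (90.0, 35.0),
        "hdl":           (52.0, 11.0),  # higher HDL is protective
        "systolic_bp":   (110.0, 10.0),
        "glucose":       (90.0, 8.0),
    }

    def continuous_mets_score(measurements: dict) -> float:
        """Sum of component z scores; the HDL term is inverted so that higher
        cardiometabolic risk always increases the score."""
        total = 0.0
        for name, value in measurements.items():
            mean, sd = REFERENCE[name]
            z = (value - mean) / sd
            if name == "hdl":
                z = -z                  # low HDL raises risk
            total += z
        return total

    print(round(continuous_mets_score({
        "waist_cm": 95.0, "triglycerides": 150.0, "hdl": 40.0,
        "systolic_bp": 125.0, "glucose": 100.0,
    }), 2))
    ```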

  10. Student nurses need more than maths to improve their drug calculating skills.

    Science.gov (United States)

    Wright, Kerri

    2007-05-01

    Nurses need to be able to perform accurate drug calculations in order to safely administer drugs to their patients (NMC, 2002). Studies have shown, however, that nurses do not always have the necessary skills to calculate accurate drug dosages and are potentially administering incorrect dosages of drugs to their patients (Hutton, M. 1998. Nursing Mathematics: the importance of application. Nursing Standard 13(11), 35-38; Kapborg, I. 1994. Calculation and administration of drug dosage by Swedish nurses, Student Nurses and Physicians. International Journal for Quality in Health Care 6(4), 389-395; O'Shea, E. 1999. Factors contributing to medication errors: a literature review. Journal of Advanced Nursing 8, 496-504; Wilson, A. 2003. Nurses maths: researching a practical approach. Nursing Standard 17(47), 33-36). The literature indicates that in order to improve drug calculations, strategies need to focus on both the mathematical skills and the conceptual skills of student nurses, so that they can interpret clinical data into drug calculations to be solved. A study was undertaken to investigate the effectiveness of implementing several strategies which focussed on developing the mathematical and conceptual skills of student nurses to improve their drug calculation skills. The study found that implementing a range of strategies which addressed these two developmental areas significantly improved the drug calculation skills of nurses. The study also indicates that a range of strategies has the potential to ensure that the skills taught are retained by the student nurses. Although the strategies significantly improved the drug calculation skills of student nurses, the fact that only 2 students were able to achieve 100% in their drug calculation test indicates a need for further research in this area.

  11. Improved accuracy of intraocular lens power calculation with the Zeiss IOLMaster.

    Science.gov (United States)

    Olsen, Thomas

    2007-02-01

    This study aimed to demonstrate how the level of accuracy in intraocular lens (IOL) power calculation can be improved with optical biometry using partial optical coherence interferometry (PCI) (Zeiss IOLMaster) and current anterior chamber depth (ACD) prediction algorithms. Intraocular lens power in 461 consecutive cataract operations was calculated using both PCI and ultrasound and the accuracy of the results of each technique was compared. To illustrate the importance of ACD prediction per se, predictions were calculated using both a recently published 5-variable method and the Haigis 2-variable method and the results compared. All calculations were optimized in retrospect to account for systematic errors, including IOL constants and other offset errors. The average absolute IOL prediction error (observed minus expected refraction) was 0.65 dioptres with ultrasound and 0.43 D with PCI using the 5-variable ACD prediction method. The accuracy of IOL power calculation can be significantly improved using calibrated axial length readings obtained with PCI and modern IOL power calculation formulas incorporating the latest generation of ACD prediction algorithms.

  12. Improvements in EBR-2 core depletion calculations

    International Nuclear Information System (INIS)

    Finck, P.J.; Hill, R.N.; Sakamoto, S.

    1991-01-01

    The need for accurate core depletion calculations in Experimental Breeder Reactor No. 2 (EBR-2) is discussed. Because of the unique physics characteristics of EBR-2, it is difficult to obtain accurate and computationally efficient multigroup flux predictions. This paper describes the effect of various conventional and higher order schemes for group constant generation and for flux computations; results indicate that higher-order methods are required, particularly in the outer regions (i.e. the radial blanket). A methodology based on Nodal Equivalence Theory (N.E.T.) is developed which allows retention of the accuracy of a higher order solution with the computational efficiency of a few group nodal diffusion solution. The application of this methodology to three-dimensional EBR-2 flux predictions is demonstrated; this improved methodology allows accurate core depletion calculations at reasonable cost. 13 refs., 4 figs., 3 tabs

  13. Improvements in the model of neutron calculations for research reactors

    International Nuclear Information System (INIS)

    Calzetta, Osvaldo; Leszczynski, Francisco

    1987-01-01

    Within the research program in the field of neutron physics calculations being carried out in the Nuclear Engineering Division at the Centro Atomico Bariloche, the errors that appear in the final results due to some typical approximations are investigated. For MTR-type research reactors, two approximations are investigated, for high and low enrichment: the treatment of the geometry and the method of few-group cell cross-section calculation, particularly in the resonance energy region. Commonly, the cell constants used for the entire reactor calculation are obtained by homogenizing the full fuel elements with one-dimensional calculations. An improvement is made that explicitly includes the fuel element frames in the core calculation geometry. Besides, a detailed treatment, in energy and space, is used to find the resonance few-group cross sections, and a comparison of the results of detailed and approximated calculations is made. The minimum number and the best mesh of energy groups needed for cell calculations are also established. (Author) [es

  14. National Emergency Preparedness and Response: Improving for Incidents of National Significance

    National Research Council Canada - National Science Library

    Clayton, Christopher M

    2006-01-01

    The national emergency management system has need of significant improvement in its contingency planning and early consolidation of effort and coordination between federal, state, and local agencies...

  15. National requirements for improved elevation data

    Science.gov (United States)

    Snyder, Gregory I.; Sugarbaker, Larry J.; Jason, Allyson L.; Maune, David F.

    2014-01-01

    This report presents the results of surveys, structured interviews, and workshops conducted to identify key national requirements for improved elevation data for the United States and its territories, including coastlines. Organizations also identified and reported the expected economic benefits that would be realized if their requirements for improved elevation were met (appendixes 1–3). This report describes the data collection methodology and summarizes the findings. Participating organizations included 34 Federal agencies, 50 States and two territories, and a sampling of local governments, tribes, and nongovernmental organizations. The nongovernmental organizations included The Nature Conservancy and a sampling of private sector businesses. These data were collected in 2010-2011 as part of the National Enhanced Elevation Assessment (NEEA), a study to identify program alternatives for better meeting the Nation’s elevation data needs. NEEA tasks included the collection of national elevation requirements; analysis of the benefits and costs of meeting these requirements; assessment of emerging elevation technologies, lifecycle data management needs, and costs for managing and distributing a national-scale dataset and derived products; and candidate national elevation program alternatives that balance costs and benefits in meeting the Nation’s elevation requirements. The NEEA was sponsored by the National Digital Elevation Program (NDEP), a government coordination body with the U.S. Geological Survey (USGS) as managing partner that includes the National Geospatial-Intelligence Agency (NGA), the Federal Emergency Management Agency (FEMA), the Natural Resources Conservation Service (NRCS), the U.S. Army Corps of Engineers (USACE), and the National Oceanic and Atmospheric Administration (NOAA), among the more than a dozen agencies and organizations. The term enhanced elevation data as used in this report refers broadly to three-dimensional measurements of land or

  16. Improved national calculation procedures to assess energy requirements, nitrogen and VS excretions of dairy cows in the German emission model GAS-EM

    DEFF Research Database (Denmark)

    Dämmgen, Ulrich; Haenel, Hans-Dieter; Rösemann, Claus

    2009-01-01

    The calculation module for the assessment of feed intake and excretion rates of dairy cows in the German agricultural emission model GAS-EM is described in detail. The module includes the description of methane emissions from enteric fermentation as well as the assessment of volatile solids ... matter intake. The results agree well with those obtained from regression models and respective experiments. The model is able to reflect national and regional peculiarities in dairy cow husbandry. It is an adequate tool for the establishment of emission inventories and for the construction of scenarios...

  17. Improvement of calculation method for temperature coefficient of HTTR by neutronics calculation code based on diffusion theory. Analysis for temperature coefficient by SRAC code system

    International Nuclear Information System (INIS)

    Goto, Minoru; Takamatsu, Kuniyoshi

    2007-03-01

    The HTTR temperature coefficients required for core dynamics calculations had been obtained from HTTR core calculations with a diffusion code, with corrections based on core calculation results from the Monte Carlo code MVP. This calculation method for the temperature coefficients was considered to have some issues to be improved, so the method was improved to obtain temperature coefficients that do not require corrections by the Monte Carlo code. Specifically, from the point of view of the neutron spectrum obtained by lattice calculations, the lattice model used for the calculation of the temperature coefficients was revised. The HTTR core calculations were performed by the diffusion code with group constants generated by lattice calculations with the improved lattice model. The core calculations and the lattice calculations were performed with the SRAC code system. The HTTR core dynamics calculation was performed with the temperature coefficient obtained from the core calculation results. Consequently, the core dynamics calculation result showed good agreement with the experimental data, and a valid temperature coefficient could be calculated by the diffusion code alone, without corrections by the Monte Carlo code. (author)

  18. Recent improvements in the calculation of prompt fission neutron spectra: Preliminary results

    International Nuclear Information System (INIS)

    Madland, D.G.; LaBauve, R.J.; Nix, J.R.

    1989-01-01

    We consider three topics in the refinement and improvement of our original calculations of prompt fission neutron spectra. These are an improved calculation of the prompt fission neutron spectrum N(E) from the spontaneous fission of 252Cf, a complete calculation of the prompt fission neutron spectrum matrix N(E,E_n) from the neutron-induced fission of 235U at incident neutron energies ranging from 0 to 15 MeV, and an assessment of the scission neutron component of the prompt fission neutron spectrum. Preliminary results will be presented and compared with experimental measurements and an evaluation. A suggestion is made for new integral cross section measurements. (author). 45 refs, 12 figs, 1 tab

  19. Improvement of methods for calculation of sound insulation in buildings

    OpenAIRE

    Mašović, Draško B.

    2015-01-01

    The main objects of this work are the methods for calculation of sound insulation based on the classical model of sound propagation in buildings and on single-number rating of sound insulation. The aim of the work is to examine the possibilities for improving the standard methods for quantification and calculation of sound insulation, in order to achieve higher accuracy of the obtained numerical values and better correlation with the subjective impression of acoustic comfort in buildings. Proc...

  20. Improvements in practical applicability of NSHEX: nodal transport calculation code for three-dimensional hexagonal-Z geometry

    International Nuclear Information System (INIS)

    Sugino, Kazuteru

    1998-07-01

    As a tool for performing fast reactor core calculations with high accuracy, NSHEX, a nodal transport calculation code for three-dimensional hexagonal-Z geometry, is under development. To improve the practical applicability of NSHEX, for instance in its application to safety analysis and commercial reactor core design studies, we investigated the basic theory used in it, improved the program performance, and evaluated its applicability to the analysis of commercial reactor cores. The current studies show the following: (1) An improvement in the treatment of radial leakage in the radial nodal coupling equation improved calculational convergence for safety analysis calculations, so the applicability of NSHEX to safety analysis was improved. (2) Comparison of results from NSHEX and the standard core calculation code confirmed consistency between them. (3) Evaluation of the effect of calculational conditions showed that calculations with appropriate nodal expansion orders and Sn orders correspond to those under the most detailed conditions; however, further investigation is required to reduce the uncertainty in the results due to the treatment of high-order flux moments. (4) A whole-core version of NSHEX enabling calculation for any FBR core geometry has been developed, which improved the general applicability of NSHEX. (5) An investigation of the applicability of the rebalance method to acceleration clarified that it improved calculational convergence and was effective. (J.P.N.)

  1. Improved guidelines for RELAP4/MOD6 reflood calculations

    International Nuclear Information System (INIS)

    Chen, T.H.; Fletcher, C.D.

    1980-01-01

    Computer simulations were performed for an extensive selection of forced- and gravity-feed reflood experiments. This effort was a portion of the assessment procedure for the RELAP4/MOD6 thermal hydraulic computer code. A common set of guidelines, based on recommendations from the code developers, was used in determining the model and user-selected input options for each calculation. The comparison of code-calculated and experimental data was then used to assess the capability of the RELAP4/MOD6 code to model the reflood phenomena. As a result of the assessment, the guidelines for determining the user-selected input options were improved

  2. 76 FR 1592 - National Poultry Improvement Plan; General Conference Committee Meeting

    Science.gov (United States)

    2011-01-11

    ...] National Poultry Improvement Plan; General Conference Committee Meeting AGENCY: Animal and Plant Health... General Conference Committee of the National Poultry Improvement Plan. DATES: The General Conference... Improvement Plan, VS, APHIS, 1498 Klondike Road, Suite 101, Conyers, GA 30094-5104; (770) 922-3496...

  3. The benefits of improved national elevation data

    Science.gov (United States)

    Snyder, Gregory I.

    2013-01-01

    This article describes how the National Enhanced Elevation Assessment (NEEA) has identified substantial benefits that could come about if improved elevation data were publicly available for current and emerging applications and business uses such as renewable energy, precision agriculture, and intelligent vehicle navigation and safety. In order to support these diverse needs, new national elevation data with higher resolution and accuracy are needed. The 3D Elevation Program (3DEP) initiative was developed to meet the majority of these needs and it is expected that 3DEP will result in new, unimagined information services that would result in job growth and the transformation of the geospatial community. Private-sector data collection companies are continuously evolving sensors and positioning technologies that are needed to collect improved elevation data. An initiative of this scope might also provide an opportunity for companies to improve their capabilities and produce even higher data quality and consistency at a pace that might not have otherwise occurred.

  4. Use of integral experiments to improve neutron propagation and gamma heating calculations

    International Nuclear Information System (INIS)

    Oceraies, Y.; Caumette, P.; Devillers, C.; Bussac, J.

    1979-01-01

    1) The studies to define and improve the accuracies of neutron propagation and gamma heating calculations from integral experiments are encompassed in the fast reactor physics program at CEA. 2) A systematic analysis of neutron propagation in Fe-Na clean media, with volumetric composition varying between 0 and 100% sodium, has been performed on the HARMONIE source reactor. Gamma heating traverses in the core, the blankets and several control rods have been measured in the R-Z core program at MASURCA. The experimental techniques, the accuracies and the results obtained are given. The approximations of the calculational methods used to analyse these experiments and to predict the corresponding design parameters are also described. 3) Particular emphasis is given to the methods planned to improve the fundamental data used in neutron propagation calculations, using the discrepancies observed between measured and calculated results in clean integral experiments. One of these approaches, similar to the techniques used in core physics, relies upon sensitivity studies and eventually on adjustment techniques applied to neutron propagation. (author)

  5. Updating and improving the National Population Database to National Population Database 2

    OpenAIRE

    SMITH, Graham; FAIRBURN, Jonathan

    2008-01-01

    In 2004 Staffordshire University delivered the National Population Database for use in estimating populations at risk under the Control of Major Accident Hazards Regulations (COMAH). In 2006 an assessment of the updating and potential improvements to NPD was delivered to HSE. Between Autumn 2007 and Summer 2008 an implementation of the feasibility report led to the creation of National Population Database 2 which both updated and expanded the datasets contained in the original NPD. This repor...

  6. Improving MODPRESS heat loss calculations for PWR pressurizers

    International Nuclear Information System (INIS)

    Ramos, Natalia V.; Lira, Carlos A. Brayner O.; Castrillho, Lazara S.

    2009-01-01

    The improvement of heat loss calculations in the MODPRESS transient code for PWR pressurizer analysis is the main focus of this investigation. Initially, a heat loss model was built based on heat transfer coefficient (HTC) correlations obtained from handbooks of thermal engineering. A hand calculation for Neptunus experimental test number U47 yielded a thermal power loss of 11.2 kW against 17.3 kW given by MODPRESS at the same conditions, while the experimental estimate is 17 kW. This comparison is valid only for steady state, or before starting the transient experiment, because MODPRESS does not update HTCs when the transient phase begins. Furthermore, it must be noted that MODPRESS heat transfer coefficients are adjusted to reproduce the experimental value for the specific type of pressurizer. After inserting the new routine for HTCs into MODPRESS, the heat loss was calculated as 11.4 kW, a value very close to the first estimate but far below the 17 kW found in the U47 experiment. In this paper, the heat loss model and results are described. Further research is being developed to find a more general HTC that allows analysis of the effects of heat losses on the transient behavior of the Neptunus and IRIS pressurizers. (author)
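
    For context only, a hand estimate of the kind described above can be structured as Newton's law of cooling with a heat transfer coefficient taken from a handbook correlation; the coefficient, area and temperatures below are placeholder values, not those of the Neptunus facility or of MODPRESS.

    ```python
    # Back-of-the-envelope heat loss of the general form used in such hand
    # calculations: Q = h * A * (T_surface - T_ambient), with the heat transfer
    # coefficient h taken from a handbook correlation. All numbers are
    # placeholders, NOT the Neptunus/MODPRESS values quoted in the abstract.

    h_combined = 6.0        # W/(m^2 K), assumed combined convection/radiation HTC
    area = 12.0             # m^2, assumed outer surface area
    t_surface = 60.0        # deg C, assumed outer surface temperature
    t_ambient = 25.0        # deg C, assumed ambient air temperature

    q_loss = h_combined * area * (t_surface - t_ambient)   # watts
    print(f"estimated heat loss: {q_loss / 1000.0:.1f} kW")
    ```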

  7. Nationwide quality improvement of cholecystectomy: results from a national database

    DEFF Research Database (Denmark)

    Harboe, Kirstine M; Bardram, Linda

    2011-01-01

    To evaluate whether quality improvements in the performance of cholecystectomy have been achieved in Denmark since 2006, after revision of the Danish National Guidelines for treatment of gallstones...

  8. 77 FR 1051 - General Conference Committee of the National Poultry Improvement Plan; Meeting

    Science.gov (United States)

    2012-01-09

    ...] General Conference Committee of the National Poultry Improvement Plan; Meeting AGENCY: Animal and Plant... the General Conference Committee of the National Poultry Improvement Plan. DATES: The meeting will be... INFORMATION CONTACT: Dr. C. Stephen Roney, Senior Coordinator, National Poultry Improvement Plan, VS, APHIS...

  9. Neutronics calculations for the Oak Ridge National Laboratory Tokamak Reactor Studies

    International Nuclear Information System (INIS)

    Santoro, R.T.; Baker, V.C.; Barnes, J.M.

    1976-01-01

    Neutronics calculations have been carried out to analyze the nuclear performance of conceptual blanket and shield designs for the Tokamak Experimental Power Reactor (EPR) and the Tokamak Demonstration Reactor Plant (DRP) being considered at the Oak Ridge National Laboratory. These reactor designs represent a sequence in the commercialization of fusion-generated electrical power. All of the calculations were carried out using the one-dimensional discrete ordinates code ANISN and the latest available ENDF/B-IV coupled neutron-gamma-ray transport cross-section data, fluence-to-kerma conversion factors, and radiation damage cross-section data. The calculations include spatial and integral heating-rate estimates in the reactor with emphasis on the recovery of fusion neutron energy in the blanket and limiting the heat-deposition rate in the superconducting toroidal field coils. Radiation damage due to atomic displacements and gas production produced in the reactor structural material and in the toroidal field coil windings were also estimated. The tritium-breeding ratio when natural lithium is used as the fertile material in the DRP blanket and in the experimental breeding modules in the EPR is also given

  10. Improved calculation of the equilibrium magnetization of arterial blood in arterial spin labeling

    DEFF Research Database (Denmark)

    Ahlgren, André; Wirestam, Ronnie; Knutsson, Linda

    2018-01-01

    PURPOSE: To propose and assess an improved method for calculating the equilibrium magnetization of arterial blood (M0a), used for calibration of perfusion estimates in arterial spin labeling. METHODS: Whereas standard M0a calculation is based on dividing a proton density-weighted image by an ave...

  11. Monte Carlo dose calculation improvements for low energy electron beams using eMC

    International Nuclear Information System (INIS)

    Fix, Michael K; Frei, Daniel; Volken, Werner; Born, Ernst J; Manser, Peter; Neuenschwander, Hans

    2010-01-01

    The electron Monte Carlo (eMC) dose calculation algorithm in Eclipse (Varian Medical Systems) is based on the macro MC method and is able to predict dose distributions for high energy electron beams with high accuracy. However, there are limitations for low energy electron beams. This work aims to improve the accuracy of the dose calculation using eMC for 4 and 6 MeV electron beams of Varian linear accelerators. Improvements implemented into the eMC include (1) improved determination of the initial electron energy spectrum by increased resolution of mono-energetic depth dose curves used during beam configuration; (2) inclusion of all the scrapers of the applicator in the beam model; (3) reduction of the maximum size of the sphere to be selected within the macro MC transport when the energy of the incident electron is below certain thresholds. The impact of these changes in eMC is investigated by comparing calculated dose distributions for 4 and 6 MeV electron beams at source to surface distance (SSD) of 100 and 110 cm with applicators ranging from 6 x 6 to 25 x 25 cm(2) of a Varian Clinac 2300C/D with the corresponding measurements. Dose differences between calculated and measured absolute depth dose curves are reduced from 6% to less than 1.5% for both energies and all applicators considered at SSD of 100 cm. Using the original eMC implementation, absolute dose profiles at depths of 1 cm, d(max) and R50 in water lead to dose differences of up to 8% for applicators larger than 15 x 15 cm(2) at SSD 100 cm. Those differences are now reduced to less than 2% for all dose profiles investigated when the improved version of eMC is used. At SSD of 110 cm the dose difference for the original eMC version is even more pronounced and can be larger than 10%. Those differences are reduced to within 2% or 2 mm with the improved version of eMC. In this work several enhancements were made in the eMC algorithm leading to significant improvements in the accuracy of the dose calculation.

  12. Monte Carlo dose calculation improvements for low energy electron beams using eMC.

    Science.gov (United States)

    Fix, Michael K; Frei, Daniel; Volken, Werner; Neuenschwander, Hans; Born, Ernst J; Manser, Peter

    2010-08-21

    The electron Monte Carlo (eMC) dose calculation algorithm in Eclipse (Varian Medical Systems) is based on the macro MC method and is able to predict dose distributions for high energy electron beams with high accuracy. However, there are limitations for low energy electron beams. This work aims to improve the accuracy of the dose calculation using eMC for 4 and 6 MeV electron beams of Varian linear accelerators. Improvements implemented into the eMC include (1) improved determination of the initial electron energy spectrum by increased resolution of mono-energetic depth dose curves used during beam configuration; (2) inclusion of all the scrapers of the applicator in the beam model; (3) reduction of the maximum size of the sphere to be selected within the macro MC transport when the energy of the incident electron is below certain thresholds. The impact of these changes in eMC is investigated by comparing calculated dose distributions for 4 and 6 MeV electron beams at source to surface distance (SSD) of 100 and 110 cm with applicators ranging from 6 x 6 to 25 x 25 cm(2) of a Varian Clinac 2300C/D with the corresponding measurements. Dose differences between calculated and measured absolute depth dose curves are reduced from 6% to less than 1.5% for both energies and all applicators considered at SSD of 100 cm. Using the original eMC implementation, absolute dose profiles at depths of 1 cm, d(max) and R50 in water lead to dose differences of up to 8% for applicators larger than 15 x 15 cm(2) at SSD 100 cm. Those differences are now reduced to less than 2% for all dose profiles investigated when the improved version of eMC is used. At SSD of 110 cm the dose difference for the original eMC version is even more pronounced and can be larger than 10%. Those differences are reduced to within 2% or 2 mm with the improved version of eMC. In this work several enhancements were made in the eMC algorithm leading to significant improvements in the accuracy of the dose

  13. Managing Uncertainty in Runoff Estimation with the U.S. Environmental Protection Agency National Stormwater Calculator.

    Science.gov (United States)

    The U.S. Environmental Protection Agency National Stormwater Calculator (NSWC) simplifies the task of estimating runoff through a straightforward simulation process based on the EPA Stormwater Management Model. The NSWC accesses localized climate and soil hydrology data, and opti...

  14. National Quality Improvement Center on Early Childhood

    Science.gov (United States)

    Browne, Charlyn Harper

    2014-01-01

    The national Quality Improvement Center on Early Childhood (QIC-EC) funded four research and demonstration projects that tested child maltreatment prevention approaches. The projects were guided by several key perspectives: the importance of increasing protective factors in addition to decreasing risk factors in child maltreatment prevention…

  15. An improved correlated sampling method for calculating correction factor of detector

    International Nuclear Information System (INIS)

    Wu Zhen; Li Junli; Cheng Jianping

    2006-01-01

    In the case of a small detector lying inside a bulk medium, there are two problems in calculating the correction factors of the detector. One is that the detector is too small for particles to arrive at and collide in; the other is that the ratio of two quantities is not accurate enough. The method discussed in this paper, which combines correlated sampling with modified particle-collision auto-importance sampling and has been implemented on the MCNP-4C platform, can solve these two problems. In addition, three other variance reduction techniques are each combined with correlated sampling to calculate a simple model of the correction factors of detectors. The results prove that, although all the variance reduction techniques combined with correlated sampling can improve the calculating efficiency, the method combining modified particle-collision auto-importance sampling with correlated sampling is the most efficient one. (authors)
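
    The second problem mentioned above, the accuracy of a ratio of two quantities, is what correlated sampling addresses. The generic sketch below (not the MCNP-4C implementation from the paper) shows that reusing the same random samples for two similar Monte Carlo estimates makes their ratio far less noisy than sampling them independently.

    ```python
    import random
    import statistics

    # Generic illustration of correlated sampling for a ratio estimate: when two
    # similar quantities are sampled with the SAME random points, their statistical
    # fluctuations largely cancel in the ratio. This is not the MCNP-4C scheme
    # described in the abstract, only the underlying idea.

    def f_ref(x):        # reference integrand on [0, 1]
        return 1.0 + 0.50 * x * x

    def f_pert(x):       # slightly perturbed integrand
        return 1.0 + 0.52 * x * x

    def ratio_independent(n):
        num = sum(f_pert(random.random()) for _ in range(n)) / n
        den = sum(f_ref(random.random()) for _ in range(n)) / n
        return num / den

    def ratio_correlated(n):
        xs = [random.random() for _ in range(n)]
        num = sum(f_pert(x) for x in xs) / n
        den = sum(f_ref(x) for x in xs) / n
        return num / den

    indep = [ratio_independent(1000) for _ in range(200)]
    corr = [ratio_correlated(1000) for _ in range(200)]
    print("spread of independent-sampling ratio:", statistics.stdev(indep))
    print("spread of correlated-sampling ratio: ", statistics.stdev(corr))
    ```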

  16. Report: EPA Improved Its National Security Information Program, but Some Improvements Still Needed

    Science.gov (United States)

    Report #16-P-0196, June 2, 2016. The EPA will continue to improve its national security information program by completing information classification guides that can be used uniformly and consistently throughout the agency.

  17. 75 FR 70712 - General Conference Committee of the National Poultry Improvement Plan; Reestablishment

    Science.gov (United States)

    2010-11-18

    ...] General Conference Committee of the National Poultry Improvement Plan; Reestablishment AGENCY: Animal and... Poultry Improvement Plan (Committee) for a 2-year period. The Secretary of Agriculture has determined that.... Rhorer, Senior Coordinator, National Poultry Improvement Plan, VS, APHIS, USDA, Suite 101, 1498 Klondike...

  18. National, ready-to-use climate indicators calculation and dissemination

    Science.gov (United States)

    Desiato, F.; Fioravanti, G.; Fraschetti, P.; Perconti, W.; Toreti, A.

    2010-09-01

    In Italy, meteorological data necessary and useful for climate studies are collected, processed and archived by a wide range of national and regional institutions. As a result, the density of the stations, the length and frequency of the observations, the quality control procedures and the database structure vary from one dataset to another. In order to maximize the use of those data for climate knowledge and climate change assessments, a computerized system for the collection, quality control, calculation, regular update and rapid dissemination of climate indicators (named SCIA) was developed. Along with the information provided by complete metadata, climate indicators consist of statistics (mean, extremes, date of occurrence, standard deviation) over ten-day, monthly and yearly time periods of meteorological variables, including temperature, precipitation, humidity, wind, water balance, evapotranspiration, degree-days, cloud cover, sea level pressure and solar radiation. In addition, normal values over thirty-year reference climatological periods and yearly anomalies are calculated and made available. All climate indicators, as well as their time series at a single location or spatial distribution at a selected time, are available through a dedicated web site (www.scia.sinanet.apat.it). In addition, secondary products like high-resolution temperature maps obtained by kriging spatial interpolation are made available. Over the last three years, about 40000 visitors accessed the SCIA web site, with an average of 45 visitors per day. Most frequent visitors belong to categories like universities and research institutes; private companies and the general public are present as well. Apart from research purposes, climate indicators disseminated through SCIA may be used in several socio-economic sectors like energy consumption, water management, agriculture, tourism and health. With regard to our activity, we rely on these indicators for the estimation of
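
    A minimal sketch of the kind of indicator described above: a monthly mean and its anomaly with respect to a thirty-year reference normal. The numbers are invented, and SCIA's quality control and processing chain are not represented.

    ```python
    # Minimal example of a monthly indicator and its anomaly against a 30-year
    # normal. The daily values and the normal are invented; SCIA's quality
    # control and station handling are not represented here.

    daily_mean_temps_june = [21.4, 22.0, 23.1, 24.8, 25.2, 24.0, 22.7, 23.5,
                             24.9, 26.1, 25.5, 24.2, 23.8, 22.9, 23.0, 24.4,
                             25.7, 26.3, 25.0, 24.1, 23.6, 22.8, 23.9, 25.4,
                             26.0, 25.8, 24.7, 23.3, 22.5, 23.2]   # deg C

    monthly_mean = sum(daily_mean_temps_june) / len(daily_mean_temps_june)
    normal_1971_2000_june = 23.1          # hypothetical 30-year reference normal
    anomaly = monthly_mean - normal_1971_2000_june

    print(f"June mean {monthly_mean:.1f} C, anomaly {anomaly:+.1f} C vs 1971-2000")
    ```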

  19. Improved method for calculating neoclassical transport coefficients in the banana regime

    Energy Technology Data Exchange (ETDEWEB)

    Taguchi, M., E-mail: taguchi.masayoshi@nihon-u.ac.jp [College of Industrial Technology, Nihon University, Narashino 275-8576 (Japan)

    2014-05-15

    The conventional neoclassical moment method in the banana regime is improved by increasing the accuracy of the approximation to the linearized Fokker-Planck collision operator. The improved method is formulated for a multiple-ion plasma in general tokamak equilibria. Explicit computation in a model magnetic field shows that the neoclassical transport coefficients can be accurately calculated over the full range of aspect ratio by the improved method. Some of the neoclassical transport coefficients at intermediate aspect ratio are found to deviate appreciably from those obtained by the conventional moment method. The differences between the transport coefficients computed with the two methods are up to about 20%.

  20. Declination Calculator

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Declination is calculated using the current International Geomagnetic Reference Field (IGRF) model. Declination is calculated using the current World Magnetic Model...

  1. Improving the accuracy of dynamic mass calculation

    Directory of Open Access Journals (Sweden)

    Oleksandr F. Dashchenko

    2015-06-01

    With the acceleration of goods transport, cargo accounting plays an important role in today's global and complex environment. Weight is the most reliable indicator for materials control. Unlike many other variables that can be measured indirectly, weight can be measured directly and accurately. Using strain-gauge transducers, a weight value can be obtained within a few milliseconds; such values correspond to the momentary load acting on the sensor. Determination of the weight of moving transport is only possible by appropriate processing of the sensor signal. The aim of the research is to develop a methodology for weighing freight rolling stock which increases the accuracy of the measurement of dynamic mass, in particular of a moving wagon. Apart from time-series methods, preliminary filtration is used to improve the accuracy of calculation. The results of the simulation are presented.
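
    As a toy illustration of the idea (not the authors' methodology), the sketch below simulates a fluctuating strain-gauge force signal from a moving wagon and shows how a simple moving-average filter, standing in for the preliminary filtration mentioned above, steadies the momentary mass readings; all signal parameters are assumed.

    ```python
    import math
    import random
    import statistics

    # Toy weighing-in-motion example: the momentary strain-gauge reading of a
    # moving wagon oscillates around the static weight, and a simple moving-average
    # filter (a stand-in for the preliminary filtration mentioned in the abstract,
    # not the authors' method) steadies it. All parameters are assumed.

    TRUE_MASS_KG = 60_000.0
    G = 9.81
    FS = 1000                                   # samples per second
    N = 2000                                    # 2 s of signal

    # simulated force signal: static weight + 4 Hz oscillation + sensor noise
    signal = [TRUE_MASS_KG * G * (1.0 + 0.05 * math.sin(2.0 * math.pi * 4.0 * i / FS))
              + random.gauss(0.0, 2000.0) for i in range(N)]

    def moving_average(x, window):
        out, acc = [], 0.0
        for i, v in enumerate(x):
            acc += v
            if i >= window:
                acc -= x[i - window]
            out.append(acc / min(i + 1, window))
        return out

    filtered = moving_average(signal, window=FS // 4)   # 0.25 s = one oscillation period

    momentary = [v / G for v in signal]                 # instantaneous mass readings
    smoothed = [v / G for v in filtered]
    print("spread of momentary readings (kg):", round(statistics.pstdev(momentary)))
    print("spread of filtered readings  (kg):", round(statistics.pstdev(smoothed)))
    print("filtered mass estimate (kg):      ", round(statistics.mean(smoothed)))
    ```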

  2. Improved method of generating bit reversed numbers for calculating fast fourier transform

    Digital Repository Service at National Institute of Oceanography (India)

    Suresh, T.

    Fast Fourier Transform (FFT) is an important tool required for signal processing in defence applications. This paper reports an improved method for generating bit reversed numbers needed in calculating FFT using radix-2. The refined algorithm takes...
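
    The abstract is truncated before the refined algorithm is given; for context, a minimal version of the conventional bit-reversal index permutation used in a radix-2 FFT is sketched below. The paper's improved method is not reproduced here.

    ```python
    # Conventional bit-reversal of indices for a radix-2 FFT of length 2**bits.
    # This is the standard textbook approach, shown for context only; the refined
    # algorithm reported in the paper is not reproduced here.

    def bit_reverse(index: int, bits: int) -> int:
        rev = 0
        for _ in range(bits):
            rev = (rev << 1) | (index & 1)
            index >>= 1
        return rev

    def bit_reversed_order(n: int) -> list:
        """Return the bit-reversed index permutation for an FFT of length n
        (n must be a power of two)."""
        bits = n.bit_length() - 1
        return [bit_reverse(i, bits) for i in range(n)]

    print(bit_reversed_order(8))   # [0, 4, 2, 6, 1, 5, 3, 7]
    ```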

  3. 77 FR 42257 - General Conference Committee of the National Poultry Improvement Plan; Solicitation for Membership

    Science.gov (United States)

    2012-07-18

    ...] General Conference Committee of the National Poultry Improvement Plan; Solicitation for Membership AGENCY... regional membership for the General Conference Committee of the National Poultry Improvement Plan. DATES... INFORMATION CONTACT: Dr. C. Stephen Roney, Senior Coordinator, National Poultry Improvement Plan, VS, APHIS...

  4. Improved stiffness confinement method within the coarse mesh finite difference framework for efficient spatial kinetics calculation

    International Nuclear Information System (INIS)

    Park, Beom Woo; Joo, Han Gyu

    2015-01-01

    Highlights:
    • The stiffness confinement method is combined with multigroup CMFD with SENM nodal kernel.
    • The systematic methods for determining the shape and amplitude frequencies are established.
    • Eigenvalue problems instead of fixed source problems are solved in the transient calculation.
    • It is demonstrated that much larger time step sizes can be used with the SCM–CMFD method.
    Abstract: An improved Stiffness Confinement Method (SCM) is formulated within the framework of the coarse mesh finite difference (CMFD) formulation for efficient multigroup spatial kinetics calculation. The algorithm for searching for the amplitude frequency that makes the dynamic eigenvalue unity is developed in a systematic way along with the methods for determining the shape and precursor frequencies. A nodal calculation scheme is established within the CMFD framework to incorporate the cross section changes due to thermal feedback and dynamic frequency update. The conditional nodal update scheme is employed such that the transient calculation is performed mostly with the CMFD formulation and the CMFD parameters are conditionally updated by intermittent nodal calculations. A quadratic representation of amplitude frequency is introduced as another improvement. The performance of the improved SCM within the CMFD framework is assessed by comparing the solution accuracy and computing times for the NEACRP control rod ejection benchmark problems with those obtained with the Crank–Nicholson method with exponential transform (CNET). It is demonstrated that the improved SCM is beneficial for large time step size calculations with stability and accuracy enhancement

  5. Relationship Between the Remaining Years of Healthy Life Expectancy in Older Age and National Income Level, Educational Attainment, and Improved Water Quality.

    Science.gov (United States)

    Kim, Jong In; Kim, Gukbin

    2016-10-01

    The remaining years of healthy life expectancy (RYH) at age 65 years can be calculated as RYH(65) = healthy life expectancy − 65 years. This study confirms the associations between socioeconomic indicators and RYH(65) in 148 countries. The RYH data were obtained from the World Health Organization. Significant positive correlations were found between RYH(65) in men and women and the socioeconomic indicators national income, education level, and improved drinking water. Finally, the predictors of RYH(65) in men and women were used to build a model of the RYH using higher socioeconomic indicators (R² = 0.744). Educational attainment, national income level, and improved water quality influenced the RYH at 65 years. Therefore, policymaking to improve these country-level socioeconomic factors is expected to have latent effects on RYH in older age. © The Author(s) 2016.

  6. Examining Calculator Use among Students with and without Disabilities Educated with Different Mathematical Curricula

    Science.gov (United States)

    Bouck, Emily C.; Joshi, Gauri S.; Johnson, Linley

    2013-01-01

    This study assessed if students with and without disabilities used calculators (four-function, scientific, or graphing) to solve mathematics assessment problems and whether using calculators improved their performance. Participants were sixth- and seventh-grade students educated with either National Science Foundation (NSF)-funded or traditional…

  7. Improvements to the nuclear model code GNASH for cross section calculations at higher energies

    International Nuclear Information System (INIS)

    Young, P.G.; Chadwick, M.B.

    1994-01-01

    The nuclear model code GNASH, which in the past has been used predominantly for incident particle energies below 20 MeV, has been modified extensively for calculations at higher energies. The model extensions and improvements are described in this paper, and their significance is illustrated by comparing calculations with experimental data for incident energies up to 160 MeV

  8. CALCULATION OF MAGNETIC CHARACTERISTICS OF TRACTION ELECTRIC ENGINE WITH THE USE OF IMPROVED UNIVERSAL MAGNETIC CHARACTERISTICS

    Directory of Open Access Journals (Sweden)

    A. Y. Drubetskyi

    2017-06-01

    Purpose. The article aims to develop a technique for calculating the magnetic characteristics of uncompensated traction electric motors (TEM) at any degree of attenuation of excitation, based on the approximating expression for improved universal magnetic characteristics (UMC). It is also necessary to analyse expressions for the improved UMC with the aim of finding an expression that most fully satisfies the requirements for developing a technique for determining the inductive parameters of TEM. Methodology. To build the characteristics with the improved UMC, it is necessary to determine the saturation coefficient for each degree of attenuation of the excitation. This can only be done analytically. To simplify the analytical determination of the saturation coefficient, a method based on solving a system of two equations is proposed, one of which is the UMC itself and the second a straight line whose angular coefficient is proportional to the saturation coefficient. The resulting values of the saturation coefficient for excitation degrees β < 1 are essentially the shape coefficients of the magnetic characteristic. To avoid the need to determine the approximation coefficients each time the characteristics are calculated, a form of the improved UMC is proposed in which the magnetomotive force (MMF) of the excitation winding serves as the argument. Findings. Using the improved UMC it is possible to calculate the characteristics of uncompensated TEMs for any degree of attenuation of excitation. The accuracy of the calculation at β = 1 does not differ from that of the calculation with the UMC proposed by Prof. M. D. Nakhodkin. The same accuracy is preserved at excitation degrees different from unity. Originality. An analytical technique for calculating the magnetic (speed) characteristics of uncompensated TEM for any degree of attenuation with the help of an improved UMC is proposed. The analytical technique

  9. Monte Carlo calculated CT numbers for improved heavy ion treatment planning

    Directory of Open Access Journals (Sweden)

    Qamhiyeh Sima

    2014-03-01

    Better knowledge of CT number values and their uncertainties can be applied to improve heavy ion treatment planning. We developed a novel method to calculate CT numbers for a computed tomography (CT) scanner using the Monte Carlo (MC) code BEAMnrc/EGSnrc. To generate the initial beam shape and spectra we conducted full simulations of an X-ray tube, filters and beam shapers for a Siemens Emotion CT. The simulation output files were analyzed to calculate projections of a phantom with inserts. A simple reconstruction algorithm (FBP using a Ram-Lak filter) was applied to calculate the pixel values, which represent an attenuation coefficient normalized in such a way as to give zero for water (Hounsfield units, HU). Measured and Monte Carlo calculated CT numbers were compared. The average deviation between measured and simulated CT numbers was 4 ± 4 HU and the standard deviation σ was 49 ± 4 HU. The simulation also correctly predicted the behaviour of H-materials compared to Gammex tissue substitutes. We believe the developed approach represents a useful new tool for evaluating the effect of CT scanner and phantom parameters on CT number values.
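
    The normalization step described above (pixel values scaled so that water maps to zero) corresponds to the standard Hounsfield-unit definition, sketched below with an assumed attenuation coefficient for water; this is the conventional formula, not necessarily the authors' exact implementation.

    ```python
    # Standard Hounsfield-unit normalization of reconstructed linear attenuation
    # coefficients (water maps to 0 HU, air to about -1000 HU). Shown as the
    # conventional definition; the authors' exact normalization is not reproduced.

    def to_hounsfield(mu: float, mu_water: float) -> float:
        return 1000.0 * (mu - mu_water) / mu_water

    mu_water = 0.195   # example effective attenuation of water (1/cm), assumed
    print(to_hounsfield(0.195, mu_water))   # water     ->     0 HU
    print(to_hounsfield(0.000, mu_water))   # air       -> -1000 HU
    print(to_hounsfield(0.310, mu_water))   # bone-like -> about +590 HU
    ```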

  10. Improvements of MCOR: A Monte Carlo depletion code system for fuel assembly reference calculations

    Energy Technology Data Exchange (ETDEWEB)

    Tippayakul, C.; Ivanov, K. [Pennsylvania State Univ., Univ. Park (United States); Misu, S. [AREVA NP GmbH, An AREVA and SIEMENS Company, Erlangen (Germany)

    2006-07-01

    This paper presents the improvements of MCOR, a Monte Carlo depletion code system for fuel assembly reference calculations. The improvements of MCOR were initiated by the cooperation between the Penn State Univ. and AREVA NP to enhance the original Penn State Univ. MCOR version in order to be used as a new Monte Carlo depletion analysis tool. Essentially, a new depletion module using KORIGEN is utilized to replace the existing ORIGEN-S depletion module in MCOR. Furthermore, the online burnup cross section generation by the Monte Carlo calculation is implemented in the improved version instead of using the burnup cross section library pre-generated by a transport code. Other code features have also been added to make the new MCOR version easier to use. This paper, in addition, presents the result comparisons of the original and the improved MCOR versions against CASMO-4 and OCTOPUS. It was observed in the comparisons that there were quite significant improvements of the results in terms of k-inf, fission rate distributions and isotopic contents. (authors)

  11. AN IMPROVEMENT ON MASS CALCULATIONS OF SOLAR CORONAL MASS EJECTIONS VIA POLARIMETRIC RECONSTRUCTION

    International Nuclear Information System (INIS)

    Dai, Xinghua; Wang, Huaning; Huang, Xin; Du, Zhanle; He, Han

    2015-01-01

    The mass of a coronal mass ejection (CME) is calculated from the measured brightness and assumed geometry of Thomson scattering. The simplest geometry for mass calculations is to assume that all of the electrons are in the plane of the sky (POS). With additional information like source region or multiviewpoint observations, the mass can be calculated more precisely under the assumption that the entire CME is in a plane defined by its trajectory. Polarization measurements provide information on the average angle of the CME electrons along the line of sight of each CCD pixel from the POS, and this can further improve the mass calculations as discussed here. A CME event initiating on 2012 July 23 at 2:20 UT observed by the Solar Terrestrial Relations Observatory is employed to validate our method

  12. A contemporary analysis of Fournier gangrene using the National Surgical Quality Improvement Program.

    Science.gov (United States)

    Kim, Stanley Y; Dupree, James M; Le, Brian V; Kim, Dae Y; Zhao, Lee C; Kundu, Shilajit D

    2015-05-01

    To provide a nationwide contemporary description of surgical outcomes for Fournier gangrene (FG) and necrotizing fasciitis of the genitalia (NFG), because historically reported mortality rates for FG and NFG are based on small single-institution studies from the 1980s and 1990s. The National Surgical Quality Improvement Program is a risk-adjusted surgical database used by nearly 400 hospitals nationwide, which tracks preoperative, intraoperative, and 30-day postoperative clinical variables. Data are extracted from patient charts by an independent surgical clinical reviewer at each hospital. Using National Surgical Quality Improvement Program data from 2005 to 2009, we calculated 30-day mortality rates and identified preoperative factors associated with increased mortality. A total of 650 patients who underwent surgery for FG or NFG were identified. Fourteen patients with do-not-resuscitate orders placed preoperatively were excluded from the analyses. For the remaining 636 patients, the overall 30-day mortality was 10.1% (64 of 636). Fifty-seven percent of patients (360 of 636) were men, 70% (446 of 636) were white, and 13% (81 of 636) were African American. Multivariate logistic regression indicated that increased age (odds ratio [OR], 1.041; P = .004), body mass index (OR, 1.045; P <.001), and preoperative white blood cell count (OR, 1.061; P = .001), and decreased platelet count (OR, 0.993; P <.001) were all associated with an increased risk of death. We determined a surgical mortality rate for FG-NFG of 10.1%. This rate is about half of historically published estimates and similar to recent studies. The lower rate may indicate improvements in therapy. Increased age, body mass index, and white blood cell count, and decreased platelet count were all associated with an increased risk of 30-day mortality. Copyright © 2015 Elsevier Inc. All rights reserved.
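    As a rough illustration of the reported analysis (a 30-day mortality rate plus a multivariate logistic regression yielding per-unit odds ratios), the sketch below fits the same type of model on synthetic stand-in data; the NSQIP data themselves are not public, and all coefficients here are invented.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 636
    age       = rng.normal(55, 13, n)
    bmi       = rng.normal(30, 7, n)
    wbc       = rng.normal(15, 6, n)
    platelets = rng.normal(250, 90, n)

    # Synthetic outcome with roughly 10% mortality (coefficients are invented)
    logit = -4.9 + 0.04 * age + 0.045 * bmi + 0.06 * wbc - 0.007 * platelets
    died = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

    X = sm.add_constant(np.column_stack([age, bmi, wbc, platelets]))
    fit = sm.Logit(died, X).fit(disp=0)
    print(f"30-day mortality: {died.mean():.1%}")
    print("per-unit odds ratios (age, BMI, WBC, platelets):", np.round(np.exp(fit.params[1:]), 3))
    ```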

  13. Magnetic Field Calculator

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Magnetic Field Calculator will calculate the total magnetic field, including components (declination, inclination, horizontal intensity, northerly intensity,...

  14. An improved geometric algorithm for calculating the topology of lattice gauge fields

    International Nuclear Information System (INIS)

    Pugh, D.J.R.; Teper, M.; Oxford Univ.

    1989-01-01

    We implement the algorithm of Phillips and Stone on a hypercubic, periodic lattice and show that at currently accessible couplings the SU(2) topological charge so calculated is dominated by short-distance fluctuations. We propose and test an improvement to rid the measure of such lattice artifacts. We find that the improved algorithm produces a topological susceptibility that is consistent with that obtained by the alternative cooling method, thus resolving the controversial discrepancy between geometric and cooling methods. We briefly discuss the reasons for this and point out that our improvement is likely to be particularly effective when applied to the case of SU(3). (orig.)

  15. 77 FR 46658 - Proposed Priority; Technical Assistance To Improve State Data Capacity-National Technical...

    Science.gov (United States)

    2012-08-06

    ... Assistance To Improve State Data Capacity--National Technical Assistance Center To Improve State Capacity To... and later years. We take this action to focus attention on an identified national need to provide TA to improve the capacity of States to meet the data collection requirements of the Individuals with...

  16. 75 FR 23222 - National Poultry Improvement Plan; General Conference Committee Meeting and 40th Biennial Conference

    Science.gov (United States)

    2010-05-03

    ...] National Poultry Improvement Plan; General Conference Committee Meeting and 40th Biennial Conference AGENCY... notice of a meeting of the General Conference Committee of the National Poultry Improvement Plan (NPIP... Coordinator, National Poultry Improvement Plan, VS, APHIS, 1498 Klondike Road, Suite 101, Conyers, GA 30094...

  17. 77 FR 46374 - National Poultry Improvement Plan; General Conference Committee Meeting and 41st Biennial Conference

    Science.gov (United States)

    2012-08-03

    ...] National Poultry Improvement Plan; General Conference Committee Meeting and 41st Biennial Conference AGENCY... notice of a meeting of the General Conference Committee of the National Poultry Improvement Plan (NPIP... CONTACT: Dr. C. Stephen Roney, Senior Coordinator, National Poultry Improvement Plan, VS, APHIS, 1506...

  18. Improved calculation of displacements per atom cross section in solids by gamma and electron irradiation

    Energy Technology Data Exchange (ETDEWEB)

    Piñera, Ibrahin, E-mail: ipinera@ceaden.edu.cu [Centro de Aplicaciones Tecnológicas y Desarrollo Nuclear, CEADEN, 30 St. 502, Playa 11300, Havana (Cuba); Cruz, Carlos M.; Leyva, Antonio; Abreu, Yamiel; Cabal, Ana E. [Centro de Aplicaciones Tecnológicas y Desarrollo Nuclear, CEADEN, 30 St. 502, Playa 11300, Havana (Cuba); Espen, Piet Van; Remortel, Nick Van [University of Antwerp, CGB, Groenenborgerlaan 171, 2020 Antwerpen (Belgium)

    2014-11-15

    Highlights: • We present a calculation procedure for the dpa cross section in solids under irradiation. • Improvement of about 10–90% for the gamma-irradiation-induced dpa cross section. • Improvement of about 5–50% for the electron-irradiation-induced dpa cross section. • More precise results (by 20–70%) for thin samples irradiated with electrons. - Abstract: Several authors have estimated the displacements per atom cross sections under different approximations and models, including most of the main gamma- and electron-material interaction processes. These previous works used numerical approximation formulas which are applicable only for limited energy ranges. We proposed the Monte Carlo assisted Classical Method (MCCM), which relates the established theories of atom displacement to the electron and positron secondary fluence distributions calculated from Monte Carlo simulation. In this study the MCCM procedure is adapted in order to estimate the displacements per atom cross sections for gamma and electron irradiation. The results obtained through this procedure are compared with previous theoretical calculations. An improvement of about 10–90% in the gamma-irradiation-induced dpa cross section is observed in our results with respect to the previous evaluations for the studied incident energies. On the other hand, the dpa cross section values produced by irradiation with electrons are improved by our calculations by about 5–50% when compared with the theoretical approximations. When thin samples are irradiated with electrons, more precise results are obtained through the MCCM (by about 20–70%) with respect to the previous studies.

  19. How to Improve the Quality of Screening Endoscopy in Korea: National Endoscopy Quality Improvement Program.

    Science.gov (United States)

    Cho, Yu Kyung

    2016-07-01

    In Korea, gastric cancer screening, by either esophagogastroduodenoscopy or an upper gastrointestinal series (UGIS), is performed biennially for adults aged 40 years or older. Screening endoscopy has been shown to be associated with detection of localized cancer and to perform better than UGIS. However, its diagnostic sensitivity for detecting cancer is not satisfactory. The National Endoscopy Quality Improvement (QI) program was initiated in 2009 to enhance the quality of medical institutions and improve the effectiveness of the National Cancer Screening Program (NCSP). The Korean Society of Gastrointestinal Endoscopy developed quality standards through a broad systematic review of other endoscopic quality guidelines and discussions with experts. The standards comprise five domains: qualifications of endoscopists, endoscopic unit facilities and equipment, endoscopic procedure, endoscopy outcomes, and endoscopic reprocessing. After 5 years of the QI program, feedback surveys showed that the perception of QI and endoscopic practice improved substantially in all domains of quality, but the quality standards need to be revised. How to avoid missing cancer during endoscopic procedures in daily practice was also reviewed, and these lessons can be applied to mass screening endoscopy. To improve the quality and effectiveness of the NCSP, key performance indicators, acceptable quality standards, regular audits, and appropriate reimbursement are necessary.

  20. Improving Geography Learning in the Schools: Efforts by the National Geographic Society.

    Science.gov (United States)

    Dulli, Robert E.

    1994-01-01

    Contends that the National Geographic Society's Geography Education Program continues to work on improving geography instruction and learning. Outlines future activities of the National Geographic Society including urban outreach and technology training. (CFR)

  1. UKAEA calculations for German National Problem 7 - blind predictions of the REBEKA-6 clad ballooning experiments

    International Nuclear Information System (INIS)

    Sweet, D.W.; Haste, T.J.

    1983-08-01

    The REBEKA-6 clad ballooning experiment has been chosen as the basis of a CSNI Open International Standard Problem (ISP14). The test, which was carried out at KfK, Karlsruhe in March 1983, has also been adopted as a Blind German National Problem (DSP7), and this exercise has been extended to include interested organisations outside the FRG. The UKAEA has completed a set of calculations with the intention of contributing to DSP7 but has not formally submitted these because of reservations regarding the problem specification. This memorandum provides a record of the calculations and summarises the difficulties encountered. (author)

  2. Multi-CPU plasma fluid turbulence calculations on a CRAY Y-MP C90

    International Nuclear Information System (INIS)

    Lynch, V.E.; Carreras, B.A.; Leboeuf, J.N.; Curtis, B.C.; Troutman, R.L.

    1993-01-01

    Significant improvements in real-time efficiency have been obtained for plasma fluid turbulence calculations by microtasking the nonlinear fluid code KITE in which they are implemented on the CRAY Y-MP C90 at the National Energy Research Supercomputer Center (NERSC). The number of processors accessed concurrently scales linearly with problem size. Close to six concurrent processors have so far been obtained with a three-dimensional nonlinear production calculation at the currently allowed memory size of 80 Mword. With a calculation size corresponding to the maximum allowed memory of 200 Mword in the next system configuration, we expect to be able to access close to nine processors of the C90 concurrently with a commensurate improvement in real-time efficiency. These improvements in performance are comparable to those expected from a massively parallel implementation of the same calculations on the Intel Paragon

  3. Multi-CPU plasma fluid turbulence calculations on a CRAY Y-MP C90

    International Nuclear Information System (INIS)

    Lynch, V.E.; Carreras, B.A.; Leboeuf, J.N.; Curtis, B.C.; Troutman, R.L.

    1993-01-01

    Significant improvements in real-time efficiency have been obtained for plasma fluid turbulence calculations by microtasking the nonlinear fluid code KITE in which they are implemented on the CRAY Y-MP C90 at the National Energy Research Supercomputer Center (NERSC). The number of processors accessed concurrently scales linearly with problem size. Close to six concurrent processors have so far been obtained with a three-dimensional nonlinear production calculation at the currently allowed memory size of 80 Mword. With a calculation size corresponding to the maximum allowed memory of 200 Mword in the next system configuration, they expect to be able to access close to ten processors of the C90 concurrently with a commensurate improvement in real-time efficiency. These improvements in performance are comparable to those expected from a massively parallel implementation of the same calculations on the Intel Paragon

  4. Method for calculating annual energy efficiency improvement of TV sets

    International Nuclear Information System (INIS)

    Varman, M.; Mahlia, T.M.I.; Masjuki, H.H.

    2006-01-01

    The popularization of 24 h pay-TV, interactive video games, web-TV, VCD and DVD is poised to have a large impact on overall TV electricity consumption in Malaysia. Following this increased consumption, energy efficiency standards present a highly effective measure for decreasing electricity consumption in the residential sector. The main problem in setting an energy efficiency standard is identifying the annual efficiency improvement, due to the lack of time-series statistical data available in developing countries. This study presents a method of calculating the annual energy efficiency improvement for TV sets, which can be used for implementing energy efficiency standards for TV sets in Malaysia and other developing countries. Although the presented result is only an approximation, it is definitely one of the ways of accomplishing an energy standard. Furthermore, the method can be used for other appliances without any major modification.

  5. Method for calculating annual energy efficiency improvement of TV sets

    Energy Technology Data Exchange (ETDEWEB)

    Varman, M. [Department of Mechanical Engineering, University of Malaya, Lembah Pantai, 50603 Kuala Lumpur (Malaysia); Mahlia, T.M.I. [Department of Mechanical Engineering, University of Malaya, Lembah Pantai, 50603 Kuala Lumpur (Malaysia)]. E-mail: indra@um.edu.my; Masjuki, H.H. [Department of Mechanical Engineering, University of Malaya, Lembah Pantai, 50603 Kuala Lumpur (Malaysia)

    2006-10-15

    The popularization of 24 h pay-TV, interactive video games, web-TV, VCD and DVD is poised to have a large impact on overall TV electricity consumption in Malaysia. Following this increased consumption, energy efficiency standards present a highly effective measure for decreasing electricity consumption in the residential sector. The main problem in setting an energy efficiency standard is identifying the annual efficiency improvement, due to the lack of time-series statistical data available in developing countries. This study presents a method of calculating the annual energy efficiency improvement for TV sets, which can be used for implementing energy efficiency standards for TV sets in Malaysia and other developing countries. Although the presented result is only an approximation, it is definitely one of the ways of accomplishing an energy standard. Furthermore, the method can be used for other appliances without any major modification.
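    The abstract does not spell out the formula, but one common way to express an annual efficiency improvement from two unit-energy-consumption (UEC) values is a compound annual rate; the sketch below uses that assumed form with invented numbers.

    ```python
    def annual_efficiency_improvement(uec_start, uec_end, years):
        """Average annual efficiency improvement implied by a drop in unit energy
        consumption (UEC) over a number of years (compound annual rate)."""
        return 1.0 - (uec_end / uec_start) ** (1.0 / years)

    # Illustrative only: UEC of a TV set falls from 250 to 200 kWh/year over 8 years
    print(f"{annual_efficiency_improvement(250.0, 200.0, 8):.2%} per year")
    ```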

  6. SU-F-SPS-09: Parallel MC Kernel Calculations for VMAT Plan Improvement

    International Nuclear Information System (INIS)

    Chamberlain, S; French, S; Nazareth, D

    2016-01-01

    Purpose: Adding kernels (small perturbations in leaf positions) to the existing apertures of VMAT control points may improve plan quality. We investigate the calculation of kernel doses using a parallelized Monte Carlo (MC) method. Methods: A clinical prostate VMAT DICOM plan was exported from Eclipse. An arbitrary control point and leaf were chosen, and a modified MLC file was created, corresponding to the leaf position offset by 0.5 cm. The additional dose produced by this 0.5 cm × 0.5 cm kernel was calculated using the DOSXYZnrc component module of BEAMnrc. A range of particle history counts were run (varying from 3 × 10⁶ to 3 × 10⁷); each job was split among 1, 10, or 100 parallel processes. A particle count of 3 × 10⁶ was established as the lower end of the range because it provided the minimal acceptable accuracy. Results: As expected, an increase in particle count linearly increases run time. For the lowest particle count, the time varied from 30 hours for the single-processor run to 0.30 hours for the 100-processor run. Conclusion: Parallel processing of MC calculations in the EGS framework significantly decreases the time necessary for each kernel dose calculation. Particle counts lower than 1 × 10⁶ have too large an error to output accurate dose for a Monte Carlo kernel calculation. Future work will investigate increasing the number of parallel processes and optimizing run times for multiple kernel calculations.

  7. 78 FR 29239 - Final Priority; Technical Assistance To Improve State Data Capacity-National Technical Assistance...

    Science.gov (United States)

    2013-05-20

    ... Assistance To Improve State Data Capacity--National Technical Assistance Center To Improve State Capacity To... Education and Rehabilitative Services announces a priority under the Technical Assistance to Improve State... (FY) 2013 and later years. We take this action to focus attention on an identified national need to...

  8. Improved method for calculation of population doses from nuclear complexes over large geographical areas

    International Nuclear Information System (INIS)

    Corley, J.P.; Baker, D.A.; Hill, E.R.; Wendell, L.L.

    1977-09-01

    To simplify the calculation of potential long-distance environmental impacts, an overall average population exposure coefficient (P.E.C.) for the entire contiguous United States was calculated for releases to the atmosphere from Hanford facilities. The method, requiring machine computation, combines Bureau of Census population data by census enumeration district and an annual average atmospheric dilution factor (χ̄/Q′) derived from 12-hourly gridded wind analyses provided by the NOAA's National Meteorological Center. A variable-trajectory puff-advection model was used to calculate an hourly χ̄/Q′ for each grid square, assuming uniform hourly releases; seasonal and annual averages were then calculated. For Hanford, using 1970 census data, a P.E.C. of 2 × 10⁻³ man-seconds per cubic meter was calculated. The P.E.C. is useful for both radioactive and nonradioactive releases. To calculate population doses for the entire contiguous United States, the P.E.C. is multiplied by the annual average release rate and then by the dose factor (rem/yr per Ci/m³) for each radionuclide, and the dose contribution in man-rem is summed for all radionuclides. For multiple pathways, the P.E.C. is still useful, provided that doses from a unit release can be obtained from a set of atmospheric dose factors. The methodology is applicable to any point source, any set of population data by map grid coordinates, and any geographical area covered by equivalent meteorological data.
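    The final dose aggregation described above reduces to a simple product-and-sum; the sketch below uses the reported P.E.C. value for Hanford but invented release rates and dose factors.

    ```python
    PEC = 2e-3  # man-seconds per cubic metre (value reported for Hanford, 1970 census data)

    # Illustrative only: annual average release rate [Ci/s] and atmospheric dose
    # factor [rem/yr per Ci/m^3] for each radionuclide.
    nuclides = {
        "Kr-85": (3.0e-4, 1.2e-2),
        "I-131": (5.0e-8, 8.0e2),
    }

    population_dose = sum(PEC * release_rate * dose_factor
                          for release_rate, dose_factor in nuclides.values())
    print(f"contiguous-US population dose: {population_dose:.3e} man-rem")
    ```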

  9. CO2 calculator

    DEFF Research Database (Denmark)

    Nielsen, Claus Werner; Nielsen, Ole-Kenneth

    2009-01-01

    Many countries are in the process of mapping their national CO2 emissions, but only a few have managed to produce an overall report at the municipal level yet. Denmark, however, has succeeded in such a project. Using a new national IT-based calculation model, municipalities can calculate the extent

  10. 77 FR 59888 - General Conference Committee of the National Poultry Improvement Plan

    Science.gov (United States)

    2012-10-01

    ... Improvement Plan AGENCY: Animal and Plant Health Inspection Service, USDA. ACTION: Notice of intent to renew... the General Conference Committee of the National Poultry Improvement Plan (Committee) for a 2year... Improvement Plan, VS, APHIS, USDA, 1506 Klondike Road, Suite 300, Conyers, GA 30094; (770) 922-3496...

  11. Proposed Casey's Pond Improvement Project, Fermi National Accelerator Laboratory

    International Nuclear Information System (INIS)

    1995-05-01

    The U.S. Department of Energy (DOE) has prepared an Environmental Assessment (EA), evaluating the impacts associated with the proposed Casey's Pond Improvement Project at the Fermi National Accelerator Laboratory (Fermilab) in Batavia, Illinois. The improvement project would maximize the efficiency of the Fermilab Industrial Cooling Water (ICW) distribution system, which removes (via evaporation) the thermal load from experimental and other support equipment supporting the high energy physics program at Fermilab. The project would eliminate the risk of overheating during fixed target experiments, ensure that the Illinois Water Quality Standards are consistently achieved and provide needed additional water storage for fire protection. Based on the analysis in the EA, the DOE has determined that the proposed action does not constitute a major Federal action significantly affecting the quality of the human environment, within the meaning of the National Environmental Policy Act (NEPA) of 1969. Therefore, the preparation of an Environmental Impact Statement is not required

  12. Calculation of low-cycle fatigue in accordance with the national standard and strength codes

    Science.gov (United States)

    Kontorovich, T. S.; Radin, Yu. A.

    2017-08-01

    Over the past 15 years, the Russian power industry has relied largely on imported equipment manufactured in compliance with foreign standards and procedures. This inevitably necessitates their harmonization with the regulatory documents of the Russian Federation, which include calculations of strength and low-cycle fatigue and assessment of the equipment service life. An important regulatory document providing the engineering foundation for cyclic strength and life assessment of high-load components of the boiler and steam lines of a water/steam circuit is RD 10-249-98:2000, Standard Method of Strength Estimation in Stationary Boilers and Steam and Water Piping. In January 2015, the National Standard of the Russian Federation GOST R 55682.3-2013/EN 12952-3:2001 was introduced, regulating the design and calculation of the pressure parts of water-tube boilers and auxiliary installations. Thus, two documents are simultaneously valid in the same field and use different methods for calculating the low-cycle fatigue strength, which leads to different results. The current situation can therefore lead to incorrect ideas about the cyclic strength and the service life of high-temperature boiler parts. The article shows that the results of calculations performed in accordance with GOST R 55682.3-2013/EN 12952-3:2001 are less conservative than the results of RD 10-249-98. Since the calculation of the expected service life of boiler parts should use GOST R 55682.3-2013/EN 12952-3:2001, it becomes necessary to establish the applicability scope of each of the above documents.

  13. National Institute of Justice (NIJ): improving the effectiveness of law enforcement via homeland security technology improvements (Keynote Address)

    Science.gov (United States)

    Morgan, John S.

    2005-05-01

    Law enforcement agencies play a key role in protecting the nation from and responding to terrorist attacks. Preventing terrorism and promoting the nation's security is the Department of Justice's number one strategic priority. This is reflected in its technology development efforts, as well as its operational focus. The National Institute of Justice (NIJ) is the national focal point for the research, development, test and evaluation of technology for law enforcement. In addition to its responsibilities in supporting day-to-day criminal justice needs in areas such as less-lethal weapons and forensic science, NIJ also provides critical support for counter-terrorism capacity improvements in state and local law enforcement in several areas. The most important of these areas are bomb response, concealed weapons detection, communications and information technology, which together offer the greatest potential benefit with respect to improving the ability of law enforcement agencies to respond to all types of crime, including terrorist acts. NIJ coordinates its activities with several other key federal partners, including the Department of Homeland Security's Science and Technology Directorate, the Technical Support Working Group, and the Department of Defense.

  14. Improved simplified scheme of atom equivalents to calculate enthalpies of formation of alkyl radicals

    International Nuclear Information System (INIS)

    Castro, Eduardo A.

    2002-01-01

    An improved simplified method of atom equivalents is applied to the calculation of enthalpies of formation of several alkyl radicals. Some statistical mechanics and thermodynamic corrections are added to compare theoretical values with available experimental data. The estimation is quite satisfactory and the average error is similar to current experimental uncertainties, thus providing a direct and simple procedure for this sort of calculation when experimental results are unavailable and/or an independent check when experimental data are in doubt. (Author) [es
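    One common atom-equivalent scheme estimates the enthalpy of formation by subtracting fitted per-atom "equivalents" from a computed total energy; the sketch below shows only that bookkeeping, with placeholder numbers rather than the paper's fitted parameters or its statistical-mechanics corrections.

    ```python
    def delta_hf_atom_equivalents(e_total, atom_counts, equivalents):
        """Enthalpy-of-formation estimate: computed total energy minus the sum of
        fitted atom equivalents (all values here are placeholders, same units)."""
        return e_total - sum(n * equivalents[atom] for atom, n in atom_counts.items())

    equivalents = {"C": -23_850.0, "H": -350.0}   # placeholder per-atom equivalents, kcal/mol
    # e.g. an n-propyl radical (C3H7), with a placeholder total energy
    print(delta_hf_atom_equivalents(-73_975.0, {"C": 3, "H": 7}, equivalents))
    ```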

  15. Improved SVR Model for Multi-Layer Buildup Factor Calculation

    International Nuclear Information System (INIS)

    Trontl, K.; Pevec, D.; Smuc, T.

    2006-01-01

    The accuracy of the point kernel method applied in gamma ray dose rate calculations for shielding design and radiation safety analysis is limited by the accuracy of the buildup factors used in the calculations. Although buildup factors for single-layer shields are well defined and understood, buildup factors for stratified shields represent a complex physical problem that is hard to express in mathematical terms. The traditional approach for expressing buildup factors of multi-layer shields is through semi-empirical formulas obtained by fitting the results of transport theory or Monte Carlo calculations. Such an approach requires an ad-hoc definition of the fitting function and often results in numerous, usually inadequately explained and defined, correction factors being added to the final empirical formula. Moreover, the final formulas are generally limited to a small number of predefined combinations of materials within a relatively small range of gamma ray energies and shield thicknesses. Recently, a new approach was suggested by the authors involving one of the machine learning techniques called Support Vector Machines, i.e., Support Vector Regression (SVR). Preliminary investigations performed for double-layer shields revealed the great potential of the method, but also pointed out some drawbacks of the developed model, mostly related to the selection of one of the parameters describing the problem (material atomic number) and to the way the model was designed to evolve during the learning process. It is the aim of this paper to introduce a new parameter (the single-material buildup factor) to replace the existing material atomic number as an input parameter. A comparison of the two models generated by the different input parameters has been performed. The second goal is to improve the evolution of the learning process, i.e., the experimental computational procedure that provides a framework for automated construction of complex regression models of predefined
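    For readers unfamiliar with SVR, the sketch below shows the general shape of such a regression model in scikit-learn; the feature set (gamma energy, layer thicknesses, and single-material buildup factors as proposed above) and the mock target are assumptions, not the authors' trained model.

    ```python
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVR

    rng = np.random.default_rng(1)
    # Mock features: gamma energy [MeV], two layer thicknesses [mfp], and the two
    # single-material buildup factors suggested as inputs in place of atomic number.
    X = rng.uniform([0.5, 0.5, 0.5, 1.5, 1.5], [10.0, 5.0, 5.0, 30.0, 30.0], size=(300, 5))
    y = 1.0 + 0.4 * X[:, 1] * X[:, 3] + 0.2 * X[:, 2] * X[:, 4]   # mock two-layer buildup factor

    model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.01))
    model.fit(X, y)
    print(np.round(model.predict(X[:3]), 2))
    ```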

  16. Applying national survey results for strategic planning and program improvement: the National Diabetes Education Program.

    Science.gov (United States)

    Griffey, Susan; Piccinino, Linda; Gallivan, Joanne; Lotenberg, Lynne Doner; Tuncer, Diane

    2015-02-01

    Since the 1970s, the federal government has spearheaded major national education programs to reduce the burden of chronic diseases in the United States. These prevention and disease management programs communicate critical information to the public, those affected by the disease, and health care providers. The National Diabetes Education Program (NDEP), the leading federal program on diabetes sponsored by the National Institutes of Health (NIH) and the Centers for Disease Control and Prevention (CDC), uses primary and secondary quantitative data and qualitative audience research to guide program planning and evaluation. Since 2006, the NDEP has filled the gaps in existing quantitative data sources by conducting its own population-based survey, the NDEP National Diabetes Survey (NNDS). The NNDS is conducted every 2–3 years and tracks changes in knowledge, attitudes and practice indicators in key target audiences. This article describes how the NDEP has used the NNDS as a key component of its evaluation framework and how it applies the survey results for strategic planning and program improvement. The NDEP's use of the NNDS illustrates how a program evaluation framework that includes periodic population-based surveys can serve as an evaluation model for similar national health education programs.

  17. Complex method to calculate objective assessments of information systems protection to improve expert assessments reliability

    Science.gov (United States)

    Abdenov, A. Zh; Trushin, V. A.; Abdenova, G. A.

    2018-01-01

    The paper considers how to populate the relevant SIEM nodes with calculated objective assessments in order to improve the reliability of subjective expert assessments. The proposed methodology is needed for the most accurate security risk assessment of information systems. The technique is also intended to establish real-time operational information protection in enterprise information systems. Risk calculations are based on objective estimates of the probabilities that adverse events occur and predictions of the magnitude of damage from information security violations. Calculations of objective assessments are necessary to increase the reliability of the proposed expert assessments.
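    The abstract does not give the exact risk expression; a minimal sketch of the usual expected-loss form (event probability times predicted damage) that could populate such SIEM nodes is shown below with invented figures.

    ```python
    def node_risk(p_event, damage):
        """Expected loss for one SIEM node: probability of the adverse event times
        the predicted damage from the information-security violation."""
        return p_event * damage

    # Illustrative annual probabilities and damage estimates (monetary units)
    events = {"phishing": (0.30, 12_000.0), "ransomware": (0.05, 250_000.0)}
    total_risk = sum(node_risk(p, d) for p, d in events.values())
    print(f"total annual risk estimate: {total_risk:,.0f}")
    ```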

  18. Development of a model to calculate the economic implications of improving the indoor climate

    DEFF Research Database (Denmark)

    Jensen, Kasper Lynge

    on performance. The Bayesian Network uses a probabilistic approach by which a probability distribution can take this variation of the different indoor variables into account. The result from total building economy calculations indicated that depending on the indoor environmental change (improvement...

  19. Comparative analysis of the value of national brands

    Directory of Open Access Journals (Sweden)

    Jelena Žugić

    2018-01-01

    Full Text Available Nation branding is not the "holy grail" of economic development, but it can provide a distinct advantage when it is aligned with a well-defined economic strategy and supported by public policy. A nation brand is the sum of people's perceptions of a country across the most important areas of national competence. This paper examines the value of the nation brand for a sample of 108 countries, using the Anholt Nation Brands Index and a mathematical formula for calculating the surface of Anholt's hexagon for each country individually. The parameters are taken from the six areas of the nation hexagon, from the World Bank and the UNESCO database. The surface of the nation hexagon was calculated with mathematical tools, and a comparative analysis was made between nation brands. By using strategic nation branding models designed by other branding experts in combination with the proposed mathematical model, which shows the advantages and disadvantages of the nation brand of each country (and within the country), their competitiveness on the global stage is expected to improve.
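    The hexagon-surface idea can be sketched as the sum of the six triangles between adjacent axes of a radar chart. This is a hedged reconstruction, since the abstract does not give the exact normalisation, and the dimension scores below are invented.

    ```python
    import math

    def hexagon_area(scores):
        """Area enclosed by a six-axis radar (hexagon) chart with axes 60 degrees apart."""
        if len(scores) != 6:
            raise ValueError("expected six dimension scores")
        half_sin = 0.5 * math.sin(math.pi / 3.0)
        return sum(half_sin * scores[i] * scores[(i + 1) % 6] for i in range(6))

    # Illustrative scores for the six areas of national competence (0-100 scale)
    print(round(hexagon_area([70, 65, 80, 55, 60, 75]), 1))
    ```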

  20. Calculations of Neutral Beam Ion Confinement for the National Spherical Torus Experiment

    International Nuclear Information System (INIS)

    Redi, M.H.; Darrow, D.S.; Egedal, J.; Kaye, S.M.; White, R.B.

    2002-01-01

    The spherical torus (ST) concept underlies several contemporary plasma physics experiments, in which relatively low magnetic fields, high plasma edge q, and low aspect ratio combine for potentially compact, high beta and high performance fusion reactors. An important issue for the ST is the calculation of energetic ion confinement, as large Larmor radius makes conventional guiding center codes of limited usefulness and efficient plasma heating by RF and neutral beam ion technology requires minimal fast ion losses. The National Spherical Torus Experiment (NSTX) is a medium-sized, low aspect ratio ST, with R=0.85 m, a=0.67 m, R/a=1.26, Ip ≤ 1.4 MA, Bt ≤ 0.6 T, 5 MW of neutral beam heating and 6 MW of RF heating. 80 keV neutral beam ions at tangency radii of 0.5, 0.6 and 0.7 m are routinely used to achieve plasma betas above 30%. Transport analyses for experiments on NSTX often exhibit a puzzling ion power balance. It will be necessary to have reliable beam ion calculations to distinguish among the source and loss channels, and to explore the possibilities for new physics phenomena, such as the recently proposed compressional Alfven eigenmode ion heating

  1. BetaShape: A new code for improved analytical calculations of beta spectra

    Directory of Open Access Journals (Sweden)

    Mougeot Xavier

    2017-01-01

    Full Text Available The new code BetaShape has been developed in order to improve the nuclear data related to beta decays. An analytical model was adopted, except for the relativistic electron wave functions, in order to ensure fast calculations. Output quantities are mean energies, log ft values, and beta and neutrino spectra for single and multiple transitions. The uncertainties from the input parameters, read from an ENSDF file, are propagated. A database of experimental shape factors is included. A comparison over the entire ENSDF database with the standard code currently used in nuclear data evaluations shows consistent results for the vast majority of the transitions and highlights the improvements that can be expected with the use of BetaShape.

  2. National nutrition surveys in Asian countries: surveillance and monitoring efforts to improve global health.

    Science.gov (United States)

    Song, SuJin; Song, Won O

    2014-01-01

    Asian regions have been suffering from a growing double burden of nutritional health problems, such as undernutrition and chronic diseases. National nutrition surveys play an essential role in helping to improve both national and global health and to reduce health disparities. The aim of this review was to compile and present information on the current national nutrition surveys conducted in Asian countries and to suggest relevant issues in the implementation of national nutrition surveys. Fifteen countries in Asia have conducted national nutrition surveys to collect data on the nutrition and health status of their populations. The information on the national nutrition survey of each country was obtained from government documents, international organizations, survey websites of governmental agencies, and publications, including journal articles, books, reports, and brochures. The national nutrition survey of each country has different variables and procedures. Variables of the surveys include sociodemographic and lifestyle variables; food and beverage intake, dietary habits, and food security of individuals or households; and health indicators, such as anthropometric and biochemical variables. The surveys have focused on collecting data about nutritional health status in children under five years of age and women of reproductive age, as well as nutrient intake adequacy and the prevalence of obesity and chronic diseases for all individuals. To measure the nutrition and health status of Asian populations accurately, improvement of current dietary assessment methods with various diet evaluation tools is necessary. The information organized in this review is important for researchers, policy makers, public health program developers, educators, and consumers in improving national and global health.

  3. Improving the Efficiency of Free Energy Calculations in the Amber Molecular Dynamics Package.

    Science.gov (United States)

    Kaus, Joseph W; Pierce, Levi T; Walker, Ross C; McCammon, J Andrew

    2013-09-10

    Alchemical transformations are widely used methods to calculate free energies. Amber has traditionally included support for alchemical transformations as part of the sander molecular dynamics (MD) engine. Here we describe the implementation of a more efficient approach to alchemical transformations in the Amber MD package. Specifically, we have implemented this new approach within the more computationally efficient and scalable pmemd MD engine that is included with the Amber MD package. The majority of the gain in efficiency comes from the improved design of the calculation, which includes better parallel scaling and a reduction in the calculation of redundant terms. This new implementation is able to reproduce results from equivalent simulations run with the existing functionality, but at 2.5 times greater computational efficiency. This new implementation is also able to run softcore simulations at the λ end states, making direct calculation of free energies more accurate compared with the extrapolation required in the existing implementation. The updated alchemical transformation functionality will be included in the next major release of Amber (scheduled for release in Q1 2014) and will be available at http://ambermd.org, under the Amber license.
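    As background for readers unfamiliar with alchemical free-energy calculations, the sketch below shows a generic thermodynamic-integration step over a linear λ mixing of two end-state potentials; it is not the pmemd implementation, and the window averages are invented.

    ```python
    import numpy as np

    # Linear mixing U(lam) = (1 - lam) * U0 + lam * U1, so dU/dlam = U1 - U0.
    # The free-energy difference is the integral of <U1 - U0> over lambda.
    lambdas = np.linspace(0.0, 1.0, 11)
    mean_dudl = np.array([4.1, 3.6, 3.0, 2.5, 2.1, 1.8, 1.5, 1.3, 1.2, 1.1, 1.0])  # invented

    # Trapezoidal integration over the lambda windows
    delta_g = float(np.sum(0.5 * (mean_dudl[1:] + mean_dudl[:-1]) * np.diff(lambdas)))
    print(f"dG = {delta_g:.2f} kcal/mol (illustrative)")
    ```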

  4. A comparative study of the systems for neutronics calculations used in Los Alamos Scientific Laboratory (LASL) and Argonne National Laboratory (ANL)

    International Nuclear Information System (INIS)

    Amorim, E.S. do; D'Oliveira, A.B.; Oliveira, E.C. de.

    1980-11-01

    A comparative study of the systems for neutronics calculations used in the Los Alamos Scientific Laboratory (LASL) and Argonne National Laboratory (ANL) has been performed using benchmark results available in the literature, in order to analyse the convenience of using the respective codes MINX/NJOY and ETOE/MC²-2 for performing the neutronics calculations in course at the Divisao de Estudos Avancados. (Author) [pt

  5. Mobile application-based Seoul National University Prostate Cancer Risk Calculator: development, validation, and comparative analysis with two Western risk calculators in Korean men.

    Directory of Open Access Journals (Sweden)

    Chang Wook Jeong

    Full Text Available OBJECTIVES: We developed a mobile application-based Seoul National University Prostate Cancer Risk Calculator (SNUPC-RC) that predicts the probability of prostate cancer (PC) at the initial prostate biopsy in a Korean cohort. Additionally, the application was validated and subjected to head-to-head comparisons with internet-based Western risk calculators in a validation cohort. Here, we describe its development and validation. PATIENTS AND METHODS: As a retrospective study, consecutive men who underwent initial prostate biopsy with more than 12 cores at a tertiary center were included. In the development stage, 3,482 cases from May 2003 through November 2010 were analyzed. Clinical variables were evaluated, and the final prediction model was developed using the logistic regression model. In the validation stage, 1,112 cases from December 2010 through June 2012 were used. SNUPC-RC was compared with the European Randomized Study of Screening for PC Risk Calculator (ERSPC-RC) and the Prostate Cancer Prevention Trial Risk Calculator (PCPT-RC). The predictive accuracy was assessed using the area under the receiver operating characteristic curve (AUC). The clinical value was evaluated using decision curve analysis. RESULTS: PC was diagnosed in 1,240 (35.6%) and 417 (37.5%) men in the development and validation cohorts, respectively. Age, prostate-specific antigen level, prostate size, and abnormality on digital rectal examination or transrectal ultrasonography were significant factors for PC and were included in the final model. The predictive accuracy in the development cohort was 0.786. In the validation cohort, the AUC was significantly higher for the SNUPC-RC (0.811) than for the ERSPC-RC (0.768, p<0.001) and the PCPT-RC (0.704, p<0.001). Decision curve analysis also showed higher net benefits with SNUPC-RC than with the other calculators. CONCLUSIONS: SNUPC-RC has a higher predictive accuracy and clinical benefit than Western risk calculators. Furthermore, it is easy
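    Such risk calculators are typically logistic models; the sketch below shows the general form with the four predictor types named above, but the coefficients are invented and are NOT the published SNUPC-RC parameters.

    ```python
    import math

    def pc_probability(age, psa, prostate_volume_ml, abnormal_dre_or_trus):
        """Hypothetical logistic risk model; coefficients are illustrative only."""
        z = (-3.0 + 0.03 * age + 0.10 * psa - 0.02 * prostate_volume_ml
             + 0.9 * int(abnormal_dre_or_trus))
        return 1.0 / (1.0 + math.exp(-z))

    print(f"predicted probability of prostate cancer: {pc_probability(66, 7.5, 40, True):.1%}")
    ```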

  6. Progress in the improved lattice calculation of direct CP-violation in the Standard Model

    Science.gov (United States)

    Kelly, Christopher

    2018-03-01

    We discuss the ongoing effort by the RBC & UKQCD collaborations to improve our lattice calculation of the measure of Standard Model direct CP violation, ɛ', with physical kinematics. We present our progress in decreasing the (dominant) statistical error and discuss other related activities aimed at reducing the systematic errors.

  7. Improving method for calculating integral index of personnel security of company

    Directory of Open Access Journals (Sweden)

    Chjan Khao Yui

    2016-06-01

    Full Text Available The paper improves the method of calculating the integral index of personnel security of a company. The author has identified four components of personnel security (social and motivational safety, occupational safety, non-conflict security, and life safety), which are characterized by certain indicators. The integral index of personnel security is calculated for enterprises of the machine-building sector in the Kharkov region, taking into account the weight coefficients b_j of the j-th component and the weighting factors a_ij, defined by experts, which determine the degree of contribution of the i-th indicator to the integral index.
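    A minimal sketch of the two-level weighted aggregation described above (component weights b_j and indicator weights a_ij) is shown below; the weights and normalised indicator values are invented.

    ```python
    def integral_personnel_security_index(components):
        """Weighted aggregation across the four personnel-security components.
        components: {name: (b_j, {indicator: (a_ij, x_ij)})}, where b_j is the
        component weight, a_ij the indicator weight and x_ij the normalised
        indicator value. All numbers below are illustrative only."""
        return sum(
            b * sum(a * x for a, x in indicators.values())
            for b, indicators in components.values()
        )

    components = {
        "social_motivational": (0.30, {"turnover": (0.6, 0.80), "pay_ratio": (0.4, 0.70)}),
        "occupational_safety": (0.25, {"injury_rate": (1.0, 0.90)}),
        "non_conflict":        (0.25, {"disputes": (1.0, 0.75)}),
        "life_safety":         (0.20, {"incidents": (1.0, 0.85)}),
    }
    print(round(integral_personnel_security_index(components), 3))
    ```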

  8. 76 FR 22295 - National Poultry Improvement Plan and Auxiliary Provisions

    Science.gov (United States)

    2011-04-21

    ... DEPARTMENT OF AGRICULTURE Animal and Plant Health Inspection 9 CFR Part 145 [Docket No. APHIS-2009-0031] RIN 0579-AD21 National Poultry Improvement Plan and Auxiliary Provisions Correction In rule document 2011-6539 appearing on pages 15791-15798 in the issue of Tuesday, March 22, 2011, make the...

  9. Improved adiabatic calculation of muonic-hydrogen-atom cross sections. I. Isotopic exchange and elastic scattering in asymmetric collisions

    International Nuclear Information System (INIS)

    Cohen, J.S.; Struensee, M.C.

    1991-01-01

    The improved adiabatic representation is used in calculations of elastic and isotopic-exchange cross sections for asymmetric collisions of pμ, dμ, and tμ with bare p, d, and t nuclei and with H, D, and T atoms. This formulation dissociates properly, correcting a well-known deficiency of the standard adiabatic method for muonic-atom collisions, and includes some effects at zeroth order that are normally considered nonadiabatic. The electronic screening is calculated directly and precisely within the improved adiabatic description; it is found to be about 30% smaller in magnitude than the previously used value at large internuclear distances and to deviate considerably from the asymptotic form at small distances. The reactance matrices, needed for calculations of molecular-target effects, are given in tables

  10. An improved method of inverse kinematics calculation for a six-link manipulator

    International Nuclear Information System (INIS)

    Sasaki, Shinobu

    1987-07-01

    As one method of solving the inverse problem for a six-link manipulator, an improvement was made to a previously proposed calculation algorithm based on the solution of an algebraic equation of the 24th order. In this paper, a polynomial of the same type is derived in the form of an equation of the 16th order, i.e., the order is reduced by 8 compared with the previous algorithm. The accuracy of the solutions was found to be much improved. (author)

  11. Turning Schools Around: The National Board Certification Process as a School Improvement Strategy

    Science.gov (United States)

    Jaquith, Ann; Snyder, Jon

    2016-01-01

    Can the National Board certification process support school improvement where large proportions of students score below grade level on standardized tests? This SCOPE study examines a project that sought to seize and capitalize upon the learning opportunities embedded in the National Board certification process, particularly opportunities to learn…

  12. MRI-Based Computed Tomography Metal Artifact Correction Method for Improving Proton Range Calculation Accuracy

    International Nuclear Information System (INIS)

    Park, Peter C.; Schreibmann, Eduard; Roper, Justin; Elder, Eric; Crocker, Ian; Fox, Tim; Zhu, X. Ronald; Dong, Lei; Dhabaan, Anees

    2015-01-01

    Purpose: Computed tomography (CT) artifacts can severely degrade dose calculation accuracy in proton therapy. Prompted by the recently increased popularity of magnetic resonance imaging (MRI) in the radiation therapy clinic, we developed an MRI-based CT artifact correction method for improving the accuracy of proton range calculations. Methods and Materials: The proposed method replaces corrupted CT data by mapping CT Hounsfield units (HU number) from a nearby artifact-free slice, using a coregistered MRI. MRI and CT volumetric images were registered with use of 3-dimensional (3D) deformable image registration (DIR). The registration was fine-tuned on a slice-by-slice basis by using 2D DIR. Based on the intensity of paired MRI pixel values and HU from an artifact-free slice, we performed a comprehensive analysis to predict the correct HU for the corrupted region. For a proof-of-concept validation, metal artifacts were simulated on a reference data set. Proton range was calculated using reference, artifactual, and corrected images to quantify the reduction in proton range error. The correction method was applied to 4 unique clinical cases. Results: The correction method resulted in substantial artifact reduction, both quantitatively and qualitatively. On respective simulated brain and head and neck CT images, the mean error was reduced from 495 and 370 HU to 108 and 92 HU after correction. Correspondingly, the absolute mean proton range errors of 2.4 cm and 1.7 cm were reduced to less than 2 mm in both cases. Conclusions: Our MRI-based CT artifact correction method can improve CT image quality and proton range calculation accuracy for patients with severe CT artifacts
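    The slice-to-slice HU replacement step can be illustrated with a simple intensity-to-HU mapping fitted on the artifact-free slice and applied to the corrupted region; this sketch uses a polynomial fit on mock data rather than the authors' comprehensive prediction analysis.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Paired samples from a nearby artifact-free slice: coregistered MRI intensity vs CT HU
    mri_ref = rng.uniform(0.0, 1000.0, 500)
    hu_ref = 0.9 * mri_ref - 400.0 + rng.normal(0.0, 20.0, 500)   # mock relationship

    # Fit a simple MRI-intensity -> HU mapping on the reference slice
    coeffs = np.polyfit(mri_ref, hu_ref, deg=2)

    # Replace corrupted HU values using the coregistered MRI of the artifact slice
    mri_corrupted_region = rng.uniform(0.0, 1000.0, 5)
    hu_predicted = np.polyval(coeffs, mri_corrupted_region)
    print(np.round(hu_predicted, 1))
    ```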

  13. Edutourism Taka Bonerate National Park through Scientific Approach to Improve Student Learning Outcomes

    Science.gov (United States)

    Hayati, R. S.

    2017-02-01

    The aim of this research is to develop the potential of Taka Bonerate National Park as a learning resource through edutourism with a scientific approach to improve student learning outcomes. The focus of the student learning outcomes is on students' psychomotor abilities and comprehension of the Biodiversity of Marine Biota, Coral Ecosystems, and Conservation topics. The edutourism development products are a teacher manual, an edutourism worksheet, a material booklet, a guide's manual, and a Taka Bonerate National Park governor manual. The method used to develop the edutourism products is the ADDIE research and development model, which consists of analysis, design, development and production, implementation, and evaluation steps. The subjects in the implementation step were given a pretest, a posttest, and an observation sheet to see the effect of edutourism in Taka Bonerate National Park through a scientific approach on student learning outcomes for the Biodiversity of Marine Biota, Coral Ecosystems, and Conservation topics. The data were analyzed using qualitative descriptive analysis. The results show that edutourism in Taka Bonerate National Park through a scientific approach can improve student learning outcomes on the Biodiversity of Marine Biota, Coral Ecosystems, and Conservation topics. Edutourism in Taka Bonerate National Park can be an alternative learning method for the Biodiversity of Marine Biota, Coral Ecosystems, and Conservation topics.

  14. Improvement of low energy atmospheric neutrino flux calculation using the JAM nuclear interaction model

    International Nuclear Information System (INIS)

    Honda, M.; Kajita, T.; Kasahara, K.; Midorikawa, S.

    2011-01-01

    We present the calculation of the atmospheric neutrino fluxes with an interaction model named JAM, which is used in PHITS (Particle and Heavy-Ion Transport code System) [K. Niita et al., Radiation Measurements 41, 1080 (2006).]. The JAM interaction model agrees with the HARP experiment [H. Collaboration, Astropart. Phys. 30, 124 (2008).] a little better than DPMJET-III[S. Roesler, R. Engel, and J. Ranft, arXiv:hep-ph/0012252.]. After some modifications, it reproduces the muon flux below 1 GeV/c at balloon altitudes better than the modified DPMJET-III, which we used for the calculation of atmospheric neutrino flux in previous works [T. Sanuki, M. Honda, T. Kajita, K. Kasahara, and S. Midorikawa, Phys. Rev. D 75, 043005 (2007).][M. Honda, T. Kajita, K. Kasahara, S. Midorikawa, and T. Sanuki, Phys. Rev. D 75, 043006 (2007).]. Some improvements in the calculation of atmospheric neutrino flux are also reported.

  15. Input/Output of ab-initio nuclear structure calculations for improved performance and portability

    International Nuclear Information System (INIS)

    Laghave, Nikhil

    2010-01-01

    Many modern scientific applications rely on highly computation-intensive calculations. However, most applications do not concentrate as much on the role that input/output operations can play in improving performance and portability. Parallelizing the input/output operations on large files can significantly improve the performance of parallel applications in which sequential I/O is a bottleneck. A proper choice of I/O library also offers scope for making input/output operations portable across different architectures. Thus, the use of parallel I/O libraries for organizing the I/O of large data files offers great scope for improving the performance and portability of applications. In particular, sequential I/O has been identified as a bottleneck for the highly scalable MFDn (Many Fermion Dynamics for nuclear structure) code performing ab-initio nuclear structure calculations. We develop interfaces and parallel I/O procedures to use a well-known parallel I/O library in MFDn. As a result, we gain efficient I/O of large datasets along with portability and ease of use in the downstream processing. Even in situations where the amount of data to be written is not huge, proper use of input/output operations can boost the performance of scientific applications. Application checkpointing offers enormous performance improvement and flexibility while requiring a negligible amount of I/O to disk. Checkpointing saves and resumes the application state in such a manner that in most cases the application is unaware that there has been an interruption to its execution. This helps in saving a large amount of work that has previously been done and continuing application execution. This small amount of I/O provides substantial time savings by offering a restart/resume capability to applications. The need for checkpointing in the optimization code NEWUOA was identified, and checkpoint/restart capability has been implemented in NEWUOA using simple file I/O.
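    As a generic illustration of the checkpoint/restart idea (not the MFDn or NEWUOA implementation), the sketch below periodically writes a small state file and resumes from it if present; the file name and state fields are assumptions.

    ```python
    import os
    import pickle

    STATE_FILE = "checkpoint.pkl"   # hypothetical checkpoint file name

    def run(max_iters=100_000, checkpoint_every=10_000):
        # Resume from the last checkpoint if one exists, otherwise start fresh
        if os.path.exists(STATE_FILE):
            with open(STATE_FILE, "rb") as fh:
                state = pickle.load(fh)
        else:
            state = {"iteration": 0, "best_value": float("inf")}

        while state["iteration"] < max_iters:
            state["iteration"] += 1
            state["best_value"] = min(state["best_value"], 1.0 / state["iteration"])
            if state["iteration"] % checkpoint_every == 0:
                # A small amount of I/O saves all work done so far if the job is interrupted
                with open(STATE_FILE, "wb") as fh:
                    pickle.dump(state, fh)
        return state

    print(run())
    ```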

  16. The U.S. National Action Plan to Improve Health Literacy: A Model for Positive Organizational Change.

    Science.gov (United States)

    Baur, Cynthia; Harris, Linda; Squire, Elizabeth

    2017-01-01

    This chapter presents the U.S. National Action Plan to Improve Health Literacy and its unique contribution to public health and health care in the U.S. The chapter details what the National Action Plan is, how it evolved, and how it has influenced priorities for health literacy improvement work. Examples of how the National Action Plan fills policy and research gaps in health care and public health are included. The first part of the chapter lays the foundation for the development of the National Action Plan, and the second part discusses how it can stimulate positive organizational change to help create health literate organizations and move the nation towards a health literate society.

  17. Improvement and test calculation of the basic code for sodium-water reaction jet

    Energy Technology Data Exchange (ETDEWEB)

    Saito, Yoshinori; Itooka, Satoshi [Advanced Reactor Engineering Center, Hitachi Works, Hitachi Ltd., Hitachi, Ibaraki (Japan); Okabe, Ayao; Fujimata, Kazuhiro; Sakurai, Tomoo [Consulting Engineering Dept., Hitachi Engineering Co., Ltd., Hitachi, Ibaraki (Japan)

    1999-03-01

    In selecting a reasonable DBL (design basis water leak rate) for a steam generator (SG), it is necessary to improve the analytical method for estimating the sodium temperature during failure propagation due to overheating. Improvements to the basic code for the sodium-water reaction (SWR) jet were made for an actual-scale SG. The improvement points of the code are as follows: (1) introduction of advanced models such as heat transfer between the jet and the structure (tube array), the cooling effect of the structure, and heat transfer between analytic cells; and (2) model improvement for heat transfer between two-phase flow and porous media. Test calculations using the improved code (LEAP-JET ver. 1.30) were carried out for the conditions of the SWAT-3·Run-19 test and an actual-scale SG. It was confirmed that the calculated SWR jet behavior is reasonable, and the influence of the models on the analysis results was examined. Code integration with the blowdown analysis code (LEAP-BLOW) was also studied. It is suitable for LEAP-JET to be treated as one of LEAP-BLOW's models, and it was integrated into it. In addition to the above, the setting of boundary conditions was improved and an interface program to transfer the analytical results of LEAP-BLOW was developed, in order to simply account for the cooling effect of the coolant in the tubes. However, since LEAP-JET is still under development, verification of the code against the new SWAT-1 and SWAT-3 test data planned for the future is necessary, and further advancement needs to be planned. (author)

  18. Improvement and test calculation of the basic code for sodium-water reaction jet

    International Nuclear Information System (INIS)

    Saito, Yoshinori; Itooka, Satoshi; Okabe, Ayao; Fujimata, Kazuhiro; Sakurai, Tomoo

    1999-03-01

    In selecting a reasonable DBL (design basis water leak rate) for a steam generator (SG), it is necessary to improve the analytical method for estimating the sodium temperature during failure propagation due to overheating. Improvements to the basic code for the sodium-water reaction (SWR) jet were made for an actual-scale SG. The improvement points of the code are as follows: (1) introduction of advanced models such as heat transfer between the jet and the structure (tube array), the cooling effect of the structure, and heat transfer between analytic cells; and (2) model improvement for heat transfer between two-phase flow and porous media. Test calculations using the improved code (LEAP-JET ver. 1.30) were carried out for the conditions of the SWAT-3·Run-19 test and an actual-scale SG. It was confirmed that the calculated SWR jet behavior is reasonable, and the influence of the models on the analysis results was examined. Code integration with the blowdown analysis code (LEAP-BLOW) was also studied. It is suitable for LEAP-JET to be treated as one of LEAP-BLOW's models, and it was integrated into it. In addition to the above, the setting of boundary conditions was improved and an interface program to transfer the analytical results of LEAP-BLOW was developed, in order to simply account for the cooling effect of the coolant in the tubes. However, since LEAP-JET is still under development, verification of the code against the new SWAT-1 and SWAT-3 test data planned for the future is necessary, and further advancement needs to be planned. (author)

  19. Output calculation of electron therapy at extended SSD using an improved LBR method

    Energy Technology Data Exchange (ETDEWEB)

    Alkhatib, Hassaan A.; Gebreamlak, Wondesen T., E-mail: wondtassew@gmail.com; Wright, Ben W.; Neglia, William J. [South Carolina Oncology Associates, Columbia, South Carolina 29210 (United States); Tedeschi, David J. [Department of Physics and Astronomy, University of South Carolina, Columbia, South Carolina 29208 (United States); Mihailidis, Dimitris [CAMC Cancer Center and Alliance Oncology, Charleston, West Virginia 25304 (United States); Sobash, Philip T. [The Medical University of South Carolina, Charleston, South Carolina 29425 (United States); Fontenot, Jonas D. [Department of Physics, Mary Bird Perkins Cancer Center, Baton Rouge, Louisiana 70809 (United States)

    2015-02-15

    Purpose: To calculate the output factor (OPF) of any irregularly shaped electron beam at extended SSD. Methods: Circular cutouts were prepared from 2.0 cm diameter to the maximum possible size for 15 × 15 applicator cone. In addition, two irregular cutouts were prepared. For each cutout, percentage depth dose (PDD) at the standard SSD and doses at different SSD values were measured using 6, 9, 12, and 16 MeV electron beam energies on a Varian 2100C LINAC and the distance at which the central axis electron fluence becomes independent of cutout size was determined. The measurements were repeated with an ELEKTA Synergy LINAC using 14 × 14 applicator cone and electron beam energies of 6, 9, 12, and 15 MeV. The PDD measurements were performed using a scanning system and two diodes—one for the signal and the other a stationary reference outside the tank. The doses of the circular cutouts at different SSDs were measured using PTW 0.125 cm³ Semiflex ion-chamber and EDR2 films. The electron fluence was measured using EDR2 films. Results: For each circular cutout, the lateral buildup ratio (LBR) was calculated from the measured PDD curve using the open applicator cone as the reference field. The effective SSD (SSD_eff) of each circular cutout was calculated from the measured doses at different SSD values. Using the LBR value and the radius of the circular cutout, the corresponding lateral spread parameter [σ_R(z)] was calculated. Taking the cutout size dependence of σ_R(z) into account, the PDD curves of the irregularly shaped cutouts at the standard SSD were calculated. Using the calculated PDD curve of the irregularly shaped cutout along with the LBR and SSD_eff values of the circular cutouts, the output factor of the irregularly shaped cutout at extended SSD was calculated. Finally, both the calculated PDD curves and output factor values were compared with the measured values. Conclusions: The improved LBR method has been generalized to
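    One ingredient of the extended-SSD output calculation is the inverse-square correction with the effective SSD; a minimal sketch of that single step is given below with invented numbers (it omits the LBR-based PDD reconstruction).

    ```python
    def output_at_extended_ssd(opf_standard, ssd_eff_cm, dmax_cm, extra_gap_cm):
        """Inverse-square correction of an output factor using the effective SSD;
        illustrative only, not the full improved-LBR calculation."""
        return opf_standard * ((ssd_eff_cm + dmax_cm) /
                               (ssd_eff_cm + extra_gap_cm + dmax_cm)) ** 2

    # Illustrative 9 MeV cutout: OPF at standard SSD 0.962, SSD_eff 85 cm,
    # dmax 2.1 cm, treated 10 cm beyond the standard SSD
    print(round(output_at_extended_ssd(0.962, 85.0, 2.1, 10.0), 3))
    ```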

  20. Improved techniques for outgoing wave variational principle calculations of converged state-to-state transition probabilities for chemical reactions

    Science.gov (United States)

    Mielke, Steven L.; Truhlar, Donald G.; Schwenke, David W.

    1991-01-01

    Improved techniques and well-optimized basis sets are presented for application of the outgoing wave variational principle to calculate converged quantum mechanical reaction probabilities. They are illustrated with calculations for the reactions D + H2 yields HD + H with total angular momentum J = 3 and F + H2 yields HF + H with J = 0 and 3. The optimization involves the choice of distortion potential, the grid for calculating half-integrated Green's functions, the placement, width, and number of primitive distributed Gaussians, and the computationally most efficient partition between dynamically adapted and primitive basis functions. Benchmark calculations with 224-1064 channels are presented.

  1. Quality Assurance and Improvement in Head and Neck Cancer Surgery: From Clinical Trials to National Healthcare Initiatives.

    Science.gov (United States)

    Simon, Christian; Caballero, Carmela

    2018-05-24

It is without question in the best interest of our patients that we identify ways to improve the quality of care we deliver to them. Great progress has been made within the last 25 years in the development and implementation of quality-assurance (QA) platforms and quality improvement programs for surgery in general and, within this context, for head and neck surgery. As of now, we have successfully identified process indicators that impact the outcome of our patients and the quality of care we deliver as surgeons. We have developed risk calculators to determine the risk of complications for individual surgical patients. We have created perioperative guidelines for complex head and neck procedures. In Europe and North America we have created audit registries that can gather and analyze data from institutions across the world to better understand which processes need to change to obtain good outcomes and improve quality of care. QA platforms can be tested within the clearly defined environment of prospective clinical trials. If positive, and if feasible, such programs could be rolled out within national healthcare systems. Testing quality programs in clinical trials could thus be a versatile tool to help head and neck cancer patients benefit directly from such initiatives on a global level.

  2. 78 FR 33799 - National Poultry Improvement Plan; General Conference Committee Meeting

    Science.gov (United States)

    2013-06-05

    ... Washington, DC, this 3rd day of June 2013. Kevin Shea, Acting Administrator, Animal and Plant Health... DEPARTMENT OF AGRICULTURE Animal and Plant Health Inspection Service [Docket No. APHIS-2013-0032] National Poultry Improvement Plan; General Conference Committee Meeting AGENCY: Animal and Plant Health...

  3. 42 CFR 484.220 - Calculation of the adjusted national prospective 60-day episode payment rate for case-mix and...

    Science.gov (United States)

    2010-10-01

    ... address changes to the case-mix that are a result of changes in the coding or classification of different...-day episode payment rate for case-mix and area wage levels. 484.220 Section 484.220 Public Health... Calculation of the adjusted national prospective 60-day episode payment rate for case-mix and area wage levels...

  4. Improvements to science operations at Kitt Peak National Observatory

    Science.gov (United States)

    Bohannan, Bruce

    1998-07-01

    In recent years Kitt Peak National Observatory has undertaken a number of innovative projects to optimize science operations with the suite of telescopes we operate on Kitt Peak, Arizona. Changing scientific requirements and expectations of our users, evolving technology and declining budgets have motivated the changes. The operations improvements have included telescope performance enhancements--with the focus on the Mayall 4-m--modes of observing and scheduling, telescope control and observing systems, planning and communication, and data archiving.

  5. Parallel plasma fluid turbulence calculations

    International Nuclear Information System (INIS)

    Leboeuf, J.N.; Carreras, B.A.; Charlton, L.A.; Drake, J.B.; Lynch, V.E.; Newman, D.E.; Sidikman, K.L.; Spong, D.A.

    1994-01-01

    The study of plasma turbulence and transport is a complex problem of critical importance for fusion-relevant plasmas. To this day, the fluid treatment of plasma dynamics is the best approach to realistic physics at the high resolution required for certain experimentally relevant calculations. Core and edge turbulence in a magnetic fusion device have been modeled using state-of-the-art, nonlinear, three-dimensional, initial-value fluid and gyrofluid codes. Parallel implementation of these models on diverse platforms--vector parallel (National Energy Research Supercomputer Center's CRAY Y-MP C90), massively parallel (Intel Paragon XP/S 35), and serial parallel (clusters of high-performance workstations using the Parallel Virtual Machine protocol)--offers a variety of paths to high resolution and significant improvements in real-time efficiency, each with its own advantages. The largest and most efficient calculations have been performed at the 200 Mword memory limit on the C90 in dedicated mode, where an overlap of 12 to 13 out of a maximum of 16 processors has been achieved with a gyrofluid model of core fluctuations. The richness of the physics captured by these calculations is commensurate with the increased resolution and efficiency and is limited only by the ingenuity brought to the analysis of the massive amounts of data generated

  6. Improved density functional calculations for atoms, molecules and surfaces

    International Nuclear Information System (INIS)

    Fricke, B.; Anton, J.; Fritzsche, S.; Sarpe-Tudoran, C.

    2005-01-01

The non-collinear and collinear descriptions within relativistic density functional theory are described. We present results of both non-collinear and collinear calculations for atoms, diatomic molecules, and some surface simulations. We find that the accuracy of our density functional calculations for the smaller systems is comparable to that of good quantum chemical calculations, and thus this method provides a sound basis for larger systems where no such comparison is possible. (author)

  7. Advances in public health accreditation readiness and quality improvement: evaluation findings from the National Public Health Improvement Initiative.

    Science.gov (United States)

    McLees, Anita W; Thomas, Craig W; Nawaz, Saira; Young, Andrea C; Rider, Nikki; Davis, Mary

    2014-01-01

    Continuous quality improvement is a central tenet of the Public Health Accreditation Board's (PHAB) national voluntary public health accreditation program. Similarly, the Centers for Disease Control and Prevention launched the National Public Health Improvement Initiative (NPHII) in 2010 with the goal of advancing accreditation readiness, performance management, and quality improvement (QI). Evaluate the extent to which NPHII awardees have achieved program goals. NPHII awardees responded to an annual assessment and program monitoring data requests. Analysis included simple descriptive statistics. Seventy-four state, tribal, local, and territorial public health agencies receiving NPHII funds. NPHII performance improvement managers or principal investigators. Development of accreditation prerequisites, completion of an organizational self-assessment against the PHAB Standards and Measures, Version 1.0, establishment of a performance management system, and implementation of QI initiatives to increase efficiency and effectiveness. Of the 73 responding NPHII awardees, 42.5% had a current health assessment, 26% had a current health improvement plan, and 48% had a current strategic plan in place at the end of the second program year. Approximately 26% of awardees had completed an organizational PHAB self-assessment, 72% had established at least 1 of the 4 components of a performance management system, and 90% had conducted QI activities focused on increasing efficiencies and/or effectiveness. NPHII appears to be supporting awardees' initial achievement of program outcomes. As NPHII enters its third year, there will be additional opportunities to advance the work of NPHII, compile and disseminate results, and inform a vision of high-quality public health necessary to improve the health of the population.

  8. Improvements on the calculation of the epithermal disadvantage factor for thermal nuclear reactors

    Energy Technology Data Exchange (ETDEWEB)

    Aboustta, Mohamed A.; Martinez, Aquilino S. [Universidade Federal, Rio de Janeiro, RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia

    1997-12-01

The disadvantage factor takes into account the neutron flux variation through the fuel cell. In the fuel the flux is depressed in relation to its level in the moderator region. In order to avoid detailed calculations for each different set of cell dimensions, which would require the development of problem-dependent neutron cross-section libraries, a disadvantage factor based on a two-region equivalence theory was proposed for the EPRI-CELL code. However, it uses a rational approximation to the neutron escape probability to describe the neutron transport between cell regions. Such an approximation allows the use of the equivalence principles but introduces a non-negligible error which results in an underestimation of the cell neutron fluxes. The new treatment proposed in this work remarkably improves the numerical calculation and reduces the error of the above mentioned method. (author). 4 refs., 2 figs.
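
    For context, the rational approximation to the escape probability mentioned above can be written down in a few lines. The sketch below uses the Wigner form with hypothetical cross-section and pin-radius values; it is the simple baseline whose error motivates the record, not the improved treatment itself.

        import math

        # Wigner rational approximation to the first-flight escape probability of a
        # fuel lump with mean chord length lbar = 4V/S; values are hypothetical.
        def escape_prob_rational(sigma_t, lbar):
            """P_esc ~ 1 / (1 + Sigma_t * lbar): the rational form whose error the
            record says leads to an underestimation of the cell fluxes."""
            return 1.0 / (1.0 + sigma_t * lbar)

        radius = 0.45                 # cm, hypothetical fuel-pin radius
        lbar = 2.0 * radius           # mean chord of an infinite cylinder: 4V/S = 2R
        sigma_t = 0.6                 # 1/cm, hypothetical total cross section
        print(f"P_esc (rational approximation) = {escape_prob_rational(sigma_t, lbar):.3f}")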

  9. Improvements on the calculation of the epithermal disadvantage factor for thermal nuclear reactors

    International Nuclear Information System (INIS)

    Aboustta, Mohamed A.; Martinez, Aquilino S.

    1997-01-01

The disadvantage factor takes into account the neutron flux variation through the fuel cell. In the fuel the flux is depressed in relation to its level in the moderator region. In order to avoid detailed calculations for each different set of cell dimensions, which would require the development of problem-dependent neutron cross-section libraries, a disadvantage factor based on a two-region equivalence theory was proposed for the EPRI-CELL code. However, it uses a rational approximation to the neutron escape probability to describe the neutron transport between cell regions. Such an approximation allows the use of the equivalence principles but introduces a non-negligible error which results in an underestimation of the cell neutron fluxes. The new treatment proposed in this work remarkably improves the numerical calculation and reduces the error of the above mentioned method. (author). 4 refs., 2 figs

  10. An improved method for calculating force distributions in moment-stiff timber connections

    DEFF Research Database (Denmark)

    Ormarsson, Sigurdur; Blond, Mette

    2012-01-01

    An improved method for calculating force distributions in moment-stiff metal dowel-type timber connections is presented, a method based on use of three-dimensional finite element simulations of timber connections subjected to moment action. The study that was carried out aimed at determining how...... the slip modulus varies with the angle between the direction of the dowel forces and the fibres in question, as well as how the orthotropic stiffness behaviour of the wood material affects the direction and the size of the forces. It was assumed that the force distribution generated by the moment action...
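
    As a point of reference for the force-distribution problem described above, the classic rigid-connection estimate (equal slip moduli, forces proportional to the distance from the rotation centre) can be sketched as follows. The geometry and moment are hypothetical, and this is the simple baseline the FE-based method in the record refines, not the method itself.

        import numpy as np

        # Classic linear-elastic estimate of dowel forces in a moment-loaded timber
        # connection (rigid member, equal slip moduli): each dowel force is
        # proportional to its distance from the group centroid.
        moment = 12.0e3                                   # N*m, applied moment (hypothetical)
        xy = np.array([[-0.10, -0.10], [0.10, -0.10],
                       [-0.10,  0.10], [0.10,  0.10]])    # m, dowel positions (hypothetical)
        r = np.linalg.norm(xy, axis=1)                    # distance to the rotation centre
        forces = moment * r / np.sum(r ** 2)              # N, acting perpendicular to r
        for i, f in enumerate(forces):
            print(f"dowel {i}: {f / 1e3:.1f} kN")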

  11. JULIA: calculation projection software for primary barriers shielding to X-Rays using barite

    International Nuclear Information System (INIS)

    Silva, Júlia R.A.S. da; Vieira, José W.; Lima, Fernando R. A.

    2017-01-01

The objective was to develop software that calculates the thicknesses required to attenuate X-rays at tube voltages of 60 kV, 80 kV, 110 kV and 150 kV. The conventional methodological parameters for structural shielding calculations established by the NCRP (National Council on Radiation Protection and Measurements) were presented. Descriptive and exploratory methods guided the construction of JULIA. Based on the results obtained, the tool presented is useful for professionals who wish to design structural shielding for diagnostic radiology and/or therapy rooms. Implementing the calculations in a computational tool provides accessibility, saves time and yields estimates close to the real values. This exercise represents an improvement of the calculations used to estimate primary barriers built with barite.
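
    The NCRP-style primary barrier sizing mentioned above follows a short chain of formulas (required transmission from the design goal, then a number of tenth-value layers). A minimal sketch is given below; the workload model is simplified and the distances and barite TVL values are placeholders rather than published data.

        import math

        # NCRP-style primary barrier sizing: required transmission B = P*d^2 / (W*U*T),
        # converted to a thickness with first and equilibrium tenth-value layers
        # (TVL1, TVLe). All numerical values are hypothetical.
        def barrier_thickness(P, d, W, U, T, tvl1, tvle):
            B = P * d ** 2 / (W * U * T)          # required transmission factor
            n = -math.log10(B)                    # number of tenth-value layers
            return tvl1 + (n - 1.0) * tvle if n > 1.0 else n * tvl1

        P = 0.02        # mGy/week, shielding design goal for a controlled area
        d = 3.0         # m, source-to-occupied-point distance
        W = 100.0       # mGy*m^2/week, unshielded workload at 1 m (hypothetical)
        U, T = 0.25, 1.0                          # use and occupancy factors
        t = barrier_thickness(P, d, W, U, T, tvl1=2.4, tvle=2.2)
        print(f"primary barrier thickness ~ {t:.1f} cm of barite (illustrative)")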

  12. Improvement of Power Flow Calculation with Optimization Factor Based on Current Injection Method

    Directory of Open Access Journals (Sweden)

    Lei Wang

    2014-01-01

This paper presents an improvement in power flow calculation based on the current injection method by introducing an optimization factor. In the proposed method, the PQ buses are represented by current mismatches while the PV buses are represented by power mismatches, which differs from the representations in conventional current injection power flow equations. By using the combined power and current injection mismatch method, the number of equations required can be decreased to only one per PV bus. The optimization factor is used to improve the iteration process and to ensure the effectiveness of the proposed method when the system is ill-conditioned. To verify the effectiveness of the method, the IEEE test systems are solved with the conventional current injection method and the improved method separately, and the results are compared. The comparisons show that the optimization factor improves the convergence characteristics effectively; in particular, when the system is at a high loading level and R/X ratio, the iteration number is one or two times lower than with the conventional current injection method, and when the overloading condition of the system is serious, the iteration number is about four times lower than with the conventional current injection method.
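
    The role of such an optimization factor can be illustrated with a damped Newton iteration, where the step length is reduced until the mismatch norm decreases. The toy two-equation system below is hypothetical and is not the paper's power flow formulation.

        import numpy as np

        # Damped Newton iteration: x <- x + lam * dx, with the "optimization factor"
        # lam halved until the mismatch norm decreases.
        def mismatch(x):
            return np.array([x[0] ** 2 + x[1] - 1.2,
                             x[0] - x[1] ** 2 + 0.3])

        def jacobian(x):
            return np.array([[2.0 * x[0], 1.0],
                             [1.0, -2.0 * x[1]]])

        x = np.array([1.0, 1.0])
        for it in range(20):
            f = mismatch(x)
            if np.linalg.norm(f) < 1e-10:
                break
            dx = np.linalg.solve(jacobian(x), -f)
            lam = 1.0
            while lam > 1e-3 and np.linalg.norm(mismatch(x + lam * dx)) >= np.linalg.norm(f):
                lam *= 0.5                        # shrink the optimization (damping) factor
            x = x + lam * dx
        print(f"converged in {it} iterations to x = {x}")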

  13. New and improved CH implosions at the National Ignition Facility

    Science.gov (United States)

    Hinkel, D. E.; Doeppner, T.; Kritcher, A. L.; Ralph, J. E.; Jarrott, L. C.; Albert, F.; Benedetti, L. R.; Field, J. E.; Goyon, C. S.; Hohenberger, M.; Izumi, N.; Milovich, J. L.; Bachmann, B.; Casey, D. T.; Yeamans, C. B.; Callahan, D. A.; Hurricane, O. A.

    2017-10-01

    Improvements to the hohlraum for CH implosions have resulted in near-record hot spot pressures, 225 Gbar. Implosion symmetry and laser energy coupling are improved by using a hohlraum that, compared to the previous high gas-fill hohlraum, is longer, larger, at lower gas fill density, and is fielded at zero wavelength separation to minimize cross-beam energy transfer. With a capsule at 90% of its original size in this hohlraum, implosion symmetry changes from oblate to prolate, at 33% cone fraction. Simulations highlight improved inner beam propagation as the cause of this symmetry change. These implosions have produced the highest yield for CH ablators at modest power and energy, i.e., 360 TW and 1.4 MJ. Upcoming experiments focus on continued improvement in shape as well as an increase in implosion velocity. Further, results and future plans on an increase in capsule size to improve margin will also be presented. Work performed under the auspices of the U.S. D.O.E. by Lawrence Livermore National Laboratory under Contract No. DE-AC52-07NA27344.

  14. Performance Improvement of the Core Protection Calculator System (CPCS) by Introducing Optimal Function Sets

    International Nuclear Information System (INIS)

    Won, Byung Hee; Kim, Kyung O; Kim, Jong Kyung; Kim, Soon Young

    2012-01-01

The Core Protection Calculator System (CPCS) is an automated device adopted to monitor safety parameters such as the Departure from Nucleate Boiling Ratio (DNBR) and the Local Power Density (LPD) during normal operation. One function of the CPCS is to predict the axial power distributions using function sets in a cubic spline method. Another is to impose a penalty when the distribution estimated by the spline method disagrees with the data embedded in the CPCS by more than 8%. In the conventional CPCS, restricted function sets are used to synthesize the axial power shape, which occasionally leads to disagreement between the synthesized data and the embedded data. For this reason, studies on improving the power distribution synthesis in the CPCS have been conducted in many countries. In this study, many function sets (more than 18,000 types) differing from the conventional ones were evaluated for each power shape. Matlab code was used for calculating and arranging the numerous cases of function sets. Their synthesis performance was evaluated through the error between the reference data and the results calculated with the new function sets.
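
    The synthesis step described above amounts to fitting the coefficients of a basis-function set to a few detector signals. The sketch below does this by least squares with a hypothetical sine basis and a three-detector response model, which stand in for the actual CPCS function sets.

        import numpy as np

        # Sketch of axial power-shape synthesis: fit coefficients of a small set of
        # basis functions to a few detector signals in a least-squares sense.
        z = np.linspace(0.0, 1.0, 40)                              # normalized core height
        basis = np.array([np.sin((k + 1) * np.pi * z) for k in range(4)])   # 4 basis functions

        # Hypothetical detector model: each of 3 excore detectors averages one
        # third of the core height.
        thirds = np.array_split(np.arange(z.size), 3)
        R = np.array([[basis[k, idx].mean() for k in range(4)] for idx in thirds])

        true_shape = 1.0 + 0.3 * np.cos(np.pi * (z - 0.45))        # placeholder power shape
        signals = np.array([true_shape[idx].mean() for idx in thirds])   # simulated signals

        coeffs, *_ = np.linalg.lstsq(R, signals, rcond=None)       # synthesis coefficients
        synthesized = coeffs @ basis
        print(float(np.max(np.abs(synthesized - true_shape))))     # illustrative synthesis error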

  15. Improved collision probability method for thermal-neutron-flux calculation in a cylindrical reactor cell

    International Nuclear Information System (INIS)

    Bosevski, T.

    1986-01-01

An improved collision probability method for thermal-neutron-flux calculation in a cylindrical reactor cell has been developed. Expanding the neutron flux and source into a series of even powers of the radius, one gets a convenient method for integration of the one-energy-group integral transport equation. It is shown that it is possible to perform an analytical integration in the x-y plane in one variable and to use an effective Gaussian integration over the other. By choosing a convenient distribution of space points in the fuel and moderator, the transport matrix calculation and the cell reaction rate integration were condensed. On the basis of the proposed method, the computer program DISKRET for the ZUSE-Z 23 K computer has been written. The suitability of the proposed method for the calculation of the thermal-neutron-flux distribution in a reactor cell can be seen from the test results obtained. Compared with the other collision probability methods, the proposed treatment excels in mathematical simplicity and faster convergence. (author)

  16. Assessing DRG cost accounting with respect to resource allocation and tariff calculation: the case of Germany

    Science.gov (United States)

    2012-01-01

    The purpose of this paper is to analyze the German diagnosis related groups (G-DRG) cost accounting scheme by assessing its resource allocation at hospital level and its tariff calculation at national level. First, the paper reviews and assesses the three steps in the G-DRG resource allocation scheme at hospital level: (1) the groundwork; (2) cost-center accounting; and (3) patient-level costing. Second, the paper reviews and assesses the three steps in G-DRG national tariff calculation: (1) plausibility checks; (2) inlier calculation; and (3) the “one hospital” approach. The assessment is based on the two main goals of G-DRG introduction: improving transparency and efficiency. A further empirical assessment attests high costing quality. The G-DRG cost accounting scheme shows high system quality in resource allocation at hospital level, with limitations concerning a managerially relevant full cost approach and limitations in terms of advanced activity-based costing at patient-level. However, the scheme has serious flaws in national tariff calculation: inlier calculation is normative, and the “one hospital” model causes cost bias, adjustment and representativeness issues. The G-DRG system was designed for reimbursement calculation, but developed to a standard with strategic management implications, generalized by the idea of adapting a hospital’s cost structures to DRG revenues. This combination causes problems in actual hospital financing, although resource allocation is advanced at hospital level. PMID:22935314

  17. Assessing DRG cost accounting with respect to resource allocation and tariff calculation: the case of Germany.

    Science.gov (United States)

    Vogl, Matthias

    2012-08-30

    The purpose of this paper is to analyze the German diagnosis related groups (G-DRG) cost accounting scheme by assessing its resource allocation at hospital level and its tariff calculation at national level. First, the paper reviews and assesses the three steps in the G-DRG resource allocation scheme at hospital level: (1) the groundwork; (2) cost-center accounting; and (3) patient-level costing. Second, the paper reviews and assesses the three steps in G-DRG national tariff calculation: (1) plausibility checks; (2) inlier calculation; and (3) the "one hospital" approach. The assessment is based on the two main goals of G-DRG introduction: improving transparency and efficiency. A further empirical assessment attests high costing quality. The G-DRG cost accounting scheme shows high system quality in resource allocation at hospital level, with limitations concerning a managerially relevant full cost approach and limitations in terms of advanced activity-based costing at patient-level. However, the scheme has serious flaws in national tariff calculation: inlier calculation is normative, and the "one hospital" model causes cost bias, adjustment and representativeness issues. The G-DRG system was designed for reimbursement calculation, but developed to a standard with strategic management implications, generalized by the idea of adapting a hospital's cost structures to DRG revenues. This combination causes problems in actual hospital financing, although resource allocation is advanced at hospital level.

  18. Nuclear calculation of the thorium reactor

    International Nuclear Information System (INIS)

    Hirakawa, Naohiro

    1998-01-01

Even for a reactor using thorium (and 233-U), the nuclear design calculation procedure is similar to that for conventional 235-U, 238-U and plutonium. As the nuclear composition varies with time during operation of the reactor, the calculation of the mean cross sections should be carried out in detail. At that stage, one-group cross sections obtained by integration over the whole energy range are used for a small number of groups. Since the nuclear data on which such calculations are based are already available in JENDL-3.2 and the nuclear data libraries derived from it, the nuclear calculation of a reactor using thorium presents no particular problem. From this viewpoint, the IAEA has organized a coordinated research program, 'Potential of Th-based Fuel Cycles to Constrain Pu and to Reduce Long-term Waste Toxicities', since 1996. Each participating nation was asked to select the thorium-based fuel cycle it considers most promising and to examine what can be expected from that cycle by comparing the results. To allow a neutral comparison, a benchmark calculation for a PWR was carried out to ensure that the results would not differ merely because of the different calculation methods and cross sections adopted by each nation. The program was thus promoted with the participation of China, Germany, India, Israel, Japan, Korea, Russia and the USA. The SWAT system developed by Tohoku University is used as the calculation code, and the results of the first- and second-stage benchmark calculations and of the reactor calculations obtained with it are reported. (G.K.)

  19. Possibilities to improve the adaptation quality of calculated material substitutes

    Energy Technology Data Exchange (ETDEWEB)

    Geske, G.

    1981-04-01

In calculating the composition of material substitutes by a system of simultaneous equations it is possible, by using a so-called quality index, to find out of the set of solutions which generally exists that solution which possesses the best adaptation quality. Further improvement is often possible by describing coherent scattering and photoelectric interaction by a separate material parameter for each effect. The exact formulation of these quantities as energy-independent functions is, however, impossible. Using a set of attenuation coefficients at suitably chosen energies as coefficients for the system of equations, the best substitutes are found. The solutions for the investigated example are identical with the original material with respect to its chemical composition. Such solutions may be of use in connection with neutrons, protons, heavy ions and negative pions. The components taken into consideration must, of course, permit such solutions. These facts are discussed in detail using two examples.
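
    The simultaneous-equation approach described above can be sketched as a small least-squares problem: match the mixture attenuation coefficients to those of the reference material at chosen energies, with the weight fractions summing to one. The coefficient matrix and target values below are placeholders, not measured data.

        import numpy as np

        # Compose a substitute from 3 candidate components so that its attenuation
        # coefficients match a reference material at 4 chosen energies.
        mu_components = np.array([          # rows: energies, columns: components
            [0.40, 0.55, 0.30],
            [0.25, 0.33, 0.21],
            [0.18, 0.22, 0.16],
            [0.15, 0.17, 0.14],
        ])
        mu_target = np.array([0.42, 0.27, 0.19, 0.155])   # reference material (placeholder)

        # Append the normalization condition sum(w) = 1 as an extra equation.
        A = np.vstack([mu_components, np.ones(3)])
        b = np.append(mu_target, 1.0)
        w, *_ = np.linalg.lstsq(A, b, rcond=None)
        print("weight fractions:", np.round(w, 3))
        print("residual per energy:", np.round(A[:-1] @ w - mu_target, 4))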

  20. Improvement of correlated sampling Monte Carlo methods for reactivity calculations

    International Nuclear Information System (INIS)

    Nakagawa, Masayuki; Asaoka, Takumi

    1978-01-01

    Two correlated Monte Carlo methods, the similar flight path and the identical flight path methods, have been improved to evaluate up to the second order change of the reactivity perturbation. Secondary fission neutrons produced by neutrons having passed through perturbed regions in both unperturbed and perturbed systems are followed in a way to have a strong correlation between secondary neutrons in both the systems. These techniques are incorporated into the general purpose Monte Carlo code MORSE, so as to be able to estimate also the statistical error of the calculated reactivity change. The control rod worths measured in the FCA V-3 assembly are analyzed with the present techniques, which are shown to predict the measured values within the standard deviations. The identical flight path method has revealed itself more useful than the similar flight path method for the analysis of the control rod worth. (auth.)
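
    The benefit of correlating the two calculations can be shown with a toy example using common random numbers: scoring the base and perturbed cases with the same histories makes most of the statistical noise cancel in the difference. The integrand below is a stand-in, not a neutron transport simulation.

        import numpy as np

        # Toy illustration of correlated sampling for a small difference.
        rng = np.random.default_rng(0)
        n = 100_000
        u = rng.random(n)

        def score(u, perturbation):
            return np.exp(-(1.0 + perturbation) * u)      # hypothetical tally per history

        independent = score(rng.random(n), 0.01).mean() - score(rng.random(n), 0.0).mean()
        correlated = (score(u, 0.01) - score(u, 0.0)).mean()
        exact = (1 - np.exp(-1.01)) / 1.01 - (1 - np.exp(-1.0))
        print(f"independent estimate: {independent:+.5f}")
        print(f"correlated estimate:  {correlated:+.5f}  (exact {exact:+.5f})")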

  1. Improving the Calculation of The Potential Between Spherical and Deformed Nuclei

    International Nuclear Information System (INIS)

    Ismail, M.; Ramadan, Kh.A.

    2000-01-01

The Heavy Ion (HI) interaction potential between spherical and deformed nuclei is improved by calculating its exchange part using a finite-range nucleon-nucleon (NN) force. We considered 238-U as the target nucleus and seven projectile nuclei to show the dependence of the HI potential on both the energy and the orientation of the deformed target nucleus. The finite-range NN force has been found to produce significant changes in the HI potential. The variations of the barrier height V_B, its thickness and its position R_B due to the use of the finite-range NN force are significant. Such variations enhance the fusion cross-section at energies just below the Coulomb barrier by a factor that increases with the mass number of the projectile nucleus. (author)

  2. Improving the influence function method to take topography into the calculation of mining subsidence

    OpenAIRE

    Cai , Yinfei; Verdel , Thierry; Deck , Olivier; LI , Xiao-Jong

    2016-01-01

The classic influence function method is often used in the calculation of mining subsidence caused by stratiform underground excavations. Theoretically, its use is limited to subsidence predictions under the condition of a horizontal ground surface. This work improves the original influence function method to take topographic variations into account. Because real-world mining conditions are usually complicated, it is difficult to separate topography influences fr...

  3. 78 FR 40625 - National School Lunch Program: Direct Certification Continuous Improvement Plans Required by the...

    Science.gov (United States)

    2013-07-08

    ... National School Lunch Program: Direct Certification Continuous Improvement Plans Required by the Healthy... Continuous Improvement Plans Required by the Healthy, Hunger-Free Kids Act of 2010'' on February 22, 2013... performance benchmarks and to develop and implement continuous improvement plans if they fail to do so. The...

  4. #DDOD Use Case: Improve National Death Registry for use with outcomes research

    Data.gov (United States)

    U.S. Department of Health & Human Services — SUMMARY DDOD use case request to improve National Death Registry for use with outcomes research. WHAT IS A USE CASE? A “Use Case” is a request that was made by the...

  5. Magnetic Field Grid Calculator

    Data.gov (United States)

National Oceanic and Atmospheric Administration, Department of Commerce — The Magnetic Field Properties Calculator computes the estimated values of Earth's magnetic field (declination, inclination, vertical component, northerly...

  6. New possibilities for improving the accuracy of parameter calculations for cascade gamma-ray decay of heavy nuclei

    International Nuclear Information System (INIS)

    Sukhovoj, A.M.; Khitrov, V.A.; Grigor'ev, E.P.

    2002-01-01

The level density and radiative strength functions which accurately reproduce the experimental intensity of two-step cascades after thermal neutron capture and the total radiative widths of the compound states were applied to calculate the total γ-ray spectra from the (n,γ) reaction. In some cases, analysis showed far better agreement with experiment and gave insight into possible ways in which these parameters need to be corrected for further improvement of calculation accuracy for the cascade γ-decay of heavy nuclei. (author)

  7. Reconciliation of Measured and TRANSP-calculated Neutron Emission Rates in the National Spherical Torus Experiment: Circa 2002-2005

    International Nuclear Information System (INIS)

    Medley, S.S.; Darrow, D.S.; Roquemore, A.L.

    2005-01-01

A change in the response of the neutron detectors on the National Spherical Torus Experiment occurred between the 2002-2003 and 2004 experimental run periods. An analysis of this behavior by investigating the neutron diagnostic operating conditions and comparing measured and TRANSP-calculated neutron rates is presented. In addition, a revised procedure for cross calibration of the neutron scintillator detectors with the fission chamber detectors was implemented that delivers good agreement amongst the measured neutron rates for all neutron detectors and all run periods. For L-mode discharges, the measured and TRANSP-calculated neutron rates now match closely for all run years. For H-mode discharges over the entire 2002-2004 period, the 2FG scintillator and fission chamber measurements match each other but imply a neutron deficit of 11.5% relative to the TRANSP-calculated neutron rate. The results of this report impose a modification on all of the previously used calibration factors for the entire neutron detector suite over the 2002-2004 period. A tabular summary of the new calibration factors is provided, including certified calibration factors for the 2005 run.

  8. Improved loss calculations for the HDM magnets

    International Nuclear Information System (INIS)

    Mallick, G.T. Jr.; Carr, W.J.; Krefta, M.P.; Johnson, D.

    1994-01-01

Losses due to ramped fields and currents were calculated previously by Snitchler, Jayakumar, Kovachev, and Orrell for the high energy booster magnets to be used in the SSC; those calculations were quite adequate for the initial design. The present analysis considers the loss problem in more detail.

  9. Reactor calculations for improving utilization of TRIGA reactor

    International Nuclear Information System (INIS)

    Ravnik, M.

    1986-01-01

A brief review of our work on reactor calculations for a 250 kW TRIGA with a mixed core (standard + FLIP fuel) will be presented. The following aspects will be treated: development of computer programs, and optimization of in-core fuel management with respect to fuel costs and the utilization of irradiation channels. The TRIGAP programme package will be presented as an example of such computer programs. It is based on a 2-group 1-D diffusion approximation and, besides the calculations, offers operational data logging and fuel inventory book-keeping as well. It is developed primarily for research reactor operators as a tool for analysing reactor operation and fuel management, and for this reason it is arranged for a small (PC) computer. The second part will be devoted to the reactor physics properties of the mixed cores. Results of depletion calculations will be presented together with measured data to confirm some general guidelines for optimal mixed-core fuel management. As the results are obtained with the TRIGAP program package, they can also be considered an illustration and qualification of its application. (author)

  10. Improvement of the MSG code for the MONJU evaporators. Additional function of reverse flow calculation on water/steam model and animation for post processing

    International Nuclear Information System (INIS)

    Toda, Shin-ichi; Yoshikawa, Shinji; Oketani, Kazuhiro

    2003-05-01

The improved version of the MSG code (Multi-dimensional Thermal-hydraulic Analysis Code for Steam Generators) has been released. The original version was improved in order to calculate reverse flow on the water/steam side and to animate the post-processing data. To calculate local reverse flow, the code was modified to set the pressure at each divided node point of the water/steam region in the helical-coil heat transfer tubes. The matrix solver was also improved so that the problem can be treated within a practical calculation time despite the increased number of pressure points. In this case, pressure and enthalpy have to be calculated simultaneously; however, it was found that arranging the system with the block Jacobi method yields a diagonally dominant matrix that can be solved efficiently with a relaxation method. Calculations of a steady-state condition and of a transient of SG blowdown with manual trip operation confirmed the improved calculation capability of the MSG code. In addition, an animation function for the temperature contour on the sodium shell side has been added as post-processing. Since the animation is very effective for understanding the thermal-hydraulic behavior on the sodium shell side of the SG, especially under transient conditions, the analysis and evaluation of the calculation results can be carried out more quickly and effectively. (author)
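
    The diagonal dominance mentioned above is what makes a Jacobi-type relaxation attractive. The sketch below shows a plain Jacobi iteration on a small diagonally dominant test matrix, which stands in for (and is much simpler than) the MSG pressure-enthalpy system.

        import numpy as np

        # Minimal Jacobi relaxation for a diagonally dominant linear system.
        def jacobi_solve(A, b, tol=1e-10, max_iter=500):
            D = np.diag(A)
            R = A - np.diagflat(D)
            x = np.zeros_like(b)
            for _ in range(max_iter):
                x_new = (b - R @ x) / D
                if np.linalg.norm(x_new - x, ord=np.inf) < tol:
                    return x_new
                x = x_new
            return x

        A = np.array([[4.0, -1.0, 0.0],
                      [-1.0, 4.0, -1.0],
                      [0.0, -1.0, 4.0]])       # hypothetical diagonally dominant matrix
        b = np.array([1.0, 2.0, 3.0])
        print(jacobi_solve(A, b), np.linalg.solve(A, b))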

  11. Turning Schools Around: The National Board Certification Process as a School Improvement Strategy. Research Brief

    Science.gov (United States)

    Jaquith, Ann; Snyder, Jon

    2016-01-01

    Can the National Board certification process support school improvement where large proportions of students score below grade level on standardized tests? This SCOPE study examines a project that sought to seize and capitalize upon the learning opportunities embedded in the National Board certification process, particularly opportunities to learn…

  12. Improving bovine udder health: A national mastitis control program in the Netherlands

    NARCIS (Netherlands)

    Lam, T.J.G.M.; Borne, van den B.H.P.; Jansen, J.; Huijps, K.; Veersen, J.C.L.; Schaick, van G.; Hogeveen, H.

    2013-01-01

    Because of increasing bulk milk somatic cell counts and continuous clinical mastitis problems in a substantial number of herds, a national mastitis control program was started in 2005 to improve udder health in the Netherlands. The program started with founding the Dutch Udder Health Centre (UGCN),

  13. IMPROVING MANAGEMENT ACCOUNTING AND COST CALCULATION IN DAIRY INDUSTRY USING STANDARD COST METHOD

    Directory of Open Access Journals (Sweden)

    Bogdănoiu Cristiana-Luminiţa

    2013-04-01

This paper aims to discuss issues related to the improvement of management accounting in the dairy industry by implementing the standard cost method. The methods used today do not provide managers with the information they need to conduct production activities effectively, which is why we turned to the standard cost method, which responds to managers' need to achieve production efficiency in all economic entities. The method allows operative control of how manpower and material resources are consumed, by tracking deviations distinctly, permanently and completely during the activity rather than at the end of the reporting period. Successful implementation of the standard cost method depends on the accuracy with which the standards are developed; it consistently promotes the anticipated calculation of production costs as well as the determination, tracking and control of deviations from them, and it leads to increased practical value of accounting information and to business improvement.

  14. Educational strategies aimed at improving student nurse's medication calculation skills: a review of the research literature.

    Science.gov (United States)

    Stolic, Snezana

    2014-09-01

Medication administration is an important and essential nursing function with the potential for dangerous consequences if errors occur. Not only must nurses understand the use and outcomes of administering medications, they must also be able to calculate correct dosages. Medication administration and dosage calculation education occurs across the undergraduate program for student nurses. Research highlights inconsistencies in the approaches used by academics to enhance student nurses' medication calculation abilities. The aim of this integrative review was to examine the literature available on effective education strategies for undergraduate student nurses on medication dosage calculations. A literature search of five health care databases (Sciencedirect, Cinahl, Pubmed, Proquest, Medline) was conducted to identify journal articles published between 1990 and 2012. Research articles on medication calculation educational strategies were considered for inclusion in this review. The search yielded 266 papers, of which 20 met the inclusion criteria. A total of 5206 student nurses were included in the final review. The review revealed that the educational strategies fell into four types: traditional pedagogy, technology, psychomotor skills and blended learning. The results suggested student nurses showed some benefit from the different strategies; however, more improvement could be made. More rigorous research into this area is needed. Copyright © 2014 Elsevier Ltd. All rights reserved.

  15. Preliminary results on food consumption rates for off-site dose calculation of nuclear power plants

    International Nuclear Information System (INIS)

    Lee, Gab Bock; Chung, Yang Geun; Bang, Sun Young; Kang, Duk Won

    2005-01-01

The internal dose from food consumption accounts for most of the radiological dose to the public around nuclear power plants (NPPs). However, the food consumption rates applied to off-site dose calculations in Korea, which are the result of a field investigation around the Kori NPP carried out by KAERI in 1988, do not reflect the latest dietary characteristics. The Ministry of Health and Welfare Affairs has investigated the nation's food and nutrition every 3 years based on the Law of National Health Improvement. To update the food consumption rates of the maximum individual, an analysis of the national food investigation results and field surveys around nuclear power plant sites has been carried out.

  16. Improved response function calculations for scintillation detectors using an extended version of the MCNP code

    CERN Document Server

    Schweda, K

    2002-01-01

The analysis of (e,e'n) experiments at the Darmstadt superconducting electron linear accelerator S-DALINAC required the calculation of neutron response functions for the NE213 liquid scintillation detectors used. In an open geometry, these response functions can be obtained using the Monte Carlo codes NRESP7 and NEFF7. However, for more complex geometries, an extended version of the Monte Carlo code MCNP exists. This extended version of the MCNP code was improved upon by adding individual light-output functions for charged particles. In addition, more than one volume can be defined as a scintillator, thus allowing the simultaneous calculation of the response for multiple detector setups. With the implementation of ¹²C(n,n'3α) reactions, all relevant reactions for neutron energies E_n < 20 MeV are now taken into consideration. The results of these calculations were compared to experimental data using monoenergetic neutrons in an open geometry and a ²⁵²Cf neutron source in th...

  17. Plasma fractionation, a useful means to improve national transfusion system and blood safety: Iran experience.

    Science.gov (United States)

    Cheraghali, A M; Abolghasemi, H

    2009-03-01

    In 1974, the government of Iran established Iranian Blood Transfusion Organization (IBTO) as national and centralized transfusion system. Since then donations of blood may not be remunerated and therapy with blood and its components are free of charges for all Iranian patients. Donations are meticulously screened through interviewing donors and lab testing the donations using serological methods. Currently, Iranian donors donate 1735 00 units of blood annually (donation index: 25/1000 population). Implementation of a highly efficient donor selection programme, including donors interview, establishment of confidential unit exclusion programme and laboratory screening of donated bloods by IBTO have led to seroprevalence rates of 0.41%, 0.12% and 0.004% for HBV, HCV and HIV in donated bloods respectively. Since 2004, IBTO has initiated a programme to enter into a contract fractionation agreement for the surplus of recovered plasma produced in its blood collecting centres. Although IBTO has used this project as a mean to improve national transfusion system through upgrading its quality assurance systems, IBTO fractionation project has played a major role in improving availability of plasma-derived medicines in Iran. During 2006-2007, this project furnished the Iran market with 44% and 14% of its needs to the intravenous immunoglobulin and albumin, respectively. Iranian experience showed that contract fractionation of plasma in countries with organized centralized transfusion system, which lack national plasma fractionation facility, in addition to substantial saving on national health resource and enhancing availability of plasma-derived medicines, could serve as a useful means to improve national blood safety profile.

  18. The Prognosis of Political Stability of the Russian Federation on the Basis of Calculation of the Index of National External Economic Stability

    Directory of Open Access Journals (Sweden)

    Владимир Геннадьевич Иванов

    2012-12-01

The article develops ideas presented in the previous issue of the bulletin. On the basis of the methodology proposed by V.G. Ivanov for calculating the index of national external economic stability, a short- to mid-term forecast of the stability of the Russian political regime is prepared. Taking into account the specific development of the Russian Federation, a methodology for calculating the deflator of this index is also worked out.

  19. EPA Sets Schedule to Improve Visibility in the Nation's Most Treasured Natural Areas

    Science.gov (United States)

    EPA issued a schedule to act on more than 40 state pollution reduction plans that will improve visibility in national parks and wilderness areas and protect public health from the damaging effects of the pollutants that cause regional haze.

  20. Short-term variations in core surface flow resolved from an improved method of calculating observatory monthly means

    DEFF Research Database (Denmark)

    Olsen, Nils; Whaler, K. A.; Finlay, Chris

    2014-01-01

    Monthly means of the magnetic field measurements taken by ground observatories are a useful data source for studying temporal changes of the core magnetic field and the underlying core flow. However, the usual way of calculating monthly means as the arithmetic mean of all days (geomagnetic quiet...... as well as disturbed) and all local times (day and night) may result in contributions from external (magnetospheric and ionospheric) origin in the (ordinary, omm) monthly means. Such contamination makes monthly means less favourable for core studies. We calculated revised monthly means (rmm......), and their uncertainties, from observatory hourly means using robust means and after removal of external field predictions, using an improved method for characterising the magnetospheric ring current. The utility of the new method for calculating observatory monthly means is demonstrated by inverting their first...

  1. Improved reliability, accuracy and quality in automated NMR structure calculation with ARIA

    Energy Technology Data Exchange (ETDEWEB)

    Mareuil, Fabien [Institut Pasteur, Cellule d' Informatique pour la Biologie (France); Malliavin, Thérèse E.; Nilges, Michael; Bardiaux, Benjamin, E-mail: bardiaux@pasteur.fr [Institut Pasteur, Unité de Bioinformatique Structurale, CNRS UMR 3528 (France)

    2015-08-15

In biological NMR, assignment of NOE cross-peaks and calculation of atomic conformations are critical steps in the determination of reliable high-resolution structures. ARIA is an automated approach that performs NOE assignment and structure calculation in a concomitant manner in an iterative procedure. The log-harmonic shape for distance restraint potential and the Bayesian weighting of distance restraints, recently introduced in ARIA, were shown to significantly improve the quality and the accuracy of determined structures. In this paper, we propose two modifications of the ARIA protocol: (1) the softening of the force field together with adapted hydrogen radii, which is meaningful in the context of the log-harmonic potential with Bayesian weighting, and (2) a procedure that automatically adjusts the violation tolerance used in the selection of active restraints, based on the fitting of the structure to the input data sets. The new ARIA protocols were fine-tuned on a set of eight protein targets from the CASD–NMR initiative. As a result, the convergence problems previously observed for some targets were resolved and the obtained structures exhibited better quality. In addition, the new ARIA protocols were applied for the structure calculation of ten new CASD–NMR targets in a blind fashion, i.e. without knowing the actual solution. Even though optimisation of parameters and pre-filtering of unrefined NOE peak lists were necessary for half of the targets, ARIA consistently and reliably determined very precise and highly accurate structures for all cases. In the context of integrative structural biology, an increasing number of experimental methods are used that produce distance data for the determination of 3D structures of macromolecules, stressing the importance of methods that successfully make use of ambiguous and noisy distance data.

  2. Which postoperative complications matter most after bariatric surgery? Prioritizing quality improvement efforts to improve national outcomes.

    Science.gov (United States)

    Daigle, Christopher R; Brethauer, Stacy A; Tu, Chao; Petrick, Anthony T; Morton, John M; Schauer, Philip R; Aminian, Ali

    2018-01-12

National quality programs have been implemented to decrease the burden of adverse events on key outcomes in bariatric surgery. However, it is not well understood which complications have the most impact on patient health. To quantify the impact of specific bariatric surgery complications on key clinical outcomes. The Metabolic and Bariatric Surgery Accreditation and Quality Improvement Program (MBSAQIP) database. Data from patients who underwent primary bariatric procedures were retrieved from the MBSAQIP 2015 participant use file. The impact of 8 specific complications (bleeding, venous thromboembolism [VTE], leak, wound infection, pneumonia, urinary tract infection, myocardial infarction, and stroke) on 5 main 30-day outcomes (end-organ dysfunction, reoperation, intensive care unit admission, readmission, and mortality) was estimated using risk-adjusted population attributable fractions. The population attributable fraction is a calculated measure taking into account the prevalence and severity of each complication. The population attributable fraction represents the percentage reduction in a given outcome that would occur if that complication were eliminated. In total, 135,413 patients undergoing sleeve gastrectomy (67%), Roux-en-Y gastric bypass (29%), adjustable gastric banding (3%), and duodenal switch (1%) were included. The most common complications were bleeding (.7%), wound infection (.5%), urinary tract infection (.3%), VTE (.3%), and leak (.2%). Bleeding and leak were the largest contributors to 3 of 5 examined outcomes. VTE had the greatest effect on readmission and mortality. This study quantifies the impact of specific complications on key surgical outcomes after bariatric surgery. Bleeding and leak were the complications with the largest overall effect on end-organ dysfunction, reoperation, and intensive care unit admission after bariatric surgery. Furthermore, our findings suggest that an initiative targeting reduction of post-bariatric surgery
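
    For readers unfamiliar with the metric, a population attributable fraction for a binary complication can be computed with Levin's formula from the complication prevalence and the relative risk of the outcome among affected patients. The sketch below uses placeholder numbers and ignores the risk adjustment applied in the study.

        # Levin's formula: PAF = p*(RR - 1) / (1 + p*(RR - 1)), where p is the
        # complication prevalence and RR the relative risk of the adverse outcome.
        # The numbers are hypothetical, not MBSAQIP results.
        def paf(prevalence, relative_risk):
            return prevalence * (relative_risk - 1.0) / (1.0 + prevalence * (relative_risk - 1.0))

        p_bleeding, rr_reoperation = 0.007, 25.0     # hypothetical values
        print(f"PAF of bleeding for reoperation ~ {100 * paf(p_bleeding, rr_reoperation):.1f}%")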

  3. Atmospheric mercury footprints of nations.

    Science.gov (United States)

    Liang, Sai; Wang, Yafei; Cinnirella, Sergio; Pirrone, Nicola

    2015-03-17

    The Minamata Convention was established to protect humans and the natural environment from the adverse effects of mercury emissions. A cogent assessment of mercury emissions is required to help implement the Minamata Convention. Here, we use an environmentally extended multi-regional input-output model to calculate atmospheric mercury footprints of nations based on upstream production (meaning direct emissions from the production activities of a nation), downstream production (meaning both direct and indirect emissions caused by the production activities of a nation), and consumption (meaning both direct and indirect emissions caused by final consumption of goods and services in a nation). Results show that nations function differently within global supply chains. Developed nations usually have larger consumption-based emissions than up- and downstream production-based emissions. India, South Korea, and Taiwan have larger downstream production-based emissions than their upstream production- and consumption-based emissions. Developed nations (e.g., United States, Japan, and Germany) are in part responsible for mercury emissions of developing nations (e.g., China, India, and Indonesia). Our findings indicate that global mercury abatement should focus on multiple stages of global supply chains. We propose three initiatives for global mercury abatement, comprising the establishment of mercury control technologies of upstream producers, productivity improvement of downstream producers, and behavior optimization of final consumers.
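
    The footprint accounting described above rests on an environmentally extended input-output calculation of the form e (I - A)^-1 y. A minimal sketch with a made-up two-region, two-sector economy is given below, purely to show the mechanics; none of the numbers are real data.

        import numpy as np

        # Consumption-based emission footprint: F = e * (I - A)^-1 * y, with A the
        # technical coefficient matrix, y final demand and e emission intensities.
        A = np.array([[0.10, 0.05, 0.02, 0.01],
                      [0.04, 0.15, 0.03, 0.02],
                      [0.02, 0.01, 0.12, 0.06],
                      [0.01, 0.03, 0.05, 0.10]])      # technical coefficients (made up)
        y = np.array([[80.0, 10.0],                   # final demand by consuming region
                      [60.0,  5.0],
                      [15.0, 90.0],
                      [ 8.0, 70.0]])                  # columns: region 1, region 2
        e = np.array([0.5, 2.0, 1.5, 0.8])            # kg Hg per unit output (hypothetical)

        L = np.linalg.inv(np.eye(4) - A)              # Leontief inverse
        footprint = e @ L @ y                         # emissions attributed to each region's consumption
        production = e * (L @ y.sum(axis=1))          # production-based emissions by sector
        print("consumption footprints by region:", np.round(footprint, 1))
        print("production-based emissions by sector:", np.round(production, 1))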

  4. [The national Dutch Institute for Healthcare Improvement guidelines 'Preoperative trajectory': the essentials].

    Science.gov (United States)

    Wolff, André P; Boermeester, Marja; Janssen, Ingrid; Pols, Margreet; Damen, Johan

    2010-01-01

    In view of the shortcomings of the organisation of the perioperative process that have been ascertained by the Dutch Health Inspectorate (IGZ), the Inspectorate has requested hospitals and care professionals to implement measures to improve this situation. In response to the IGZ's first report, the Dutch Institute for Healthcare Improvement (CBO) has developed the national, multiprofessional guidelines entitled 'Preoperative Trajectory' which were published in January 2010. Implementation of these guidelines should improve communication between professionals and lead to standardization and transparency of the preoperative patient care process, with uniform handovers and clear responsibilities. These guidelines are the first to provide recommendations at process of care level which are intended to increase patient safety and reduce the risk of damage to patients.

  5. Time improvement of photoelectric effect calculation for absorbed dose estimation

    International Nuclear Information System (INIS)

    Massa, J M; Wainschenker, R S; Doorn, J H; Caselli, E E

    2007-01-01

Ionizing radiation therapy is a very useful tool in cancer treatment. It is very important to determine the absorbed dose in human tissue to accomplish an effective treatment. A mathematical model based on affected areas is the most suitable tool to estimate the absorbed dose. Lately, Monte Carlo based techniques have become the most reliable, but they are computationally expensive. Absorbed dose calculation programs using different strategies have to choose between estimation quality and calculation time. This paper describes an optimized method for calculating the photoelectron polar angle in the photoelectric effect, which is significant for estimating the deposited energy in human tissue. In the case studies, the time cost reduction nearly reached 86%, meaning that the time needed to do the calculation is approximately one seventh of that of the non-optimized approach. This has been done keeping precision invariant.

  6. A Case Study of Non-Functional Requirements and Continuous Improvement at a National Communications System Contractor

    Science.gov (United States)

    Douglas, Volney L. R.

    2010-01-01

    National communications systems (NCS) are critical elements of a government's infrastructure. Limited improvements to the non-functional requirements (NFR) of NCS have caused issues during national emergencies such as 9/11 and Hurricane Katrina. The literature indicates that these issues result from a deficiency in understanding the roles NFRs and…

  7. Shields calculations for teletherapy equipment. Regulatory approach of the National Center of Nuclear Safety

    International Nuclear Information System (INIS)

    Fuente P, A. de la; Dumenigo G, C.; Quevedo G, J.R.; Lopez F, Y.

    2006-01-01

The evaluation of applications for construction licenses for new radiotherapy services has occupied a significant space in the activity developed by the National Center of Nuclear Safety (CNSN) in the last 2 years. This work presents the authors' experience in evaluating the shielding required for rooms housing cobalt therapy units and linear accelerators for medical use. The practical problems detected during the application of the methodologies recommended in both cases are addressed, and the assumptions accepted by the Regulatory Authority for these shielding calculations are discussed. The accumulated experience allows us to assure that the realistic application of the input data and the rational use of engineering judgment make it possible to design rooms for radiotherapy equipment that fulfill the dose constraints established in the legislation in force in Cuba, without implying an excessive expense of construction materials. (Author)

  8. Improved Ground Hydrology Calculations for Global Climate Models (GCMs): Soil Water Movement and Evapotranspiration.

    Science.gov (United States)

    Abramopoulos, F.; Rosenzweig, C.; Choudhury, B.

    1988-09-01

A physically based ground hydrology model is developed to improve the land-surface sensible and latent heat calculations in global climate models (GCMs). The processes of transpiration, evaporation from intercepted precipitation and dew, evaporation from bare soil, infiltration, soil water flow, and runoff are explicitly included in the model. The amount of detail in the hydrologic calculations is restricted to a level appropriate for use in a GCM, but each of the aforementioned processes is modeled on the basis of the underlying physical principles. Data from the Goddard Institute for Space Studies (GISS) GCM are used as inputs for off-line tests of the ground hydrology model in four 8° × 10° regions (Brazil, Sahel, Sahara, and India). Soil and vegetation input parameters are calculated as area-weighted means over the 8° × 10° gridbox. This compositing procedure is tested by comparing resulting hydrological quantities to ground hydrology model calculations performed on the 1° × 1° cells which comprise the 8° × 10° gridbox. Results show that the compositing procedure works well except in the Sahel, where lower soil water levels and a heterogeneous land surface produce more variability in hydrological quantities, indicating that a resolution better than 8° × 10° is needed for that region. Modeled annual and diurnal hydrological cycles compare well with observations for Brazil, where real world data are available. The sensitivity of the ground hydrology model to several of its input parameters was tested; it was found to be most sensitive to the fraction of land covered by vegetation and least sensitive to the soil hydraulic conductivity and matric potential.

  9. New national emission inventory for navigation in Denmark

    Science.gov (United States)

    Winther, Morten

    This article explains the new emission inventory for navigation in Denmark, covering national sea transport, fisheries and international sea transport. For national sea transport, the new Danish inventory distinguishes between regional ferries, local ferries and other national sea transport. Detailed traffic and technical data lie behind the fleet activity-based fuel consumption and emission calculations for regional ferries. For local ferries and other national sea transport, the new inventory is partly fleet activity based; fuel consumption estimates are calculated for single years, and full fuel consumption coverage is established in a time series by means of appropriate assumptions. For fisheries and international sea transport, the new inventory remains fuel based, using fuel sales data from the Danish Energy Authority (DEA). The new Danish inventory uses specific fuel consumption (sfc) and NOx emission factors as a function of engine type and production year. These factors, which are used directly for regional ferries and, for the remaining navigation categories, are derived by means of appropriate assumptions, serve as a major inventory improvement, necessary for making proper emission trend assessments. International sea transport is the most important fuel consumption and emission source for navigation, and the contributions are large even compared with the overall Danish totals. If the contributions from international sea transport were included in the Danish all-sector totals, the extra contributions in 2005 from fuel consumption (and CO2), NOx and SO2 would be 5%, 34% and 167%, respectively. The 1990-2005 changes in fuel consumption as well as NOx and SO2 emissions for national sea transport (-45, -45, -81), fisheries (-18, 6, -18) and international sea transport (-14, 1, -14) reflect changes in fleet activity/fuel consumption and emission factors. The 2006-2020 emission forecasts demonstrate a need for stricter fuel quality and NOx emission
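
    As an illustration of the fleet activity-based part of such an inventory, fuel consumption and emissions can be built up from vessel-level activity data, specific fuel consumption (sfc) and NOx emission factors; all numbers and field names below are hypothetical placeholders, not values from the Danish inventory:

      # Minimal sketch of an activity-based fuel/NOx calculation for a small ferry fleet.
      ferries = [
          # hours at sea per year, average engine load (kW), sfc (g fuel/kWh), NOx factor (g/kWh)
          {"name": "regional_ferry_A", "hours": 6000, "load_kw": 3000, "sfc": 210, "nox_ef": 12.0},
          {"name": "local_ferry_B",    "hours": 3500, "load_kw":  800, "sfc": 220, "nox_ef": 13.5},
      ]

      def annual_fuel_and_nox(ferry):
          energy_kwh = ferry["hours"] * ferry["load_kw"]
          fuel_t = energy_kwh * ferry["sfc"] / 1e6   # grams -> tonnes
          nox_t = energy_kwh * ferry["nox_ef"] / 1e6
          return fuel_t, nox_t

      total_fuel = total_nox = 0.0
      for ferry in ferries:
          fuel_t, nox_t = annual_fuel_and_nox(ferry)
          total_fuel += fuel_t
          total_nox += nox_t
          print(f"{ferry['name']}: fuel {fuel_t:.0f} t, NOx {nox_t:.1f} t")
      print(f"fleet total: fuel {total_fuel:.0f} t, NOx {total_nox:.1f} t")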

  10. Using Neural Networks to Improve the Performance of Radiative Transfer Modeling Used for Geometry Dependent LER Calculations

    Science.gov (United States)

    Fasnacht, Z.; Qin, W.; Haffner, D. P.; Loyola, D. G.; Joiner, J.; Krotkov, N. A.; Vasilkov, A. P.; Spurr, R. J. D.

    2017-12-01

    In order to estimate surface reflectance used in trace gas retrieval algorithms, radiative transfer models (RTM) such as the Vector Linearized Discrete Ordinate Radiative Transfer Model (VLIDORT) can be used to simulate the top of the atmosphere (TOA) radiances with advanced models of surface properties. With large volumes of satellite data, these model simulations can become computationally expensive. Look up table interpolation can improve the computational cost of the calculations, but the non-linear nature of the radiances requires a dense node structure if interpolation errors are to be minimized. In order to reduce our computational effort and improve the performance of look-up tables, neural networks can be trained to predict these radiances. We investigate the impact of using look-up table interpolation versus a neural network trained using the smart sampling technique, and show that neural networks can speed up calculations and reduce errors while using significantly less memory and RTM calls. In future work we will implement a neural network in operational processing to meet growing demands for reflectance modeling in support of high spatial resolution satellite missions.
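
    A minimal sketch of the surrogate idea, replacing dense look-up-table interpolation with a small neural network trained on RTM outputs; the training data here are synthetic stand-ins, and the network size and inputs (e.g., solar/viewing angles, surface pressure) are illustrative assumptions rather than the configuration used by the authors:

      import numpy as np
      from sklearn.neural_network import MLPRegressor
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(1)

      # Stand-in for RTM-generated training data: inputs are (solar zenith, view zenith,
      # relative azimuth, surface pressure); target is a smooth, non-linear "radiance".
      X = rng.uniform([0, 0, 0, 700], [80, 70, 180, 1050], size=(20000, 4))

      def fake_rtm(x):
          sza, vza, raa, p = x.T
          return (np.cos(np.radians(sza)) * (1 + 0.1 * np.cos(np.radians(raa)))
                  * (p / 1013.0) ** 0.3 / (1 + 0.01 * vza))

      y = fake_rtm(X)

      scaler = StandardScaler().fit(X)
      net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
      net.fit(scaler.transform(X), y)

      X_test = rng.uniform([0, 0, 0, 700], [80, 70, 180, 1050], size=(1000, 4))
      pred = net.predict(scaler.transform(X_test))
      print("RMS error vs. reference:", np.sqrt(np.mean((pred - fake_rtm(X_test)) ** 2)))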

  11. Emergy Algebra: Improving Matrix Methods for Calculating Tranformities

    Science.gov (United States)

    Transformity is one of the core concepts in Energy Systems Theory and it is fundamental to the calculation of emergy. Accurate evaluation of transformities and other emergy per unit values is essential for the broad acceptance, application and further development of emergy method...

  12. Using Neural Networks to Improve the Performance of Radiative Transfer Modeling Used for Geometry Dependent Surface Lambertian-Equivalent Reflectivity Calculations

    Science.gov (United States)

    Fasnacht, Zachary; Qin, Wenhan; Haffner, David P.; Loyola, Diego; Joiner, Joanna; Krotkov, Nickolay; Vasilkov, Alexander; Spurr, Robert

    2017-01-01

    Surface Lambertian-equivalent reflectivity (LER) is important for trace gas retrievals in the direct calculation of cloud fractions and indirect calculation of the air mass factor. Current trace gas retrievals use climatological surface LERs. Surface properties that impact the bidirectional reflectance distribution function (BRDF), as well as varying satellite viewing geometry, can be important for retrieval of trace gases. Geometry Dependent LER (GLER) captures these effects with its calculation of sun-normalized radiances (I/F) and can be used in current LER algorithms (Vasilkov et al. 2016). Pixel-by-pixel radiative transfer calculations are computationally expensive for large datasets. Modern satellite missions such as the Tropospheric Monitoring Instrument (TROPOMI) produce very large datasets as they take measurements at much higher spatial and spectral resolutions. Look-up table (LUT) interpolation improves the speed of radiative transfer calculations but complexity increases for non-linear functions. Neural networks perform fast calculations and can accurately predict both non-linear and linear functions with little effort.

  13. Spin-off strategies for the improvement of the performance national nuclear R and D project

    International Nuclear Information System (INIS)

    Lee, T. J.; Kim, H. J.; Jung, H. S.; Yang, M. H.; Choi, Y. M.

    1998-01-01

    In light of the strategic utilization of national R and D projects, this paper derives spin-off strategies to improve national R and D effectiveness by analyzing the spin-off characteristics of nuclear technologies, the spin-off status of the advanced countries and a case study of Korean nuclear spin-offs. The spin-off process is viewed as a three-stage operation comprising a preparation stage, an implementation stage and a maintenance stage. In order to find the correlation between the influencing factors and spin-off effectiveness, Spearman's correlation coefficient was employed as the statistical technique. By integrating these correlations, the spin-off process and the spin-off strategies, this paper presents an efficient framework for improving spin-off effectiveness
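
    The correlation step mentioned above can be reproduced with standard tools; the factor scores and effectiveness ratings below are hypothetical survey-style data, used only to show how Spearman's rank correlation would be computed:

      from scipy.stats import spearmanr

      # Hypothetical ordinal ratings (1-5) for one influencing factor across 10 spin-off cases,
      # paired with an ordinal rating of spin-off effectiveness for the same cases.
      factor_scores = [3, 4, 2, 5, 1, 4, 3, 5, 2, 4]
      effectiveness = [2, 4, 2, 5, 1, 3, 3, 4, 1, 5]

      rho, p_value = spearmanr(factor_scores, effectiveness)
      print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")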

  14. AUTOMATION OF CALCULATION ALGORITHMS FOR EFFICIENCY ESTIMATION OF TRANSPORT INFRASTRUCTURE DEVELOPMENT

    Directory of Open Access Journals (Sweden)

    Sergey Kharitonov

    2015-06-01

    Full Text Available Optimum transport infrastructure usage is an important aspect of the development of the national economy of the Russian Federation. Thus, the development of instruments for assessing the efficiency of infrastructure is impossible without constant monitoring of a number of significant indicators. This work is devoted to the selection of indicators, and the method of their calculation, for the transport subsystem of airport infrastructure. The work also considers how algorithmic computational mechanisms can improve the tools available for public administration of transport subsystems.

  15. Harnessing the Power of Collaborative Relationships to Improve National Preparedness and Responsiveness

    Science.gov (United States)

    2011-09-01

    Journal of Quality Improvement, 24: 518–540. McLaughlin, D. and LeCompte, M. (1993). Ethnography and qualitative design in educational research (2nd... METHODOLOGY: This thesis uses a combination of ethnographic research methods and qualitative data analysis, including participant observation and... Ethnographic Research Center: http://www.nps.gov/history/ethnography/aah/AAheritage/ERCf.htm NGB. (2011, March 7). National Guard News. Retrieved July

  16. Summary and recommendations of a National Cancer Institute workshop on issues limiting the clinical use of Monte Carlo dose calculation algorithms for megavoltage external beam radiation therapy

    International Nuclear Information System (INIS)

    Fraass, Benedick A.; Smathers, James; Deye, James

    2003-01-01

    Due to the significant interest in Monte Carlo dose calculations for external beam megavoltage radiation therapy from both the research and commercial communities, a workshop was held in October 2001 to assess the status of this computational method with regard to use for clinical treatment planning. The Radiation Research Program of the National Cancer Institute, in conjunction with the Nuclear Data and Analysis Group at the Oak Ridge National Laboratory, gathered a group of experts in clinical radiation therapy treatment planning and Monte Carlo dose calculations, and examined issues involved in clinical implementation of Monte Carlo dose calculation methods in clinical radiotherapy. The workshop examined the current status of Monte Carlo algorithms, the rationale for using Monte Carlo, algorithmic concerns, clinical issues, and verification methodologies. Based on these discussions, the workshop developed recommendations for future NCI-funded research and development efforts. This paper briefly summarizes the issues presented at the workshop and the recommendations developed by the group

  17. Improvement in decay ratio calculation in LAPUR5 methodology for BWR instability

    International Nuclear Information System (INIS)

    Li Hsuannien; Yang Tzungshiue; Shih Chunkuan; Wang Jongrong; Lin Haotzu

    2009-01-01

    LAPUR5, based on a frequency domain approach, is a computer code that analyzes core stability and calculates decay ratios (DRs) of boiling water nuclear reactors. In the current methodology, one set of parameters (three friction multipliers and one density reactivity coefficient multiplier) is chosen for the LAPUR5 input files, LAPURX and LAPURW. The calculation stops, and the DR for this particular set of parameters is obtained, when the convergence criteria (pressure, mass flow rate) are first met. However, there are other sets of parameters which could also meet the same convergence criteria without being identified. In order to cover these ranges of parameters, we developed an improved procedure to calculate the DR in LAPUR5. First, we define the ranges and increments of the dominant input parameters in the input files for a DR loop search. After LAPUR5 program execution, we can obtain the DRs for every set of parameters which satisfies the convergence criteria in one single operation. The loop search procedure covers the steps of preparing the LAPURX and LAPURW input files. As a demonstration, we looked into the reload design of Kuosheng Unit 2 Cycle 22. We found that the global DR has a maximum at an exposure of 9070 MWd/t and the regional DR has a maximum at an exposure of 5770 MWd/t. It should be noted that the regional DR turns out to be larger than the global one for exposures less than 5770 MWd/t. Furthermore, we see that either the global or the regional DR obtained by the loop search method is greater than the corresponding value from our previous approach. It is concluded that the loop search method can reduce human error and save human labor as compared with the previous version of the LAPUR5 methodology. The maximum DR can now be effectively obtained for given plant operating conditions, and a more precise stability boundary, with less uncertainty, can be plotted on the plant power/flow map. (author)
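
    The loop search can be pictured as a brute-force scan over the four multipliers, keeping every converged case and reporting the maximum decay ratio. In the sketch below, run_lapur5 is a hypothetical stand-in for one code execution (writing the LAPURX/LAPURW inputs, running LAPUR5 and parsing the DR); its synthetic return value and the parameter ranges are illustrative only:

      import itertools

      def run_lapur5(f1, f2, f3, drc):
          """Hypothetical stand-in for one LAPUR5 execution: in practice this would write the
          LAPURX/LAPURW input files, run the code and parse the decay ratio from the output.
          Here it returns a synthetic, smooth decay ratio so the sketch is self-contained."""
          dr = 0.4 + 0.1 * f1 + 0.05 * f2 - 0.03 * f3 + 0.2 * (drc - 1.0)
          converged = True
          return converged, dr

      # Ranges and increments for the three friction multipliers and the
      # density reactivity coefficient multiplier (illustrative values).
      friction_1 = [0.8, 0.9, 1.0, 1.1, 1.2]
      friction_2 = [0.8, 0.9, 1.0, 1.1, 1.2]
      friction_3 = [0.8, 0.9, 1.0, 1.1, 1.2]
      density_mult = [0.9, 1.0, 1.1]

      results = []
      for f1, f2, f3, drc in itertools.product(friction_1, friction_2, friction_3, density_mult):
          converged, dr = run_lapur5(f1, f2, f3, drc)
          if converged:
              results.append(((f1, f2, f3, drc), dr))

      params, max_dr = max(results, key=lambda item: item[1])
      print("maximum decay ratio", round(max_dr, 3), "at", params)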

  18. First principle calculations for improving desorption temperature in ...

    Indian Academy of Sciences (India)

    MS received 26 June 2013; revised 25 December 2013. Abstract: Using ab initio calculations, we predict ...

  19. Improvements to the National Transport Code Collaboration Data Server

    Science.gov (United States)

    Alexander, David A.

    2001-10-01

    The data server of the National Transport Code Collaboration Project provides a universal network interface to interpolated or raw transport data accessible by a universal set of names. Data can be acquired from a local copy of the International Multi-Tokamak (ITER) profile database as well as from TRANSP trees of MDSplus data systems on the net. Data is provided to the user's network client via a CORBA interface, thus providing stateful data server instances, which have the advantage of remembering the desired interpolation, data set, etc. This paper will review the status and discuss the recent improvements made to the data server, such as the modularization of the data server and the addition of HDF5 and MDSplus data file writing capability.

  20. Calculate Your Body Mass Index

    Science.gov (United States)

    Body mass index (BMI) is a measure of body fat based ... (National Heart, Lung, and Blood Institute, NHLBI)
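
    The record itself is truncated; the underlying arithmetic is simply weight divided by height squared (kg/m²), which a short sketch makes explicit:

      def bmi(weight_kg: float, height_m: float) -> float:
          """Body mass index in kg/m^2."""
          return weight_kg / height_m ** 2

      # Example: 70 kg, 1.75 m
      print(f"BMI = {bmi(70, 1.75):.1f}")  # 22.9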

  1. Experimentally Determining β-Decay Intensities for 103,104Nb to Improve R-process Calculations

    Science.gov (United States)

    Gombas, J.; Deyoung, P. D.; Spyrou, A.; Dombos, A. C.; Lyons, S.; SuN Collaboration

    2017-09-01

    The rapid neutron capture process (r-process) is responsible for the formation of nuclei heavier than iron. This process is theorized to occur in supernovas and/or neutron star mergers. R-process calculations require accurate knowledge of a significant number of nuclear properties, the majority of which are not known experimentally. Nuclear masses, β-decay properties and neutron-capture reactions are all input ingredients of r-process models. The present study focuses on the β decay of 103Nb and 104Nb, two nuclei found in the r-process, which was observed at the NSCL using the Summing NaI (SuN) detector. An unstable beam was implanted inside SuN, and the γ rays were measured in coincidence with the emitted electrons. The β-decay intensity function was then extracted. The experimentally determined functions for 103Nb and 104Nb will be compared to predictions made by the quasiparticle random phase approximation (QRPA) model. These theoretical calculations are used in astrophysical models of the r-process. This comparison will lead to a better understanding of the nuclear structure of 103Nb and 104Nb, so that a more dependable prediction of the formation of heavier nuclei birthed from supernovas or neutron star mergers can be made. This material is based upon work supported by the National Science Foundation under Grant No. PHY-1613188 and PHY-1306074, and by the Hope College Department of Physics Guess Research Fund.

  2. Point kinetics improvements to evaluate three-dimensional effects in transients calculation

    International Nuclear Information System (INIS)

    Castellotti, U.

    1987-01-01

    A calculation method is described which accounts for axial perturbations of the flux in the reactivity-related parameters of a point kinetics model. The method uses axial weighting factors that act on the thermohydraulic variables included in the reactivity calculation. The three-dimensional PUMA code is used as the reference model for the comparisons. The limitations inherent in the reactivity balance of point models used in transient calculations are discussed. (Author)
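
    For reference, the standard point kinetics equations that such a model builds on (one effective neutron population n and delayed-neutron precursor groups C_i) are given below; the axial weighting of the thermohydraulic feedback described above enters through the reactivity term ρ(t), whose detailed form is not specified in the abstract:

      \frac{dn}{dt} = \frac{\rho(t) - \beta}{\Lambda}\, n(t) + \sum_{i} \lambda_i C_i(t),
      \qquad
      \frac{dC_i}{dt} = \frac{\beta_i}{\Lambda}\, n(t) - \lambda_i C_i(t),
      \qquad
      \beta = \sum_i \beta_i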

  3. Improving the calculated core stability by the core nuclear design optimization

    International Nuclear Information System (INIS)

    Partanen, P.

    1995-01-01

    Three different equilibrium core loadings for the TVO II reactor have been generated in order to improve the core stability properties at an uprated power level. The reactor thermal power is assumed to be uprated from 2160 MWth to 2500 MWth, which moves the operating point after a rapid pump rundown, for which the core stability has been calculated, from 1340 MWth and 3200 kg/s to 1675 MWth and 4000 kg/s. The core has been refuelled with ABB Atom Svea-100 fuel, which has 3.64% w/o U-235 average enrichment in the highly enriched zone. The PHOENIX lattice code has been used to provide the homogenized nuclear constants. The POLCA4 static core simulator has been used for core loadings and cycle simulations, and the RAMONA-3B program for simulating the dynamic response to the disturbance for which the stability behaviour has been evaluated. The core decay ratio has been successfully reduced from 0.83 to 0.55, mainly by reducing the power peaking factors. (orig.) (7 figs., 1 tab.)

  4. Iterative metal artifact reduction improves dose calculation accuracy. Phantom study with dental implants

    Energy Technology Data Exchange (ETDEWEB)

    Maerz, Manuel; Mittermair, Pia; Koelbl, Oliver; Dobler, Barbara [Regensburg University Medical Center, Department of Radiotherapy, Regensburg (Germany); Krauss, Andreas [Siemens Healthcare GmbH, Forchheim (Germany)

    2016-06-15

    Metallic dental implants cause severe streaking artifacts in computed tomography (CT) data, which affect the accuracy of dose calculations in radiation therapy. The aim of this study was to investigate the benefit of the iterative metal artifact reduction algorithm (iMAR) in terms of correct representation of Hounsfield units (HU) and dose calculation accuracy. Heterogeneous phantoms consisting of different types of tissue equivalent material surrounding metallic dental implants were designed. Artifact-containing CT data of the phantoms were corrected using iMAR. Corrected and uncorrected CT data were compared to synthetic CT data to evaluate the accuracy of HU reproduction. Intensity-modulated radiation therapy (IMRT) and volumetric modulated arc therapy (VMAT) plans were calculated in Oncentra v4.3 on corrected and uncorrected CT data and compared to Gafchromic EBT3 films to assess the accuracy of dose calculation. The use of iMAR increased the accuracy of HU reproduction. The average deviation of HU decreased from 1006 HU to 408 HU in areas including metal and from 283 HU to 33 HU in tissue areas excluding metal. Dose calculation accuracy could be significantly improved for all phantoms and plans: the mean passing rate for gamma evaluation with 3% dose tolerance and 3 mm distance to agreement increased from 90.6% to 96.2% if artifacts were corrected by iMAR. The application of iMAR allows metal artifacts to be removed to a great extent, which leads to a significant increase in dose calculation accuracy. (orig.) [German] Metallic implants cause streak artifacts in CT images, which affect dose calculation. In this study, the benefit of the iterative metal artifact reduction algorithm iMAR is investigated with regard to the fidelity of Hounsfield units (HU) and the accuracy of dose calculations. Heterogeneous phantoms made of different types of tissue-equivalent material with

  5. Study to improve the precision of calculation of split renal clearance by gamma camera method using 99mTc-MAG3

    International Nuclear Information System (INIS)

    Mimura, Hiroaki; Tomomitsu, Tatsushi; Yanagimoto, Shinichi

    1999-01-01

    Both fundamental and clinical studies were performed to improve the precision with which split renal clearance is calculated from the relation between renal clearance and the total renal uptake rate by using 99mTc-MAG3, which is mainly excreted into the proximal renal tubules. In the fundamental study, the most suitable kidney phantom threshold values for the extracted renal outline were investigated with regard to size, radioactivity, depth of the kidney phantom, and radioactivity in the background. In the clinical study, suitable timing to obtain additional images for making the ROI and the standard point for calculation of the renal uptake rate were investigated. The results indicated that, although suitable threshold values were distributed from 25% to 45%, differences in size, solution activity, and the position of the phantom or background activity did not have significant effects. Comparing 1-3 min with 2-5 min as the time for additional images for the ROI, we found that renal areas using the former time showed higher values, and the correlation coefficient of the regression formula improved significantly. Comparison of the timing of the start of data acquisition with the end of the arterial phase as the standard point for calculating the renal uptake rate showed improvement in the latter. (author)

  6. Secondary Data Analysis of National Surveys in Japan Toward Improving Population Health

    Science.gov (United States)

    Ikeda, Nayu

    2016-01-01

    Secondary data analysis of national health surveys of the general population is a standard methodology for health metrics and evaluation; it is used to monitor trends in population health over time and benchmark the performance of health systems. In Japan, the government has established electronic databases of individual records from national surveys of the population’s health. However, the number of publications based on these datasets is small considering the scale and coverage of the surveys. There appear to be two major obstacles to the secondary use of Japanese national health survey data: strict data access control under the Statistics Act and an inadequate interdisciplinary research environment for resolving methodological difficulties encountered when dealing with secondary data. The usefulness of secondary analysis of survey data is evident with examples from the author’s previous studies based on vital records and the National Health and Nutrition Surveys, which showed that (i) tobacco smoking and high blood pressure are the major risk factors for adult mortality from non-communicable diseases in Japan; (ii) the decrease in mean blood pressure in Japan from the late 1980s to the early 2000s was partly attributable to the increased use of antihypertensive medication and reduced dietary salt intake; and (iii) progress in treatment coverage and control of high blood pressure is slower in Japan than in the United States and Britain. National health surveys in Japan are an invaluable asset, and findings from secondary analyses of these surveys would provide important suggestions for improving health in people around the world. PMID:26902170

  7. Can the CDCC calculation be improved?

    Science.gov (United States)

    Rawitscher, George; Koltracht, Israel

    2005-04-01

    The Continuum Discretized Coupled Channels method of including breakup effects in the calculation of nuclear reactions, when applied to unstable nuclei, requires the inclusion of a large number of coupled channels, and the numerical computational effort increases correspondingly. The computing time with traditional finite difference techniques [1] scales with the cube of the number of channels N. The scaling with a new spectral integral method (SIEM) [2] of solving coupled equations is likewise N^3. However, the structure of the matrices that occur in the numerical algorithm of the SIEM is different from that of the finite difference methods, and lends itself well to iterative solutions, reducing the numerical complexity to N^ 2 times the number of required iterations. Various iterative schemes will be considered, and their convergence properties will be examined. [1] I. J. Thompson, code FRESCO, Comp. Phys. Rep. 7, 167 (1988);[2] R. A. Gonzales, S. -Y. Kang, I. Koltracht and G. Rawitscher, J. of Comput. Phys. 153, 160 (1999).

  8. A Proposal of Estimation Methodology to Improve Calculation Efficiency of Sampling-based Method in Nuclear Data Sensitivity and Uncertainty Analysis

    International Nuclear Information System (INIS)

    Song, Myung Sub; Kim, Song Hyun; Kim, Jong Kyung; Noh, Jae Man

    2014-01-01

    The uncertainty with the sampling-based method is evaluated by repeating transport calculations with a number of cross section data sets sampled from the covariance uncertainty data. In the transport calculation with the sampling-based method, the transport equation is not modified; therefore, all uncertainties of the responses, such as keff, reaction rates, flux and power distribution, can be obtained directly, all at one time, without code modification. However, a major drawback of the sampling-based method is that it requires an expensive computational load to obtain statistically reliable results (within a 0.95 confidence level) in the uncertainty analysis. The purpose of this study is to develop a method for improving the computational efficiency and obtaining highly reliable uncertainty results when using the sampling-based method with Monte Carlo simulation. The proposed method reduces the convergence time of the response uncertainty by using multiple sets of sampled group cross sections in a single Monte Carlo simulation. The proposed method was verified by estimating the GODIVA benchmark problem, and the results were compared with those of the conventional sampling-based method. In this study, a sampling-based method based on the central limit theorem is proposed to improve calculation efficiency by reducing the number of repetitive Monte Carlo transport calculations required to obtain reliable uncertainty analysis results. Each set of sampled group cross sections is assigned to one active cycle group in a single Monte Carlo simulation. The criticality uncertainty for the GODIVA problem is evaluated by the proposed and the previous method. The results show that the proposed sampling-based method can efficiently decrease the number of Monte Carlo simulations required to evaluate the uncertainty of keff. It is expected that the proposed method will improve the computational efficiency of uncertainty analysis with the sampling-based method
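
    The underlying idea, sampling cross sections from their covariance data and propagating them through repeated transport calculations, can be sketched as follows; the covariance matrix and the transport_keff response below are synthetic stand-ins, not ENDF covariances or an actual Monte Carlo transport run:

      import numpy as np

      rng = np.random.default_rng(2)

      # Synthetic stand-ins: mean group cross sections and their covariance matrix.
      mean_xs = np.array([1.90, 0.052, 0.071])      # arbitrary illustrative group data
      cov = np.diag([0.02, 0.003, 0.004]) ** 2      # uncorrelated 1-sigma uncertainties

      def transport_keff(xs):
          """Stand-in for a transport calculation returning k_eff for one sampled set."""
          nu_f, capture, scatter = xs
          return nu_f / (nu_f * 0.5 + capture * 10.0 + scatter)

      # Conventional sampling-based approach: one transport calculation per sampled set.
      samples = rng.multivariate_normal(mean_xs, cov, size=500)
      keff = np.array([transport_keff(s) for s in samples])
      print(f"k_eff = {keff.mean():.5f} +/- {keff.std(ddof=1):.5f}")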

  9. IMRT: Improvement in treatment planning efficiency using NTCP calculation independent of the dose-volume-histogram

    International Nuclear Information System (INIS)

    Grigorov, Grigor N.; Chow, James C.L.; Grigorov, Lenko; Jiang, Runqing; Barnett, Rob B.

    2006-01-01

    The normal tissue complication probability (NTCP) is a predictor of radiobiological effect for organs at risk (OAR). The calculation of the NTCP is based on the dose-volume histogram (DVH), which is generated by the treatment planning system after calculation of the 3D dose distribution. Including the NTCP in the objective function for intensity modulated radiation therapy (IMRT) plan optimization would make the planning more effective in reducing post-radiation effects; however, doing so would lengthen the total planning time. The purpose of this work is to establish a method for NTCP determination, independent of a DVH calculation, as a quality assurance check and also as a means of improving treatment planning efficiency. In the study, the CTs of ten randomly selected prostate patients were used. IMRT optimization was performed with a PINNACLE3 V6.2b planning system, using planning target volumes (PTV) with margins in the range of 2 to 10 mm. The DVH control points of the PTV and OAR were adapted from the prescriptions of Radiation Therapy Oncology Group protocol P-0126 for an escalated prescribed dose of 82 Gy. This paper presents a new model for the determination of the rectal NTCP (R_NTCP). The method uses a special function, named GVN (from Gy, Volume, NTCP), which describes the R_NTCP if 1 cm3 of the volume of intersection of the PTV and rectum (R_int) is irradiated uniformly by a dose of 1 Gy. The function was 'geometrically' normalized using a prostate-prostate ratio (PPR) of the patients' prostates. A correction of the R_NTCP for different prescribed doses, ranging from 70 to 82 Gy, was employed in our model. The argument of the normalized function is R_int, and the parameters are the prescribed dose, prostate volume, PTV margin, and PPR. The R_NTCPs of another group of patients were calculated by the new method and the resulting difference was <±5% in comparison to the NTCP calculated by the PINNACLE3 software, where Kutcher's dose
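
    The GVN parameterization itself is not given in full in this record; for context, the DVH-based NTCP formulation it is benchmarked against (the Lyman model with the Kutcher-Burman DVH reduction referenced at the end of the abstract) is commonly written as:

      \mathrm{NTCP} = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{t} e^{-x^{2}/2}\, dx,
      \qquad
      t = \frac{D_{\mathrm{eff}} - TD_{50}}{m\, TD_{50}},
      \qquad
      D_{\mathrm{eff}} = \Bigl(\sum_{i} v_i\, D_i^{1/n}\Bigr)^{n}

    where v_i is the fractional organ volume receiving dose D_i, and TD_50, m and n are organ-specific model parameters.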

  10. IMPLEMENTATION OF SERIOUS GAMES INSPIRED BY BALURAN NATIONAL PARK TO IMPROVE STUDENTS’ CRITICAL THINKING ABILITY

    Directory of Open Access Journals (Sweden)

    P. D. A. Putra

    2016-04-01

    Full Text Available The purpose of this study is to implement a Baluran National Park-based serious game to enhance the students' creative thinking skill and motivation to learn. The subjects of the study were 60 students of SMP Negeri 1 Asembagus, Situbondo regency. The sample was divided into three groups: two groups were chosen as experimental classes and the other group as the control class. Both experimental groups were given treatment using the serious game based on Baluran National Park. The instruments used were an observation sheet, a pre-test, and a post-test. The Baluran National Park-based serious game was effective in improving the students' creative thinking skill and motivation to learn science subjects.

  11. Improvement of the bubble rise velocity model in the pressurizer using ALMOD 3 computer code to calculate evaporation

    International Nuclear Information System (INIS)

    Madeira, A.A.

    1985-01-01

    The improvement of the bubble rise velocity calculation is studied by adding two different ways to estimate this velocity, one of which is more adequate for the pressures normally found in the reactor cooling system. Additionally, a limitation on the growth of the bubble rise velocity was imposed, to account for the actual behaviour of bubble rise in two-phase mixtures. (Author)

  12. Work plan for improving the DARWIN2.3 depleted material balance calculation of nuclides of interest for the fuel cycle

    Directory of Open Access Journals (Sweden)

    Rizzo Axel

    2017-01-01

    Full Text Available DARWIN2.3 is the reference package used for fuel cycle applications in France. It solves the Boltzmann and Bateman equations in a coupling way, with the European JEFF-3.1.1 nuclear data library, to compute the fuel cycle values of interest. It includes both deterministic transport codes APOLLO2 (for light water reactors) and ERANOS2 (for fast reactors), and the DARWIN/PEPIN2 depletion code, each of them being developed by CEA/DEN with the support of its industrial partners. The DARWIN2.3 package has been experimentally validated for pressurized and boiling water reactors, as well as for sodium fast reactors; this experimental validation relies on the analysis of post-irradiation experiments (PIE). The DARWIN2.3 experimental validation work points out some isotopes for which the depleted concentration calculation can be improved. Some other nuclides have no available experimental validation, and their concentration calculation uncertainty is provided by the propagation of a priori nuclear data uncertainties. This paper describes the work plan of studies initiated this year to improve the accuracy of the DARWIN2.3 depleted material balance calculation concerning some nuclides of interest for the fuel cycle.

  13. Work plan for improving the DARWIN2.3 depleted material balance calculation of nuclides of interest for the fuel cycle

    Science.gov (United States)

    Rizzo, Axel; Vaglio-Gaudard, Claire; Martin, Julie-Fiona; Noguère, Gilles; Eschbach, Romain

    2017-09-01

    DARWIN2.3 is the reference package used for fuel cycle applications in France. It solves the Boltzmann and Bateman equations in a coupling way, with the European JEFF-3.1.1 nuclear data library, to compute the fuel cycle values of interest. It includes both deterministic transport codes APOLLO2 (for light water reactors) and ERANOS2 (for fast reactors), and the DARWIN/PEPIN2 depletion code, each of them being developed by CEA/DEN with the support of its industrial partners. The DARWIN2.3 package has been experimentally validated for pressurized and boiling water reactors, as well as for sodium fast reactors; this experimental validation relies on the analysis of post-irradiation experiments (PIE). The DARWIN2.3 experimental validation work points out some isotopes for which the depleted concentration calculation can be improved. Some other nuclides have no available experimental validation, and their concentration calculation uncertainty is provided by the propagation of a priori nuclear data uncertainties. This paper describes the work plan of studies initiated this year to improve the accuracy of the DARWIN2.3 depleted material balance calculation concerning some nuclides of interest for the fuel cycle.

  14. Probabilistic Requirements (Partial) Verification Methods Best Practices Improvement. Variables Acceptance Sampling Calculators: Empirical Testing. Volume 2

    Science.gov (United States)

    Johnson, Kenneth L.; White, K. Preston, Jr.

    2012-01-01

    The NASA Engineering and Safety Center was requested to improve on the Best Practices document produced for the NESC assessment, Verification of Probabilistic Requirements for the Constellation Program, by giving a recommended procedure for using acceptance sampling by variables techniques as an alternative to the potentially resource-intensive acceptance sampling by attributes method given in the document. In this paper, the results of empirical tests intended to assess the accuracy of acceptance sampling plan calculators implemented for six variable distributions are presented.

  15. Global nuclear-structure calculations

    International Nuclear Information System (INIS)

    Moeller, P.; Nix, J.R.

    1990-01-01

    The revival of interest in nuclear ground-state octupole deformations that occurred in the 1980s was stimulated by observations in 1980 of particularly large deviations between calculated and experimental masses in the Ra region, in a global calculation of nuclear ground-state masses. By minimizing the total potential energy with respect to octupole shape degrees of freedom in addition to the ε2 and ε4 used originally, a vastly improved agreement between calculated and experimental masses was obtained. To study the global behavior of and interrelationships between other nuclear properties, we calculate nuclear ground-state masses, spins, pairing gaps and β-decay half-lives and compare the results to experimental quantities. The calculations are based on the macroscopic-microscopic approach, with the microscopic contributions calculated in a folded-Yukawa single-particle potential

  16. Complex calculation and improvement of beam shaping and accelerating system of the ''Sokol'' small-size electrostatic accelerator

    International Nuclear Information System (INIS)

    Simonenko, A.V.; Pistryak, V.M.; Zats, A.V.; Levchenko, Yu.Z.; Kuz'menko, V.V.

    1987-01-01

    The shaping of the accelerated charged-particle beam in the electrostatic part of the 'Sokol' small-size accelerator is considered as a whole, taking into account the real geometry of the electrodes. The effect of the extracting and accelerating electrode potentials and of the total accelerator voltage on the beam behaviour is investigated. A modified variant of the beam shaping system is presented, which allows the required range of accelerating electrode potential adjustment to be reduced by a factor of two and the beam size in the initial acceleration region to be decreased. This makes it possible to simplify the construction and to improve accelerator operation. Experimental and calculated data on the beam in the improved accelerator variant are compared. The effect of the peripheral parts of the accelerating tube electrodes on the beam is investigated

  17. Improved and consistent determination of the nuclear inventory of spent PWR-fuel on the basis of time-dependent cell-calculations with KORIGEN

    International Nuclear Information System (INIS)

    Fischer, U.; Wiese, H.W.

    1983-01-01

    For safe handling, processing and storage of spent nuclear fuel, a reliable, experimentally validated method is needed to determine fuel and waste characteristics: composition, radioactivity, heat and radiation. For PWRs, a cell-burnup procedure has been developed which is able to calculate the inventory consistently with cell geometry, initial enrichment, and reactor control. Routine calculations can be performed with KORIGEN using consistent cross-section sets - burnup-dependent and based on the latest Karlsruhe evaluations for actinides - which were calculated previously with the cell-burnup procedure. Extensive comparisons between calculations and experiments validate the presented procedure. For the use of the KORIGEN code, the input description and sample problems are added. Improvements in the calculational method and in data are described; results from KORIGEN, ORIGEN and ORIGEN2 calculations are compared. Fuel and waste inventories are given for BIBLIS-type fuel of different burnup. (orig.) [de]

  18. RA-0 reactor. New neutronic calculations

    International Nuclear Information System (INIS)

    Rumis, D.; Leszczynski, F.

    1990-01-01

    An update of the neutronic calculations performed for the RA-0 reactor, located at the Natural, Physical and Exact Sciences Faculty of Cordoba National University, is described herein. The techniques used for the calculation of a reactor like the RA-0 allow detailed prediction of the flux behaviour in the core interior and in the reflector, which will be helpful for experiment design. In particular, the use of the WIMSD4 code for calculations on this reactor is a novelty among the possible applications of this code to solving the problems that arise in practice. (Author) [es]

  19. Development of My Footprint Calculator

    Science.gov (United States)

    Mummidisetti, Karthik

    The environmental footprint is a very powerful tool that helps an individual to understand how their everyday activities impact their environmental surroundings. Data show that global climate change, which is a growing concern for nations all over the world, is already affecting humankind, plants and animals through rising ocean levels, droughts and desertification, and changing weather patterns. In addition to the wide range of policy measures implemented by national and state governments, it is necessary for individuals to understand the impact that their lifestyle may have on their personal environmental footprint, and thus on global climate change. "My Footprint Calculator" (myfootprintcalculator.com) has been designed to be one of the simplest, yet comprehensive, web tools to help individuals calculate and understand their personal environmental impact. "My Footprint Calculator" is a website that queries users about their everyday habits and activities and calculates their personal impact on the environment. This website was re-designed to help users determine their environmental impact in various aspects of their lives, ranging from transportation and recycling habits to water and energy usage, with the addition of new features that allow users to share their experiences and best practices with other users interested in reducing their personal environmental footprint. The collected data are stored in a database, and a future goal of this work is to analyze the collected data from all users (anonymously) to develop relevant trends and statistics.

  20. Estimating the Number of Heterosexual Persons in the United States to Calculate National Rates of HIV Infection.

    Directory of Open Access Journals (Sweden)

    Amy Lansky

    Full Text Available This study estimated the proportions and numbers of heterosexuals in the United States (U.S.) to calculate rates of heterosexually acquired human immunodeficiency virus (HIV) infection. Quantifying the burden of disease can inform effective prevention planning and resource allocation. Heterosexuals were defined as males and females who ever had sex with an opposite-sex partner, excluding those with other HIV risks: persons who ever injected drugs and males who ever had sex with another man. We conducted a meta-analysis using data from 3 national probability surveys that measured lifetime (ever) sexual activity and injection drug use among persons aged 15 years and older to estimate the proportion of heterosexuals in the United States population. We then applied the proportion of heterosexual persons to census data to produce population size estimates. National HIV infection rates among heterosexuals were calculated using surveillance data (cases attributable to heterosexual contact) in the numerators and the heterosexual population size estimates in the denominators. Adult and adolescent heterosexuals comprised an estimated 86.7% (95% confidence interval: 84.1%-89.3%) of the U.S. population. The estimate for males was 84.1% (95% CI: 81.2%-86.9%) and for females was 89.4% (95% CI: 86.9%-91.8%). The HIV diagnosis rate for 2013 was 5.2 per 100,000 heterosexuals and the rate of persons living with diagnosed HIV infection in 2012 was 104 per 100,000 heterosexuals aged 13 years or older. Rates of HIV infection were >20 times as high among black heterosexuals compared to white heterosexuals, indicating considerable disparity. Rates among heterosexual men demonstrated higher disparities than overall population rates for men. The best available data must be used to guide decision-making for HIV prevention. HIV rates among heterosexuals in the U.S. are important additions to cost effectiveness and other data used to make critical decisions about resources for
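
    The rate construction described above is a straightforward ratio of surveillance case counts to the estimated heterosexual population; the counts below are hypothetical placeholders used only to show the arithmetic (the published denominators come from applying the estimated heterosexual proportion to census data):

      def rate_per_100k(cases: int, population: float) -> float:
          return cases / population * 1e5

      # Hypothetical illustration of the denominator construction and rate calculation.
      census_population_13plus = 260_000_000      # placeholder census count, ages 13+
      heterosexual_proportion = 0.867             # proportion estimated from the surveys
      heterosexual_population = census_population_13plus * heterosexual_proportion

      new_diagnoses = 11_700                      # placeholder surveillance count
      print(f"diagnosis rate: {rate_per_100k(new_diagnoses, heterosexual_population):.1f} per 100,000")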

  1. Teacher Improvement Projects in Guinea: Lessons Learned from Taking a Program to National Scale.

    Science.gov (United States)

    Schwille, John; Dembele, Martial; Diallo, Alpha Mahmoudou

    2001-01-01

    Highlights lessons learned from a small, grant-funded teacher improvement project in Guinea that went nationwide, including: it is possible to make such a system work on a national scale in a resource-scarce country; effective initial and continued training is critical for all participants; it is difficult to provide close-to-school assistance…

  2. Method for consequence calculations for severe accidents

    International Nuclear Information System (INIS)

    Nielsen, F.

    1987-01-01

    With the exception of the part about collective doses, this report was commissioned by the Swedish State Power Board. The part about collective doses was commissioned by the Swedish National Institute of Radiation Protection. The report contains a calculation of radiation doses in the surroundings caused by a theoretical core meltdown accident at one of the Barsebaeck reactors with filtered venting through the FILTRA plant. The calculations were made by means of the PLUCON4 code. The assumptions used for the calculations were given by the Swedish National Institute of Radiation Protection as follows: Pasquill D with wind speed 3 m/s and a mixing layer at 300 m height. Elevation of the release: 100 m with no energy release. The release starts 12 hours after shutdown and its duration is one hour. The release contains 100% of the noble gases and 0.1% of all other isotopes in a 1800 MWt reactor. (author)

  3. A New Thermodynamic Calculation Method for Binary Alloys: Part I: Statistical Calculation of Excess Functions

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    An improved form of the calculation formula for the activities of the components in binary liquid and solid alloys has been derived, based on the free volume theory considering excess entropy and on Miedema's model for calculating the formation heat of binary alloys. A calculation method for the excess thermodynamic functions of binary alloys, and formulas for the integral molar excess properties and partial molar excess properties of ordered or disordered solid binary alloys, have been developed. The calculated results are in good agreement with the experimental values.

  4. Methods for reactor physics calculations for control rods in fast reactors

    International Nuclear Information System (INIS)

    Grimstone, M.J.; Rowlands, J.L.

    1988-12-01

    The IAEA Specialists' Meeting on ''Methods for Reactor Physics Calculations for Control Rods in Fast Reactors'' was held in Winfrith, United Kingdom, on 6-8 December, 1988. The meeting was attended by 23 participants from nine countries. The purpose of the meeting was to review the current calculational methods and their accuracy as assessed by theoretical studies and comparisons with measurements, and then to identify the requirements for improved methods or additional studies and comparisons. The control rod properties or effects to be considered were their reactivity worths, their effect on the power distribution through the core, and the reaction rates and energy deposition both within and adjacent to the rods. The meeting was divided into five sessions, in the first of which each national delegation presented a brief overview of their programme of work on calculational methods for fast reactor control rods. In the next three sessions a total of seventeen papers were presented describing calculational methods and assessments of their accuracy. The final session was a discussion to draw conclusions regarding the current status of methods and the further developments and validation work required. A separate abstract was prepared for each of the 23 papers presented at the meeting. Refs, figs and tabs

  5. Accurate alpha sticking fractions from improved calculations relevant for muon catalyzed fusion

    International Nuclear Information System (INIS)

    Szalewicz, K.

    1990-05-01

    Recent experiments have shown that under proper conditions a single muon may catalyze almost two hundred fusions in its lifetime. This process proceeds through the formation of muonic molecular ions. Properties of these ions are central to the understanding of the phenomenon. Our work included the most accurate calculations of the energy levels and Coulombic sticking fractions for tdμ and other muonic molecular ions, calculations of Auger transition rates, calculations of corrections to the energy levels due to interactions with the host molecule, and calculation of the reactivation of muons from α particles. The majority of our effort has been devoted to the theory and computation of the influence of the strong nuclear forces on fusion rates and sticking fractions. We have calculated fusion rates for tdμ including the effects of nuclear forces on the molecular wave functions. We have also shown that these results can be reproduced to almost four-digit accuracy by using a very simple quasifactorizable expression which does not require modifications of the molecular wave functions. Our sticking fractions are more accurate than any other theoretical values. We have used a more sophisticated theory than any other work and our numerical calculations have converged to at least three significant digits

  6. Sepsis in general surgery: the 2005-2007 national surgical quality improvement program perspective.

    Science.gov (United States)

    Moore, Laura J; Moore, Frederick A; Todd, S Rob; Jones, Stephen L; Turner, Krista L; Bass, Barbara L

    2010-07-01

    To document the incidence, mortality rate, and risk factors for sepsis and septic shock compared with pulmonary embolism and myocardial infarction in the general-surgery population. Retrospective review. American College of Surgeons National Surgical Quality Improvement Program institutions. General-surgery patients in the 2005-2007 National Surgical Quality Improvement Program data set. Incidence, mortality rate, and risk factors for sepsis and septic shock. Of 363 897 general-surgery patients, sepsis occurred in 8350 (2.3%), septic shock in 5977 (1.6%), pulmonary embolism in 1078 (0.3%), and myocardial infarction in 615 (0.2%). Thirty-day mortality rates for each of the groups were as follows: 5.4% for sepsis, 33.7% for septic shock, 9.1% for pulmonary embolism, and 32.0% for myocardial infarction. The septic-shock group had a greater percentage of patients older than 60 years (no sepsis, 40.2%; sepsis, 51.7%; and septic shock, 70.3%). Emergency surgery resulted in more cases of sepsis (4.5%) and septic shock (4.9%) than did elective surgery (sepsis, 2.0%; septic shock, 1.2%). Risk factors for sepsis and septic shock included age older than 60 years, emergency surgery, and the presence of any comorbidity. This study emphasizes the need for early recognition of patients at risk via aggressive screening and the rapid implementation of evidence-based guidelines.

  7. Subcritical calculation of the nuclear material warehouse

    International Nuclear Information System (INIS)

    Garcia M, T.; Mazon R, R.

    2009-01-01

    In this work, the subcritical calculation of the nuclear material warehouse in the labyrinth of the TRIGA Mark III reactor at the Mexico Nuclear Center is presented. During the adaptation of the nuclear warehouse (vault I), the fuel was temporarily moved to the warehouse (vault II), and the subcritical calculation was also carried out for this temporary arrangement. The code used for the calculation of the effective multiplication factor was the Monte Carlo N-Particle Extended (MCNPX) particle transport code, developed by the Los Alamos National Laboratory. (Author)

  8. Time trends, improvements and national auditing of rectal cancer management over an 18-year period.

    Science.gov (United States)

    Kodeda, K; Johansson, R; Zar, N; Birgisson, H; Dahlberg, M; Skullman, S; Lindmark, G; Glimelius, B; Påhlman, L; Martling, A

    2015-09-01

    The main aims were to explore time trends in the management and outcome of patients with rectal cancer in a national cohort and to evaluate the possible impact of national auditing on overall outcomes. A secondary aim was to provide population-based data for appraisal of external validity in selected patient series. Data from the Swedish ColoRectal Cancer Registry with virtually complete national coverage were utilized in this cohort study on 29 925 patients with rectal cancer diagnosed between 1995 and 2012. Of eligible patients, nine were excluded. During the study period, overall, relative and disease-free survival increased. Postoperative mortality after 30 and 90 days decreased to 1.7% and 2.9%. The 5-year local recurrence rate dropped to 5.0%. Resection margins improved, as did peri-operative blood loss despite more multivisceral resections being performed. Fewer patients underwent palliative resection and the proportion of non-operated patients increased. The proportions of temporary and permanent stoma formation increased. Preoperative radiotherapy and chemoradiotherapy became more common as did multidisciplinary team conferences. Variability in rectal cancer management between healthcare regions diminished over time when new aspects of patient care were audited. There have been substantial changes over time in the management of patients with rectal cancer, reflected in improved outcome. Much indirect evidence indicates that auditing matters, but without a control group it is not possible to draw firm conclusions regarding the possible impact of a quality control registry on faster shifts in time trends, decreased variability and improvements. Registry data were made available for reference. Colorectal Disease © 2015 The Association of Coloproctology of Great Britain and Ireland.

  9. The role of the National Metrology Laboratory in the improvement of individual monitoring in Brazil

    International Nuclear Information System (INIS)

    Barbosa, R.A.; Baptista, L.A.M.M.; Silva, T.A. da

    1998-01-01

    Since 1980, the National Laboratory for Ionising Radiation Metrology (LNMRI/IRD) of Brazil has given support to, and performed quality control tests on, the services that provide individual monitoring of external radiation in the country. Although the LNMRI/IRD has promoted intercomparisons and performance tests for all Brazilian individual monitoring systems, the results showed that improvements in their quality were too small, mainly due to the lack of a national policy and of legal requirements for their quality control. In 1996, the Committee for Evaluation of External Individual Monitoring Services established a national policy for the accreditation of individual monitoring services. Under the new policy, the role of the LNMRI/IRD is mainly to verify the compliance of any individual monitoring system with the minimum accuracy requirements for photon dose equivalent evaluation. Additionally, the LNMRI/IRD may perform any specific type test to verify the results stated by the service itself. A new quality control program for all accredited services is also to be maintained by the LNMRI/IRD. This work presents and discusses the results of the role of the LNMRI/IRD under the old and the new accreditation policies for the systems used for the individual monitoring of photon beams

  10. Improvement of fire-tube boilers calculation methods by the numerical modeling of combustion processes and heat transfer in the combustion chamber

    Science.gov (United States)

    Komarov, I. I.; Rostova, D. M.; Vegera, A. N.

    2017-11-01

    This paper presents the results of a study determining the degree and nature of the influence of burner unit operating conditions and flame geometric parameters on heat transfer in the combustion chamber of fire-tube boilers. Changes in the values of the outlet gas temperature and of the radiant and convective specific heat flow rates under corresponding variation of the flame expansion angle and flame length were determined using the Ansys CFX software package. The difference between the values of total heat flow and bulk gas temperature at the flue tube outlet calculated using the known thermal calculation methods and those obtained from the mathematical simulation was determined. Based on the results of the study, shortcomings of the calculation methods used were identified and areas for their improvement were outlined.

  11. The National Network of State Perinatal Quality Collaboratives: A Growing Movement to Improve Maternal and Infant Health.

    Science.gov (United States)

    Henderson, Zsakeba T; Ernst, Kelly; Simpson, Kathleen Rice; Berns, Scott; Suchdev, Danielle B; Main, Elliott; McCaffrey, Martin; Lee, Karyn; Rouse, Tara Bristol; Olson, Christine K

    2018-03-01

    State Perinatal Quality Collaboratives (PQCs) are networks of multidisciplinary teams working to improve maternal and infant health outcomes. To address the shared needs across state PQCs and enable collaboration, Centers for Disease Control and Prevention (CDC), in partnership with March of Dimes and perinatal quality improvement experts from across the country, supported the development and launch of the National Network of Perinatal Quality Collaboratives (NNPQC). This process included assessing the status of PQCs in this country and identifying the needs and resources that would be most useful to support PQC development. National representatives from 48 states gathered for the first meeting of the NNPQC to share best practices for making measurable improvements in maternal and infant health. The number of state PQCs has grown considerably over the past decade, with an active PQC or a PQC in development in almost every state. However, PQCs have some common challenges that need to be addressed. After its successful launch, the NNPQC is positioned to ensure that every state PQC has access to key tools and resources that build capacity to actively improve maternal and infant health outcomes and healthcare quality.

  12. Dual-energy imaging method to improve the image quality and the accuracy of dose calculation for cone-beam computed tomography.

    Science.gov (United States)

    Men, Kuo; Dai, Jianrong; Chen, Xinyuan; Li, Minghui; Zhang, Ke; Huang, Peng

    2017-04-01

    To improve the image quality and accuracy of dose calculation for cone-beam computed tomography (CBCT) images through implementation of a dual-energy cone-beam computed tomography (DE-CBCT) method, and to evaluate the improvement quantitatively. Two sets of CBCT projections were acquired using the X-ray volumetric imaging (XVI) system on a Synergy (Elekta, Stockholm, Sweden) system with 120 kV (high) and 70 kV (low) X-rays, respectively. Then, the electron density relative to water (relative electron density, RED) of each voxel was calculated using a projection-based dual-energy decomposition method. As a comparison, single-energy cone-beam computed tomography (SE-CBCT) was used to calculate the RED with the Hounsfield unit-RED calibration curve generated by a CIRS phantom scan with identical imaging parameters. The imaging dose was measured with a dosimetry phantom. The image quality was evaluated quantitatively using a Catphan 503 phantom with the evaluation indices of the reproducibility of the RED values, high-contrast resolution (MTF50%), uniformity, and signal-to-noise ratio (SNR). Dose calculation for two simulated volumetric-modulated arc therapy plans using an Eclipse treatment-planning system (Varian Medical Systems, Palo Alto, CA, USA) was performed on an Alderson Rando Head and Neck (H&N) phantom and a Pelvis phantom. Fan-beam planning CT images of the H&N and Pelvis phantoms were set as the reference. A global three-dimensional gamma analysis was used to compare dose distributions with the reference. The average gamma values for targets and OAR were analyzed with paired t-tests between DE-CBCT and SE-CBCT. In the two scans (H&N scan and body scan), the imaging dose of DE-CBCT increased by 1.0% and decreased by 1.3%. It had better reproducibility of the RED values (mean bias: 0.03 and 0.07) compared with SE-CBCT (mean bias: 0.13 and 0.16). It also improved the image uniformity (57.5% and 30.1%) and SNR (9.7% and 2.3%), but did not affect the MTF50%. Gamma
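
    The gamma evaluation referred to above compares a calculated dose grid with a reference distribution using combined dose-difference and distance-to-agreement criteria. A simplified, grid-limited 2D version (3% global dose difference, 3 mm DTA) is sketched below; it is an illustrative implementation of the general technique, not the software used in the study:

      import numpy as np

      def gamma_passing_rate(ref, eval_, spacing_mm, dose_tol=0.03, dta_mm=3.0, threshold=0.1):
          """Simplified global 2D gamma analysis on equally sampled dose grids.

          ref, eval_  : 2D dose arrays on the same grid
          spacing_mm  : pixel spacing in mm
          dose_tol    : dose-difference criterion as a fraction of the reference maximum
          dta_mm      : distance-to-agreement criterion in mm
          threshold   : ignore reference points below this fraction of the maximum dose
          """
          dose_crit = dose_tol * ref.max()
          search = int(np.ceil(2 * dta_mm / spacing_mm))   # search window in pixels
          ny, nx = ref.shape
          gamma = np.full_like(ref, np.inf, dtype=float)

          for j in range(ny):
              for i in range(nx):
                  if ref[j, i] < threshold * ref.max():
                      continue
                  best = np.inf
                  for dj in range(-search, search + 1):
                      for di in range(-search, search + 1):
                          jj, ii = j + dj, i + di
                          if not (0 <= jj < ny and 0 <= ii < nx):
                              continue
                          dist2 = (dj ** 2 + di ** 2) * spacing_mm ** 2
                          ddose2 = (eval_[jj, ii] - ref[j, i]) ** 2
                          best = min(best, ddose2 / dose_crit ** 2 + dist2 / dta_mm ** 2)
                  gamma[j, i] = np.sqrt(best)

          valid = ref >= threshold * ref.max()
          return 100.0 * np.mean(gamma[valid] <= 1.0)

      # Synthetic example: a smooth dose distribution and a slightly perturbed copy.
      y, x = np.mgrid[0:60, 0:60]
      ref = np.exp(-((x - 30) ** 2 + (y - 30) ** 2) / 400.0)
      eval_ = 1.02 * ref  # 2% global scaling error
      print(f"passing rate: {gamma_passing_rate(ref, eval_, spacing_mm=1.0):.1f}%")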

  13. ON IMPROVEMENT OF METHODOLOGY FOR CALCULATING THE INDICATOR «AVERAGE WAGE»

    Directory of Open Access Journals (Sweden)

    Oksana V. Kuchmaeva

    2015-01-01

    Full Text Available The article describes the approaches to the calculation of the indicator of average wages in Russia with the use of several sources of information. The proposed method is based on data collected by Rosstat and the Pension Fund of the Russian Federation. The proposed approach allows capturing data on the wages of almost all groups of employees. Results of experimental calculations using the developed technique are presented in this article.

  14. An improved algorithm for calculating cloud radiation

    International Nuclear Information System (INIS)

    Yuan Guibin; Sun Xiaogang; Dai Jingmin

    2005-01-01

    Cloud radiation characteristics are very important in cloud scene simulation, weather forecasting, pattern recognition, and other fields. In order to detect missiles against cloud backgrounds and to enhance the fidelity of simulation, it is critical to understand the thermal radiation model of a cloud. Firstly, the definition of cloud layer infrared emittance is given. Secondly, the discrimination conditions for judging whether a pixel of a satellite focal plane is in daytime or nighttime are shown and the corresponding equations are given. Radiance components such as reflected solar radiance, solar scattering, diffuse solar radiance, solar and thermal sky shine, solar and thermal path radiance, cloud blackbody radiance and background radiance are taken into account. Thirdly, the computing methods for background radiance in daytime and nighttime are given. Through simulations and comparison, the algorithm is shown to be an effective method for calculating cloud radiation
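
    As a purely illustrative companion to the abstract, the sketch below combines a commonly used bulk parameterization of cloud-layer infrared emittance with a simple sum of radiance components. The absorption coefficient, liquid water path and component values are invented for the example and do not come from the paper.

    ```python
    # Hedged illustration: bulk cloud emittance eps = 1 - exp(-k * LWP) and a simple
    # scene radiance as cloud emission + transmitted background (+ a daytime solar term).
    import math

    def cloud_emittance(lwp_g_m2, k=0.13):
        """Bulk infrared emittance of a cloud layer from its liquid water path."""
        return 1.0 - math.exp(-k * lwp_g_m2)

    def scene_radiance(eps, cloud_thermal, background, reflected_solar=0.0):
        """Cloud emission plus transmitted background plus an optional solar term."""
        return eps * cloud_thermal + (1.0 - eps) * background + reflected_solar

    eps = cloud_emittance(lwp_g_m2=30.0)
    print("emittance:", round(eps, 3))
    print("daytime radiance:", scene_radiance(eps, cloud_thermal=8.5, background=6.0,
                                              reflected_solar=1.2))
    ```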

  15. 48 CFR 1830.7002-2 - Cost of money calculations.

    Science.gov (United States)

    2010-10-01

    ... 48 Federal Acquisition Regulations System 6 2010-10-01 2010-10-01 true Cost of money calculations. 1830.7002-2 Section 1830.7002-2 Federal Acquisition Regulations System NATIONAL AERONAUTICS AND SPACE... Employed for Facilities in Use and For Facilities Under Construction 1830.7002-2 Cost of money calculations...

  16. [Role of "Health" National project in improvement of health parameters in working population].

    Science.gov (United States)

    Bykovskaia, T Iu

    2011-01-01

    The author analyzed the results of the "Health" National project implementation in the Rostov region over 2006-2009. The findings are that the quality of primary medical care has improved, the material and technical basis of municipal health care institutions has progressed, and the salaries of primary health care specialists have increased. Over this period, infant mortality and mortality among the able-bodied population in the region have decreased, the birth rate has increased, the coefficient of natural population loss has been reduced, and life expectancy has increased.

  17. Are Improvements in Child Health Due to Increasing Status of Women in Developing Nations?

    Science.gov (United States)

    Heaton, Tim B

    2015-01-01

    This research tests the hypothesis that change over time in women's status leads to improvements in their children's health. Specifically, we examine whether change in resources and empowerment in mother's roles as biological mothers, caregivers, and providers and social contexts that promote the rights and representation of and investment in women are associated with better nutritional status and survival of young children. Analysis is based on a broad sample of countries (n = 28), with data at two or more points in time to enable examination of change. Key indicators of child health show improvement in the last 13 years in developing nations. Much of this improvement--90 percent of the increase in nutritional status and 47 percent of the reduction in mortality--is associated with improving status of women. Increased maternal education, control over reproduction, freedom from violence, access to health care, legislation and enforcement of women's rights, greater political representation, equality in the education system, and lower maternal mortality are improving children's health. These results imply that further advancement of women's position in society would be beneficial.

  18. Good Practices in Free-energy Calculations

    Science.gov (United States)

    Pohorille, Andrew; Jarzynski, Christopher; Chipot, Christopher

    2013-01-01

    As access to computational resources continues to increase, free-energy calculations have emerged as a powerful tool that can play a predictive role in drug design. Yet, in a number of instances, the reliability of these calculations can be improved significantly if a number of precepts, or good practices are followed. For the most part, the theory upon which these good practices rely has been known for many years, but often overlooked, or simply ignored. In other cases, the theoretical developments are too recent for their potential to be fully grasped and merged into popular platforms for the computation of free-energy differences. The current best practices for carrying out free-energy calculations will be reviewed demonstrating that, at little to no additional cost, free-energy estimates could be markedly improved and bounded by meaningful error estimates. In energy perturbation and nonequilibrium work methods, monitoring the probability distributions that underlie the transformation between the states of interest, performing the calculation bidirectionally, stratifying the reaction pathway and choosing the most appropriate paradigms and algorithms for transforming between states offer significant gains in both accuracy and precision. In thermodynamic integration and probability distribution (histogramming) methods, properly designed adaptive techniques yield nearly uniform sampling of the relevant degrees of freedom and, by doing so, could markedly improve efficiency and accuracy of free energy calculations without incurring any additional computational expense.
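
    One of the practices discussed above, performing the calculation bidirectionally and monitoring the discrepancy, can be illustrated with a small sketch of exponential-averaging free-energy estimates. The synthetic Gaussian work distributions are assumptions for the example; in practice the work values come from perturbation or nonequilibrium simulations.

    ```python
    # Zwanzig (exponential averaging) estimator applied in both directions, with the
    # forward/backward discrepancy used as a rough consistency check.
    import numpy as np

    def fep_estimate(work, beta=1.0):
        """dF = -kT * ln < exp(-beta * W) >, computed with a stable log-sum-exp."""
        w = -beta * np.asarray(work)
        return -(np.logaddexp.reduce(w) - np.log(len(work))) / beta

    rng = np.random.default_rng(1)
    w_forward = rng.normal(2.0, 1.0, 5000)     # synthetic work values for A -> B
    w_backward = rng.normal(-1.0, 1.0, 5000)   # synthetic work values for B -> A
    dF_fwd = fep_estimate(w_forward)
    dF_bwd = -fep_estimate(w_backward)
    print("forward estimate:", round(dF_fwd, 3), "backward estimate:", round(dF_bwd, 3))
    print("hysteresis (rough error indicator):", round(dF_fwd - dF_bwd, 3))
    ```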

  19. Improvements in the error calculation of the action of a kicked beam

    CERN Document Server

    Sherman, Alexander Charles

    2013-01-01

    This report details a new calculation for the action performed in the optics measurement and correction software. The action of a kicked beam is used to calculate the dynamic aperture and detuning with amplitude. The current method of calculation has a large uncertainty due to the use of all BPMs (including those near interaction points and ones which are malfunctioning) and the model beta function. Instead, only good BPMs are kept and the measured beta function from phase is used, and significant decreases are seen in the relative uncertainty of the action.
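
    A rough sketch of the estimate described in the report: the action is averaged over BPMs flagged as good, using beta functions measured from phase rather than the model. The BPM amplitudes, beta values and the faulty-BPM mask below are invented for the illustration.

    ```python
    # Action of a kicked beam, 2J ~ A^2 / beta, averaged over good BPMs only.
    import numpy as np

    def kicked_beam_action(amplitudes_mm, beta_m, good_mask):
        a = np.asarray(amplitudes_mm)[good_mask] * 1e-3   # mm -> m
        b = np.asarray(beta_m)[good_mask]
        actions = a ** 2 / (2.0 * b)
        return actions.mean(), actions.std(ddof=1) / np.sqrt(len(actions))

    amps = [0.52, 0.48, 3.10, 0.50]        # oscillation amplitudes in mm; the 3rd BPM is faulty
    betas = [110.0, 95.0, 102.0, 120.0]    # measured beta-from-phase values in m
    good = np.array([True, True, False, True])
    mean_J, err_J = kicked_beam_action(amps, betas, good)
    print(f"action = {mean_J:.3e} +/- {err_J:.3e} m rad")
    ```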

  20. A national-scale model of linear features improves predictions of farmland biodiversity.

    Science.gov (United States)

    Sullivan, Martin J P; Pearce-Higgins, James W; Newson, Stuart E; Scholefield, Paul; Brereton, Tom; Oliver, Tom H

    2017-12-01

    Modelling species distribution and abundance is important for many conservation applications, but it is typically performed using relatively coarse-scale environmental variables such as the area of broad land-cover types. Fine-scale environmental data capturing the most biologically relevant variables have the potential to improve these models. For example, field studies have demonstrated the importance of linear features, such as hedgerows, for multiple taxa, but the absence of large-scale datasets of their extent prevents their inclusion in large-scale modelling studies. We assessed whether a novel spatial dataset mapping linear and woody-linear features across the UK improves the performance of abundance models of 18 bird and 24 butterfly species across 3723 and 1547 UK monitoring sites, respectively. Although improvements in explanatory power were small, the inclusion of linear features data significantly improved model predictive performance for many species. For some species, the importance of linear features depended on landscape context, with greater importance in agricultural areas. Synthesis and applications. This study demonstrates that a national-scale model of the extent and distribution of linear features improves predictions of farmland biodiversity. The ability to model spatial variability in the role of linear features such as hedgerows will be important in targeting agri-environment schemes to maximally deliver biodiversity benefits. Although this study focuses on farmland, data on the extent of different linear features are likely to improve species distribution and abundance models in a wide range of systems and also can potentially be used to assess habitat connectivity.

  1. PROSPECTS OF MANAGEMENT ACCOUNTING AND COST CALCULATION

    OpenAIRE

    Marian TAICU

    2014-01-01

    Progress in improving production technology requires appropriate measures to achieve an efficient management of costs. This raises the need for continuous improvement of management accounting and cost calculation. Accounting information in general, and management accounting information in particular, have gained importance in the current economic conditions, which are characterized by risk and uncertainty. The future development of management accounting and cost calculation is essential to meet the information needs of management.

  2. Studies of improved electron confinement in low density L-mode National Spherical Torus Experiment discharges

    International Nuclear Information System (INIS)

    Stutman, D.; Finkenthal, M.; Tritz, K.; Redi, M. H.; Kaye, S. M.; Bell, M. G.; Bell, R. E.; LeBlanc, B. P.; Hill, K. W.; Medley, S. S.; Menard, J. E.; Rewoldt, G.; Wang, W. X.; Synakowski, E. J.; Levinton, F.; Kubota, S.; Bourdelle, C.; Dorland, W.; The NSTX Team

    2006-01-01

    Electron transport is rapid in most National Spherical Torus Experiment [M. Ono et al., Nucl. Fusion 40, 557 (2000)] beam-heated plasmas. A regime of improved electron confinement is nevertheless observed in low density L-mode ("low-confinement") discharges heated by early beam injection. Experiments were performed in this regime to study the role of the current profile on thermal transport. Variations in the magnetic shear profile were produced by changing the current ramp rate and onset of neutral beam heating. An increased electron temperature gradient and local minimum in the electron thermal diffusivity were observed at early times in plasmas with the fastest current ramp and earliest beam injection. In addition, an increased ion temperature gradient associated with a region of reduced ion transport is observed at slightly larger radii. Ultrasoft x-ray measurements of double-tearing magnetohydrodynamic activity, together with current diffusion calculations, point to the existence of negative magnetic shear in the core of these plasmas. Discharges with slower current ramp and delayed beam onset, which are estimated to have more monotonic q-profiles, do not exhibit regions of reduced transport. The results are discussed in the light of the initial linear microstability assessment of these plasmas, which suggests that the growth rate of all instabilities, including microtearing modes, can be reduced by negative or low magnetic shear in the temperature gradient region. Several puzzles arising from the present experiments are also highlighted

  3. Improved initial guess for minimum energy path calculations

    International Nuclear Information System (INIS)

    Smidstrup, Søren; Pedersen, Andreas; Stokbro, Kurt; Jónsson, Hannes

    2014-01-01

    A method is presented for generating a good initial guess of a transition path between given initial and final states of a system without evaluation of the energy. An objective function surface is constructed using an interpolation of pairwise distances at each discretization point along the path and the nudged elastic band method then used to find an optimal path on this image dependent pair potential (IDPP) surface. This provides an initial path for the more computationally intensive calculations of a minimum energy path on an energy surface obtained, for example, by ab initio or density functional theory. The optimal path on the IDPP surface is significantly closer to a minimum energy path than a linear interpolation of the Cartesian coordinates and, therefore, reduces the number of iterations needed to reach convergence and averts divergence in the electronic structure calculations when atoms are brought too close to each other in the initial path. The method is illustrated with three examples: (1) rotation of a methyl group in an ethane molecule, (2) an exchange of atoms in an island on a crystal surface, and (3) an exchange of two Si-atoms in amorphous silicon. In all three cases, the computational effort in finding the minimum energy path with DFT was reduced by a factor ranging from 50% to an order of magnitude by using an IDPP path as the initial path. The time required for parallel computations was reduced even more because of load imbalance when linear interpolation of Cartesian coordinates was used
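
    The core idea, interpolating pairwise distances rather than Cartesian coordinates, can be sketched compactly. The snippet below builds the interpolated target distances and evaluates an IDPP-style objective (with the 1/d^4 weighting of the original formulation) for a single intermediate image; the NEB relaxation of the images on this surface is omitted, and the toy coordinates are invented.

    ```python
    # IDPP-style objective for one intermediate image along a path.
    import numpy as np

    def pair_distances(x):
        """All pairwise distances for an (N, 3) coordinate array."""
        d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
        return d[np.triu_indices(len(x), k=1)]

    def idpp_objective(image, d_init, d_final, frac):
        """Weighted mismatch between the image's distances and the interpolated targets."""
        d_target = (1.0 - frac) * d_init + frac * d_final
        d_image = pair_distances(image)
        return np.sum((d_target - d_image) ** 2 / d_target**4)

    # Toy system: three atoms exchanging positions between two configurations
    x_init = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0], [0.0, 1.5, 0.0]])
    x_final = np.array([[0.0, 0.0, 0.0], [0.0, 1.5, 0.0], [1.5, 0.0, 0.0]])
    d_i, d_f = pair_distances(x_init), pair_distances(x_final)
    midpoint = 0.5 * (x_init + x_final)    # plain linear interpolation in Cartesians
    print("IDPP objective of the linear midpoint:", idpp_objective(midpoint, d_i, d_f, 0.5))
    ```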

  4. Increasing Use of Research Findings in Improving Evidence-Based Health Policy at the National Level

    Directory of Open Access Journals (Sweden)

    Meiwita Budiharsana

    2017-11-01

    Full Text Available In February 2016, the Minister of Health decided to increase the use of research findings in improving the quality of the national health policy and planning. The Ministry of Health has instructed the National Institute of Health Research and Development (NIHRD) to play a stronger role in monitoring and evaluating all health programs, because “their opinion and research findings should be the basis for changes in national health policies and planning”. Compared to the past, the Ministry of Health has increased the budget for evidence-based research tremendously. However, there is a gap between the information needs of program and policy-makers and the information offered by researchers. A close dialogue is needed between the users (program managers, policy makers and planners) and the suppliers (researchers and evaluators) to ensure that the evidence base supplied by research is useful for programs, planning and health policy.

  5. Conjugate calculation of a film-cooled blade for improvement of the leading edge cooling configuration

    Directory of Open Access Journals (Sweden)

    Norbert Moritz

    2013-03-01

    Full Text Available Great efforts are still put into the design process of advanced film-cooling configurations. In particular, the vanes and blades of turbine front stages have to be cooled extensively for safe operation. The conjugate calculation technique is used for the three-dimensional thermal load prediction of a film-cooled test blade of a modern gas turbine. Thus, it becomes possible to take into account the interaction of internal flows, external flow, and heat transfer without the prescription of heat transfer coefficients. The focus of the investigation is on the leading-edge region of the blade. The numerical model consists of all internal flow passages and cooling hole rows at the leading edge. Furthermore, the radial gap flow is also part of the model. The comparison with thermal pyrometer measurements shows that, for regions with high thermal load, qualitatively and quantitatively good agreement between the conjugate results and the measurements is found. In particular, the region in the vicinity of the mid-span section is exposed to a higher thermal load, which requires further improvement of the cooling arrangement. Altogether, the achieved results demonstrate that the conjugate calculation technique is applicable for reasonable prediction of the three-dimensional thermal load of complex cooling configurations for blades.

  6. Modifications of alpha processing software to improve calculation of limits for qualitative detection

    Energy Technology Data Exchange (ETDEWEB)

    Kirkpatrick, J.R.

    1997-01-01

    The work described in this report was done for the Bioassay Counting Laboratory (BCL) of the Center of Excellence for Bioassay of the Analytical Services Organization at the Oak Ridge Y-12 Plant. BCL takes urine and fecal samples and tests for alpha radiation. An automated system, supplied by Canberra Industries, counts the activities in the samples and processes the results. The Canberra system includes hardware and software. The managers of BCL want to improve the accuracy of the results they report to their final customers. The desired improvements are of particular interest to the managers of BCL because the levels of alpha-emitting radionuclides in samples measured at BCL are usually so low that a significant fraction of the measured signal is due to background and to the reagent material used to extract the radioactive nuclides from the samples. Also, the background and reagent signals show a significant level of random variation. The customers at BCL requested four major modifications of the software. The requested software changes have been made and tested. The present report is in two parts. The first part describes what the modifications were supposed to accomplish. The second part describes the changes on a line-by-line basis. The second part includes listings of the changed software and discusses possible steps to correct a particular error condition. Last, the second part describes the effect of truncation errors on the standard deviations calculated from samples whose signals are very nearly the same.

  7. Modifications of alpha processing software to improve calculation of limits for qualitative detection

    International Nuclear Information System (INIS)

    Kirkpatrick, J.R.

    1997-01-01

    The work described in this report was done for the Bioassay Counting Laboratory (BCL) of the Center of Excellence for Bioassay of the Analytical Services Organization at the Oak Ridge Y-12 Plant. BCL takes urine and fecal samples and tests for alpha radiation. An automated system, supplied by Canberra Industries, counts the activities in the samples and processes the results. The Canberra system includes hardware and software. The managers of BCL want to improve the accuracy of the results they report to their final customers. The desired improvements are of particular interest to the managers of BCL because the levels of alpha-emitting radionuclides in samples measured at BCL are usually so low that a significant fraction of the measured signal is due to background and to the reagent material used to extract the radioactive nuclides from the samples. Also, the background and reagent signals show a significant level of random variation. The customers at BCL requested four major modifications of the software. The requested software changes have been made and tested. The present report is in two parts. The first part describes what the modifications were supposed to accomplish. The second part describes the changes on a line-by-line basis. The second part includes listings of the changed software and discusses possible steps to correct a particular error condition. Last, the second part describes the effect of truncation errors on the standard deviations calculated from samples whose signals are very nearly the same

  8. An Improved Computational Method for the Calculation of Mixture Liquid-Vapor Critical Points

    Science.gov (United States)

    Dimitrakopoulos, Panagiotis; Jia, Wenlong; Li, Changjun

    2014-05-01

    Knowledge of critical points is important to determine the phase behavior of a mixture. This work proposes a reliable and accurate method in order to locate the liquid-vapor critical point of a given mixture. The theoretical model is developed from the rigorous definition of critical points, based on the SRK equation of state (SRK EoS) or alternatively, on the PR EoS. In order to solve the resulting system of nonlinear equations, an improved method is introduced into an existing Newton-Raphson algorithm, which can calculate all the variables simultaneously in each iteration step. The improvements mainly focus on the derivatives of the Jacobian matrix, on the convergence criteria, and on the damping coefficient. As a result, all equations and related conditions required for the computation of the scheme are illustrated in this paper. Finally, experimental data for the critical points of 44 mixtures are adopted in order to validate the method. For the SRK EoS, average absolute errors of the predicted critical-pressure and critical-temperature values are 123.82 kPa and 3.11 K, respectively, whereas the commercial software package Calsep PVTSIM's prediction errors are 131.02 kPa and 3.24 K. For the PR EoS, the two above mentioned average absolute errors are 129.32 kPa and 2.45 K, while the PVTSIM's errors are 137.24 kPa and 2.55 K, respectively.
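
    The numerical core described above can be sketched generically: a Newton-Raphson iteration that updates all variables simultaneously, with a damping coefficient on the step and a convergence test on the residual norm. The critical-point equations and their SRK/PR Jacobian are not reproduced here; a small algebraic system stands in for them.

    ```python
    # Damped Newton-Raphson for a system of nonlinear equations (illustrative only).
    import numpy as np

    def damped_newton(residual, jacobian, x0, damping=0.5, tol=1e-10, max_iter=200):
        x = np.asarray(x0, dtype=float)
        for _ in range(max_iter):
            f = residual(x)
            if np.linalg.norm(f) < tol:
                return x
            step = np.linalg.solve(jacobian(x), -f)
            x = x + damping * step        # damping improves robustness far from the root
        raise RuntimeError("Newton iteration did not converge")

    # Stand-in system: x^2 + y^2 = 4 and x*y = 1
    res = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[0] * v[1] - 1.0])
    jac = lambda v: np.array([[2.0 * v[0], 2.0 * v[1]], [v[1], v[0]]])
    print(damped_newton(res, jac, x0=[2.0, 0.5]))
    ```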

  9. SNS Sample Activation Calculator Flux Recommendations and Validation

    Energy Technology Data Exchange (ETDEWEB)

    McClanahan, Tucker C. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Spallation Neutron Source (SNS); Gallmeier, Franz X. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Spallation Neutron Source (SNS); Iverson, Erik B. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Spallation Neutron Source (SNS); Lu, Wei [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Spallation Neutron Source (SNS)

    2015-02-01

    The Spallation Neutron Source (SNS) at Oak Ridge National Laboratory (ORNL) uses the Sample Activation Calculator (SAC) to calculate the activation of a sample after the sample has been exposed to the neutron beam in one of the SNS beamlines. The SAC webpage takes user inputs (choice of beamline, the mass, composition and area of the sample, irradiation time, decay time, etc.) and calculates the activation for the sample. In recent years, the SAC has been incorporated into the user proposal and sample handling process, and instrument teams and users have noticed discrepancies in the predicted activation of their samples. The Neutronics Analysis Team validated SAC by performing measurements on select beamlines and confirmed the discrepancies seen by the instrument teams and users. The conclusions were that the discrepancies were a result of a combination of faulty neutron flux spectra for the instruments, improper inputs supplied by SAC (1.12), and a mishandling of cross section data in the Sample Activation Program for Easy Use (SAPEU) (1.1.2). This report focuses on the conclusion that the SAPEU (1.1.2) beamline neutron flux spectra have errors and are a significant contributor to the activation discrepancies. The results of the analysis of the SAPEU (1.1.2) flux spectra for all beamlines will be discussed in detail. The recommendations for the implementation of improved neutron flux spectra in SAPEU (1.1.3) are also discussed.

  10. MICROX-2: an improved two-region flux spectrum code for the efficient calculation of group cross sections

    International Nuclear Information System (INIS)

    Mathews, D.; Koch, P.

    1979-12-01

    The MICROX-2 code is an improved version of the MICROX code. The improvements allow MICROX-2 to be used for the efficient and rigorous preparation of broad group neutron cross sections for poorly moderated systems such as fast breeder reactors in addition to the well moderated thermal reactors for which MICROX was designed. MICROX-2 is an integral transport theory code which solves the neutron slowing down and thermalization equations on a detailed energy grid for two-region lattice cells. The fluxes in the two regions are coupled by transport corrected collision probabilities. The inner region may include two different types of grains (particles). Neutron leakage effects are treated by performing B1 slowing down and P0 plus DB2 thermalization calculations in each region. Cell averaged diffusion coefficients are prepared with the Benoist cell homogenization prescription

  11. Beam transport calculations for BARC-TIFR 14UD pelletron

    International Nuclear Information System (INIS)

    Prasad, K.G.

    1993-01-01

    The 14UD pelletron tandem accelerator installed at the Tata Institute of Fundamental Research (TIFR) as a joint BARC-TIFR project was supplied by the National Electrostatic Corporation (NEC), U.S.A. To optimise the parameters of various elements along the beam path, it is essential to work out the beam optics of the entire system. There are various computer codes in use for such calculations. All these codes, except the detailed ray-tracing programs, use a matrix formulation. Thus each ion-optical element is characterised in terms of a transport matrix whose elements are assumed to be independent of the particle trajectory. We have performed only first-order calculations, meaning that no aberrations are included. Further, all calculations are carried out assuming ideal conditions such as axial beam injection, perfectly aligned beam line elements, etc. The main code employed in our calculations is based on the one at the Australian National University, Canberra, suitably modified for use with the CYBER 170/730 computer at TIFR. However, codes from NEC and Stony Brook were also used for checking the results. The results of the calculations are given and discussed. (author). 2 figs
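
    The matrix formulation mentioned in the abstract can be illustrated with a minimal first-order example: each element is represented by a 2x2 transfer matrix in one transverse plane, and a beamline is the ordered product of its element matrices. The drift lengths and quadrupole focal length below are arbitrary illustration values, not pelletron parameters.

    ```python
    # First-order (no aberrations) transfer-matrix beam optics in one transverse plane.
    import numpy as np

    def drift(length_m):
        return np.array([[1.0, length_m], [0.0, 1.0]])

    def thin_quad(focal_length_m):
        return np.array([[1.0, 0.0], [-1.0 / focal_length_m, 1.0]])

    # Beamline: drift - thin focusing quadrupole - drift (matrices multiply right to left)
    beamline = drift(2.0) @ thin_quad(1.0) @ drift(1.0)

    x0 = np.array([0.002, 0.001])   # initial (position [m], angle [rad])
    print("first-order transport matrix:\n", beamline)
    print("transported (x, x'):", beamline @ x0)
    ```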

  12. Monitoring the Implementation of State Regulation of National Economic Security

    Directory of Open Access Journals (Sweden)

    Hubarieva Iryna O.

    2018-03-01

    Full Text Available The aim of the article is to improve the methodological tools for monitoring the implementation of state regulation of national economic security. The approaches to defining the essence of the concept of “national economic security” are generalized. Assessment of the level of national economic security is a key element in monitoring the implementation of state regulation in this area. The recommended methodology for assessing national economic security, whose calculation algorithm includes four interrelated components (economic, political, social, and spiritual) and which uses analysis methods (correlation and cluster analysis, and taxonomy) that allow the level and disproportion of development to be determined, can serve as a basis for monitoring the implementation of state regulation of national economic security. Such an approach to assessing national economic security makes it possible to determine the place (rank) that a country occupies in a set of countries and the dynamics of changing ranks over a certain period of time, to identify problem components, and to monitor the effectiveness of state regulation of national economic security. In the course of the research it was determined that the economic sphere is the main problem component of ensuring the security of Ukraine’s economy. The analysis made it possible to identify the most problematic partial indicators in the economic sphere of Ukraine: economic globalization, uneven economic development, level of infrastructure, level of financial market development, level of economic instability, and macroeconomic stability. These indicators have stable negative dynamics and a downward trend, which requires immediate intervention by state bodies to ensure national economic security.

  13. Ethiopia's national strategy for improving water resources management

    International Nuclear Information System (INIS)

    Amha, M.

    2001-01-01

    Full text: Ethiopia's current approach to assessing and managing water resources, including geothermal, assigns very high priority to the use of isotope hydrology. Incorporation of this technology into government planning began with a few activities, in local groundwater assessment and in geothermal studies, kicked off by a 1993 National Isotope Hydrology Training Workshop that the IAEA helped arrange. The first results of isotope studies were useful in characterizing the Aluto Geothermal Field, where a 7.2 MW(e) power plant was later built with support from the UNDP and the EEC. And the Government is now hoping to introduce isotope techniques to improve utilization of the field. Isotope hydrology has successfully aided attempts to better understand ground water occurrence, flow and quality problems in arid regions of Ethiopia. These efforts are continuing through studies in the Dire Dawa, Mekelle and Afar regions. Rising water levels in Lake Beseka are threatening to submerge vital rail and highway links. Isotope hydrology made a unique contribution to understanding the surface and subsurface factors responsible, leading to an engineering plan for mitigating the problem. The Government has allocated substantial funding and construction work has begun. A similar success story is emerging at Awassa Lake, where isotope hydrology is proving a very useful complement to conventional techniques. Another promising application of isotope hydrology is taking place as part of the Akaki Groundwater Study near Addis Ababa. Preliminary isotopic results indicate that earlier conclusions based on conventional techniques may have to be revised. If so, there will be significant implications for the exploitation and management strategy of the resource. Based on these encouraging results, the Government is proceeding with the preparation of a project document for the Ethiopian Groundwater Resource Assessment Programme. With the assistance of the IAEA, the U.S. Geological Survey

  14. First-principles calculations of mobility

    Science.gov (United States)

    Krishnaswamy, Karthik

    First-principles calculations can be a powerful predictive tool for studying, modeling and understanding the fundamental scattering mechanisms impacting carrier transport in materials. In the past, calculations have provided important qualitative insights, but numerical accuracy has been limited due to computational challenges. In this talk, we will discuss some of the challenges involved in calculating electron-phonon scattering and carrier mobility, and outline approaches to overcome them. Topics will include the limitations of models for electron-phonon interaction, the importance of grid sampling, and the use of Gaussian smearing to replace energy-conserving delta functions. Using prototypical examples of oxides that are of technological importance (SrTiO3, BaSnO3, Ga2O3, and WO3), we will demonstrate computational approaches to overcome these challenges and improve the accuracy. One approach that leads to a distinct improvement in the accuracy is the use of analytic functions for the band dispersion, which allows for an exact solution of the energy-conserving delta function. For select cases, we also discuss direct quantitative comparisons with experimental results. The computational approaches and methodologies discussed in the talk are general and applicable to other materials, and greatly improve the numerical accuracy of the calculated transport properties, such as carrier mobility, conductivity and Seebeck coefficient. This work was performed in collaboration with B. Himmetoglu, Y. Kang, W. Wang, A. Janotti and C. G. Van de Walle, and supported by the LEAST Center, the ONR EXEDE MURI, and NSF.
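
    The Gaussian-smearing point can be made concrete with a short sketch: in sums of the form sum_k |g_k|^2 * delta(E_k - E_0), the energy-conserving delta function is replaced by a normalized Gaussian of width sigma, and the result is checked for convergence as sigma is reduced. The band energies and coupling strengths below are synthetic, not first-principles data.

    ```python
    # Replacing an energy-conserving delta function by a normalized Gaussian and
    # checking convergence of the smeared sum with respect to the smearing width.
    import numpy as np

    def gaussian_delta(x, sigma):
        """Normalized Gaussian used as a finite-width stand-in for delta(x)."""
        return np.exp(-0.5 * (x / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

    rng = np.random.default_rng(2)
    e_k = rng.uniform(0.0, 1.0, 200_000)     # synthetic band energies (eV) on a dense k-grid
    g2_k = rng.uniform(0.5, 1.5, e_k.size)   # synthetic squared coupling strengths
    e_0 = 0.5                                # energy at which the scattering sum is evaluated

    for sigma in (0.10, 0.05, 0.02, 0.01):   # convergence check with respect to smearing
        smeared = np.mean(g2_k * gaussian_delta(e_k - e_0, sigma))
        print(f"sigma = {sigma:.2f} eV -> smeared sum = {smeared:.4f}")
    ```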

  15. Atomic structure calculations of Mo XV-XL

    International Nuclear Information System (INIS)

    Kubo, Hirotaka; Sugie, Tatsuo; Shiho, Makoto; Suzuki, Yasuo; Ishii, Keishi; Maeda, Hikosuke.

    1986-06-01

    Energy levels and oscillator strengths were calculated for Mo XV - Mo XL. The computer program for atomic structure calculation, developed by Dr. Robert D. Cowan, Los Alamos National Laboratory, was used in the present work. The scaled energy parameters were empirically determined from the observed spectral data. We present wavelengths and transition probabilities of Mo XV-XL. Energy levels and spectral patterns are presented in figures that are useful for the identification of spectral lines. (author)

  16. Impact of neutron resonance treatments on reactor calculation

    International Nuclear Information System (INIS)

    Leszczynski, F.

    1988-01-01

    The treatment of neutron resonances in reactor calculations is one of the not fully resolved problems of reactor theory. The calculations required for the design, fuel management and accident analysis of nuclear reactors contain adjustment coefficients and semi-empirical values introduced into the computer codes; these values are obtained by comparing calculation results with experimental values and with more exact calculation results. This is done when the characteristics of the analyzed system are such that this type of comparison is possible. It is useful to know the impact that a given resonance treatment method has on the final evaluation of reactor physics parameters (reactivity, power distribution, etc.). In this work, the differences between parameters calculated with two different methods of resonance treatment in cell calculations are shown. It is concluded that improvements in resonance treatment are necessary to increase the reliability of core calculation results. Finally, possible improvements that are easy to implement in current computer codes are presented. (Author) [es

  17. Evolution of calculation methods taking into account severe accidents

    International Nuclear Information System (INIS)

    L'Homme, A.; Courtaud, J.M.

    1990-12-01

    During the first decade of PWR operation in France, the calculation methods used for design and operation improved considerably. This paper gives a general analysis of the evolution of calculation methods in parallel with the evolution of the safety approach for PWRs. A comprehensive presentation of the principal calculation tools, as applied during the past decade, is then given. An effort is made to predict the improvements expected in the near future

  18. A review of national policies and strategies to improve quality of health care and patient safety: a case study from Lebanon and Jordan.

    Science.gov (United States)

    El-Jardali, Fadi; Fadlallah, Racha

    2017-08-16

    Improving quality of care and patient safety practices can strengthen health care delivery systems, improve health sector performance, and accelerate attainment of health-related Sustainability Development Goals. Although quality improvement is now prominent on the health policy agendas of governments in low- and middle-income countries (LMICs), including countries of the Eastern Mediterranean Region (EMR), progress to date has not been optimal. The objective of this study is to comprehensively review existing quality improvement and patient safety policies and strategies in two selected countries of the EMR (Lebanon and Jordan) to determine the extent to which these have been institutionalized within existing health systems. We used a mixed methods approach that combined documentation review, stakeholder surveys and key informant interviews. Existing quality improvement and patient safety initiatives were assessed across five components of an analytical framework for assessing health care quality and patient safety: health systems context; national policies and legislation; organizations and institutions; methods, techniques and tools; and health care infrastructure and resources. Both Lebanon and Jordan have made important progress in terms of increased attention to quality and accreditation in national health plans and strategies, licensing requirements for health care professionals and organizations (albeit to varying extents), and investments in health information systems. A key deficiency in both countries is the absence of an explicit national policy for quality improvement and patient safety across the health system. Instead, there is a spread of several (disjointed) pieces of legal measures and national plans leading to fragmentation and lack of clear articulation of responsibilities across the entire continuum of care. Moreover, both countries lack national sets of standardized and applicable quality indicators for performance measurement and benchmarking

  19. IRRIGATION SCHEDULING CALCULATOR (ISC TO IMPROVE WATER MANAGEMENT ON FIELD LEVEL IN EGYPT

    Directory of Open Access Journals (Sweden)

    Samiha Abou El-Fetouh Hamed Ouda

    2017-10-01

    Full Text Available The developed model is an MS Excel sheet called “Irrigation Scheduling Calculator” (ISC). The model requires daily weather data as input to calculate daily evapotranspiration using the Penman-Monteith equation. The model calculates water depletion from the root zone to determine when to irrigate and how much water should be applied. The pump discharge is used to calculate how many hours the farmer should run the pump to deliver the needed amount of water. The ISC model was used to develop irrigation schedules for wheat and maize planted in El-Gharbia governorate. The developed schedules were compared to the actual schedules for both crops. Furthermore, the CropSyst model was calibrated for both crops and run using the schedules developed by the ISC model. The simulation results indicated that the irrigation amount calculated by the ISC model for wheat was lower than the actual schedule by 6.0 mm. Furthermore, the wheat productivity simulated by CropSyst was higher than the measured grain and biological yields by 2%. Similarly, the applied irrigation amount calculated by the ISC model for maize was lower than the actual schedule by 79.0 mm, and productivity was unchanged.
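
    The bookkeeping the abstract describes can be sketched in a few lines: daily crop evapotranspiration depletes the root zone, an irrigation is triggered when the allowable depletion is reached, and the required pump running time follows from the gross irrigation depth, the field area and the pump discharge. The field size, pump discharge, allowable depletion and efficiency below are assumptions for the illustration, not values from the ISC model itself.

    ```python
    # Simple soil-water-depletion scheduling loop with a pump-hours calculation.
    def pump_hours(depth_mm, area_m2, discharge_m3_per_h, efficiency=0.7):
        """Hours the pump must run to deliver a gross depth over the field."""
        gross_volume_m3 = (depth_mm / 1000.0) * area_m2 / efficiency
        return gross_volume_m3 / discharge_m3_per_h

    allowable_depletion_mm = 40.0
    depletion = 0.0
    daily_etc = [5.2, 5.8, 6.1, 5.5, 6.3, 6.0, 5.9, 6.2]   # mm/day, ETc = Kc * ET0
    for day, etc in enumerate(daily_etc, start=1):
        depletion += etc                                    # no rainfall in this example
        if depletion >= allowable_depletion_mm:
            hours = pump_hours(depletion, area_m2=10_000, discharge_m3_per_h=80.0)
            print(f"day {day}: irrigate {depletion:.1f} mm, run pump ~{hours:.1f} h")
            depletion = 0.0
    ```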

  20. PROSPECTS OF MANAGEMENT ACCOUNTING AND COST CALCULATION

    Directory of Open Access Journals (Sweden)

    Marian ŢAICU

    2014-11-01

    Full Text Available Progress in improving production technology requires appropriate measures to achieve an efficient management of costs. This raises the need for continuous improvement of management accounting and cost calculation. Accounting information in general, and management accounting information in particular, have gained importance in the current economic conditions, which are characterized by risk and uncertainty. The future development of management accounting and cost calculation is essential to meet the information needs of management.

  1. Local expressions for one-loop calculations

    International Nuclear Information System (INIS)

    Wasson, D.A.; Koonin, S.E.

    1991-01-01

    We develop local expressions for the contributions of the short-wavelength vacuum modes to the one-loop vacuum energy. These expressions significantly improve the convergence properties of various "brute-force" calculational methods. They also provide a continuous series of approximations that interpolate between the brute-force calculations and the derivative expansion

  2. American National Standard: nuclear data sets for reactor design calculations

    International Nuclear Information System (INIS)

    1983-01-01

    This standard identifies and describes the specifications for developing, preparing, and documenting nuclear data sets to be used in reactor design calculations. The specifications include criteria for acceptance of evaluated nuclear data sets, criteria for processing evaluated data and preparation of processed continuous data and averaged data sets, and identification of specific evaluated, processed continuous, and averaged data sets which meet these criteria for specific reactor types

  3. Calculation device for amount of heavy element nuclide in reactor fuels and calculation method therefor

    International Nuclear Information System (INIS)

    Naka, Takafumi; Yamamoto, Munenari.

    1995-01-01

    When there are two or more origins of heavy element nuclides in the reactor fuels, the device comprises a memory unit for the amount of heavy element nuclides of each origin in a noted fuel segment at a certain time point, a unit for calculating the amount of nuclides of each origin and the current neutron fluxes in the noted fuel segment, and a unit for separating and then displaying the amount of heavy element nuclides of each origin. Burnup equations are solved for each origin of the heavy element nuclides, based on the amount of nuclides of each origin and the neutron fluxes, to calculate the current amount of heavy element nuclides of each origin. The amount of nuclides originating from uranium is calculated ignoring the α-decay of curium, while the amount of nuclides originating from plutonium is calculated ignoring the plutonium formed from neptunium. Heavy element nuclides can thus be measured and controlled accurately for each origin of the reactor fuels. Even when the nuclear fuel materials have two or more nationalities, measurement and control can be conducted for every country. (N.H.)

  4. Study on the Measurement and Calculation of Environmental Pollution Bearing Index of China’s Pig Scale

    Science.gov (United States)

    Leng, Bi-Bin; Gong, Jian; Zhang, Wen-bo; Ji, Xue-Qiang

    2017-11-01

    In view of the environmental pollution caused by large-scale pig breeding, SPSS statistical software and the factor analysis method were used to calculate the environmental pollution bearing index of China's pig breeding scale from 2006 to 2015. The results showed that, with the increase of scale, the density of live pig farming and the amount of fertilizer applied in agricultural production increased. However, due to the improvement of national environmental awareness, industrial wastewater discharge was greatly reduced. China's hog farming environmental pollution load index is rising.

  5. Final priority; technical assistance to improve state data capacity--National Technical Assistance Center to improve state capacity to accurately collect and report IDEA data. Final priority.

    Science.gov (United States)

    2013-05-20

    The Assistant Secretary for Special Education and Rehabilitative Services announces a priority under the Technical Assistance to Improve State Data Capacity program. The Assistant Secretary may use this priority for competitions in fiscal year (FY) 2013 and later years. We take this action to focus attention on an identified national need to provide technical assistance (TA) to States to improve their capacity to meet the data collection and reporting requirements of the Individuals with Disabilities Education Act (IDEA). We intend this priority to establish a TA center to improve State capacity to accurately collect and report IDEA data (Data Center).

  6. THE ACCOUNTING POSTEMPLOYMENT BENEFITS BASED ON ACTUARIAL CALCULATIONS

    Directory of Open Access Journals (Sweden)

    Anna CEBOTARI

    2017-11-01

    Full Text Available The accounting of post-employment benefits based on actuarial calculations currently remains a subject studied in Moldova only theoretically. Applying actuarial calculations in accounting in fact reflects its evolving character. Because national accounting standards have been adapted to international standards, which in turn require the valuation of assets and liabilities at fair value, there is a need to draw up exact calculations grounded in probability theory and mathematical statistics. One of the main objectives of accounting information is that it be reflected in the entity's financial statements and provided to internal and external users. Hence arises the need to report highly reliable information, which can be provided by applying actuarial calculations.
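
    A small illustration of the kind of actuarial calculation the article has in mind: the present value of a post-employment benefit as the sum of future payments weighted by survival probabilities and discounted to today. The benefit amount, discount rate and survival probabilities are invented for the example; a real valuation would take them from plan terms and a mortality table.

    ```python
    # Expected present value of an annual post-employment benefit over five years.
    def benefit_present_value(annual_benefit, survival_probs, discount_rate):
        pv = 0.0
        for year, p_alive in enumerate(survival_probs, start=1):
            pv += annual_benefit * p_alive / (1.0 + discount_rate) ** year
        return pv

    survival = [0.98, 0.96, 0.93, 0.90, 0.86]   # probability of being alive in years 1..5
    print(round(benefit_present_value(12_000, survival, discount_rate=0.05), 2))
    ```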

  7. Range calculations using multigroup transport methods

    International Nuclear Information System (INIS)

    Hoffman, T.J.; Robinson, M.T.; Dodds, H.L. Jr.

    1979-01-01

    Several aspects of radiation damage effects in fusion reactor neutron and ion irradiation environments are amenable to treatment by transport theory methods. In this paper, multigroup transport techniques are developed for the calculation of particle range distributions. These techniques are illustrated by analysis of Au-196 atoms recoiling from (n,2n) reactions with gold. The results of these calculations agree very well with range calculations performed with the atomistic code MARLOWE. Although some detail of the atomistic model is lost in the multigroup transport calculations, the improved computational speed should prove useful in the solution of fusion material design problems

  8. Rural development--national improvement.

    Science.gov (United States)

    Malhotra, R C

    1984-05-01

    Rural development should be viewed as the core of any viable strategy for national development in developing countries, where on average 2/3 of the population live in rural areas. Rural development is multisectoral, including economic, sociopolitical, environmental, and cultural aspects of rural life. Initially, the focus is on the provision of basic minimum needs in food, shelter, clothing, health, and education, through optimum use and employment of all available resources, including human labor. The development goal is the total development of the human potential. The hierarchy of goals of development may be shown in the form of an inverted pyramid. At the base are basic minimum needs for subsistence whose fulfillment leads to a higher set of sociopolitical needs and ultimately to the goal of total development and the release of creative energies of every individual. If development, as outlined, were to benefit the majority of the people, then they would have to participate in the decision making which affects their lives. This would require that the people mobilize themselves in the people's sector. The majority can equitably benefit from development only if they are mobilized effectively. Such mobilization requires raising the consciousness of the people concerning their rights and obligations. All development with the twin objectives of growth with equity could be reduced to restructuring the socioeconomic, and hence political, relationships. Designing and implementing an integrated approach to rural development is the 1st and fundamental issue of rural development management. The commonly accepted goals and objectives of a target-group-oriented antipoverty development strategy include: higher productivity and growth in gross national product (GNP); equitable distribution of the benefits of development; provision of basic minimum needs for all; gainful employment; participation in development; self-reliance or self-sustaining growth and development; maintenance of

  9. Antifouling biocides in German marinas: Exposure assessment and calculation of national consumption and emission.

    Science.gov (United States)

    Daehne, Dagmar; Fürle, Constanze; Thomsen, Anja; Watermann, Burkard; Feibicke, Michael

    2017-09-01

    The authorization of biocidal antifouling products for leisure boats is the subject of the European Union Biocides Regulation 528/2012. National specifics may be regarded by the member states in their assessment of environmental risks. The aim of this survey was to collect corresponding data and to create a database for the environmental risk assessment of antifouling active substances in German surface waters. Water concentrations of current antifouling active substances and selected breakdown products were measured in a single-sampling campaign covering 50 marinas at inland and coastal areas. Increased levels were found for Zn, Cu, and cybutryne. For the latter, the maximum allowable concentration according to Directive 2013/39/EU was exceeded at 5 marinas. For Cu, local environmental quality standards were exceeded at 10 marinas. Base data on the total boat inventory in Germany were lacking until now. For that reason, a nationwide survey of mooring berths was conducted by use of aerial photos. About 206 000 mooring berths obviously used by boats with a potential antifouling application were counted. The blind spot of very small marinas was estimated at 20 000 berths. Seventy-one percent of berths were located at freshwater sites, illustrating the importance of navigable inland waterways for leisure boat activities and underlining the need for a customized exposure assessment in these areas. Moreover, the national consumption of all antifouling products for leisure boats was calculated. The total amount of 794 tonnes/annum (t/a) consisted of 179 t/a of inorganic Cu compounds, 19 t/a of organic cobiocides, and 49.5 t/a of Zn. With regard to weight proportion, 141 t/a Cu and 40 t/a Zn were consumed. Assuming an emission ratio of 50% during service life, 70.5 t/a of Cu amounted to 15% of all external sources for Cu release to German surface waters. These figures highlight the need for mitigation measures. Integr Environ Assess Manag 2017;13:892-905. © 2017 The

  10. American National Standard nuclear data sets for reactor design calculations

    International Nuclear Information System (INIS)

    Anon.

    1975-01-01

    A standard is presented which identifies and describes the specifications for developing, preparing, and documenting nuclear data sets to be used in reactor design calculations. The specifications include (a) criteria for acceptance of evaluated nuclear data sets, (b) criteria for processing evaluated data and preparation of processed continuous data and averaged data sets, and (c) identification of specific evaluated, processed continuous, and averaged data sets which meet these criteria for specific reactor types

  11. Improved diabetes management in Swedish schools: results from two national surveys.

    Science.gov (United States)

    Särnblad, Stefan; Åkesson, Karin; Fernström, Lillemor; Ilvered, Rosita; Forsander, Gun

    2017-09-01

    Support in diabetes self-care in school is essential to achieve optimal school performance and metabolic control. Swedish legislation regulating support to children with chronic diseases was strengthened in 2009. To compare the results of a national survey conducted in 2008 and 2015 measuring parents' and diabetes specialist teams' perceptions of support in school. All pediatric diabetes centers in Sweden were invited to participate in the 2015 study. In each center, families with a child being treated for T1DM and attending preschool class or compulsory school were eligible. The parents' and the diabetes teams' opinions were collected in two separate questionnaires. Forty-one out of 42 eligible diabetes centers participated and 568 parents answered the parental questionnaire in 2015. Metabolic control had improved since the 2008 survey (55.2 ± 10.6 mmol/mol, 7.2% ± 1.0%, in 2015 compared with 61.8 ± 12.4 mmol/mol, 7.8% ± 1.1% in 2008). The proportion of children with a designated staff member responsible for supporting the child's self-care increased from 43% to 59%, and support for self-care in school was rated better in 2015 compared with 2008. More efforts are needed to implement the national legislation to achieve equal support in all Swedish schools. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  12. Calculation of 14 MeV neutron transmission

    International Nuclear Information System (INIS)

    Vyrskij, M.Yu.; Dubinin, A.A.; Zhuravlev, V.I.; Isaev, N.V.; Klintsov, A.A.; Krivtsov, A.S.; Linge, I.I.; Panfilov, E.I.; Prit'mov, A.P.

    1979-01-01

    The possibility of using the 28-group constant system (28-GCS) for calculating the transport of neutrons with an initial energy of 14 MeV in thermonuclear reactor blankets is studied. A blanket project suggested by the Oak Ridge National Laboratory is used as a test case to estimate the applicability of the 28-GCS. Niobium is used in the blanket as a structural material. A mixture of lithium nuclides is used for tritium production. The results of the blanket test calculation and the results obtained using the 28-GCS from the UKNDL library are compared. The numerical 28-group calculation of the blanket is carried out by means of the ROZ-6 and ROZ-9 codes rather than by the Monte Carlo method used for the test calculation. The time for the blanket calculation on the BESM-6 computer by means of the ROZ-9 code in the 2P5 approximation using the 28-GCS amounts to 10 min. It is noted that, to create effective codes for numerical blanket calculations, different calculational grids are necessary for different energy groups. The calculations carried out have shown the possibility of using the 28-group library of cross sections for the numerical solution of the neutron transport equation in estimation analyses of blankets

  13. New Local, National and Regional Cereal Price Indices for Improved Identification of Food Insecurity

    Science.gov (United States)

    Brown, Molly E.; Tondel, Fabien; Thorne, Jennifer A.; Essam, Timothy; Mann, Bristol F.; Stabler, Blake; Eilerts, Gary

    2011-01-01

    Large price increases over a short time period can be indicative of a deteriorating food security situation. Food price indices developed by the United Nations Food and Agriculture Organization (FAO) are used to monitor food price trends at a global level, but largely reflect supply and demand conditions in export markets. However, reporting by the United States Agency for International Development (USAID)'s Famine Early Warning Systems Network (FEWS NET) indicates that staple cereal prices in many markets of the developing world, especially in surplus-producing areas, often have a delayed and variable response to international export market price trends. Here we present new price indices compiled for improved food security monitoring and assessment, and specifically for monitoring conditions of food access across diverse food insecure regions. We found that cereal price indices constructed using market prices within a food insecure region showed significant differences from the international cereals price, and had a variable price dispersion across markets within each marketshed. Using satellite-derived remote sensing information that estimates local production and the FAO Cereals Index as predictors, we were able to forecast movements of the local or national price indices in the remote, arid and semi-arid countries of the 38 countries examined. This work supports the need for improved decision-making about targeted aid and humanitarian relief, by providing earlier early warning of food security crises.

  14. Force Measurement Improvements to the National Transonic Facility Sidewall Model Support System

    Science.gov (United States)

    Goodliff, Scott L.; Balakrishna, Sundareswara; Butler, David; Cagle, C. Mark; Chan, David; Jones, Gregory S.; Milholen, William E., II

    2016-01-01

    The National Transonic Facility is a transonic pressurized cryogenic facility. The development of the high Reynolds number semi-span capability has advanced over the years to include transonic active flow control and powered testing using the sidewall model support system. While this system can be used in total temperatures down to -250 °F for conventional unpowered configurations, it is limited to temperatures above -60 °F when used with powered models that require the use of the high-pressure air delivery system. Thermal instabilities and non-repeatable mechanical arrangements revealed several data quality shortfalls by the force and moment measurement system. Recent modifications to the balance cavity recirculation system have improved the temperature stability of the balance and metric model-to-balance hardware. Changes to the mechanical assembly of the high-pressure air delivery system, particularly hardware that interfaces directly with the model and balance, have improved the repeatability of the force and moment measurement system. Drag comparisons with the high-pressure air system removed will also be presented in this paper.

  15. Battery of exercises for the improvement of the explosive force in the boxers of the National Team of Cuba

    Directory of Open Access Journals (Sweden)

    José Luis Hernández Hernández

    2017-12-01

    Full Text Available The development of explosive strength is one of the determining factors of sport performance for boxing athletes. The power and explosiveness of the blows executed during combat has been one of the problems identified in the analyses of the preparation of the boxers that make up the national preselection. The work aims to develop a battery of exercises to improve specific explosive strength in the boxers of the Cuban National Team participating in the 6th World Series of Boxing. The sample consisted of the 21 boxers who were preparing for this event. Based on a bibliographic search and previous practical experience, a system of exercises was developed for the development of explosive force during the preparation of this team. At the conclusion of the investigation, a significant increase in explosive strength was observed, reaching the levels necessary to improve the explosiveness of the blows, in addition to an improvement in body composition.

  16. Postoperative radiotherapy for glioma: improved delineation of the clinical target volume using the geodesic distance calculation.

    Directory of Open Access Journals (Sweden)

    DanFang Yan

    OBJECTIVES: To introduce a new method for generating the clinical target volume (CTV) from the gross tumor volume (GTV) for glioma using a geodesic distance calculation. METHODS: One glioblastoma patient was enrolled. The GTV and natural barriers were contoured on each slice of the computed tomography (CT) simulation images. A graphics-processing-unit-based parallel Euclidean distance transform was then used to generate the CTV while accounting for natural barriers. A three-dimensional (3D) visualization technique was applied to display the delineation results. Speed of operation and precision were compared between this new delineation method and the traditional method. RESULTS: When spatial barriers are considered, the distance to a point sheltered by a barrier is the length of the shortest path between the two points, made up of several segments that skirt the barriers, rather than the direct Euclidean distance between the two points. The CTV therefore takes an irregular rather than spherical shape. The time required to generate the CTV was greatly reduced. Moreover, this new method reduced inter- and intra-observer variability in defining the CTV. CONCLUSIONS: Compared with traditional CTV delineation, the new method using the geodesic distance calculation not only greatly shortens the time needed to modify the CTV but also has better reproducibility.
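
    The contrast between the geodesic and Euclidean distances can be made concrete with a small sketch: a grid-based shortest-path (Dijkstra) distance that is only allowed to propagate through non-barrier voxels. This is an illustration of the idea, not the GPU-parallel distance transform used in the study, and all names below are invented for the example.

      # Geodesic (shortest-path) distance on a grid that must go around barrier
      # cells, as opposed to the straight-line Euclidean distance.
      import heapq
      import numpy as np

      def geodesic_distance(seed_mask, barrier_mask, spacing=1.0):
          """Distance from the GTV seed cells, propagated only through non-barrier cells."""
          dist = np.full(seed_mask.shape, np.inf)
          heap = [(0.0, rc) for rc in zip(*np.nonzero(seed_mask))]
          for _, rc in heap:
              dist[rc] = 0.0
          heapq.heapify(heap)
          while heap:
              d, (r, c) = heapq.heappop(heap)
              if d > dist[r, c]:
                  continue
              for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                  nr, nc = r + dr, c + dc
                  if (0 <= nr < dist.shape[0] and 0 <= nc < dist.shape[1]
                          and not barrier_mask[nr, nc] and d + spacing < dist[nr, nc]):
                      dist[nr, nc] = d + spacing
                      heapq.heappush(heap, (d + spacing, (nr, nc)))
          return dist

      # CTV = all non-barrier voxels within a chosen margin of the GTV, e.g.
      # ctv = (geodesic_distance(gtv_mask, barrier_mask) <= margin_mm) & ~barrier_mask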

  17. Lake and bulk sampling chemistry, NADP, and IMPROVE air quality data analysis on the Bridger-Teton National Forest (USFS Region 4)

    Science.gov (United States)

    Jill Grenon; Terry Svalberg; Ted Porwoll; Mark Story

    2010-01-01

    Air quality monitoring data from several programs in and around the Bridger-Teton (B-T) National Forest, including the National Atmospheric Deposition Program (NADP), long-term lake monitoring, long-term bulk precipitation monitoring (both snow and rain), and the Interagency Monitoring of Protected Visual Environments (IMPROVE) program, were analyzed in this report. Trends were analyzed using...

  18. Integrated care for patients with a stroke in the Netherlands: results and experiences from a national Breakthrough Collaborative Improvement project

    Directory of Open Access Journals (Sweden)

    M.M.N. Minkman

    2005-03-01

    Purpose: This article considers whether measurable improvements in the quality of care in stroke services are achieved by using a Breakthrough collaborative quality improvement model. Context of case: Despite the availability of explicit criteria, evidence-based guidelines, national protocols, and examples of best practice, stroke care in the Netherlands had not yet improved substantially. For that reason a national collaborative was started in 2002 to improve integrated stroke care in 23 self-selected stroke services. Data sources: Characteristics of sites, teams, aims, and changes were assessed using a questionnaire and monthly self-reports from the teams. Progress in achieving significant quality improvement was assessed on a five-point Likert scale (IHI score). Case description: The stroke services (n=23) formed multidisciplinary teams, which worked together in a collaborative based on the IHI Breakthrough Series Model. Teams received instruction in quality improvement, reviewed self-reported performance data, identified bottlenecks and improvement goals, and implemented “potentially better practices” based on criteria from the Edisse study, evidence-based guidelines, their own ideas, and expert opinion. Conclusion and discussion: Quality of care improved in most participating stroke services. Eighty-seven percent of the teams improved their care significantly on at least one topic, and about 34% of the teams achieved significant improvement on all aims within the time frame of the project. The project has contributed to the further development and spread of integrated stroke care in the Netherlands.

  19. Impedance calculations for the improved SLC damping rings

    International Nuclear Information System (INIS)

    Bane, K.L.F.; Ng, C.K.

    1993-04-01

    A longitudinal, single-bunch instability is observed in the damping rings of the Stanford Linear Collider (SLC). Beyond a threshold bunch population of 3 x 10^10 particles the bunch energy spread increases, and a "saw-tooth" variation in bunch length and synchronous phase as functions of time is observed. Although the relative amplitude of the saw-tooth variation is small (only on the order of 10%), the resulting unpredictability of the beam properties in the rest of the SLC accelerator makes it difficult, if not impossible, to operate the machine above the threshold current. An additional problem at higher currents is that the bunch length is greatly increased. When the bunch is very long in the ring, it becomes difficult or impossible to compress it properly after extraction. We want to solve both of these problems so that the SLC can run at higher currents to increase the luminosity. To this end, the vacuum chambers of both damping rings are being rebuilt with the aim of reducing their impedance. According to previous calculations, the impedance of the SLC damping rings is dominated by the many small discontinuities located in the so-called QD and QF vacuum chamber segments (elements such as transitions, masks, and bellows) that are inductive to the beam. Since those earlier calculations were performed, the bellows of the QD segments have been sleeved, yielding a factor of 2 increase in the instability threshold. In this paper we begin by discussing the gains that might be achieved if we can reduce the impedance of the rings even further. We then estimate the effect on the total impedance of the actual design changes being proposed. Three important elements (the bend-to-quad transitions, the distributed ion pump slots, and the beam position monitor (BPM) electrodes) are fully three-dimensional and will be studied using the T3 module of the MAFIA computer programs.

  20. RP-10 commissioning: reproduction of physics experiments by calculation

    International Nuclear Information System (INIS)

    Higa, Manabu; Madariaga, M.R.

    1990-01-01

    This work presents neutronic calculation results, most of which were obtained after the commissioning experiments, to verify the calculation methodology developed at the Analysis and Calculation Department of the National Atomic Energy Commission (CNEA). The results were satisfactory, showing that the calculation methodology used is adequate for the design of this type of reactor. The only important disagreement concerns the evaluation of the excess reactivity and the shutdown reactivity, and it stems from differences in the criteria and/or definitions used for these parameters. Critical positions were predicted with errors below 100 pcm. The differential and integral reactivities from the control rod calibrations, as well as the flux distributions, are reproduced reasonably well, with differences below 10%. (Author)

  1. Quality improvement education to improve performance on ulcerative colitis quality measures and care processes aligned with National Quality Strategy priorities.

    Science.gov (United States)

    Greene, Laurence; Moreo, Kathleen

    2015-01-01

    Studies on inflammatory bowel disease (IBD) have reported suboptimal approaches to patient care. In the United States, the findings have motivated leading gastroenterology organizations to call for initiatives that support clinicians in aligning their practices with quality measures for IBD and priorities of the National Quality Strategy (NQS). We designed and implemented a quality improvement (QI) education program on ulcerative colitis in which patient charts were audited for 30 gastroenterologists before (n = 300 charts) and after (n = 290 charts) they participated in QI-focused educational activities. Charts were audited for nine measures, selected for their alignment with four NQS priorities: making care safer, ensuring patient engagement, promoting communication, and promoting effective treatment practices. Four of the measures, including guideline-directed vaccinations and assessments of disease type and activity, were part of the CMS Physician Quality Reporting System (PQRS). The other five measures involved counseling patients on various topics in ulcerative colitis management, documentation of side effects, assessment of adherence status, and simplification of dosing. The gastroenterologists also completed baseline and post-education surveys designed to assess qualitative outcomes. One of the educational interventions was a private audit feedback session conducted for each gastroenterologist. The sessions were designed to support participants in identifying measures reflecting suboptimal care quality and developing action plans for improvement. In continuous improvement cycles, follow-up interventions included QI tools and educational monographs. Across the nine chart variables, post-education improvements ranged from 0% to 48%, with a mean improvement of 15.9%. Survey findings revealed improvements in self-reported understanding of quality measures and intentions to apply them to practice, and lower rates of perceived significant barriers to high

  2. Calculation of radon concentration in water by toluene extraction method

    Energy Technology Data Exchange (ETDEWEB)

    Saito, Masaaki [Tokyo Metropolitan Isotope Research Center (Japan)]

    1997-02-01

    The Noguchi and Horiuchi methods have been used to calculate the radon concentration in water. In their original forms, both methods have two problems: the calculated concentration changes with the extraction temperature because it depends on incorrect solubility data, and the calculated concentrations are smaller than the correct values because the radon calculation equation is not consistent with gas-liquid equilibrium theory. Both problems are solved by improving the radon equation. The Noguchi-Saito equation and the constant B of the Horiuchi-Saito equation are presented. Results calculated with the improved method showed errors of about 10%. (S.Y.)
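
    As a generic illustration of the gas-liquid equilibrium reasoning invoked above (a standard partition-equilibrium relation, not the Noguchi-Saito equation itself), let α denote the temperature-dependent partition (solubility) coefficient of radon between toluene and water. For a water sample of volume V_w extracted with a toluene volume V_t, the extracted fraction E and the original water concentration C_w,0 follow as

      \[
        \alpha = \frac{C_t}{C_w}, \qquad
        E = \frac{\alpha V_t}{\alpha V_t + V_w}, \qquad
        C_{w,0} = \frac{N_t}{E\,V_w},
      \]

    where N_t is the radon activity recovered in the toluene phase. The temperature dependence of α is what makes the calculated concentration sensitive to the extraction temperature when inaccurate solubility data are used.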

  3. Generalized diffusion theory for calculating the neutron transport scalar flux

    International Nuclear Information System (INIS)

    Alcouffe, R.E.

    1975-01-01

    A generalization of the neutron diffusion equation is introduced, the solution of which is an accurate approximation to the transport scalar flux. In this generalization the auxiliary transport calculations of the system of interest are utilized to compute an accurate, pointwise diffusion coefficient. A procedure is specified to generate and improve this auxiliary information in a systematic way, leading to improvement in the calculated diffusion scalar flux. This improvement is shown to be contingent upon satisfying the condition of positive calculated diffusion coefficients, and an algorithm that ensures this positivity is presented. The generalized diffusion theory is also shown to be compatible with conventional diffusion theory in the sense that the same methods and codes can be used to calculate a solution for both. The accuracy of the method compared to reference S_N transport calculations is demonstrated for a wide variety of examples. (U.S.)
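
    One common way to express the idea of a transport-consistent, pointwise diffusion coefficient (a generic illustration, not necessarily the precise definition used in this work) is to require, in one dimension, that Fick's law reproduce the current from the auxiliary transport calculation:

      \[
        J_T(x) = -D(x)\,\frac{\mathrm{d}\phi_T(x)}{\mathrm{d}x}
        \quad\Longrightarrow\quad
        D(x) = -\,\frac{J_T(x)}{\mathrm{d}\phi_T(x)/\mathrm{d}x},
      \]

    so that a conventional diffusion solver, run with this D(x), returns a scalar flux close to the transport solution; the positivity condition mentioned above corresponds to requiring D(x) > 0 everywhere.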

  4. E-prescription as a tool for improving services and the financial viability of healthcare systems: the case of the Greek national e-prescription system.

    Science.gov (United States)

    Pangalos, G; Sfyroeras, V; Pagkalos, I

    2014-01-01

    E-prescription systems can help improve patient service, safety, and quality of care. They can also help achieve better compliance among patients and better alignment with guidelines among practitioners. The recently implemented national e-prescription system in Greece already covers approximately 85% of all prescriptions issued in Greece today (approximately 5.5 million per month). The system has already contributed to significant changes in improving services and to better monitoring and planning of public health, and it has also substantially helped to contain unnecessary expenditure related to medication use and to improve transparency and administrative control. Such issues have gained increasing importance not only for Greece but also for many other national healthcare systems that have to cope with the continuous rise of medication expenditure. Our implementation has, therefore, shown that besides their importance for improving services, national e-prescription systems can also provide a valuable tool for better utilisation of resources and for containing unnecessary healthcare costs, thus contributing to the improvement of the financial stability and viability of the overall healthcare system.

  5. Improved forced impulse method calculations of single and double ionization of helium by collision with high-energy protons and antiprotons

    International Nuclear Information System (INIS)

    Ford, A.L.; Reading, J.F.

    1994-01-01

    Our previous forced impulse method calculations of single and double ionization of helium by protons and antiprotons have been improved by including d orbitals in the target-centre basis. The calculations are in good agreement with experimental measurements of the ratio R of double to single ionization, without the 1.35 scaling factor we applied to our previous results. We also compare the separate single and double ionization cross sections to experiment and find good agreement. Experimental cross sections differential in projectile scattering angle at large angles (greater than 2.5 mrad) are compared to our impact-parameter-dependent ionization probabilities at small impact parameter, for the double-to-single ratio. The agreement is good, except at the lowest energy we have considered, 0.3 MeV. (Author)

  6. Breit–Pauli atomic structure calculations for Fe XI

    International Nuclear Information System (INIS)

    Aggarwal, Sunny; Singh, Jagjit; Mohan, Man

    2013-01-01

    Energy levels, oscillator strengths, and transition probabilities are calculated for the lowest-lying 165 energy levels of Fe XI using configuration-interaction wavefunctions. The calculations include all the major correlation effects. Relativistic effects are included in the Breit–Pauli approximation by adding mass-correction, Darwin, and spin–orbit interaction terms to the non-relativistic Hamiltonian. For comparison with the calculated ab initio energy levels, we have also calculated the energy levels by using the fully relativistic multiconfiguration Dirac–Fock method. The calculated results are in close agreement with the National Institute of Standards and Technology compilation and other available results. New results are predicted for many of the levels belonging to the 3s3p⁴3d and 3s3p³3d² configurations, which are very important in astrophysics and relevant, for example, to recent observations by the Hinode spacecraft. We expect that our extensive calculations will be useful to experimentalists in identifying the fine-structure levels in their future work.

  7. Replica Exchange Gaussian Accelerated Molecular Dynamics: Improved Enhanced Sampling and Free Energy Calculation.

    Science.gov (United States)

    Huang, Yu-Ming M; McCammon, J Andrew; Miao, Yinglong

    2018-04-10

    Through adding a harmonic boost potential to smooth the system potential energy surface, Gaussian accelerated molecular dynamics (GaMD) provides enhanced sampling and free energy calculation of biomolecules without the need of predefined reaction coordinates. This work continues to improve the acceleration power and energy reweighting of the GaMD by combining the GaMD with replica exchange algorithms. Two versions of replica exchange GaMD (rex-GaMD) are presented: force constant rex-GaMD and threshold energy rex-GaMD. During simulations of force constant rex-GaMD, the boost potential can be exchanged between replicas of different harmonic force constants with fixed threshold energy. However, the algorithm of threshold energy rex-GaMD tends to switch the threshold energy between lower and upper bounds for generating different levels of boost potential. Testing simulations on three model systems, including the alanine dipeptide, chignolin, and HIV protease, demonstrate that through continuous exchanges of the boost potential, the rex-GaMD simulations not only enhance the conformational transitions of the systems but also narrow down the distribution width of the applied boost potential for accurate energetic reweighting to recover biomolecular free energy profiles.
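
    To make the exchange step concrete, the sketch below shows a generic Hamiltonian replica-exchange acceptance test between two replicas that differ only in their boost parameters, using the standard GaMD harmonic boost form; because the unboosted potential is common to both replicas, it cancels in the Metropolis criterion. This is an illustration of the general idea with hypothetical parameter values, not the published rex-GaMD implementation.

      # Illustrative Hamiltonian replica-exchange step between two GaMD replicas
      # that differ only in their boost parameters (force constant k, threshold E).
      import math
      import random

      def boost(V, k, E):
          """GaMD harmonic boost: dV = 0.5*k*(E - V)**2 when V < E, else 0."""
          return 0.5 * k * (E - V) ** 2 if V < E else 0.0

      def try_exchange(V_i, V_j, params_i, params_j, kT):
          """Metropolis test for swapping configurations between replicas i and j.
          V_i, V_j : unboosted potential energies of the current configurations.
          params_* : (k, E) boost parameters of each replica."""
          delta = (boost(V_j, *params_i) + boost(V_i, *params_j)
                   - boost(V_i, *params_i) - boost(V_j, *params_j))
          if delta <= 0.0:
              return True
          return random.random() < math.exp(-delta / kT)

      # example: two replicas with different force constants, one threshold energy
      # try_exchange(-1520.3, -1498.7, (0.05, -1400.0), (0.10, -1400.0), kT=0.593)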

  8. Are data from national quality registries used in quality improvement at Swedish hospital clinics?

    Science.gov (United States)

    Fredriksson, Mio; Halford, Christina; Eldh, Ann Catrine; Dahlström, Tobias; Vengberg, Sofie; Wallin, Lars; Winblad, Ulrika

    2017-11-01

    To investigate the use of data from national quality registries (NQRs) in local quality improvement as well as purported key factors for effective clinical use in Sweden. Comparative descriptive: a web survey of all Swedish hospitals participating in three NQRs with different levels of development (certification level). Heads of the clinics and physician(s) at clinics participating in the Swedish Stroke Register (Riksstroke), the Swedish National Registry of Gallstone Surgery and Endoscopic Retrograde Cholangiopancreatography (GallRiks) and the Swedish Lung Cancer Registry (NLCR). Individual and unit level use of NQRs in local quality improvement, and perceptions on data quality, organizational conditions and user motivation. Riksstroke data were reported as most extensively used at individual and unit levels (x̅ 17.97 of 24 and x̅ 27.06 of 35). Data quality and usefulness was considered high for the two most developed NQRs (x̅ 19.86 for Riksstroke and x̅ 19.89 for GallRiks of 25). Organizational conditions were estimated at the same level for Riksstroke and GallRiks (x̅ 12.90 and x̅ 13.28 of 20) while the least developed registry, the NLCR, had lower estimates (x̅ 10.32). In Riksstroke, the managers requested registry data more often (x̅ 15.17 of 20). While there were significant differences between registries in key factors such as management interest, use of NQR data in local quality improvement seems rather prevalent, at least for Riksstroke. The link between the registry's level of development and factors important for routinization of innovations such as NQRs needs investigation. © The Author 2017. Published by Oxford University Press in association with the International Society for Quality in Health Care. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com

  9. Lessons learned from the Fukushima accident to improve the performance of the national nuclear preparedness system

    International Nuclear Information System (INIS)

    Dewi Apriliani

    2013-01-01

    A study of the emergency response failures in the early phase of the nuclear accident in Fukushima, Japan, has been conducted. The study aimed to draw lessons from the problems and constraints that existed at the time of the Fukushima emergency response. These lessons are then adapted to the situation, conditions, and problems of the nuclear preparedness system in Indonesia, so that the recommendations needed to improve the performance of the SKNN (National Nuclear Emergency Preparedness System) can be obtained. The recommendations include: improvements in coordination and information systems, including early warning systems and the dissemination of information; improvements in the preparation of emergency/contingency plans, including integrated disaster management; improvements in disaster management practice and field exercises, by extending the scenarios and integrating nuclear, chemical, and biological disasters and acts of terrorism; and improvements in public education on nuclear emergency preparedness as well as in the management of information dissemination to the public and the mass media. These improvements need to be made as part of the effort to establish reliable nuclear emergency preparedness in support of the nuclear power plant development plan. (author)

  10. Using stereo satellite imagery to account for ablation, entrainment, and compaction in volume calculations for rock avalanches on Glaciers: Application to the 2016 Lamplugh Rock Avalanche in Glacier Bay National Park, Alaska

    Science.gov (United States)

    Bessette-Kirton, Erin; Coe, Jeffrey A.; Zhou, Wendy

    2018-01-01

    The use of preevent and postevent digital elevation models (DEMs) to estimate the volume of rock avalanches on glaciers is complicated by ablation of ice before and after the rock avalanche, scour of material during rock avalanche emplacement, and postevent ablation and compaction of the rock avalanche deposit. We present a model to account for these processes in volume estimates of rock avalanches on glaciers. We applied our model by calculating the volume of the 28 June 2016 Lamplugh rock avalanche in Glacier Bay National Park, Alaska. We derived preevent and postevent 2‐m resolution DEMs from WorldView satellite stereo imagery. Using data from DEM differencing, we reconstructed the rock avalanche and adjacent surfaces at the time of occurrence by accounting for elevation changes due to ablation and scour of the ice surface, and postevent deposit changes. We accounted for uncertainties in our DEMs through precise coregistration and an assessment of relative elevation accuracy in bedrock control areas. The rock avalanche initially displaced 51.7 ± 1.5 Mm3 of intact rock and then scoured and entrained 13.2 ± 2.2 Mm3 of snow and ice during emplacement. We calculated the total deposit volume to be 69.9 ± 7.9 Mm3. Volume estimates that did not account for topographic changes due to ablation, scour, and compaction underestimated the deposit volume by 31.0–46.8 Mm3. Our model provides an improved framework for estimating uncertainties affecting rock avalanche volume measurements in glacial environments. These improvements can contribute to advances in the understanding of rock avalanche hazards and dynamics.
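
    A schematic sketch of the bookkeeping such a model involves is given below: the deposit volume is obtained from the DEM difference over the mapped deposit, with per-cell elevation corrections added back for ice ablation, scour/entrainment, and deposit compaction. The function, cell size, and correction values are illustrative assumptions, not the study's actual model.

      # Rock-avalanche deposit volume from DEM differencing, with simple
      # per-cell corrections for ablation, scour, and compaction (illustrative).
      import numpy as np

      def deposit_volume(pre_dem, post_dem, deposit_mask, cell_area,
                         ablation=0.0, scour=0.0, compaction=0.0):
          """ablation   : ice-surface lowering between DEM dates (m)
             scour      : ice/snow removed and entrained during emplacement (m)
             compaction : post-event settling of the deposit (m)"""
          dz = post_dem - pre_dem
          # restore elevation lost to processes other than deposition
          dz_corrected = dz + ablation + scour + compaction
          return float(np.sum(dz_corrected[deposit_mask]) * cell_area)

      # toy example on a 3x3 grid of 2 m cells (cell_area = 4 m^2)
      pre = np.zeros((3, 3))
      post = pre + 5.0
      mask = np.ones((3, 3), dtype=bool)
      print(deposit_volume(pre, post, mask, 4.0, ablation=0.4, scour=1.1, compaction=0.3))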

  11. Improved scFv Anti-HIV-1 p17 Binding Affinity Guided from the Theoretical Calculation of Pairwise Decomposition Energies and Computational Alanine Scanning

    Directory of Open Access Journals (Sweden)

    Panthip Tue-ngeun

    2013-01-01

    Computational approaches have been used to evaluate and define residues important for protein-protein interactions, especially antigen-antibody complexes. In our previous study, pairwise decomposition of the residue interaction energies of a single-chain Fv with HIV-1 p17 epitope variants indicated the key specific residues in the complementarity-determining regions (CDRs) of scFv anti-p17. In the present investigation, computational alanine scanning was applied to determine whether a specific side-chain group of a residue in the CDRs plays an important role in bioactivity. Molecular dynamics simulations were performed on several complexes of the original scFv anti-p17 and scFv anti-p17 mutants with HIV-1 p17 epitope variants, with production runs of up to 10 ns. By combining the pairwise decomposition of residue interactions with the alanine scanning calculations, a point mutation at position MET100 was initially selected to improve the binding affinity. The calculated docking interaction energies show that a single mutation of this methionine to either arginine or glycine improves the binding affinity relative to the wild type, the gain being contributed by a favorable negative electrostatic interaction energy. The theoretical calculations agreed well with the peptide ELISA results.

  12. Improved perturbative calculations in field theory; calculation of the mass spectrum and constraints on the supersymmetric standard model

    Energy Technology Data Exchange (ETDEWEB)

    Kneur, J.L

    2006-06-15

    This document is divided into two parts. The first part describes a particular re-summation technique for perturbative series that can give non-perturbative results in some cases. We detail some applications in field theory and in condensed matter, such as the calculation of the effective temperature of Bose-Einstein condensates. The second part deals with the minimal supersymmetric standard model. We present an accurate calculation of the mass spectrum of the supersymmetric particles, a calculation of the relic density of supersymmetric dark matter, and the constraints that can be inferred from these models.

  13. Performing three-dimensional neutral particle transport calculations on tera scale computers

    International Nuclear Information System (INIS)

    Woodward, C.S.; Brown, P.N.; Chang, B.; Dorr, M.R.; Hanebutte, U.R.

    1999-01-01

    A scalable, parallel code system to perform neutral particle transport calculations in three dimensions is presented. To utilize the hyper-cluster architecture of emerging tera-scale computers, the parallel code combines MPI message passing with a complementary parallel programming paradigm. The code's capabilities are demonstrated by a shielding calculation containing over 14 billion unknowns. This calculation was accomplished on the IBM SP "ASCI Blue-Pacific" computer located at Lawrence Livermore National Laboratory (LLNL).

  14. Mach Stability Improvements Using an Existing Second Throat Capability at the National Transonic Facility

    Science.gov (United States)

    Chan, David T.; Balakrishna, Sundareswara; Walker, Eric L.; Goodliff, Scott L.

    2015-01-01

    Recent data quality improvements at the National Transonic Facility have an intended goal of reducing the Mach number variation in a data point to within plus or minus 0.0005, with the ultimate goal of reducing the data repeatability of the drag coefficient for full-span subsonic transport models at transonic speeds to within half a drag count. This paper will discuss the Mach stability improvements achieved through the use of an existing second throat capability at the NTF to create a minimum area at the end of the test section. These improvements were demonstrated using both the NASA Common Research Model and the NTF Pathfinder-I model in recent experiments. Sonic conditions at the throat were verified using sidewall static pressure data. The Mach variation levels from both experiments in the baseline tunnel configuration and the choked tunnel configuration will be presented and the correlation between Mach number and drag will also be examined. Finally, a brief discussion is given on the consequences of using the second throat in its location at the end of the test section.

  15. Early Surgical Site Infection Following Tissue Expander Breast Reconstruction with or without Acellular Dermal Matrix: National Benchmarking Using National Surgical Quality Improvement Program

    Directory of Open Access Journals (Sweden)

    Sebastian Winocour

    2015-03-01

    Background: Surgical site infections (SSIs) result in significant patient morbidity following immediate tissue expander breast reconstruction (ITEBR). This study determined a single institution's 30-day SSI rate and benchmarked it against that among national institutions participating in the American College of Surgeons National Surgical Quality Improvement Program (ACS-NSQIP). Methods: Women who underwent ITEBR with/without acellular dermal matrix (ADM) were identified using the ACS-NSQIP database between 2005 and 2011. Patient characteristics associated with the 30-day SSI rate were determined, and differences in rates between our institution and the national database were assessed. Results: 12,163 patients underwent ITEBR, including 263 at our institution. SSIs occurred in 416 (3.4%) patients nationwide excluding our institution, with a lower rate observed at our institution (1.9%). Nationwide, SSIs were significantly more common in ITEBR patients with ADM (4.5%) compared to non-ADM patients (3.2%, P=0.005), and this trend was observed at our institution (2.1% vs. 1.6%, P=1.00). A multivariable analysis of all institutions identified age ≥50 years (odds ratio [OR], 1.4; confidence interval [CI], 1.1-1.7), body mass index ≥30 kg/m2, and operative time >4.25 hours (OR, 1.9; CI, 1.5-2.4) as risk factors for SSIs. Our institutional SSI rate was lower than the nationwide rate (OR, 0.4; CI, 0.2-1.1), although this difference was not statistically significant (P=0.07). Conclusions: The 30-day SSI rate at our institution in patients who underwent ITEBR was lower than the national rate. SSIs occurred more frequently in procedures involving ADM both nationally and at our institution.

  16. Improvement of the Work Environment and Work-Related Stress: A Cross-Sectional Multilevel Study of a Nationally Representative Sample of Japanese Workers.

    Science.gov (United States)

    Watanabe, Kazuhiro; Tabuchi, Takahiro; Kawakami, Norito

    2017-03-01

    This cross-sectional multilevel study aimed to investigate the relationship between improvement of the work environment and work-related stress in a nationally representative sample in Japan. The study was based on a national survey that randomly sampled 1745 worksites and 17,500 nested employees. The survey asked the worksites whether improvements of the work environment were conducted; and it asked the employees to report the number of work-related stresses they experienced. Multilevel multinominal logistic and linear regression analyses were conducted. Improvement of the work environment was not significantly associated with any level of work-related stress. Among men, it was significantly and negatively associated with the severe level of work-related stress. The association was not significant among women. Improvements to work environments may be associated with reduced work-related stress among men nationwide in Japan.

  17. Evaluating the implementation of a national clinical programme for diabetes to standardise and improve services: a realist evaluation protocol.

    Science.gov (United States)

    McHugh, S; Tracey, M L; Riordan, F; O'Neill, K; Mays, N; Kearney, P M

    2016-07-28

    Over the last three decades in response to the growing burden of diabetes, countries worldwide have developed national and regional multifaceted programmes to improve the monitoring and management of diabetes and to enhance the coordination of care within and across settings. In Ireland in 2010, against a backdrop of limited dedicated strategic planning and engrained variation in the type and level of diabetes care, a national programme was established to standardise and improve care for people with diabetes in Ireland, known as the National Diabetes Programme (NDP). The NDP comprises a range of organisational and service delivery changes to support evidence-based practices and policies. This realist evaluation protocol sets out the approach that will be used to identify and explain which aspects of the programme are working, for whom and in what circumstances to produce the outcomes intended. This mixed method realist evaluation will develop theories about the relationship between the context, mechanisms and outcomes of the diabetes programme. In stage 1, to identify the official programme theories, documentary analysis and qualitative interviews were conducted with national stakeholders involved in the design, development and management of the programme. In stage 2, as part of a multiple case study design with one case per administrative region in the health system, qualitative interviews are being conducted with frontline staff and service users to explore their responses to, and reasoning about, the programme's resources (mechanisms). Finally, administrative data will be used to examine intermediate implementation outcomes such as service uptake, acceptability, and fidelity to models of care. This evaluation is using the principles of realist evaluation to examine the implementation of a national programme to standardise and improve services for people with diabetes in Ireland. The concurrence of implementation and evaluation has enabled us to produce formative

  18. LIGA 2. An improved computer code for the calculation of the local individual submersion dose in off-air plumes from nuclear facilities

    International Nuclear Information System (INIS)

    Rohloff, F.; Brunen, E.

    1981-08-01

    A model is presented to calculate the γ-submersion dose of persons exposed to off-air plumes. The model integrates the dose contributions of the spatial volume elements of the plume, taking into account the weather-dependent extent of the plume as well as γ absorption and scattering in air. For the evaluation, a substantially improved code, LIGA II, has been developed; it achieves higher accuracy through an appropriate application of Gauss integration in MACRO technique. The short-term propagation factors are calculated on a logarithmic distance grid from 10 to 160 km and on a 5-degree angular grid. As shown by a sensitivity analysis, the mean values of the short-term propagation factors within a sector can be obtained by a simple Simpson integration. These calculations have been performed explicitly for 10-degree and 30-degree sectors. (orig.)
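
    The underlying integral has the general form of a point-kernel sum over the plume volume (a generic sketch for orientation, not the specific LIGA II formulation):

      \[
        D = \int_V \frac{k\,\chi(\mathbf{r}')\,B(\mu r)\,e^{-\mu r}}{4\pi r^{2}}\,\mathrm{d}V',
        \qquad r = |\mathbf{r} - \mathbf{r}'|,
      \]

    where χ(r') is the weather-dependent activity concentration in the plume, μ the linear attenuation coefficient of air, B the build-up factor accounting for scattered radiation, and k a dose-conversion constant; LIGA II evaluates an integral of this type using Gauss quadrature.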

  19. Calculation-measurement comparison for control rods reactivity in RA-3 nuclear reactor

    International Nuclear Information System (INIS)

    Estryk, Guillermo; Gomez, Angel

    2002-01-01

    The RA-3 nuclear reactor of the National Atomic Energy Commission of Argentina began operating with high-enrichment fuel elements in 1967 and converted to low enrichment by 1990. In 1999 several fuel elements were found to have problems, so more than 50% of them had to be removed from the core. Because of this, the transition from core 93 to core 94 was planned with special care from the nuclear safety point of view. Core 94 was preceded by five transitional cores, T-1 to T-5. This care entailed measurements of several nuclear parameters: core excess reactivity, control rod calibrations, etc. Calculations were performed afterwards to simulate those measurements using the neutron diffusion code PUMA. The comparison shows good agreement for more than 80% of the cases, with differences in reactivity lower than 10%. The greatest differences were found in the last part of the control rod calibration, and an improved calculation of the cell constants is planned in order to improve the agreement. (author)

  20. Calculating the Contribution Rate of Intelligent Transportation System in Improving Urban Traffic Smooth Based on Advanced DID Model

    Directory of Open Access Journals (Sweden)

    Ming-wei Li

    2015-01-01

    Recent years have witnessed the rapid development of intelligent transportation systems (ITS) around the world, which helps to relieve urban traffic congestion. For instance, many mega-cities in China have devoted a large amount of money and resources to the development of intelligent transportation systems. This poses an intriguing and important question that remains unresolved: how to measure and quantify the contribution of an intelligent transportation system to a city. This paper proposes a matching difference-in-differences model to calculate the contribution rate of the intelligent transportation system to traffic smoothness. Within the model, the main effect indicators of traffic smoothness are first identified, then the evaluation index system is built, and finally the idea of a matching pool is introduced. The proposed model is illustrated for Guangzhou, China (the capital city of Guangdong province). The results show that the introduction of ITS contributes 9.25% to the improvement of traffic smoothness in Guangzhou. The research also explains the mechanism by which ITS improves urban traffic smoothness. Finally, some strategy recommendations are put forward to improve urban traffic smoothness.
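
    The core contrast behind such a model is the basic difference-in-differences estimate, applied after the matching pool has paired comparable treated and untreated observations. The sketch below uses invented numbers for a traffic-smoothness indicator (for example, average link speed) purely for illustration.

      # Basic difference-in-differences estimate of the ITS effect on a
      # traffic-smoothness indicator, computed on matched samples.
      import numpy as np

      def did_estimate(treated_pre, treated_post, control_pre, control_post):
          """DID = (treated_post - treated_pre) - (control_post - control_pre)."""
          return ((np.mean(treated_post) - np.mean(treated_pre))
                  - (np.mean(control_post) - np.mean(control_pre)))

      # toy matched samples before/after ITS deployment (illustrative values)
      treated_pre, treated_post = [31.2, 29.8, 30.5], [35.0, 34.1, 34.6]
      control_pre, control_post = [30.9, 30.1, 29.7], [31.5, 30.8, 30.2]

      effect = did_estimate(treated_pre, treated_post, control_pre, control_post)
      print(effect, f"contribution ~ {100 * effect / np.mean(treated_pre):.1f}%")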

  1. Preliminary Calculation of the Indicators of Sustainable Development for National Radioactive Waste Management Programs

    International Nuclear Information System (INIS)

    Cheong, Jae Hak; Park, Won Jae

    2003-01-01

    As a follow-up to Agenda 21's policy statement on the safe management of radioactive waste, adopted at the Rio Conference held in 1992, the UN invited the IAEA to develop and implement indicators of sustainable development for the management of radioactive waste. The IAEA finalized the indicators in 2002 and is planning to calculate the member states' indicator values in connection with the operation of its Net-Enabled Waste Management Database system. In this paper, the basis for introducing the indicators into radioactive waste management is analyzed, and the calculation methodology and standard assessment procedure are briefly described. In addition, a series of inherent limitations in the calculation and comparison of the indicators is analyzed. Following the proposed standard procedure, the indicators for a few major countries, including Korea, were calculated and compared using each country's radioactive waste management framework and practices. In addition, a series of measures that would increase the values of the indicators was derived so as to enhance the sustainability of the domestic radioactive waste management program.

  2. Reactor calculation benchmark PCA blind test results

    International Nuclear Information System (INIS)

    Kam, F.B.K.; Stallmann, F.W.

    1980-01-01

    Further improvement in calculational procedures or a combination of calculations and measurements is necessary to attain 10 to 15% (1 sigma) accuracy for neutron exposure parameters (flux greater than 0.1 MeV, flux greater than 1.0 MeV, and dpa). The calculational modeling of power reactors should be benchmarked in an actual LWR plant to provide final uncertainty estimates for end-of-life predictions and limitations for plant operations. 26 references, 14 figures, 6 tables

  4. Development of a national audit tool for juvenile idiopathic arthritis: a BSPAR project funded by the Health Care Quality Improvement Partnership.

    Science.gov (United States)

    McErlane, Flora; Foster, Helen E; Armitt, Gillian; Bailey, Kathryn; Cobb, Joanna; Davidson, Joyce E; Douglas, Sharon; Fell, Andrew; Friswell, Mark; Pilkington, Clarissa; Strike, Helen; Smith, Nicola; Thomson, Wendy; Cleary, Gavin

    2018-01-01

    Timely access to holistic multidisciplinary care is the core principle underpinning management of juvenile idiopathic arthritis (JIA). Data collected in national clinical audit programmes fundamentally aim to improve health outcomes of disease, ensuring clinical care is equitable, safe and patient-centred. The aim of this study was to develop a tool for national audit of JIA in the UK. A staged and consultative methodology was used across a broad group of relevant stakeholders to develop a national audit tool, with reference to pre-existing standards of care for JIA. The tool comprises key service delivery quality measures assessed against two aspects of impact, namely disease-related outcome measures and patient/carer reported outcome and experience measures. Eleven service-related quality measures were identified, including those that map to current standards for commissioning of JIA clinical services in the UK. The three-variable Juvenile Arthritis Disease Activity Score and presence/absence of sacro-iliitis in patients with enthesitis-related arthritis were identified as the primary disease-related outcome measures, with presence/absence of uveitis a secondary outcome. Novel patient/carer reported outcomes and patient/carer reported experience measures were developed and face validity confirmed by relevant patient/carer groups. A tool for national audit of JIA has been developed with the aim of benchmarking current clinical practice and setting future standards and targets for improvement. Staged implementation of this national audit tool should facilitate investigation of variability in levels of care and drive quality improvement. This will require engagement from patients and carers, clinical teams and commissioners of JIA services. © The Author 2017. Published by Oxford University Press on behalf of the British Society for Rheumatology.

  6. An improved method for calculation of interface pressure force in PLIC-VOF methods

    International Nuclear Information System (INIS)

    Sefollahi, M.; Shirani, E.

    2004-08-01

    Conventional methods for modeling the surface tension force in Piecewise Linear Interface Calculation-Volume of Fluid (PLIC-VOF) methods, such as Continuum Surface Force (CSF), Continuum Surface Stress (CSS), and Meier's method, convert the surface tension force into a body force. They apply the force not only in the interfacial cells but also in the neighboring cells, and thus they produce spurious currents. In addition, the pressure jump due to surface tension is not calculated accurately by these methods. In this paper a more accurate method for applying the interface force in computational models of free surfaces and interfaces that use PLIC-VOF methods is developed. The method is based on evaluating the surface tension force only in the interfacial cells, not in the neighboring cells. The interface normal and the interface surface area needed for the calculation of the surface tension force are also calculated more accurately. The present method is applied to a two-dimensional motionless liquid drop and gas bubble, as well as to a non-circular two-dimensional drop oscillating under the surface tension force in an initially stagnant fluid with no gravity. The results are compared with those obtained when the CSF, CSS, and Meier methods are used. It is shown that the present method calculates the pressure jump at the interface more accurately and produces weaker spurious currents than the CSS and CSF models. (author)
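
    For orientation, the standard continuum surface force model spreads the surface tension over a thin interfacial region as a body force, and the pressure jump it is meant to reproduce is the Young-Laplace relation (textbook forms quoted here as background, not taken from the paper):

      \[
        \mathbf{f}_{sv} = \sigma\,\kappa\,\nabla c \;\approx\; \sigma\,\kappa\,\hat{\mathbf{n}}\,\delta_s,
        \qquad
        \Delta p = \sigma\,\kappa,
      \]

    where σ is the surface tension coefficient, κ the interface curvature, c the volume-fraction (color) function, and δ_s the surface delta function. Spurious currents arise when the discretized body force and the discrete pressure gradient do not balance exactly, which is one reason why evaluating the force only in the interfacial cells, with a more accurate normal and interface area, can reduce them.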

  7. Investigation of China’s national public relations strategy under globalization : the hotspots around the national media

    OpenAIRE

    雷, 紫雯

    2014-01-01

    This study investigates China's national public relations strategy under globalization by analyzing the national media. In recent years, in order to improve the global public opinion environment and to build national public relations capabilities that match its status as an economic power, China has actively strengthened its national public relations strategies, including having the national "media go out" and building world-class media. By researching the localization of Chinese ...

  8. Change in Adverse Events After Enrollment in the National Surgical Quality Improvement Program: A Systematic Review and Meta-Analysis.

    Directory of Open Access Journals (Sweden)

    Joshua Montroy

    The American College of Surgeons' National Surgical Quality Improvement Program (NSQIP) is the first nationally validated, risk-adjusted, outcomes-based program to measure and compare the quality of surgical care across North America. Participation in this program may provide an opportunity to reduce the incidence of adverse events related to surgery. A systematic review of the literature was performed. MEDLINE, EMBASE and PubMed were searched for studies relevant to NSQIP. Patient characteristics, intervention, and primary outcome measures were abstracted. The intervention was participation in NSQIP and monitoring of Individual Site Summary Reports with or without implementation of a quality improvement program. The outcomes of interest were change in peri-operative adverse events and mortality represented by pooled risk ratios (pRRs) and 95% confidence intervals (CIs). Eleven articles reporting on 35 health care institutions were included. Nine (82%) of the eleven studies implemented a quality improvement program. Minimal improvements in superficial (pRR 0.81; 95% CI 0.72-0.91), deep (pRR 0.82; 95% CI 0.64-1.05) and organ space (pRR 1.15; 95% CI 0.96-1.37) infections were observed at centers that did not institute a quality improvement program. However, centers that reported formal interventions for the prevention and treatment of infections observed substantial improvements (superficial pRR 0.55, 95% CI 0.39-0.77; deep pRR 0.61, 95% CI 0.50-0.75; and organ space pRR 0.60, 95% CI 0.50-0.71). Studies evaluating other adverse events noted decreased incidence following NSQIP participation and implementation of a formal quality improvement program. These data suggest that NSQIP is effective in reducing surgical morbidity. Improvement in surgical quality appears to be more marked at centers that implemented a formal quality improvement program directed at the reduction of specific morbidities.

  9. Weldon Spring dose calculations

    International Nuclear Information System (INIS)

    Dickson, H.W.; Hill, G.S.; Perdue, P.T.

    1978-09-01

    In response to a request by the Oak Ridge Operations (ORO) Office of the Department of Energy (DOE) for assistance to the Department of the Army (DA) on the decommissioning of the Weldon Spring Chemical Plant, the Health and Safety Research Division of the Oak Ridge National Laboratory (ORNL) performed limited dose assessment calculations for that site. Based upon radiological measurements from a number of soil samples analyzed by ORNL and from previously acquired radiological data for the Weldon Spring site, source terms were derived to calculate radiation doses for three specific site scenarios. These three hypothetical scenarios are: a wildlife refuge for hunting, fishing, and general outdoor recreation; a school with 40 hr per week occupancy by students and a custodian; and a truck farm producing fruits, vegetables, meat, and dairy products which may be consumed on site. Radiation doses are reported for each of these scenarios both for measured uranium daughter equilibrium ratios and for assumed secular equilibrium. Doses are lower for the nonequilibrium case

  10. The National Heart Failure Project: a health care financing administration initiative to improve the care of Medicare beneficiaries with heart failure.

    Science.gov (United States)

    Masoudi, F A; Ordin, D L; Delaney, R J; Krumholz, H M; Havranek, E P

    2000-01-01

    This is the second in a series describing Health Care Financing Administration (HCFA) initiatives to improve care for Medicare beneficiaries with heart failure. The first article outlined the history of HCFA quality-improvement projects and current initiatives to improve care in six priority areas: heart failure, acute myocardial infarction, stroke, pneumonia, diabetes, and breast cancer. This article details the objectives and design of the Medicare National Heart Failure Quality Improvement Project (NHF), which has as its goal the improvement of inpatient heart failure care. (c)2000 by CHF, Inc.

  11. Three-dimensional electron-beam dose calculations

    International Nuclear Information System (INIS)

    Shiu, A.S.

    1988-01-01

    The MDAH pencil-beam algorithm developed by Hogstrom et al (1981) has been widely used in clinics for electron-beam dose calculations in radiotherapy treatment planning. The primary objective of this research was to address several deficiencies of that algorithm and to develop an enhanced version. Two enhancements were incorporated into the pencil-beam algorithm: one models fluence rather than planar fluence, and the other models the bremsstrahlung dose using measured beam data. Comparisons of the resulting calculated dose distributions with measured dose distributions for several test phantoms have been made. From these results it is concluded (1) that the fluence-based algorithm is more accurate for the dose calculation in an inhomogeneous slab phantom, and (2) that the fluence-based calculation provides only a limited improvement to the accuracy of the calculated dose in the region just downstream of the lateral edge of an inhomogeneity. A pencil-beam redefinition model was developed for the calculation of electron-beam dose distributions in three dimensions.
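
    A generic Gaussian pencil-beam superposition of the kind such algorithms are built on (an illustrative form, not the exact MDAH formulation) writes the dose as the incident fluence convolved with a depth-dependent lateral Gaussian kernel:

      \[
        D(x,y,z) = g(z)\int\!\!\int \Phi(x',y')\,
        \frac{1}{2\pi\sigma^{2}(z)}\,
        \exp\!\left[-\frac{(x-x')^{2}+(y-y')^{2}}{2\sigma^{2}(z)}\right]
        \mathrm{d}x'\,\mathrm{d}y',
      \]

    where Φ is the incident electron fluence, σ(z) the lateral spread due to multiple scattering, and g(z) a measured central-axis depth-dose factor; modeling Φ as fluence rather than planar fluence, and adding a bremsstrahlung dose term taken from measured beam data, are the two enhancements described above.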

  12. An adaptive sampling scheme for deep-penetration calculation

    International Nuclear Information System (INIS)

    Wang, Ruihong; Ji, Zhicheng; Pei, Lucheng

    2013-01-01

    The deep-penetration problem has been one of the important and difficult problems in shielding calculations with the Monte Carlo method for several decades. In this paper, an adaptive Monte Carlo method that uses the emission point as a sampling station for shielding calculations is investigated. The numerical results show that the adaptive method may improve the efficiency of shielding calculations and can, to some degree, overcome the underestimation problem that readily occurs in deep-penetration calculations.

  13. LANDFIRE 2010—Updates to the national dataset to support improved fire and natural resource management

    Science.gov (United States)

    Nelson, Kurtis J.; Long, Donald G.; Connot, Joel A.

    2016-02-29

    The Landscape Fire and Resource Management Planning Tools (LANDFIRE) 2010 data release provides updated and enhanced vegetation, fuel, and fire regime layers consistently across the United States. The data represent landscape conditions from approximately 2010 and are the latest release in a series of planned updates to maintain currency of LANDFIRE data products. Enhancements to the data products included refinement of urban areas by incorporating the National Land Cover Database 2006 land cover product, refinement of agricultural lands by integrating the National Agriculture Statistics Service 2011 cropland data layer, and improved wetlands delineations using the National Land Cover Database 2006 land cover and the U.S. Fish and Wildlife Service National Wetlands Inventory data. Disturbance layers were generated for years 2008 through 2010 using remotely sensed imagery, polygons representing disturbance events submitted by local organizations, and fire mapping program data such as the Monitoring Trends in Burn Severity perimeters produced by the U.S. Geological Survey and the U.S. Forest Service. Existing vegetation data were updated to account for transitions in disturbed areas and to account for vegetation growth and succession in undisturbed areas. Surface and canopy fuel data were computed from the updated vegetation type, cover, and height and occasionally from potential vegetation. Historical fire frequency and succession classes were also updated. Revised topographic layers were created based on updated elevation data from the National Elevation Dataset. The LANDFIRE program also released a new Web site offering updated content, enhanced usability, and more efficient navigation.

  14. Calculation of saturated hydraulic conductivity of bentonite

    International Nuclear Information System (INIS)

    He Jun

    2006-01-01

    Hydraulic conductivity tests have some drawbacks, such as poor repeatability and long testing times. Treating bentonite as a dual-porosity medium, a formula is deduced for the distance d₂ between montmorillonite layers in the intraparticle pores. An improved method for calculating the hydraulic conductivity is then obtained using d₂ and the Poiseuille law. The method is validated by comparison with test results and with other methods. It is convenient for calculating the hydraulic conductivity of bentonite with a given montmorillonite content and void ratio. (authors)
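
    As an illustration of the Poiseuille-law step (a generic parallel-plate form; the paper's exact formula, which also depends on the montmorillonite content and void ratio, is not reproduced here), laminar flow in a slit of aperture d₂ gives

      \[
        k = \frac{d_2^{2}}{12}, \qquad
        K = \frac{\rho_w\,g}{\mu}\,n_e\,k = \frac{\rho_w\,g\,n_e\,d_2^{2}}{12\,\mu},
      \]

    where k is the intrinsic permeability of a single slit, n_e the effective (flow-through) porosity of the intraparticle pore system, K the hydraulic conductivity, ρ_w and μ the density and viscosity of water, and g the gravitational acceleration.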

  15. Calculation of magnetic hyperfine constants

    International Nuclear Information System (INIS)

    Bufaical, R.F.; Maffeo, B.; Brandi, H.S.

    1975-01-01

    The magnetic hyperfine constants of the V_K center in CaF₂, SrF₂, and BaF₂ have been calculated assuming a phenomenological model, based on the F₂⁻ 'central molecule', to describe the wavefunction of the defect. The calculations show that the introduction of a small degree of covalence between this central molecule and the neighboring ions is necessary to improve the description of the electronic structure of the defect. It was also shown that the results for the hyperfine constants depend strongly on the relaxations of the ions neighboring the central molecule; these relaxations have been determined by fitting the experimental data. The present results are compared with previous calculations in which similar and different theoretical methods were used.

  16. Breastfeeding in Mexico was stable, on average, but deteriorated among the poor, whereas complementary feeding improved: results from the 1999 to 2006 National Health and Nutrition Surveys.

    Science.gov (United States)

    González de Cossío, Teresita; Escobar-Zaragoza, Leticia; González-Castell, Dinorah; Reyes-Vázquez, Horacio; Rivera-Dommarco, Juan A

    2013-05-01

    We present: 1) indicators of infant and young child feeding practices (IYCFP) and median age of introduction of foods analyzed by geographic and socioeconomic variables for the 2006 national probabilistic Health Nutrition Survey (ENSANUT-2006); and 2) changes in IYCFP indicators between the 1999 national probabilistic Nutrition Survey and ENSANUT-2006, analyzed by the same variables. Participants were women 12-49 y and their <2-y-old children (2953 in 2006 and 3191 in 1999). Indicators were estimated with the status quo method. The median age of introduction of foods was calculated by the Kaplan-Meier method using recall data. The national median duration of breastfeeding was similar in both surveys, 9.7 mo in 1999 and 10.4 mo in 2006, but decreased in the vulnerable population. In 1999 indigenous women breastfed 20.8 mo but did so for only 13.0 mo in 2006. The national percentage of those exclusively breastfeeding <6 mo also remained stable: 20% in 1999 and 22.3% in 2006. Nevertheless, exclusively breastfeeding <6 mo changed within the indigenous population, from 46% in 1999 to 34.5% in 2006. Between surveys, most breastfeeding indicators had lower values in vulnerable populations than in those better-off. Complementary feeding, however, improved overall. Complementary feeding was inadequately timed: median age of introduction of plain water was 3 mo, formula and non-human milk was 5 mo, and cereals, legumes, and animal foods was 5 mo. Late introduction of animal foods occurred among vulnerable indigenous population when 50% consumed these products at 8 mo. Mexican IYCFP indicate that public policy must protect breastfeeding while promoting the timely introduction of complementary feeding.

  17. Improvement of air transport data and wall transmission/reflection data in the SKYSHINE code. 2. Calculation of gamma-ray wall transmission and reflection data

    Energy Technology Data Exchange (ETDEWEB)

    Hayashida, Yoshihisa [Toshiba Corp., Kawasaki, Kanagawa (Japan); Ishikawa, Satoshi; Harima, Yoshiko [CRC Research Institute Inc., Tokyo (Japan); Hayashi, Katsumi; Tayama, Ryuichi [Hitachi Engineering Co. Ltd., Ibaraki (Japan); Hirayama, Hideo [High Energy Accelerator Research Organization, Tsukuba, Ibaraki (Japan); Sakamoto, Yukio [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Nemoto, Makoto [Visible Information Center, Tokai, Ibaraki (Japan); Sato, Osamu [Mitsubishi Research Inst., Inc., Tokyo (Japan)

    2000-03-01

    Transmission and reflection data of concrete and steel for 6.2 MeV gamma rays in the SKYSHINE code have been generated using up-to-date data and methods with a view to improving the accuracy of the results. The transmission and reflection data depend on energy and angle. The invariant embedding method, which has the merits of producing no negative angular fluxes and of requiring little computer time, is suitable for and was adopted for the present purpose. Transmission data were calculated for concrete 12 to 160 cm thick and steel 4 to 39 cm thick based on the PHOTX library. Reflection data were calculated for semi-infinite slabs of concrete and steel. Consequently, smooth and consistent differential data over the whole angle and energy range were obtained, compared with the original data calculated by a discrete ordinates Sn code and a Monte Carlo code. In order to use these data in the SKYSHINE code, further verification is needed using various calculation methods or experimental data. (author)

  18. OrthoANI: An improved algorithm and software for calculating average nucleotide identity.

    Science.gov (United States)

    Lee, Imchang; Ouk Kim, Yeong; Park, Sang-Cheol; Chun, Jongsik

    2016-02-01

    Species demarcation in Bacteria and Archaea is mainly based on overall genome relatedness, which serves as a framework for modern microbiology. Current practice for obtaining these measures between two strains is shifting from experimentally determined similarity obtained by DNA-DNA hybridization (DDH) to genome-sequence-based similarity. Average nucleotide identity (ANI) is a simple algorithm that mimics DDH. Like DDH, ANI values between two genome sequences may differ when reciprocal calculations are compared. We compared 63 690 pairs of genome sequences and found that the differences in reciprocal ANI values are significant, exceeding 1 % in some cases. To resolve this asymmetry, a new algorithm, named OrthoANI, was developed to accommodate the concept of orthology: both genome sequences are fragmented and only orthologous fragment pairs are taken into consideration for calculating nucleotide identities. OrthoANI is highly correlated with ANI (using BLASTn), and the former shows approximately 0.1 % higher values than the latter. In conclusion, OrthoANI provides a more robust and faster means of calculating average nucleotide identity for taxonomic purposes. The standalone software tools are freely available at http://www.ezbiocloud.net/sw/oat.
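    The core idea (fragment both genomes, keep only reciprocal best fragment pairs, average their identities) can be illustrated with a toy sketch. This is not the OrthoANI implementation: the real tool uses BLASTn/USEARCH alignments, whereas the per-site identity below is a crude stand-in, and all function names are hypothetical.

```python
import numpy as np

FRAG = 1020  # fragment length in the spirit of OrthoANI-style fragmentation (illustrative)

def fragments(seq, size=FRAG):
    """Cut a genome string into consecutive fragments, dropping the short tail."""
    return [seq[i:i + size] for i in range(0, len(seq) - size + 1, size)]

def identity(a, b):
    """Toy per-site identity between two equal-length fragments (stand-in for BLASTn)."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def ortho_ani_like(genome_a, genome_b, min_id=0.70):
    """Toy OrthoANI-style value: average identity over reciprocal best fragment pairs."""
    fa, fb = fragments(genome_a), fragments(genome_b)
    ids = np.array([[identity(x, y) for y in fb] for x in fa])
    best_ab = ids.argmax(axis=1)          # best hit in B for each fragment of A
    best_ba = ids.argmax(axis=0)          # best hit in A for each fragment of B
    kept = [ids[i, j] for i, j in enumerate(best_ab)
            if best_ba[j] == i and ids[i, j] >= min_id]   # reciprocal ("orthologous") pairs
    return float(np.mean(kept)) if kept else float("nan")

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    bases = np.array(list("ACGT"))
    g1 = "".join(rng.choice(bases, size=5 * FRAG))
    g2 = g1[:2 * FRAG] + "".join(rng.choice(bases, size=3 * FRAG))  # half shared, half random
    print(ortho_ani_like(g1, g2))
```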

  19. Direct and indirect nitrous oxide emissions from agricultural soils, 1990 - 2003. Background document on the calculation method for the Dutch National Inventory Report

    International Nuclear Information System (INIS)

    Van der Hoek, K.W.; Van Schijndel, M.W.; Kuikman, P.J.

    2007-01-01

    Since 2005 the Dutch method to calculate the nitrous oxide emissions from agricultural soils has fully complied with the Intergovernmental Panel on Climate Change (IPCC) Good Practice Guidelines. In order to meet the commitments of the Convention on Climate Change and the Kyoto Protocol, nitrous oxide emissions have to be reported annually in the Dutch National Inventory Report (NIR). Countries are encouraged to use country-specific data rather than the default values provided by the IPCC. This report describes the calculation schemes and data sources used for nitrous oxide emissions from agricultural soils in the Netherlands. The nitrous oxide emissions, which contribute to the greenhouse effect, occur due to nitrification and denitrification processes. They include direct emissions from agricultural soils due to the application of animal manure and fertilizer nitrogen and the manure production in the meadow. Also included are indirect emissions resulting from the subsequent leaching of nitrate to ground water and surface waters, and from deposition of ammonia that had volatilized as a result of agricultural activities. Before 2005 indirect emissions in the Netherlands were calculated using a method that did not compare well with IPCC definitions and categories. The elaborate explanation here should facilitate reviewing by experts. Finally, the report also presents an overview of the nitrous oxide emissions from agricultural soils and the underlying data used in the 1990 - 2003 period
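    The Dutch country-specific calculation schemes are not reproduced in this record, but the generic IPCC-style structure of such a calculation can be sketched as follows; the symbols are generic stand-ins in the spirit of the IPCC notation and the emission factors are not taken from the report.

```latex
% Schematic IPCC-style structure (not the Dutch country-specific scheme itself):
% direct emissions from applied nitrogen (synthetic fertilizer, animal manure,
% grazing), plus indirect emissions from volatilized and leached nitrogen,
% converted from N2O-N to N2O with the mass ratio 44/28.
\[
  \mathrm{N_2O\text{-}N_{direct}} = \bigl(F_{SN} + F_{AM} + F_{graz}\bigr)\,EF_{1},
\qquad
  \mathrm{N_2O\text{-}N_{indirect}} = F_{volat}\,EF_{4} + F_{leach}\,EF_{5},
\]
\[
  \mathrm{N_2O} = \tfrac{44}{28}\,\bigl(\mathrm{N_2O\text{-}N_{direct}} + \mathrm{N_2O\text{-}N_{indirect}}\bigr).
\]
```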

  20. Calculation of power spectra for block coded signals

    DEFF Research Database (Denmark)

    Justesen, Jørn

    2001-01-01

    We present some improvements in the procedure for calculating power spectra of signals based on finite state descriptions and constant block size. In addition to simplified calculations, our results provide some insight into the form of the closed expressions and to the relation between the spect...

  1. Intraocular lens calculation adjustment after laser refractive surgery using Scheimpflug imaging.

    Science.gov (United States)

    Schuster, Alexander K; Schanzlin, David J; Thomas, Karin E; Heichel, Christopher W; Purcell, Tracy L; Barker, Patrick D

    2016-02-01

    To test a new method of intraocular lens (IOL) calculation after corneal refractive surgery using Scheimpflug imaging (Pentacam HR) and partial coherence interferometry (PCI) (IOLMaster) that does not require historical data; that is, the Schuster/Schanzlin-Thomas-Purcell (SToP) IOL calculator. Shiley Eye Center, San Diego, California, and Walter Reed National Military Medical Center, Bethesda, Maryland, USA. Retrospective data analysis and validation study. Data were retrospectively collected from patient charts, including data from Scheimpflug imaging and refractive history. Target refraction was calculated using PCI and the Holladay 1 and SRK/T formulas. Regression analysis was performed to explain the deviation from the target refraction, taking into account the following influencing factors: ratio of posterior-to-anterior corneal radius, axial length (AL), and anterior corneal radius. The regression analysis study included 61 eyes (39 patients) that had laser in situ keratomileusis (57 eyes) or photorefractive keratectomy (4 eyes) and subsequent cataract. Two factors were found to explain the deviation from the target refraction with the Holladay 1 formula, namely the ratio of the corneal radii and the AL, and one with the SRK/T formula, namely the ratio of the corneal radii. A new IOL adjustment calculator was derived and validated at a second center using 14 eyes (10 patients). The error in IOL calculation for normal eyes after laser refractive treatment was related to the ratio of posterior-to-anterior corneal radius. A formula requiring only Scheimpflug data and the suggested IOL power yielded an improved postoperative result for patients with previous corneal laser refractive surgery having cataract surgery. No author has a financial or proprietary interest in any material or method mentioned. Copyright © 2016 ASCRS and ESCRS. All rights reserved.
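    The kind of regression described (explaining the deviation from the target refraction by the ratio of posterior-to-anterior corneal radius and the axial length) can be sketched as an ordinary least-squares fit. All numbers, names and coefficients below are hypothetical; this is not the SToP calculator itself.

```python
import numpy as np

# Hypothetical example of the regression type described for the Holladay 1 case:
# explain the deviation from the target refraction (dioptres) by the ratio of
# posterior-to-anterior corneal radius and the axial length. Data are made up.
ratio = np.array([0.76, 0.78, 0.80, 0.82, 0.84, 0.86])   # posterior/anterior radius
axial = np.array([23.1, 24.0, 24.8, 25.5, 26.2, 27.0])   # axial length in mm
deviation = np.array([0.9, 0.6, 0.4, 0.1, -0.2, -0.5])   # refraction deviation in D

X = np.column_stack([np.ones_like(ratio), ratio, axial])
coef, *_ = np.linalg.lstsq(X, deviation, rcond=None)
b0, b_ratio, b_axial = coef

def iol_adjustment(ratio_pa, axial_len):
    """Predicted refraction deviation to correct for (sketch only, not the SToP formula)."""
    return b0 + b_ratio * ratio_pa + b_axial * axial_len

print(iol_adjustment(0.81, 25.0))
```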

  2. Experience from implementing international standards in national emergency response planning national adjustments and suggestions for improvements

    International Nuclear Information System (INIS)

    Naadland Holo, E.

    2003-01-01

    Full text: A process has been going on for some time in Norway to establish a harmonized background for emergency response planning for any kind of nuclear or radiological accident. The national emergency preparedness organisation, with the crisis committee for nuclear accidents consisting of representatives from civil defence, defence, police, health, and food control authorities, has the authority to implement countermeasures to protect health, the environment and national interests in case of an accident or of nuclear terrorism. However, in an early phase, the response plans need to be fully harmonized to ensure that every operational level knows its own responsibility and the responsibilities of others. Our intention is to implement the IAEA standard 'Preparedness and Response for a Nuclear or Radiological Emergency'. We believe this will simplify national and international communication and also simplify crisis management if an accident occurs. In revising the national plans, and also the planning basis at regional and local level, as well as the planning basis for response to accidents at national nuclear facilities and in connection with the arrival of nuclear submarines in Norwegian harbours, we have seen the need to make national adjustments to the international standards. In addition to the standard, there exist several other processes and routines for reporting different kinds of incidents. We have seen a need to coordinate this internally at the competent authority to simplify the routines. This paper will focus on the challenges we have met, our national solutions and some suggestions for simplification. National adjustments to the international standard. - Firstly, the threat categorization needs to be adjusted. First of all, we do not have nuclear power plants in Norway. In the aftermath of 11 September 2001 we have also focused more on the potential for nuclear terrorism. Nuclear terrorism is unlikely but puts up some new requirements in the

  3. Improving the Methods for Accounting the Coverages of Payments to Employees

    Directory of Open Access Journals (Sweden)

    Zhurakovska Iryna V.

    2017-03-01

    Full Text Available The article is aimed at exploring the theoretical and practical problems of accounting the coverages of payments to employees and developing on this basis ways of addressing them. An analysis of both the international and the national accounting standards, practices of domestic enterprises, as well as scientific works of scientists, has helped to identify the problematic issues of accounting the coverages of payments to employees, including: ignoring the disclosure in accounting and reporting, absence of an adequate documentary support, complexity of the calculation methods, etc. The authors have suggested ways to improve accounting of payments to employees: documentation of coverages through the development of a Statement of the accrued coverages, simplification of calculation of payments to employees together with the related reflecting in the analytical accounting, disclosure in the accounting policy, and so forth. Such decisions would improve accounting the coverages of payments to employees, increase the frequency of applying such coverages in enterprises and their disclosure in the financial statements.

  4. Improved Frequency Fluctuation Model for Spectral Line Shape Calculations in Fusion Plasmas

    International Nuclear Information System (INIS)

    Ferri, S.; Calisti, A.; Mosse, C.; Talin, B.; Lisitsa, V.

    2010-01-01

    A very fast method to calculate spectral line shapes emitted by plasmas, accounting for charged-particle dynamics and the effects of an external magnetic field, is proposed. This method relies on a new formulation of the Frequency Fluctuation Model (FFM), which yields an expression of the dynamic line profile as a functional of the static distribution function of frequencies. This highly efficient formalism, not limited to hydrogen-like systems, allows the calculation of pure Stark and Stark-Zeeman line shapes for a wide range of density, temperature and magnetic field values, which is of importance in plasma physics and astrophysics. Various applications of this method are presented for conditions related to fusion plasmas.

  5. Reliability calculations

    International Nuclear Information System (INIS)

    Petersen, K.E.

    1986-03-01

    Risk and reliability analysis is increasingly being used in evaluations of plant safety and plant reliability. The analysis can be performed either during the design process or during operation, with the purpose of improving the safety or the reliability. Due to plant complexity and to safety and availability requirements, sophisticated tools that are flexible and efficient are needed. Such tools have been developed over the last 20 years and have to be continuously refined to meet the growing requirements. Two different areas of application were analysed. In structural reliability, probabilistic approaches have been introduced in some cases for the calculation of the reliability of structures or components. A new computer program has been developed based upon numerical integration in several variables. In systems reliability, Monte Carlo simulation programs are used, especially in the analysis of very complex systems. In order to increase the applicability of the programs, variance reduction techniques can be applied to speed up the calculation process. Variance reduction techniques have been studied and procedures for the implementation of importance sampling are suggested. (author)
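    The importance-sampling idea mentioned above can be illustrated with a minimal example: estimating a small failure probability P(X > a) for a standard normal load. This is a generic sketch of the technique, not the report's own program, and the threshold and sample sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
a = 4.0                      # failure threshold for a standard normal "load"
n = 100_000

# Crude Monte Carlo: almost no samples fall in the rare failure region.
x = rng.standard_normal(n)
p_crude = np.mean(x > a)

# Importance sampling: draw from a normal shifted into the failure region
# and reweight each sample by the likelihood ratio f(y)/g(y).
y = rng.normal(loc=a, scale=1.0, size=n)
weights = np.exp(-0.5 * y**2) / np.exp(-0.5 * (y - a) ** 2)   # N(0,1) density over N(a,1) density
p_is = np.mean((y > a) * weights)

print(p_crude, p_is)         # the importance-sampling estimate is far less noisy
```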

  6. SU-E-T-552: Monte Carlo Calculation of Correction Factors for a Free-Air Ionization Chamber in Support of a National Air-Kerma Standard for Electronic Brachytherapy

    Energy Technology Data Exchange (ETDEWEB)

    Mille, M; Bergstrom, P [National Institute of Standards and Technology, Gaithersburg, MD (United States)

    2015-06-15

    Purpose: To use Monte Carlo radiation transport methods to calculate correction factors for a free-air ionization chamber in support of a national air-kerma standard for low-energy, miniature x-ray sources used for electronic brachytherapy (eBx). Methods: The NIST is establishing a calibration service for well-type ionization chambers used to characterize the strength of eBx sources prior to clinical use. The calibration approach involves establishing the well-chamber’s response to an eBx source whose air-kerma rate at a 50 cm distance is determined through a primary measurement performed using the Lamperti free-air ionization chamber. However, the free-air chamber measurements of charge or current can only be related to the reference air-kerma standard after applying several corrections, some of which are best determined via Monte Carlo simulation. To this end, a detailed geometric model of the Lamperti chamber was developed in the EGSnrc code based on the engineering drawings of the instrument. The egs-fac user code in EGSnrc was then used to calculate energy-dependent correction factors which account for missing or undesired ionization arising from effects such as: (1) attenuation and scatter of the x-rays in air; (2) primary electrons escaping the charge collection region; (3) lack of charged particle equilibrium; (4) atomic fluorescence and bremsstrahlung radiation. Results: Energy-dependent correction factors were calculated assuming a monoenergetic point source with the photon energy ranging from 2 keV to 60 keV in 2 keV increments. Sufficient photon histories were simulated so that the Monte Carlo statistical uncertainty of the correction factors was less than 0.01%. The correction factors for a specific eBx source will be determined by integrating these tabulated results over its measured x-ray spectrum. Conclusion: The correction factors calculated in this work are important for establishing a national standard for eBx which will help ensure that dose
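    The final step described (integrating the tabulated, energy-dependent correction factors over a measured eBx x-ray spectrum) amounts to a spectrum-weighted average. The sketch below uses made-up tabulated factors and a made-up spectrum purely to show the mechanics; whether the weighting is by fluence or by kerma contribution depends on the particular correction.

```python
import numpy as np

# Tabulated energy grid (keV) and correction factors from Monte Carlo runs
# (values below are placeholders, not the NIST results).
e_tab = np.arange(2.0, 62.0, 2.0)
k_tab = 1.0 + 0.002 * (e_tab - 30.0) ** 2 / 100.0      # made-up smooth energy dependence

# Measured eBx spectrum on a uniform energy grid: photon fluence per bin (made up).
e_meas = np.linspace(5.0, 50.0, 200)
phi = np.exp(-0.5 * ((e_meas - 30.0) / 8.0) ** 2)

# Spectrum-weighted correction factor: interpolate the table onto the measured
# grid and take the weighted average over the spectrum.
k_interp = np.interp(e_meas, e_tab, k_tab)
k_eff = np.sum(k_interp * phi) / np.sum(phi)
print(k_eff)
```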

  7. Calculations of optical rotation: Influence of molecular structure

    Directory of Open Access Journals (Sweden)

    Yu Jia

    2012-01-01

    Full Text Available Ab initio Hartree-Fock (HF) and Density Functional Theory (DFT) methods were used to calculate the optical rotation of 26 chiral compounds. The effects of the level of theory and basis sets used for the calculation, and the influence of solvents on the geometry and on the calculated optical rotation values, are all discussed. The polarizable continuum model, included in the calculation, did not improve the accuracy effectively, but it was superior to γs. The optical rotation of five- or six-membered cyclic compounds has been calculated, and 17 pyrrolidine or piperidine derivatives calculated by the HF and DFT methods gave acceptable predictions. The nitrogen atom affects the calculation results dramatically, and it must be present in the molecular structure in order to obtain an accurate computational result; namely, when the nitrogen atom was substituted by an oxygen atom in the ring, the calculation result deteriorated.

  8. Innovating for quality and value: Utilizing national quality improvement programs to identify opportunities for responsible surgical innovation.

    Science.gov (United States)

    Woo, Russell K; Skarsgard, Erik D

    2015-06-01

    Innovation in surgical techniques, technology, and care processes are essential for improving the care and outcomes of surgical patients, including children. The time and cost associated with surgical innovation can be significant, and unless it leads to improvements in outcome at equivalent or lower costs, it adds little or no value from the perspective of the patients, and decreases the overall resources available to our already financially constrained healthcare system. The emergence of a safety and quality mandate in surgery, and the development of the American College of Surgeons National Surgical Quality Improvement Program (NSQIP) allow needs-based surgical care innovation which leads to value-based improvement in care. In addition to general and procedure-specific clinical outcomes, surgeons should consider the measurement of quality from the patients' perspective. To this end, the integration of validated Patient Reported Outcome Measures (PROMs) into actionable, benchmarked institutional outcomes reporting has the potential to facilitate quality improvement in process, treatment and technology that optimizes value for our patients and health system. Copyright © 2015 Elsevier Inc. All rights reserved.

  9. Cost Calculation Model for Logistics Service Providers

    Directory of Open Access Journals (Sweden)

    Zoltán Bokor

    2012-11-01

    Full Text Available The exact calculation of logistics costs has become a real challenge in logistics and supply chain management. It is essential to gain reliable and accurate costing information to attain efficient resource allocation within the logistics service provider companies. Traditional costing approaches, however, may not be sufficient to reach this aim in case of complex and heterogeneous logistics service structures. So this paper intends to explore the ways of improving the cost calculation regimes of logistics service providers and show how to adopt the multi-level full cost allocation technique in logistics practice. After determining the methodological framework, a sample cost calculation scheme is developed and tested by using estimated input data. Based on the theoretical findings and the experiences of the pilot project it can be concluded that the improved costing model contributes to making logistics costing more accurate and transparent. Moreover, the relations between costs and performances also become more visible, which enhances the effectiveness of logistics planning and controlling significantly
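    The multi-level full cost allocation idea can be illustrated structurally: indirect cost pools are distributed to services according to allocation drivers and added to direct costs before unit costs are formed. The sketch below is only an illustration of that structure with made-up figures, not the paper's costing scheme.

```python
# Minimal sketch of a multi-level full cost allocation (illustrative structure only):
# indirect cost pools are distributed to services in proportion to allocation
# drivers, then added to direct costs, and divided by performance volumes.
direct_costs = {"warehousing": 120_000.0, "transport": 200_000.0, "packing": 40_000.0}

# (cost pool total, driver shares per service) -- all numbers are made up
indirect_pools = [
    (60_000.0, {"warehousing": 0.5, "transport": 0.3, "packing": 0.2}),   # e.g. IT systems
    (30_000.0, {"warehousing": 0.2, "transport": 0.6, "packing": 0.2}),   # e.g. administration
]

full_costs = dict(direct_costs)
for pool_total, shares in indirect_pools:
    for service, share in shares.items():
        full_costs[service] += pool_total * share

volumes = {"warehousing": 10_000, "transport": 25_000, "packing": 8_000}  # performance units
unit_costs = {s: full_costs[s] / volumes[s] for s in full_costs}
print(unit_costs)
```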

  10. Improving Battery Reactor Core Design Using Optimization Method

    International Nuclear Information System (INIS)

    Son, Hyung M.; Suh, Kune Y.

    2011-01-01

    The Battery Omnibus Reactor Integral System (BORIS) is a small modular fast reactor being designed at Seoul National University to satisfy various energy demands, to maintain inherent safety by liquid-metal coolant lead for natural circulation heat transport, and to improve power conversion efficiency with the Modular Optimal Balance Integral System (MOBIS) using the supercritical carbon dioxide as working fluid. This study is focused on developing the Neutronics Optimized Reactor Analysis (NORA) method that can quickly generate conceptual design of a battery reactor core by means of first principle calculations, which is part of the optimization process for reactor assembly design of BORIS

  11. Development of the code for filter calculation

    International Nuclear Information System (INIS)

    Gritzay, O.O.; Vakulenko, M.M.

    2012-01-01

    This paper describes a calculation method which is commonly used in the Neutron Physics Department to develop a new neutron filter or to improve an existing one. This calculation is the first step of the traditional filter development procedure. It allows easy selection of the qualitative and quantitative composition of a composite filter in order to obtain a filtered neutron beam with the given parameters.

  12. A quantitative calculation for software reliability evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Young-Jun; Lee, Jang-Soo [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2016-10-15

    To meet these regulatory requirements, the quality of software used in the nuclear safety field has been ensured through development, validation, safety analysis, and quality assurance activities throughout the entire life cycle, from the planning phase to the installation phase. A variety of activities, such as quality assurance, are also required to improve the quality of the software. However, such activities cannot by themselves guarantee that the quality is improved enough. Therefore, efforts to calculate the reliability of the software continue, aiming at a quantitative rather than a qualitative evaluation. In this paper, we propose a quantitative calculation method for software used for a specific operation of a digital controller in an NPP. By injecting random faults into the internal memory space of a developed controller and calculating its ability to detect the injected faults with diagnostic software, we can evaluate the software reliability of a digital controller in an NPP. We calculated the software reliability of the controller using a new method that differs from the traditional one: it calculates the fault detection coverage after injecting faults into the software memory space, rather than assessing activities throughout the life cycle process. The approach is differentiated by a new definition of the fault, by imitating software faults using the hardware, and by assigning consideration and weights to the injected faults.
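    The fault detection coverage at the heart of this approach is simply the fraction of injected faults that the diagnostic software notices. The toy sketch below (not the authors' tool) injects random bit-flips into a memory image whose diagnostic only checksums part of the memory, so the estimated coverage falls below one.

```python
import random

# Toy illustration of the fault-injection idea: inject random bit-flips into a
# memory image and count how many a diagnostic routine detects. The diagnostic
# here only checksums part of the memory, so coverage < 1 by construction.
random.seed(1)

MEM_SIZE = 1024
CHECKED = 768                                  # bytes actually covered by the diagnostic
memory = bytearray(MEM_SIZE)

def diagnostic_detects(image, reference):
    """Stand-in diagnostic: checksum over the monitored region only."""
    return sum(image[:CHECKED]) % 256 != reference

reference = sum(memory[:CHECKED]) % 256

detected, injected = 0, 5000
for _ in range(injected):
    corrupted = bytearray(memory)
    corrupted[random.randrange(MEM_SIZE)] ^= 1 << random.randrange(8)
    if diagnostic_detects(corrupted, reference):
        detected += 1

coverage = detected / injected                 # fault detection coverage estimate
print(f"fault detection coverage ~ {coverage:.3f}")   # close to CHECKED / MEM_SIZE
```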

  13. Improved the accuracy of 99mTc-MAG3 plasma clearance method. The problem of the calculated plasma volume and its modification

    International Nuclear Information System (INIS)

    Watanabe, Nami; Komatani, Akio; Yamaguchi, Koichi; Takahashi, Kazuei

    1998-01-01

    The 99mTc-MAG3 plasma clearance method (MPC method), reported by Oriuchi et al., is a simple and useful count-based gamma camera method for calculating the 99mTc-MAG3 plasma clearance (CLMAG). However, a discrepancy was noted between the CLMAG calculated by the MPC method (MPC-CLMAG) and the tubular extraction rate (TER) calculated by Russell's single-sample clearance determination (Russell-TER). The calculated plasma volume is assumed to be the cause. Since the plasma volume is reported to have a linear correlation with body surface area, Dissmann's formula was applied to calculate the plasma volume. Dissmann's formula was then replaced by Ogawa's formula in the MPC method, and the procedure was called the modified MPC method. The CLMAG was obtained using the MPC method and the modified MPC method, and the TER was obtained using Russell's method, in 95 patients with urological disorders. The MPC-CLMAG and the modified MPC-CLMAG were then compared with the Russell-TER. Comparison of the MPC-CLMAG with the Russell-TER demonstrated a correlation coefficient of 0.82, but a dissociation of the slopes of the regression lines was found between males and females. The modified MPC-CLMAG improved the correlation coefficient to 0.92 and diminished the dissociation of the slopes of the regression lines between males and females. We verified that the dissociation was due to the plasma volume calculated by Ogawa's formula. Ogawa's formula includes hematocrit, body weight, body height and different coefficients for gender. The plasma volumes calculated by Ogawa's formula were lower in males and higher in females than those calculated by Dissmann's formula, and a marked discrepancy in the plasma volume was observed in patients with a body surface area below 0.5 m2. So the MPC method might become more accurate by substituting Ogawa's formula for Dissmann's formula, resulting in a method that is applicable to both males and females, children and adults, in clinical use. (author)

  14. The power of the National Surgical Quality Improvement Program--achieving a zero pneumonia rate in general surgery patients.

    Science.gov (United States)

    Fuchshuber, Pascal R; Greif, William; Tidwell, Chantal R; Klemm, Michael S; Frydel, Cheryl; Wali, Abdul; Rosas, Efren; Clopp, Molly P

    2012-01-01

    The National Surgical Quality Improvement Program (NSQIP) of the American College of Surgeons provides risk-adjusted surgical outcome measures for participating hospitals that can be used for performance improvement of surgical mortality and morbidity. A surgical clinical nurse reviewer collects 135 clinical variables including preoperative risk factors, intraoperative variables, and 30-day postoperative mortality and morbidity outcomes for patients undergoing major surgical procedures. A report on mortality and complications is prepared twice a year. This article summarizes briefly the history of NSQIP and how its report on surgical outcomes can be used for performance improvement within a hospital system. In particular, it describes how to drive performance improvement with NSQIP data using the example of postoperative respiratory complications--a major factor of postoperative mortality. In addition, this article explains the benefit of a collaborative of several participating NSQIP hospitals and describes how to develop a "playbook" on the basis of an outcome improvement project.

  15. Radionuclide inventory calculation in VVER and BWR reactor

    International Nuclear Information System (INIS)

    Bouhaddane, A.; Farkas, F.; Slugen, V.; Ackermann, L.; Schienbein, M.

    2014-01-01

    The paper presents different aspects of radionuclide inventory determination. Precise determination of the neutron flux distribution, presented for a BWR reactor, is vital for the activation calculations. The precision can be improved by utilizing variance reduction methods such as importance treatment, weight windows, etc. A direct calculation of the radionuclide inventory via a Monte Carlo code is presented for a VVER reactor. The burn-up option utilized in this calculation appears to be appropriate for reactor internal components; however, it will probably not be effective outside the reactor core. Further calculations in this area are required to support these findings. (authors)

  16. 24 CFR 203.281 - Calculation of one-time MIP.

    Science.gov (United States)

    2010-04-01

    ... HOUSING AND URBAN DEVELOPMENT MORTGAGE AND LOAN INSURANCE PROGRAMS UNDER NATIONAL HOUSING ACT AND OTHER AUTHORITIES SINGLE FAMILY MORTGAGE INSURANCE Contract Rights and Obligations Mortgage Insurance Premiums-One-Time Payment § 203.281 Calculation of one-time MIP. (a) The applicable premium percentage determined...

  17. Calculation of displacement and helium production at the LAMPF irradiation facility

    International Nuclear Information System (INIS)

    Wechsler, M.S.; Davidson, D.R.; Sommer, W.F.; Greenwood, L.R.

    1985-01-01

    Differential and total displacement and helium-production rates are calculated for copper irradiated by spallation neutrons and 760-MeV protons at LAMPF. The calculations are performed using the SPECTER and VNMTC computer codes, the latter being specially designed for spallation radiation-damage calculations. For comparison, similar SPECTER calculations are also described for the irradiation of copper in the Experimental Breeder Reactor (EBR-II) at Argonne National Laboratory-West in Idaho, and in the Rotating Target Neutron Source (RTNS-II) at Lawrence Livermore Laboratory. The neutron energy spectra for LAMPF, EBR-II, and RTNS-II and the displacement and helium-production cross sections are shown.

  18. Improvement of some ornamental plants by induced somatic mutations at National Botanical Research Institute

    International Nuclear Information System (INIS)

    Gupta, M.N.

    1980-01-01

    Research work on the improvement of some ornamental plants by induced somatic mutations has been in progress at the National Botanical Research Institute, Lucknow, since 1964. The methods of treatment with gamma rays and of detection, isolation and multiplication of induced somatic mutations are given for Bougainvillea, Chrysanthemum, perennial Portulaca, rose and tuberose. During the last 15 years, a total of 38 new cultivars of different ornamentals evolved by gamma-induced somatic mutations have been released. They include Bougainvillea 1; Chrysanthemum 28; perennial Portulaca 6; rose 1; and tuberose 2. Descriptions of the original cultivars and their gamma-induced mutants are given along with other pertinent details. (author)

  19. Development and Application of a Numerical Framework for Improving Building Foundation Heat Transfer Calculations

    Science.gov (United States)

    Kruis, Nathanael J. F.

    Heat transfer from building foundations varies significantly in all three spatial dimensions and has important dynamic effects at all timescales, from one hour to several years. With the additional consideration of moisture transport, ground freezing, evapotranspiration, and other physical phenomena, the estimation of foundation heat transfer becomes increasingly sophisticated and computationally intensive to the point where accuracy must be compromised for reasonable computation time. The tools currently available to calculate foundation heat transfer are often either too limited in their capabilities to draw meaningful conclusions or too sophisticated to use in common practices. This work presents Kiva, a new foundation heat transfer computational framework. Kiva provides a flexible environment for testing different numerical schemes, initialization methods, spatial and temporal discretizations, and geometric approximations. Comparisons within this framework provide insight into the balance of computation speed and accuracy relative to highly detailed reference solutions. The accuracy and computational performance of six finite difference numerical schemes are verified against established IEA BESTEST test cases for slab-on-grade heat conduction. Of the schemes tested, the Alternating Direction Implicit (ADI) scheme demonstrates the best balance between accuracy, performance, and numerical stability. Kiva features four approaches of initializing soil temperatures for an annual simulation. A new accelerated initialization approach is shown to significantly reduce the required years of presimulation. Methods of approximating three-dimensional heat transfer within a representative two-dimensional context further improve computational performance. A new approximation called the boundary layer adjustment method is shown to improve accuracy over other established methods with a negligible increase in computation time. This method accounts for the reduced heat transfer

  20. How to be Brilliant at Using a Calculator

    CERN Document Server

    Webber, Beryl

    2010-01-01

    Contains 40 worksheets designed to improve pupils' understanding of numbers, fractions, percentages, algebra and data handling. They will learn about: the keys of a calculator; how to do addition, subtraction, multiplication and division; how to check their answer approximately in their head; the game of secret numbers; calculator logic; square numbers and number patterns; money.

  1. Opacity calculations for laser plasma applications

    International Nuclear Information System (INIS)

    Magee, N.H. Jr.

    1986-01-01

    The Los Alamos LTE light element detailed configuration opacity code (LEDCOP) has been revised to provide more accurate absorption coefficients and group means for modern radiation-hydrodynamic codes. The new group means will be especially useful for computing the transport of thermal radiation from laser deposition. The principal improvement is the inclusion of a complete set of accurate and internally consistent LS term energies and oscillator strengths in both the EOS and absorption coefficients. Selected energies and oscillator strengths were calculated from a Hartree-Fock code, then fitted by a quantum defect method. This allowed transitions at all wavelengths to be treated consistently and accurately instead of being limited to wavelength regions covered by experimental observations or isolated theoretical calculations. A second improvement is the use of more accurate photoionization cross sections for excited as well as ground state configurations. These cross sections are now more consistent with the bound-bound oscillator strengths, leading to a smooth transition across the continuum limit. Results will be presented showing the agreement of the LS term energies and oscillator strengths with observed values. The new absorption coefficients will be compared with previous calculations. 5 refs., 9 figs., 1 tab

  2. A new method to calculate permeability of gob for air leakage calculations and for improvements in methane control

    Energy Technology Data Exchange (ETDEWEB)

    Karacan, C.O. [National Inst. for Occupational Safety and Health, Pittsburgh, PA (United States). Office of Mine Safety and Health Research

    2010-07-01

    Although longwall underground mining can maximize coal production, it causes large scale disturbances of the surrounding rock mass due to fracturing and caving of the mine roof as the mine face advances. The porosity and permeability of the longwall gob can affect the methane and air flow patterns considerably. Since methane is a major hazard in underground coal mining operations, extensive methane control techniques are used to supplement the existing mine ventilation system, such as gob gas ventholes (GGV). However, the gob is rarely accessible for performing direct measurements of porosity and permeability. Therefore, this study presented a fractal approach for calculating the porosity and permeability from the size distribution of broken rock material in the gob, which can be determined from image analyses. The fractal approach constructs flow equations and fractal crushing equations for granular materials to predict porosity for a completely fragmented porous medium. The virtual fragmented fractal porous medium is exposed to various uniaxial stresses to simulate gob compaction and porosity and permeability changes during this process. It was concluded that the use of this fractal approach will result in better predictions regarding the flow amount and flow patterns in the gob, and facilitate leakage calculations and methane control projections. 29 refs., 4 tabs., 5 figs.

  3. National Clinical Skills Competition: an effective simulation-based method to improve undergraduate medical education in China.

    Science.gov (United States)

    Jiang, Guanchao; Chen, Hong; Wang, Qiming; Chi, Baorong; He, Qingnan; Xiao, Haipeng; Zhou, Qinghuan; Liu, Jing; Wang, Shan

    2016-01-01

    The National Clinical Skills Competition has been held in China for 5 consecutive years since 2010 to promote undergraduate education reform and improve the teaching quality. The effects of the simulation-based competition will be analyzed in this study. Participation in the competitions and the compilation of the questions used in the competition finals are summarized, and the influence and guidance quality are further analyzed. Through the nationwide distribution of questionnaires in medical colleges, the effects of the simulation-based competition on promoting undergraduate medical education reform were evaluated. The results show that approximately 450 students from more than 110 colleges (accounting for 81% of colleges providing undergraduate clinical medical education in China) participated in the competition each year. The knowledge, skills, and attitudes were comprehensively evaluated by simulation-based assessment. Eight hundred and eighty copies of the questionnaires were distributed to 110 participating medical schools in 2015. In total, 752 valid responses were received across 95 schools. The majority of the interviewees agreed or strongly agreed that competition promoted the adoption of advanced educational principles (76.8%), updated the curriculum model and instructional methods (79.8%), strengthened faculty development (84.0%), improved educational resources (82.1%), and benefited all students (53.4%). The National Clinical Skills Competition is widely accepted in China. It has effectively promoted the reform and development of undergraduate medical education in China.

  4. Uncertainty calculations made easier

    International Nuclear Information System (INIS)

    Hogenbirk, A.

    1994-07-01

    The results are presented of a neutron cross section sensitivity/uncertainty analysis performed in a complicated 2D model of the NET shielding blanket design inside the ITER torus design, surrounded by the cryostat/biological shield as planned for ITER. The calculations were performed with a code system developed at ECN Petten, with which sensitivity/uncertainty calculations become relatively simple. In order to check the deterministic neutron transport calculations (performed with DORT), calculations were also performed with the Monte Carlo code MCNP. Care was taken to model the 2.0 cm wide gaps between two blanket segments, as the neutron flux behind the vacuum vessel is largely determined by neutrons streaming through these gaps. The resulting neutron flux spectra are in excellent agreement up to the end of the cryostat. It is noted that at this position the attenuation of the neutron flux is about 11 orders of magnitude. The uncertainty in the energy integrated flux at the beginning of the vacuum vessel and at the beginning of the cryostat was determined in the calculations. The uncertainty appears to be strongly dependent on the exact geometry: if the gaps are filled with stainless steel, the neutron spectrum changes strongly, which results in an uncertainty of 70% in the energy integrated flux at the beginning of the cryostat in the no-gap-geometry, compared to an uncertainty of only 5% in the gap-geometry. Therefore, it is essential to take into account the exact geometry in sensitivity/uncertainty calculations. Furthermore, this study shows that an improvement of the covariance data is urgently needed in order to obtain reliable estimates of the uncertainties in response parameters in neutron transport calculations. (orig./GL)
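    A standard way to propagate cross-section covariance data to a response, consistent with the kind of sensitivity/uncertainty analysis described here (though not the ECN code itself), is the first-order "sandwich rule"; the numbers below are placeholders.

```python
import numpy as np

# First-order "sandwich rule" for propagating nuclear data covariances to a
# response R (e.g. an energy-integrated flux); a generic sketch, not the ECN code.
# S holds relative sensitivities (dR/R per dSigma/Sigma) in each energy group,
# C is the relative covariance matrix of the cross sections (made-up numbers).
S = np.array([0.10, 0.25, 0.40, 0.15])
C = np.array([[0.04, 0.01, 0.00, 0.00],
              [0.01, 0.09, 0.02, 0.00],
              [0.00, 0.02, 0.16, 0.03],
              [0.00, 0.00, 0.03, 0.25]])

rel_var = S @ C @ S                         # (dR/R)^2
print(f"relative uncertainty in R = {np.sqrt(rel_var):.1%}")
```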

  5. Calculational approach to ionization spectrometer design

    International Nuclear Information System (INIS)

    Gabriel, T.A.

    1974-01-01

    Many factors contribute to the design and overall performance of an ionization spectrometer. These factors include the conditions under which the spectrometer is to be used, the required performance, the development of the hadronic and electromagnetic cascades, leakage and binding energies, saturation effects of densely ionizing particles, nonuniform light collection, sampling fluctuations, etc. The calculational procedures developed at Oak Ridge National Laboratory that have been applied to many spectrometer designs and that include many of the influencing factors in spectrometer design are discussed. The incident-particle types which can be considered with some generality are protons, neutrons, pions, muons, electrons, positrons, and gamma rays. Charged kaons can also be considered but with less generality. The incident-particle energy range can extend into the hundreds of GeV range. The calculations have been verified by comparison with experimental data but only up to approximately 30 GeV. Some comparisons with experimental data are also discussed and presented so that the flexibility of the calculational methods can be demonstrated. (U.S.)

  6. Travel Time to Hospital for Childbirth: Comparing Calculated Versus Reported Travel Times in France.

    Science.gov (United States)

    Pilkington, Hugo; Prunet, Caroline; Blondel, Béatrice; Charreire, Hélène; Combier, Evelyne; Le Vaillant, Marc; Amat-Roze, Jeanne-Marie; Zeitlin, Jennifer

    2018-01-01

    Objectives Timely access to health care is critical in obstetrics. Yet obtaining reliable estimates of travel times to hospital for childbirth poses methodological challenges. We compared two measures of travel time, self-reported and calculated, to assess concordance and to identify determinants of long travel time to hospital for childbirth. Methods Data came from the 2010 French National Perinatal Survey, a national representative sample of births (N = 14 681). We compared both travel time measures by maternal, maternity unit and geographic characteristics in rural, peri-urban and urban areas. Logistic regression models were used to study factors associated with reported and calculated times ≥30 min. Cohen's kappa coefficients were also calculated to estimate the agreement between reported and calculated times according to women's characteristics. Results In urban areas, the proportion of women with travel times ≥30 min was higher when reported rather than calculated times were used (11.0 vs. 3.6%). Longer reported times were associated with non-French nationality [adjusted odds ratio (aOR) 1.3 (95% CI 1.0-1.7)] and inadequate prenatal care [aOR 1.5 (95% CI 1.2-2.0)], but not for calculated times. Concordance between the two measures was higher in peri-urban and rural areas (52.4 vs. 52.3% for rural areas). Delivery in a specialised level 2 or 3 maternity unit was a principal determinant of long reported and measured times in peri-urban and rural areas. Conclusions for Practice The level of agreement between reported and calculated times varies according to geographic context. Poor measurement of travel time in urban areas may mask problems in accessibility.

  7. Some approximate calculations in SU2 lattice mean field theory

    International Nuclear Information System (INIS)

    Hari Dass, N.D.; Lauwers, P.G.

    1981-12-01

    Approximate calculations are performed for small Wilson loops of SU2 lattice gauge theory in the mean field approximation. Reasonable agreement is found with Monte Carlo data. Ways of improving these calculations are discussed. (Auth.)

  8. Adaptation of GRS calculation codes for Soviet reactors

    International Nuclear Information System (INIS)

    Langenbuch, S.; Petri, A.; Steinborn, J.; Stenbok, I.A.; Suslow, A.I.

    1994-01-01

    The use of ATHLET for incident calculations of WWER plants has been tested and verified in numerous calculations. Further adaptation may be needed for the WWER 1000 plants. Coupling ATHLET with the 3D nuclear model BIPR-8 for WWER cores clearly improves studies of the influence of neutron kinetics. In the case of RBMK reactors, ATHLET calculations show that typical incidents in the complex RBMK reactors can be calculated, even though the verification still has to be worked on. Results of the 3D core model QUABOX/CUBBOX-HYCA show good correlation of calculated and measured values in reactor plants. Calculations carried out to date were used to check essential parameters influencing RBMK core behaviour, especially the dependence of the effective void reactivity on the number of control rods. (orig./HP) [de

  9. Development and application of advanced methods for electronic structure calculations

    DEFF Research Database (Denmark)

    Schmidt, Per Simmendefeldt

    . For this reason, part of this thesis relates to developing and applying a new method for constructing so-called norm-conserving PAW setups, that are applicable to GW calculations by using a genetic algorithm. The effect of applying the new setups significantly affects the absolute band positions, both for bulk......This thesis relates to improvements and applications of beyond-DFT methods for electronic structure calculations that are applied in computational material science. The improvements are of both technical and principal character. The well-known GW approximation is optimized for accurate calculations...... of electronic excitations in two-dimensional materials by exploiting exact limits of the screened Coulomb potential. This approach reduces the computational time by an order of magnitude, enabling large scale applications. The GW method is further improved by including so-called vertex corrections. This turns...

  10. Combination of physical exercise and adenosine improves accuracy of automatic calculation of stress LVEF in gated SPECT using QGS software

    International Nuclear Information System (INIS)

    Tehranipour, N.; AL-Nahhas, A.; Towey, D.

    2005-01-01

    Combining exercise and adenosine during the stress phase of myocardial perfusion imaging (MPI) is known to reduce adverse effects and improve image quality. The aim of this study was to assess whether it can also improve the automatic calculation of left ventricular ejection fraction (LVEF) by the QGS software package during the stress phase of Gated SPECT. One hundred patients who had stress Gated SPECT were retrospectively included in this study. Gated data of those who had adenosine only (50 patients = group A) were compared with those obtained in another group of 50 patients who had added bicycle exercise (Group B). All had an identical image acquisition protocol using 99mTc-tetrofosmin. Clinical adverse effects, changes in blood pressure (BP), heart rate (HR), and ECG were monitored. Visual assessment of subdiaphragmatic uptake and the accuracy of the automatic regions of interest (ROI's) drawn by the software were noted. Regions of interest that involved sub-diaphragmatic uptake and resulted in low LVEF were manually adjusted to include the left ventricle only, and the frequency of manual adjustment was noted. No significant difference was noted in age, sex, baseline BP and HR between groups A and B. Adverse effects occurred less often in group B compared to group A (12% vs. 24%, p = 0.118). Maximum HR and BP achieved during stress were significantly higher in group B compared to group A (p = 0.025 and p = 0.001, respectively). The number of patients who had faulty ROI's and low LVEF, and who needed manual adjustment of the ROI's, was higher in group A compared to group B (16% vs. 6%, p = 0.025). The values of LVEF showed significant improvement following manual adjustment of ROI's, increasing from a mean of 19.63 ± 15.96 to 62.13 ± 7.55 (p = 0.0001) and from 17.33 ± 9.5 to 49.67 ± 7.7 (p = 0.0014) in groups A and B respectively. The addition of exercise to adenosine significantly improves the automatic calculation of LVEF by QGS software during Gated SPECT and reduces the need

  11. Effectiveness of a computer based medication calculation education and testing programme for nurses.

    Science.gov (United States)

    Sherriff, Karen; Burston, Sarah; Wallis, Marianne

    2012-01-01

    The aim of the study was to evaluate the effect of an on-line, medication calculation education and testing programme. The outcome measures were medication calculation proficiency and self efficacy. This quasi-experimental study involved the administration of questionnaires before and after nurses completed annual medication calculation testing. The study was conducted in two hospitals in south-east Queensland, Australia, which provide a variety of clinical services including obstetrics, paediatrics, ambulatory, mental health, acute and critical care and community services. Participants were registered nurses (RNs) and enrolled nurses with a medication endorsement (EN(Med)) working as clinicians (n=107). Data pertaining to success rate, number of test attempts, self-efficacy, medication calculation error rates and nurses' satisfaction with the programme were collected. Medication calculation scores at first test attempt showed improvement following one year of access to the programme. Two of the self-efficacy subscales improved over time and nurses reported satisfaction with the online programme. Results of this study may facilitate the continuation and expansion of medication calculation and administration education to improve nursing knowledge, inform practise and directly improve patient safety. Crown Copyright © 2011. Published by Elsevier Ltd. All rights reserved.

  12. An improvement of the filter diagonalization-based post-processing method applied to finite difference time domain calculations of three-dimensional phononic band structures

    International Nuclear Information System (INIS)

    Su Xiaoxing; Zhang Chuanzeng; Ma Tianxue; Wang Yuesheng

    2012-01-01

    When three-dimensional (3D) phononic band structures are calculated by using the finite difference time domain (FDTD) method with a relatively small number of iterations, the results can be effectively improved by post-processing the FDTD time series (FDTD-TS) based on the filter diagonalization method (FDM), instead of the classical fast Fourier transform. In this paper, we propose a way to further improve the performance of the FDM-based post-processing method by introducing a relatively large number of observing points to record the FDTD-TS. To this end, the existing scheme of FDTD-TS preprocessing is modified. With the new preprocessing scheme, the processing efficiency of a single FDTD-TS can be improved significantly, and thus the entire post-processing method can have sufficiently high efficiency even when a relatively large number of observing points are used. The feasibility of the proposed method for improvement is verified by the numerical results.

  13. Reducing post-tonsillectomy haemorrhage rates through a quality improvement project using a Swedish National quality register: a case study.

    Science.gov (United States)

    Odhagen, Erik; Sunnergren, Ola; Söderman, Anne-Charlotte Hessén; Thor, Johan; Stalfors, Joacim

    2018-03-24

    Tonsillectomy (TE) is one of the most frequently performed ENT surgical procedures. Post-tonsillectomy haemorrhage (PTH) is a potentially life-threatening complication of TE. The National Tonsil Surgery Register in Sweden (NTSRS) has revealed wide variations in PTH rates among Swedish ENT centres. In 2013, the steering committee of the NTSRS, therefore, initiated a quality improvement project (QIP) to decrease the PTH incidence. The aim of the present study was to describe and evaluate the multicentre QIP initiated to decrease PTH rates. Six ENT centres, all with PTH rates above the Swedish average, participated in the 7-month quality improvement project. Each centre developed improvement plans describing the intended changes in clinical practice. The project's primary outcome variable was the PTH rate. Process indicators, such as surgical technique, were also documented. Data from the QIP centres were compared with a control group of 15 surgical centres in Sweden with similarly high PTH rates. Data from both groups for the 12 months prior to the start of the QIP were compared with data for the 12 months after the QIP. The QIP centres reduced the PTH rate from 12.7 to 7.1% from pre-QIP to follow-up; in the control group, the PTH rate remained unchanged. The QIP centres also exhibited positive changes in related key process indicators, i.e., increasing the use of cold techniques for dissection and haemostasis. The rates of PTH can be reduced with a QIP. A national quality register can be used not only to identify areas for improvement but also to evaluate the impact of subsequent improvement efforts and thereby guide professional development and enhance patient outcomes.

  14. National kidney dialysis and transplant registries in Latin America: how to implement and improve them

    Directory of Open Access Journals (Sweden)

    María Carlota González-Bedat

    2015-09-01

    Full Text Available The Strategic Plan of the Pan American Health Organization, 2014-2019, Championing Health: Sustainable Development and Equity, recognizes that "Chronic kidney disease, caused mainly by complications of diabetes and hypertension, has increased in the Region." This Plan includes the first concrete goal on chronic kidney disease: to achieve a prevalence rate for renal replacement therapy of at least 700 patients per million population by 2019. National dialysis and transplant registries (DTR) are a useful tool for epidemiological research, health care planning, and quality improvement. Their success depends on the quality of their data and quality control procedures. This article describes the current situation of national DTRs in the Region and the content of their information and health indicators, and it offers recommendations for creating and maintaining them. It points to their heterogeneity or absence in some countries, in line with the inequities that patients face in access to renal replacement therapy. The complete lack of information in Caribbean countries prevents their inclusion in this communication, which requires immediate attention.

  15. Calculator-Controlled Robots: Hands-On Mathematics and Science Discovery

    Science.gov (United States)

    Tuchscherer, Tyson

    2010-01-01

    The Calculator Controlled Robots activities are designed to engage students in hands-on inquiry-based missions. These activities address National science and technology standards, as well as specifically focusing on mathematics content and process standards. There are ten missions and three exploration extensions that provide activities for up to…

  16. A Novel Hydro-information System for Improving National Weather Service River Forecast System

    Science.gov (United States)

    Nan, Z.; Wang, S.; Liang, X.; Adams, T. E.; Teng, W. L.; Liang, Y.

    2009-12-01

    A novel hydro-information system has been developed to improve the forecast accuracy of the NOAA National Weather Service River Forecast System (NWSRFS). An MKF-based (Multiscale Kalman Filter) spatial data assimilation framework, together with the NOAH land surface model, is employed in our system to assimilate satellite surface soil moisture data to yield improved evapotranspiration. The latter are then integrated into the distributed version of the NWSRFS to improve its forecasting skills, especially for droughts, but also for disaster management in general. Our system supports an automated flow into the NWSRFS of daily satellite surface soil moisture data, derived from the TRMM Microwave Imager (TMI) and Advanced Microwave Scanning Radiometer-Earth Observing System (AMSR-E), and the forcing information of the North American Land Data Assimilation System (NLDAS). All data are custom processed, archived, and supported by the NASA Goddard Earth Sciences Data Information and Services Center (GES DISC). An optional data fusing component is available in our system, which fuses NEXRAD Stage III precipitation data with the NLDAS precipitation data, using the MKF-based framework, to provide improved precipitation inputs. Our system employs a plug-in, structured framework and has a user-friendly, graphical interface, which can display, in real-time, the spatial distributions of assimilated state variables and other model-simulated information, as well as their behaviors in time series. The interface can also display watershed maps, as a result of the integration of the QGIS library into our system. Extendibility and flexibility of our system are achieved through the plug-in design and by an extensive use of XML-based configuration files. Furthermore, our system can be extended to support multiple land surface models and multiple data assimilation schemes, which would further increase its capabilities. Testing of the integration of the current system into the NWSRFS is
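    The system described uses a multiscale Kalman filter (MKF) framework with the NOAH model; the sketch below is only a schematic scalar Kalman update, showing in miniature how a modeled soil-moisture value is nudged toward a satellite retrieval according to their error variances. All numbers and names are illustrative, not values from the system.

```python
# Schematic scalar Kalman-filter update (not the MKF framework itself): blend a
# model soil-moisture estimate with a satellite retrieval according to their
# error variances. All numbers are illustrative.
def kalman_update(x_model, var_model, y_obs, var_obs):
    gain = var_model / (var_model + var_obs)        # Kalman gain
    x_updated = x_model + gain * (y_obs - x_model)  # analysis value
    var_updated = (1.0 - gain) * var_model          # analysis error variance
    return x_updated, var_updated

# model says 0.22 m3/m3 (less uncertain), a TMI/AMSR-E-style retrieval says 0.30 m3/m3 (noisier)
print(kalman_update(0.22, 0.0016, 0.30, 0.0036))
```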

  17. Improvement of neutronic calculations on a Masurca core using adaptive mesh refinement capabilities

    International Nuclear Information System (INIS)

    Fournier, D.; Archier, P.; Le Tellier, R.; Suteau, C.

    2011-01-01

    The simulation of 3D cores with homogenized assemblies in transport theory remains time and memory consuming for production calculations. With a multigroup discretization for the energy variable and a discrete ordinate method for the angle, a system of about 10^4 coupled hyperbolic transport equations has to be solved. For these equations, we intend to optimize the spatial discretization. In the framework of the SNATCH solver used in this study, the spatial problem is dealt with by using a structured hexahedral mesh and applying a Discontinuous Galerkin Finite Element Method (DGFEM). This paper shows the improvements due to the development of Adaptive Mesh Refinement (AMR) methods. As the SNATCH solver uses a hierarchical polynomial basis, p-refinement is possible but also h-refinement thanks to non-conforming capabilities. Besides, as the flux spatial behavior is highly dependent on the energy, we propose to adapt differently the spatial discretization according to the energy group. To avoid dealing with too many meshes, some energy groups are joined and share the same mesh. The different energy-dependent AMR strategies are compared to each other but also with the classical approach of a conforming and highly refined spatial mesh. This comparison is carried out on different quantities such as the multiplication factor, the flux or the current. The gain in time and memory is shown for 2D and 3D benchmarks coming from the ZONA2B experimental core configuration of the MASURCA mock-up at CEA Cadarache. (author)
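The energy-group-dependent refinement idea described in this record can be illustrated with a small sketch: cells are flagged for h-refinement group by group from a simple flux-jump indicator, and groups that share a mesh pool their flags. This is a schematic illustration only; the indicator, threshold, grouping, and all numbers below are assumptions, not the SNATCH/AMR implementation.

```python
import numpy as np

def refinement_flags(flux, threshold):
    """Flag cells whose relative flux jump to a neighbour exceeds `threshold`
    -- a crude h-refinement indicator."""
    jump = np.abs(np.diff(flux))
    scale = np.maximum(flux[:-1], flux[1:])
    indicator = jump / np.maximum(scale, 1e-30)
    flags = np.zeros(flux.size, dtype=bool)
    flags[:-1] |= indicator > threshold
    flags[1:] |= indicator > threshold
    return flags

# Toy multigroup flux on a 1D mesh: the fast group is smooth,
# the thermal group has a steep local gradient.
x = np.linspace(0.0, 10.0, 51)
flux_by_group = {
    "fast":    np.exp(-0.1 * x),
    "thermal": np.exp(-0.1 * x) + 0.5 * np.exp(-((x - 5.0) / 0.3) ** 2),
}

# Groups sharing a mesh pool their refinement flags (assumed grouping).
mesh_groups = {"mesh_A": ["fast"], "mesh_B": ["thermal"]}
for mesh, groups in mesh_groups.items():
    flags = np.zeros(x.size, dtype=bool)
    for g in groups:
        flags |= refinement_flags(flux_by_group[g], threshold=0.05)
    print(mesh, "cells flagged for refinement:", int(flags.sum()))
```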

  18. TTS-Polttopuu - cost calculation model for fuelwood

    International Nuclear Information System (INIS)

    Naett, H.; Ryynaenen, S.

    1999-01-01

    The TTS Institute's Forestry Department has developed a computer-based cost-calculation model, 'TTS-Polttopuu', for the calculation of unit costs and resource needs in harvesting systems for wood chips and split firewood. The model enables the user to determine the productivity and machine cost per operating hour for each working stage of the harvesting system. It also lets the user find out how changes in the productivity and cost bases of different harvesting chains influence the unit cost of the whole system. The harvesting chain includes the cutting of delimbed and non-delimbed fuelwood, forest haulage, road transportation, and chipping and chopping of longwood at the storage site. This stand-alone software was originally developed to serve research needs, but it also serves forestry and agricultural education, training and extension, as well as individual firewood producers. The system requirements for the model are at least a 486-level processor with the Windows 95/98 operating system, 16 MB of memory (RAM) and 5 MB of available hard-disk space. This development work was carried out in conjunction with the nation-wide BIOENERGY research programme. (orig.)
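As a rough illustration of the unit-cost arithmetic such a model performs, the sketch below combines an hourly machine cost and a productivity for each working stage of a harvesting chain into a cost per solid cubic metre, and sums the stages into the unit cost of the whole system. All stage names and figures are invented for the example and are not taken from TTS-Polttopuu.

```python
# Hypothetical harvesting chain: (stage, cost per operating hour [EUR/h],
# productivity [solid m3 per operating hour]).
chain = [
    ("cutting",        55.0,  4.0),
    ("forest haulage", 70.0,  8.0),
    ("chipping",      140.0, 20.0),
    ("road transport", 85.0, 12.0),
]

def unit_cost(stages):
    """Unit cost of the whole system is the sum of the stage costs per m3."""
    return sum(cost_per_hour / productivity for _, cost_per_hour, productivity in stages)

for name, cost_per_hour, productivity in chain:
    print(f"{name:15s} {cost_per_hour / productivity:6.2f} EUR/m3")
print(f"{'whole chain':15s} {unit_cost(chain):6.2f} EUR/m3")
```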

  19. Improved Spectral Calculations for Discrete Schrödinger Operators

    Science.gov (United States)

    Puelz, Charles

    This work details an O(n^2) algorithm for computing spectra of discrete Schrödinger operators with periodic potentials. Spectra of these objects enhance our understanding of fundamental aperiodic physical systems and contain rich theoretical structure of interest to the mathematical community. Previous work on the Harper model led to an O(n^2) algorithm relying on properties not satisfied by other aperiodic operators. Physicists working with the Fibonacci Hamiltonian, a popular quasicrystal model, have instead used a problematic dynamical map approach or a sluggish O(n^3) procedure for their calculations. The algorithm presented in this work, a blend of well-established eigenvalue/vector algorithms, provides researchers with a more robust computational tool of general utility. Application to the Fibonacci Hamiltonian in the sparsely studied intermediate coupling regime reveals structure in canonical coverings of the spectrum that will prove useful in motivating conjectures regarding band combinatorics and fractal dimensions.
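For orientation, a finite section of a discrete Schrödinger operator with the Fibonacci potential can be diagonalized directly with a dense eigensolver, as sketched below. This is the naive O(n^3) approach mentioned above, shown only to make the object concrete; it is not the O(n^2) algorithm developed in the work.

```python
import numpy as np

def fibonacci_hamiltonian(n, coupling):
    """Finite section of the discrete Schrodinger operator
    (H psi)_k = psi_{k+1} + psi_{k-1} + V_k psi_k, with the Fibonacci
    potential V_k = coupling * chi_[1-alpha, 1)(k*alpha mod 1)."""
    alpha = (np.sqrt(5.0) - 1.0) / 2.0
    k = np.arange(1, n + 1)
    potential = coupling * ((k * alpha) % 1.0 >= 1.0 - alpha).astype(float)
    return (np.diag(potential)
            + np.diag(np.ones(n - 1), 1)
            + np.diag(np.ones(n - 1), -1))

# Eigenvalues of a finite truncation approximate a covering of the spectrum.
energies = np.linalg.eigvalsh(fibonacci_hamiltonian(987, coupling=2.0))
print("approximate spectral range:", energies.min(), "to", energies.max())
```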

  20. Web Application for Actuarial Calculations for Insurance

    OpenAIRE

    Dobrev, Hristo; Kyurkchiev, Nikolay

    2013-01-01

    Report published in the Proceedings of the National Conference on "Education in the Information Society", Plovdiv, May 2013. During the last 10 years a growing interest has been indicated in the modernization of the vocational education of actuaries and in actuarial study programs consistent with global traditions and trends. A web application for insurance actuarial calculations is explored. Association for the Development of the Information Society, Institute of Mathematics and...

  1. Improvement of Cost Calculation in Constructions – Application of the Standard Cost Method

    Directory of Open Access Journals (Sweden)

    Adela Breuer

    2010-12-01

    Full Text Available Thanks to the analysis of several commercial companies carried out "in the field", we noted the need to change the method of cost calculation. Our motivation relates to the simplification of calculations and the reduction of the labour volume, but especially to the need to know in due time the deviations that occur and the causes that led to them. The importance of knowing the deviations in due time follows from the basic characteristics of construction work: it is performed over several budgetary years, which brings changes in prices and materials and the introduction of new technologies, and it takes place in the open air, so its execution is influenced by atmospheric conditions. The most important aspect of knowing the deviations, however, is the correct determination of expenses and their recording in the corresponding period, with a view to determining the result of the budgetary year. Our proposal for improving the method of cost calculation in constructions is the application of the standard cost method in the "single standard cost" variant.
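A minimal numeric sketch of the variance arithmetic behind a single-standard-cost approach: the deviation of actual from standard cost for one resource is split into a price variance and a quantity variance so it can be recorded in the period in which it arises. The figures are invented and the decomposition is the textbook one, not necessarily the exact scheme proposed by the authors.

```python
def material_variances(std_qty, std_price, actual_qty, actual_price):
    """Classic standard-cost decomposition for one resource in one period."""
    price_variance = (actual_price - std_price) * actual_qty
    quantity_variance = (actual_qty - std_qty) * std_price
    total_deviation = actual_qty * actual_price - std_qty * std_price
    return price_variance, quantity_variance, total_deviation

# Example: concrete for one period of a construction job (invented numbers).
pv, qv, total = material_variances(std_qty=120.0, std_price=90.0,
                                   actual_qty=130.0, actual_price=95.0)
print(f"price variance    : {pv:8.2f}")
print(f"quantity variance : {qv:8.2f}")
print(f"total deviation   : {total:8.2f}")  # equals price + quantity variance
```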

  2. Calculating the heat transfer coefficient of frame profiles with internal cavities

    DEFF Research Database (Denmark)

    Noyé, Peter Anders; Laustsen, Jacob Birck; Svendsen, Svend

    2004-01-01

    Determining the energy performance of windows requires detailed knowledge of the thermal properties of their different elements. A series of standards and guidelines exist in this area. The thermal properties of the frame can be determined either by detailed two-dimensional numerical methods ... The heat transfer coefficient is determined by two-dimensional numerical calculations and by measurements. Calculations are performed in Therm (LBNL (2001)), which is developed at Lawrence Berkeley National Laboratory, USA. The calculations are performed in accordance with the future European standards ... correspondence between measured and calculated values. Hence, when determining the heat transfer coefficient of frame profiles with internal cavities by calculations, it is necessary to apply a more detailed radiation exchange model than described in the prEN ISO 10077-2 standard. The ISO-standard offers ...

  3. Validation of calculational methods for nuclear criticality safety - approved 1975

    International Nuclear Information System (INIS)

    Anon.

    1977-01-01

    The American National Standard for Nuclear Criticality Safety in Operations with Fissionable Materials Outside Reactors, N16.1-1975, states in 4.2.5: In the absence of directly applicable experimental measurements, the limits may be derived from calculations made by a method shown to be valid by comparison with experimental data, provided sufficient allowances are made for uncertainties in the data and in the calculations. There are many methods of calculation which vary widely in basis and form. Each has its place in the broad spectrum of problems encountered in the nuclear criticality safety field; however, the general procedure to be followed in establishing validity is common to all. The standard states the requirements for establishing the validity and area(s) of applicability of any calculational method used in assessing nuclear criticality safety

  4. Improved spectral absorption coefficient grouping strategy of wide band k-distribution model used for calculation of infrared remote sensing signal of hot exhaust systems

    Science.gov (United States)

    Hu, Haiyang; Wang, Qiang

    2018-07-01

    A new strategy for grouping spectral absorption coefficients, which considers the influence of both temperature and species mole ratio inhomogeneities on the correlated-k characteristics of gas-mixture spectra, has been derived to match the calculation method for the spectral overlap parameter used in the multiscale multigroup wide-band k-distribution model. Compared with current grouping strategies, which consider only the influence of temperature inhomogeneity on the correlated-k characteristics of single-species spectra, the improvements in calculation accuracy resulting from the new strategy were evaluated using a series of 0D cases in which radiance in the 3-5-μm wave band emitted by the hot combustion gas of a hydrocarbon fuel is attenuated by an atmosphere with quite different temperatures and mole ratios of water vapor and carbon monoxide to carbon dioxide. Finally, evaluations are presented for the calculation of remote-sensing thermal images of a transonic hot jet exhausted from a chevron ejecting nozzle with a solid-wall cooling system.

  5. Still making progress to improve the hospital workplace environment? Results from the 2008 National Survey of Registered Nurses.

    Science.gov (United States)

    Buerhaus, Peter I; DesRoches, Catherine; Donelan, Karen; Hess, Robert

    2009-01-01

    Despite the majority of RNs perceiving a shortage of nurses, findings from the 2008 National Survey of RNs indicate the hospital workplace improved in several areas compared to a 2006 survey. Improvements included the time RNs spend with patients, the quality of nursing care, and a decreasing impact of the shortage on delays in nurses' responses to pages or calls, staff communication, patients' wait time for surgery, and the timeliness and efficiency of care. Areas in which the environment was perceived to have worsened included overtime hours, sexual harassment/hostile work environments, and physical violence. RNs hold mixed views about the consequences of reporting errors and mistakes, with a majority agreeing that reporting them had led to positive changes to prevent future errors but also that mistakes were held against them. Overall, results suggest that hospital managers can be reassured that their efforts to improve the workplace environment are having their intended effect but, at the same time, important areas for improvement remain.

  6. Positron collisions with acetylene calculated using the R-matrix with pseudo-states method

    Science.gov (United States)

    Zhang, Rui; Galiatsatos, Pavlos G.; Tennyson, Jonathan

    2011-10-01

    Eigenphase sums, total cross sections and differential cross sections are calculated for low-energy collisions of positrons with C2H2. The calculations demonstrate that the use of appropriate pseudo-state expansions very significantly improves the representation of this process giving both realistic eigenphases and cross sections. Differential cross sections are strongly forward peaked in agreement with the measurements. These calculations are computationally very demanding; even with improved procedures for matrix diagonalization, fully converged calculations are too expensive with current computer resources. Nonetheless, the calculations show clear evidence for the formation of a virtual state but no indication that acetylene actually binds a positron at its equilibrium geometry.

  7. Positron collisions with acetylene calculated using the R-matrix with pseudo-states method

    Energy Technology Data Exchange (ETDEWEB)

    Zhang Rui; Galiatsatos, Pavlos G; Tennyson, Jonathan, E-mail: j.tennyson@ucl.ac.uk [Department of Physics and Astronomy, University College London, Gower St., London WC1E 6BT (United Kingdom)

    2011-10-14

    Eigenphase sums, total cross sections and differential cross sections are calculated for low-energy collisions of positrons with C{sub 2}H{sub 2}. The calculations demonstrate that the use of appropriate pseudo-state expansions very significantly improves the representation of this process giving both realistic eigenphases and cross sections. Differential cross sections are strongly forward peaked in agreement with the measurements. These calculations are computationally very demanding; even with improved procedures for matrix diagonalization, fully converged calculations are too expensive with current computer resources. Nonetheless, the calculations show clear evidence for the formation of a virtual state but no indication that acetylene actually binds a positron at its equilibrium geometry.

  8. Modeling of the water clarification process and calculation of the reactor-clarifier to improve energy efficiency

    Science.gov (United States)

    Skolubovich, Yuriy; Skolubovich, Aleksandr; Voitov, Evgeniy; Soppa, Mikhail; Chirkunov, Yuriy

    2017-10-01

    The article considers current questions of the technological modeling and calculation of a new facility for the treatment of natural waters, the reactor-clarifier, at its optimal operating mode; the facility was developed at Novosibirsk State University of Architecture and Civil Engineering (SibSTRIN). A calculation technique based on well-known hydraulic relations is presented, and a calculation example for a structure using experimental data is considered. The maximum possible rate of the ascending flow of purified water was determined, based on a 24-hour clarification cycle. The fractional composition of the contact mass was determined for minimal expansion of the contact mass layer, which ensured the elimination of stagnant zones. The duration of the clarification cycle was refined from the technological modeling parameters by recalculating the maximum possible upward flow rate of clarified water, and the thickness of the contact mass layer was determined. Likewise, reactor-clarifiers can be calculated for any other clarification conditions.

  9. Improving Accuracy of Influenza-Associated Hospitalization Rate Estimates

    Science.gov (United States)

    Reed, Carrie; Kirley, Pam Daily; Aragon, Deborah; Meek, James; Farley, Monica M.; Ryan, Patricia; Collins, Jim; Lynfield, Ruth; Baumbach, Joan; Zansky, Shelley; Bennett, Nancy M.; Fowler, Brian; Thomas, Ann; Lindegren, Mary L.; Atkinson, Annette; Finelli, Lyn; Chaves, Sandra S.

    2015-01-01

    Diagnostic test sensitivity affects rate estimates for laboratory-confirmed influenza–associated hospitalizations. We used data from FluSurv-NET, a national population-based surveillance system for laboratory-confirmed influenza hospitalizations, to capture diagnostic test type by patient age and influenza season. We calculated observed rates by age group and adjusted rates by test sensitivity. Test sensitivity was lowest in adults >65 years of age. For all ages, reverse transcription PCR was the most sensitive test, and use increased from 65 years. After 2009, hospitalization rates adjusted by test sensitivity were ≈15% higher for children 65 years of age. Test sensitivity adjustments improve the accuracy of hospitalization rate estimates. PMID:26292017
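The adjustment described here amounts, in its simplest form, to dividing an observed laboratory-confirmed rate by the probability that a true case is detected. A minimal sketch with invented numbers (the published method also accounts for testing frequency and other factors):

```python
def adjust_rate(observed_rate, test_sensitivity, fraction_tested=1.0):
    """Correct an observed laboratory-confirmed rate for under-detection.
    Assumes missed cases come only from imperfect sensitivity and incomplete
    testing -- a simplification of the surveillance adjustment."""
    detection_probability = test_sensitivity * fraction_tested
    return observed_rate / detection_probability

# Invented example: 30 confirmed hospitalizations per 100,000 population,
# RT-PCR sensitivity 0.90, 80% of eligible patients tested.
print(adjust_rate(30.0, test_sensitivity=0.90, fraction_tested=0.80))  # ~41.7
```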

  10. The effect of an interactive e-drug calculations package on nursing students' drug calculation ability and self-efficacy.

    Science.gov (United States)

    McMullan, Miriam; Jones, Ray; Lea, Susan

    2011-06-01

    Nurses need to be competent and confident in performing drug calculations to ensure patient safety. The purpose of this study is to compare an interactive e-drug calculations package, developed using Cognitive Load Theory as its theoretical framework, with traditional handout learning support on nursing students' drug calculation ability, self-efficacy and support material satisfaction. A cluster randomised controlled trial comparing the e-package with traditional handout learning support was conducted with a September cohort (n=137) and a February cohort (n=92) of second year diploma nursing students. Students from each cohort were geographically dispersed over 3 or 4 independent sites. Students from each cohort were invited to participate, halfway through their second year, before and after a 12 week clinical practice placement. During their placement the intervention group received the e-drug calculations package while the control group received traditional 'handout' support material. Drug calculation ability and self-efficacy tests were given to the participants pre- and post-intervention. Participants were given the support material satisfaction scale post-intervention. Students in both cohorts randomised to e-learning were more able to perform drug calculations than those receiving the handout (September: mean 48.4% versus 34.7%, p=0.027; February: mean 47.6% versus 38.3%, p=0.024). February cohort students using the e-package were more confident in performing drug calculations than those students using handouts (self-efficacy mean 56.7% versus 45.8%, p=0.022). There was no difference in improved self-efficacy between intervention and control for students in the September cohort. Students who used the package were more satisfied with its use than the students who used the handout (mean 29.6 versus 26.5, p=0.001), particularly with regard to the package enhancing their learning (p=0.023), being an effective way to learn (p=0.005), providing practice and

  11. Loop Ileostomy Closure as an Overnight Procedure: Institutional Comparison With the National Surgical Quality Improvement Project Data Set.

    Science.gov (United States)

    Berger, Nicholas G; Chou, Raymond; Toy, Elliot S; Ludwig, Kirk A; Ridolfi, Timothy J; Peterson, Carrie Y

    2017-08-01

    Enhanced recovery pathways have decreased length of stay after colorectal surgery. Loop ileostomy closure remains a challenge, because patients experience high readmission rates, and validation of enhanced recovery pathways has not been demonstrated. This study examined a protocol whereby patients were discharged on the first postoperative day and instructed to advance their diet at home with close telephone follow-up. The hypothesis was that patients can be safely discharged the day after loop closure, leading to shorter length of stay without increased rates of readmission or complications. Patients undergoing loop ileostomy closure were queried from the American College of Surgeons National Surgical Quality Improvement Project and compared with a single institution (2012-2015). Length of stay, 30-day readmission, and 30-day morbidity data were analyzed. The study was conducted at a tertiary university department. The study includes 1602 patients: 1517 from the National Surgical Quality Improvement Project database and 85 from a single institution. Length of stay and readmission rates were measured. Median length of stay was less at the single institution compared with control (2 vs 4 d; p < 0.001). Thirty-day readmission (15.3% vs 10.4%; p = 0.15) and overall 30-day complications (15.3% vs 16.7%; p = 0.73) were similar between cohorts. Estimated adjusted length of stay was less in the single institution (2.93 vs 5.58 d; p < 0.0001). There was no difference in the odds of readmission (p = 0.22). The main limitations of this study include its retrospective nature and limitations of the National Surgical Quality Improvement Program database. Next-day discharge with protocoled diet advancement and telephone follow-up is acceptable after loop ileostomy closure. Patients can benefit from decreased length of stay without an increase in readmission or complications. This has the potential to change the practice of postoperative management of loop ileostomy closure, as

  12. Efficient pseudospectral methods for density functional calculations

    International Nuclear Information System (INIS)

    Murphy, R. B.; Cao, Y.; Beachy, M. D.; Ringnalda, M. N.; Friesner, R. A.

    2000-01-01

    Novel improvements of the pseudospectral method for assembling the Coulomb operator are discussed. These improvements consist of a fast atom-centered multipole method and a variation of the Head-Gordon J-engine analytic integral evaluation. The details of the methodology are discussed and performance evaluations presented for larger molecules within the context of DFT energy and gradient calculations. (c) 2000 American Institute of Physics

  13. National Clinical Skills Competition: an effective simulation-based method to improve undergraduate medical education in China

    Directory of Open Access Journals (Sweden)

    Guanchao Jiang

    2016-02-01

    Full Text Available Background: The National Clinical Skills Competition has been held in China for 5 consecutive years since 2010 to promote undergraduate education reform and improve the teaching quality. The effects of the simulation-based competition will be analyzed in this study. Methods: Participation in the competitions and the compilation of the questions used in the competition finals are summarized, and the influence and guidance quality are further analyzed. Through the nationwide distribution of questionnaires in medical colleges, the effects of the simulation-based competition on promoting undergraduate medical education reform were evaluated. Results: The results show that approximately 450 students from more than 110 colleges (accounting for 81% of colleges providing undergraduate clinical medical education in China participated in the competition each year. The knowledge, skills, and attitudes were comprehensively evaluated by simulation-based assessment. Eight hundred and eighty copies of the questionnaires were distributed to 110 participating medical schools in 2015. In total, 752 valid responses were received across 95 schools. The majority of the interviewees agreed or strongly agreed that competition promoted the adoption of advanced educational principles (76.8%, updated the curriculum model and instructional methods (79.8%, strengthened faculty development (84.0%, improved educational resources (82.1%, and benefited all students (53.4%. Conclusions: The National Clinical Skills Competition is widely accepted in China. It has effectively promoted the reform and development of undergraduate medical education in China.

  14. Calculations of the self-amplified spontaneous emission performance of a free-electron laser

    International Nuclear Information System (INIS)

    Dejus, R. J.

    1999-01-01

    The linear integral equation based computer code (RON: Roger Oleg Nikolai), which was recently developed at Argonne National Laboratory, was used to calculate the self-amplified spontaneous emission (SASE) performance of the free-electron laser (FEL) being built at Argonne. Signal growth calculations under different conditions are used for estimating tolerances of actual design parameters. The radiation characteristics are discussed, and calculations using an ideal undulator magnetic field and a real measured magnetic field will be compared and discussed

  15. Comparison of measured and calculated radiation doses in granite around emplacement holes in the spent-fuel test: Climax, Nevada Test Site

    International Nuclear Information System (INIS)

    van Konynenburg, R.A.

    1982-01-01

    Lawrence Livermore National Laboratory (LLNL) has emplaced eleven spent nuclear-reactor fuel assemblies in the Climax granite at the Nevada Test Site as part of the DOE Nevada Nuclear-Waste Storage Investigations. One of our objectives is to study radiation effects on the rock. The neutron and gamma-ray doses to the rock have been determined by MORSE-L Monte Carlo calculations and measurements using optical absorption and thermoluminescence dosimeters and metal foils. We compare the results to date. Generally, good agreement is found in the spatial and time dependence of the doses, but some of the absolute dose results appear to differ by more than the expected uncertainties. Although the agreement is judged to be adequate for radiation effects studies, suggestions for improving the precision of the calculations and measurements are made

  16. Resonance self-shielding calculation with regularized random ladders

    Energy Technology Data Exchange (ETDEWEB)

    Ribon, P.

    1986-01-01

    The straightforward method for calculation of resonance self-shielding is to generate one or several resonance ladders, and to process them as resolved resonances. The main drawback of the Monte Carlo methods used to generate the ladders is the difficulty of reducing the dispersion of data and results. Several methods are examined, and it is shown how one (a regularized sampling method) improves the accuracy. Analytical methods to compute the effective cross-section have recently appeared: they are basically exempt from dispersion, but are inevitably approximate. The accuracy of the most sophisticated one is checked. There is a neutron energy range which is improperly considered as statistical. An examination is presented of what happens when it is treated as statistical, and how it is possible to improve the accuracy of calculations in this range. To illustrate the results, calculations have been performed in a simple case: nucleus 238U, at 300 K, between 4250 and 4750 eV.
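To illustrate why a regularized (stratified) sampling of resonance ladders reduces dispersion, the toy sketch below samples level spacings from the Wigner surmise either with plain random numbers or with stratified ones, and compares the spread of a ladder-averaged quantity. The spacing distribution, the toy statistic, and all parameters are assumptions for illustration, not the method of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def wigner_spacings(u):
    """Invert the Wigner-surmise CDF, F(s) = 1 - exp(-pi*s^2/4), for samples u in [0, 1)."""
    return np.sqrt(-4.0 * np.log(1.0 - u) / np.pi)

def ladder_statistic(spacings):
    """Toy ladder-averaged quantity standing in for an effective cross-section."""
    return np.mean(1.0 / (1.0 + spacings))

n_res, n_ladders = 200, 500
plain, regular = [], []
for _ in range(n_ladders):
    u_plain = rng.random(n_res)                               # straightforward sampling
    u_strat = (np.arange(n_res) + rng.random(n_res)) / n_res  # stratified ("regularized")
    plain.append(ladder_statistic(wigner_spacings(u_plain)))
    regular.append(ladder_statistic(wigner_spacings(u_strat)))

print("spread over plain ladders       :", np.std(plain))
print("spread over regularized ladders :", np.std(regular))
```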

  17. Economic Evaluation of Improved Irrigated Bread Wheat Varieties with National and International Origins and Its Impacts on Transfer of Supply Function

    Directory of Open Access Journals (Sweden)

    hormoz asadi

    2017-08-01

    research in different regions (B), the annual shift due to genetic improvement of varieties from the breeding program (kt), and the fixed and variable costs of research and extension of irrigated wheat breeding in the experiment years (TVC) were estimated in terms of the following quantities: Pt, the price of wheat grain in year t; Qt, the quantity of wheat grain production in year t; Vit, the proportion of area of variety i in year t; gi, the genetic improvement for variety i; S, the number of full-time breeders and technicians involved in the irrigated wheat breeding program; Cs, the costs of breeders and technicians in year t; and Cvt, the fixed and variable costs of research and extension activities in year t. For the economic analysis of the genetic improvement of a variety, profitability indexes including Net Present Value (NPV), Benefit-Cost Ratio (BCR) and Internal Rate of Return (IRR) were calculated following Brennan et al. (2002), where PVB is the present value of benefits, PVC the present value of costs, r the discount rate and n the period. Results and discussion: According to the results of this study, the mean reduction of costs for released irrigated bread wheat varieties during 2001-2010 was estimated as 109.8 rials. The mean shift in the supply function for released irrigated bread wheat varieties during these years was estimated as 5.31%. The mean Net Present Value (NPV) for released irrigated bread wheat varieties over the study period was estimated at 4463.5 billion Iranian rials. Based on these results, the irrigated bread wheat breeding program was economic. Conclusion: In general, the released irrigated bread wheat varieties with national and international origins in the irrigated wheat breeding program of the Seed and Plant Improvement Institute (SPII), Iran, were economically profitable. Newly released irrigated bread wheat varieties have had considerable impacts on cost reduction and on increasing farmers' income. Acknowledgements: The authors would like to extend their sincere acknowledgements to the University of Sistan and Baluchestan and the Seed and Plant Improvement Institute
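The profitability indexes cited follow the standard discounted-cash-flow definitions: NPV is the discounted sum of benefits minus costs, the benefit-cost ratio is PVB/PVC, and the IRR is the discount rate at which NPV vanishes. A sketch with invented benefit and cost streams (not the study's data):

```python
def npv(rate, benefits, costs):
    """Net present value of annual benefit and cost streams (year 0 = first entry)."""
    return sum((b - c) / (1.0 + rate) ** t
               for t, (b, c) in enumerate(zip(benefits, costs)))

def bc_ratio(rate, benefits, costs):
    """Benefit-cost ratio = present value of benefits / present value of costs."""
    pvb = sum(b / (1.0 + rate) ** t for t, b in enumerate(benefits))
    pvc = sum(c / (1.0 + rate) ** t for t, c in enumerate(costs))
    return pvb / pvc

def irr(benefits, costs, lo=0.0, hi=1.0, tol=1e-6):
    """Discount rate at which NPV = 0, found by bisection (assumes NPV changes sign on [lo, hi])."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if npv(mid, benefits, costs) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Invented annual streams (billions of rials): research costs early, benefits later.
costs    = [10, 12, 12, 5, 5, 5, 5, 5]
benefits = [0, 0, 5, 20, 30, 35, 35, 35]
print("NPV at 10% discount :", round(npv(0.10, benefits, costs), 1))
print("B/C at 10% discount :", round(bc_ratio(0.10, benefits, costs), 2))
print("IRR                 :", round(irr(benefits, costs), 3))
```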

  18. Calculation of Gilbert damping in ferromagnetic films

    Directory of Open Access Journals (Sweden)

    Edwards D. M.

    2013-01-01

    Full Text Available The Gilbert damping constant in the phenomenological Landau-Lifshitz-Gilbert equation, which describes the dynamics of magnetization, is calculated for Fe, Co and Ni bulk ferromagnets, Co films and Co/Pd bilayers within a nine-band tight-binding model with spin-orbit coupling included. The calculational efficiency is remarkably improved by introducing finite temperature into the electronic occupation factors and subsequent summation over the Matsubara frequencies. The calculated dependence of the Gilbert damping constant on scattering rate for bulk Fe, Co and Ni is in good agreement with the results of previous ab initio calculations. Calculations are reported for ferromagnetic Co metallic films and Co/Pd bilayers. The dependence of the Gilbert damping constant on Co film thickness, for various scattering rates, is studied and compared with recent experiments.

  19. CO2calc: A User-Friendly Seawater Carbon Calculator for Windows, Mac OS X, and iOS (iPhone)

    Science.gov (United States)

    Robbins, L.L.; Hansen, M.E.; Kleypas, J.A.; Meylan, S.C.

    2010-01-01

    A user-friendly, stand-alone application for the calculation of carbonate system parameters was developed by the U.S. Geological Survey Florida Shelf Ecosystems Response to Climate Change Project in response to its Ocean Acidification Task. The application, by Mark Hansen and Lisa Robbins, USGS St. Petersburg, FL, Joanie Kleypas, NCAR, Boulder, CO, and Stephan Meylan, Jacobs Technology, St. Petersburg, FL, is intended as a follow-on to CO2SYS, originally developed by Lewis and Wallace (1998) and later modified for Microsoft Excel by Denis Pierrot (Pierrot and others, 2006). Besides eliminating the need for using Microsoft Excel on the host system, CO2calc offers several improvements on CO2SYS, including: an improved graphical user interface for data entry and results; additional calculations of air-sea CO2 fluxes (for surface water calculations); the ability to tag data with sample name, comments, date, time, and latitude/longitude; the ability to use the system time and date and latitude/longitude (automatic retrieval of latitude and longitude available on iPhone 3, 3GS, 4, and Windows hosts with an attached National Marine Electronics Association (NMEA)-enabled GPS); the ability to process multiple files in a batch processing mode; an option to save sample information, data input, and calculated results as a comma-separated value (CSV) file for use with Microsoft Excel, ArcGIS, or other applications; and an option to export points with geographic coordinates as a KMZ file for viewing and editing in Google Earth.

  20. A tool to determine financial impact of adverse events in health care: healthcare quality calculator.

    Science.gov (United States)

    Yarbrough, Wendell G; Sewell, Andrew; Tickle, Erin; Rhinehardt, Eric; Harkleroad, Rod; Bennett, Marc; Johnson, Deborah; Wen, Li; Pfeiffer, Matthew; Benegas, Manny; Morath, Julie

    2014-12-01

    Hospital leaders lack tools to determine the financial impact of poor patient outcomes and adverse events. To provide health-care leaders with decision support for investments to improve care, we created a tool, the Healthcare Quality Calculator (HQCal), which uses institution-specific financial data to calculate impact of poor patient outcomes or quality improvement on present and future margin. Excel and Web-based versions of the HQCal were based on a cohort study framework and created with modular components including major drivers of cost and reimbursement. The Healthcare Quality Calculator (HQCal) compares payment, cost, and profit/loss for patients with and without poor outcomes or quality issues. Cost and payment information for groups with and without quality issues are used by the HQCal to calculate profit or loss. Importantly, institution-specific payment and cost data are used to calculate financial impact and attributable cost associated with poor patient outcomes, adverse events, or quality issues. Because future cost and reimbursement changes can be forecast, the HQCal incorporates a forward-looking component. The flexibility of the HQCal was demonstrated using surgical site infections after abdominal surgery and postoperative surgical airway complications. The Healthcare Quality Calculator determines financial impact of poor patient outcomes and the benefit of initiatives to improve quality. The calculator can identify quality issues that would provide the largest financial benefit if improved; however, it cannot identify specific interventions. The calculator provides a tool to improve transparency regarding both short- and long-term financial consequences of funding, or failing to fund, initiatives to close gaps in quality or improve patient outcomes.
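The cohort-style comparison the calculator performs can be sketched as follows: per-case payment and cost with and without a quality issue give the margin lost per event, which is then projected over an annual event count. All figures below are invented, and institution-specific inputs would replace them.

```python
def margin(payment, cost):
    return payment - cost

def attributable_impact(no_event, with_event, n_events_per_year):
    """Per-case margin lost to a quality issue and its projected annual impact.
    Both tuples are (payment, cost) and would come from institution-specific data."""
    per_case = margin(*no_event) - margin(*with_event)
    return per_case, per_case * n_events_per_year

# Invented per-case figures in dollars.
no_ssi = (24_000, 18_000)    # uncomplicated abdominal surgery
with_ssi = (30_000, 29_500)  # case complicated by a surgical site infection
per_case, annual = attributable_impact(no_ssi, with_ssi, n_events_per_year=40)
print(f"margin lost per event : ${per_case:,.0f}")
print(f"projected annual loss : ${annual:,.0f}")
```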

  1. Linear filtering applied to Monte Carlo criticality calculations

    International Nuclear Information System (INIS)

    Morrison, G.W.; Pike, D.H.; Petrie, L.M.

    1975-01-01

    A significant improvement in the acceleration of the convergence of the eigenvalue computed by Monte Carlo techniques has been developed by applying linear filtering theory to Monte Carlo calculations for multiplying systems. A Kalman filter was applied to a KENO Monte Carlo calculation of an experimental critical system consisting of eight interacting units of fissile material. A comparison of the filter estimate and the Monte Carlo realization was made. The Kalman filter converged in five iterations to 0.9977. After 95 iterations, the average k-eff from the Monte Carlo calculation was 0.9981. This demonstrates that the Kalman filter has the potential of reducing the calculational effort of multiplying systems. Other examples and results are discussed
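A scalar Kalman filter of the kind described can be illustrated on a synthetic sequence of noisy generation-wise k-eff estimates; the state is assumed constant and the measurement noise known. This is a generic sketch, not the KENO coupling used in the study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic Monte Carlo k-eff estimates: true value 0.998 plus generation noise.
k_true, n_gen, sigma = 0.998, 100, 0.004
observations = k_true + sigma * rng.standard_normal(n_gen)

# Scalar Kalman filter with a static state model (k-eff assumed constant).
x_hat, p = 1.000, 1.0e-3   # initial estimate and its variance (assumed)
r = sigma ** 2             # measurement-noise variance (assumed known)
history = []
for z in observations:
    gain = p / (p + r)                  # Kalman gain
    x_hat = x_hat + gain * (z - x_hat)  # update estimate with the new generation
    p = (1.0 - gain) * p                # update estimate variance
    history.append(x_hat)

print("plain average of all generations  :", observations.mean())
print("Kalman estimate after 5 iterations:", history[4])
print("Kalman estimate, final            :", history[-1])
```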

  2. Plasma boundary considerations for the national compact stellarator experiment

    International Nuclear Information System (INIS)

    Mioduszewski, P.; Grossman, A.; Fenstermacher, M.; Koniges, A.; Owen, L.; Rognlien, T.; Umansky, M.

    2003-01-01

    The national compact stellarator experiment (NCSX) [EPS 2001, Madeira, Portugal, 18-22 June 2001] is a new fusion project located at Princeton Plasma Physics Laboratory, Princeton, NJ. Plasma boundary control in stellarators has been shown to be very effective in improving plasma performance [EPS 2001, Madeira, Portugal, 18-22 June 2001] and, accordingly, will be an important element from the very beginning of the NCSX design. Plasma-facing components will be developed systematically according to our understanding of the NCSX boundary, with the eventual goal to develop a divertor with all the benefits for impurity and neutrals control. Neutrals calculations have been started to investigate the effect of neutrals penetration at various cross-sections

  3. New theoretical development for the calculating of physical properties of D2O

    International Nuclear Information System (INIS)

    Moreira, Osvaldo

    2011-01-01

    In this work we have developed a new method for calculating the physical properties of heavy water, D2O, using the Helmholtz free energy state function, A = U − TS, formulated exclusively for this molecule. The state function is calculated as ā = ā0 + ā1 (specific dimensionless values), where ā0 is related to the properties of heavy water in the gaseous state and ā1 describes the liquid state. The canonical variables of the state function are absolute temperature and volume. To calculate the physical properties with absolute pressure and temperature as the defining variables, a variable change method was developed here, based on the solution of a differential equation (the function ζ) using numerical algorithms (scaling and Newton-Raphson). The physical quantities calculated are density ϱ (specific volume υ), specific enthalpy h and entropy s. The results obtained agree completely with the values calculated by the National Institute of Standards and Technology (NIST). This report also proposes an adjustment function for the saturation temperature of heavy water as a function of pressure, Ts(p) = exp[a·b(p)], where a is a vector of constant coefficients and b a vector function of pressure, fitted to theoretical values and extending the formulation proposed by the Oak Ridge National Laboratory. The new fit has an error of less than 0.03%. (author)
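The variable change from (T, v) to (p, T) can be sketched with a toy Helmholtz free energy: pressure follows from p = −(∂A/∂v)_T, and Newton-Raphson inverts this relation to recover the volume at a prescribed pressure and temperature. The van der Waals-like energy and constants below are stand-ins, not the D2O formulation of the paper.

```python
R = 0.083145                 # L*bar/(mol*K)
A_VDW, B_VDW = 5.54, 0.0305  # van der Waals-like constants, stand-ins only

def pressure(T, v):
    """p = -(dA/dv)_T for a toy van der Waals Helmholtz free energy."""
    return R * T / (v - B_VDW) - A_VDW / v**2

def dp_dv(T, v):
    return -R * T / (v - B_VDW) ** 2 + 2.0 * A_VDW / v**3

def volume_from_pT(p_target, T, tol=1e-10, max_iter=50):
    """Newton-Raphson inversion v(p, T), started from the ideal-gas volume."""
    v = R * T / p_target
    for _ in range(max_iter):
        step = (pressure(T, v) - p_target) / dp_dv(T, v)
        v -= step
        if abs(step) < tol:
            return v
    raise RuntimeError("Newton-Raphson did not converge")

v = volume_from_pT(p_target=5.0, T=400.0)   # bar, K
print("molar volume :", v, "L/mol")
print("check p(T,v) :", pressure(400.0, v), "bar")
```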

  4. HISTORY OF NAVAL ARMOUR CALCULATION IN ROMANIA

    Directory of Open Access Journals (Sweden)

    KUMBETLIAN Garabet

    2014-09-01

    Full Text Available The article below describes the history of thick plate calculation in Romania and its impact and recognition by the Department of Defense-“DoD” (Executive Department of the Government of the United States of America. The DoD has three subordinated departments: Army, Navy and Air Force. In addition, there are many Defense Agencies, such as the Defense Advanced Research Projects Agency and schools, including the National Defense University [1].

  5. LLNL nuclear data libraries used for fusion calculations

    International Nuclear Information System (INIS)

    Howerton, R.J.

    1984-01-01

    The Physical Data Group of the Computational Physics Division of the Lawrence Livermore National Laboratory has as its principal responsibility the development and maintenance of those data that are related to nuclear reaction processes and are needed for Laboratory programs. Among these are the Magnetic Fusion Energy and the Inertial Confinement Fusion programs. To this end, we have developed and maintain a collection of data files or libraries. These include: files of experimental data of neutron induced reactions; an annotated bibliography of literature related to charged particle induced reactions with light nuclei; and four main libraries of evaluated data. We also maintain files of calculational constants developed from the evaluated libraries for use by Laboratory computer codes. The data used for fusion calculations are usually these calculational constants, but since they are derived by prescribed manipulation of evaluated data this discussion will describe the evaluated libraries

  6. Improvement of gamma-ray Sn transport calculations including coherent and incoherent scatterings and secondary sources of bremsstrahlung and fluorescence: Determination of gamma-ray buildup factors

    International Nuclear Information System (INIS)

    Kitsos, S.; Diop, C.M.; Assad, A.; Nimal, J.C.; Ridoux, P.

    1996-01-01

    Improvements of gamma-ray transport calculations in Sn codes aim at taking into account the bound-electron effect in Compton (incoherent) scattering, coherent (Rayleigh) scattering, and secondary sources of bremsstrahlung and fluorescence. A computation scheme was developed to take these phenomena into account by modifying the angular and energy transfer matrices, so that no modification of the transport code itself is needed. Incoherent and coherent scattering as well as the fluorescence sources can be treated rigorously by the transfer matrix change. For bremsstrahlung sources this is possible if the path of the charged particles (electrons and positrons) through matter can be neglected, which holds for the energy range of interest here (below 10 MeV). These improvements have been carried over to the kernel attenuation codes through the calculation of new buildup factors. The gamma-ray buildup factors have been computed for 25 natural elements up to 30 mean free paths in the energy range between 15 keV and 10 MeV
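Buildup factors of the kind computed here enter point-kernel attenuation as a multiplicative correction to the uncollided flux, phi = B(E, mu*r) * S * exp(-mu*r) / (4*pi*r^2). The sketch below uses a Berger-form buildup factor with invented coefficients purely for illustration.

```python
import math

def uncollided_flux(source, mu, r):
    """Point-kernel uncollided flux: S * exp(-mu*r) / (4*pi*r^2)."""
    return source * math.exp(-mu * r) / (4.0 * math.pi * r * r)

def buildup_berger(mfp, a=1.0, b=0.05):
    """Berger-form buildup factor B = 1 + a*(mu*r)*exp(b*mu*r); a and b are invented."""
    return 1.0 + a * mfp * math.exp(b * mfp)

source, mu, r = 1.0e9, 0.06, 50.0   # photons/s, 1/cm, cm (invented values)
mfp = mu * r                        # optical thickness in mean free paths
flux = buildup_berger(mfp) * uncollided_flux(source, mu, r)
print(f"{mfp:.1f} mfp; flux with buildup: {flux:.3e} photons/cm^2/s")
```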

  7. Fast reactor calculational route for Pu burning core design

    Energy Technology Data Exchange (ETDEWEB)

    Hunter, S. [Power Reactor and Nuclear Fuel Development Corp., Oarai, Ibaraki (Japan). Oarai Engineering Center

    1998-01-01

    This document provides a description of a calculational route, used in the Reactor Physics Research Section for sensitivity studies and initial design optimization calculations for fast reactor cores. The main purpose in producing this document was to provide a description of and user guides to the calculational methods, in English, as an aid to any future user of the calculational route who is (like the author) handicapped by a lack of literacy in Japanese. The document also provides for all users a compilation of information on the various parts of the calculational route, all in a single reference. In using the calculational route (to model Pu burning reactors) the author identified a number of areas where an improvement in the modelling of the standard calculational route was warranted. The document includes comments on and explanations of the modelling assumptions in the various calculations. Practical information on the use of the calculational route and the computer systems is also given. (J.P.N.)

  8. Optical Propagation Modeling for the National Ignition Facility

    Energy Technology Data Exchange (ETDEWEB)

    Williams, W H; Auerbach, J M; Henesian, M A; Jancaitis, K S; Manes, K R; Mehta, N C; Orth, C D; Sacks, R A; Shaw, M J; Widmayer, C C

    2004-01-12

    Optical propagation modeling of the National Ignition Facility has been utilized extensively from conceptual design several years ago through to early operations today. In practice we routinely (for every shot) model beam propagation starting from the waveform generator through to the target. This includes the regenerative amplifier, the 4-pass rod amplifier, and the large slab amplifiers. Such models have been improved over time to include details such as distances between components, gain profiles in the laser slabs and rods, transient optical distortions due to the flashlamp heating of laser slabs, measured transmitted and reflected wavefronts for all large optics, the adaptive optic feedback loop, and the frequency converter. These calculations allow nearfield and farfield predictions in good agreement with measurements.

  9. Medication calculation: the potential role of digital game-based learning in nurse education.

    Science.gov (United States)

    Foss, Brynjar; Mordt Ba, Petter; Oftedal, Bjørg F; Løkken, Atle

    2013-12-01

    Medication dose calculation is one of several medication-related activities that are conducted by nurses daily. However, medication calculation skills appear to be an area of global concern, possibly because of low numeracy skills, test anxiety, low self-confidence, and low self-efficacy among student nurses. Various didactic strategies have been developed for student nurses who still lack basic mathematical competence. However, we suggest that the critical nature of these skills demands the investigation of alternative and/or supplementary didactic approaches to improve medication calculation skills and to reduce failure rates. Digital game-based learning is a possible solution because of the following reasons. First, mathematical drills may improve medication calculation skills. Second, games are known to be useful during nursing education. Finally, mathematical drill games appear to improve the attitudes of students toward mathematics. The aim of this article was to discuss common challenges of medication calculation skills in nurse education, and we highlight the potential role of digital game-based learning in this area.
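A worked example of the kind of calculation such drills target: a weight-based dose and the volume to draw up from a stock solution. The drug, dose, and stock figures are invented.

```python
def dose_mg(weight_kg, mg_per_kg):
    """Weight-based dose."""
    return weight_kg * mg_per_kg

def volume_ml(required_mg, stock_mg, stock_ml):
    """Volume to administer = required dose / stock concentration."""
    return required_mg * stock_ml / stock_mg

# Invented drill question: 8 mg/kg for a 72 kg patient, stock 250 mg in 5 mL.
d = dose_mg(72, 8)                          # 576 mg
v = volume_ml(d, stock_mg=250, stock_ml=5)  # 11.52 mL
print(f"dose: {d} mg, volume to draw up: {v:.2f} mL")
```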

  10. SU-E-J-60: Efficient Monte Carlo Dose Calculation On CPU-GPU Heterogeneous Systems

    Energy Technology Data Exchange (ETDEWEB)

    Xiao, K; Chen, D. Z; Hu, X. S [University of Notre Dame, Notre Dame, IN (United States); Zhou, B [Altera Corp., San Jose, CA (United States)

    2014-06-01

    Purpose: It is well-known that the performance of GPU-based Monte Carlo dose calculation implementations is bounded by memory bandwidth. One major cause of this bottleneck is the random memory writing patterns in dose deposition, which leads to several memory efficiency issues on GPU such as un-coalesced writing and atomic operations. We propose a new method to alleviate such issues on CPU-GPU heterogeneous systems, which achieves overall performance improvement for Monte Carlo dose calculation. Methods: Dose deposition is to accumulate dose into the voxels of a dose volume along the trajectories of radiation rays. Our idea is to partition this procedure into the following three steps, which are fine-tuned for CPU or GPU: (1) each GPU thread writes dose results with location information to a buffer on GPU memory, which achieves fully-coalesced and atomic-free memory transactions; (2) the dose results in the buffer are transferred to CPU memory; (3) the dose volume is constructed from the dose buffer on CPU. We organize the processing of all radiation rays into streams. Since the steps within a stream use different hardware resources (i.e., GPU, DMA, CPU), we can overlap the execution of these steps for different streams by pipelining. Results: We evaluated our method using a Monte Carlo Convolution Superposition (MCCS) program and tested our implementation for various clinical cases on a heterogeneous system containing an Intel i7 quad-core CPU and an NVIDIA TITAN GPU. Comparing with a straightforward MCCS implementation on the same system (using both CPU and GPU for radiation ray tracing), our method gained 2-5X speedup without losing dose calculation accuracy. Conclusion: The results show that our new method improves the effective memory bandwidth and overall performance for MCCS on the CPU-GPU systems. Our proposed method can also be applied to accelerate other Monte Carlo dose calculation approaches. This research was supported in part by NSF under Grants CCF
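The buffered dose-deposition idea can be mimicked on the CPU side with a vectorized scatter-add: step (3) folds a buffer of (voxel index, dose) records into the dose volume in one pass, which is what removes the per-thread atomic adds on the GPU. This numpy sketch is illustrative only, not the authors' implementation.

```python
import numpy as np

def accumulate_dose(buffer_voxels, buffer_doses, volume_shape):
    """Step (3) of the pipeline: fold a (voxel index, dose) buffer into the
    dose volume with a single vectorized scatter-add on the CPU."""
    volume = np.zeros(volume_shape, dtype=np.float64)
    np.add.at(volume.ravel(), buffer_voxels, buffer_doses)  # handles repeated indices
    return volume

# Toy buffer produced by many "rays": repeated voxel indices are expected.
shape = (4, 4, 4)
rng = np.random.default_rng(2)
voxels = rng.integers(0, np.prod(shape), size=10_000)
doses = rng.random(10_000)
dose_volume = accumulate_dose(voxels, doses, shape)
print("total deposited dose:", dose_volume.sum(), "=", doses.sum())
```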

  11. Update on Light-Ion Calculations

    International Nuclear Information System (INIS)

    Schultz, David R.

    2013-01-01

    During the time span of the CRP, calculations were (1) initiated extending previous work regarding elastic and transport cross sections relevant to light-species impurity-ion transport modeling, (2) completed for total and state-selective charge transfer (C5+, N6+, O6+, O7+ + H; C5+, C6+, O7+, O8+ + He; and C6+ + H, H2) for diagnostics such as charge exchange recombination spectroscopy, and (3) completed for excitation of atomic hydrogen by ion impact (H+, He2+, Be4+, C6+) for diagnostics including beam emission spectroscopy and motional Stark effect spectroscopy. The first calculations undertaken were to continue work begun more than a decade ago providing plasma modelers with elastic total and differential cross sections, and related transport cross sections, used to model transport of hydrogen ions, atoms, and molecules as well as other species including intrinsic and extrinsic impurities. This body of work was reviewed in the course of reporting recent new calculations in a recent paper (P.S. Krstic and D.R. Schultz, Physics of Plasmas, 16, 053503 (2009)). After initial calculations for H+ + O were completed, work was discontinued in light of other priorities. Charge transfer data for diagnostics provide important knowledge about the state of the plasma from the edge to the core and are therefore of significant interest to continually evaluate and improve. Further motivation for such calculations comes from recent and ongoing benchmark measurements of the total charge transfer cross section being made at Oak Ridge National Laboratory by C.C. Havener and collaborators. We have undertaken calculations using a variety of theoretical approaches, each applicable within a range of impact energies, that have led to the creation of a database of recommended state-selective and total cross sections composed of results from the various methods (MOCC, AOCC, CTMC, results from the literature) within their overlapping ranges of applicability

  12. SU-D-BRD-01: Cloud-Based Radiation Treatment Planning: Performance Evaluation of Dose Calculation and Plan Optimization

    International Nuclear Information System (INIS)

    Na, Y; Kapp, D; Kim, Y; Xing, L; Suh, T

    2014-01-01

    Purpose: To report the first experience on the development of a cloud-based treatment planning system and investigate the performance improvement of dose calculation and treatment plan optimization of the cloud computing platform. Methods: A cloud computing-based radiation treatment planning system (cc-TPS) was developed for clinical treatment planning. Three de-identified clinical head and neck, lung, and prostate cases were used to evaluate the cloud computing platform. The de-identified clinical data were encrypted with 256-bit Advanced Encryption Standard (AES) algorithm. VMAT and IMRT plans were generated for the three de-identified clinical cases to determine the quality of the treatment plans and computational efficiency. All plans generated from the cc-TPS were compared to those obtained with the PC-based TPS (pc-TPS). The performance evaluation of the cc-TPS was quantified as the speedup factors for Monte Carlo (MC) dose calculations and large-scale plan optimizations, as well as the performance ratios (PRs) of the amount of performance improvement compared to the pc-TPS. Results: Speedup factors were improved up to 14.0-fold dependent on the clinical cases and plan types. The computation times for VMAT and IMRT plans with the cc-TPS were reduced by 91.1% and 89.4%, respectively, on average of the clinical cases compared to those with pc-TPS. The PRs were mostly better for VMAT plans (1.0 ≤ PRs ≤ 10.6 for the head and neck case, 1.2 ≤ PRs ≤ 13.3 for lung case, and 1.0 ≤ PRs ≤ 10.3 for prostate cancer cases) than for IMRT plans. The isodose curves of plans on both cc-TPS and pc-TPS were identical for each of the clinical cases. Conclusion: A cloud-based treatment planning has been setup and our results demonstrate the computation efficiency of treatment planning with the cc-TPS can be dramatically improved while maintaining the same plan quality to that obtained with the pc-TPS. This work was supported in part by the National Cancer Institute (1

  13. SU-D-BRD-01: Cloud-Based Radiation Treatment Planning: Performance Evaluation of Dose Calculation and Plan Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Na, Y; Kapp, D; Kim, Y; Xing, L [Stanford University School of Medicine, Stanford, CA (United States); Suh, T [Catholic UniversityMedical College, Seoul, Seoul (Korea, Republic of)

    2014-06-01

    Purpose: To report the first experience on the development of a cloud-based treatment planning system and investigate the performance improvement of dose calculation and treatment plan optimization of the cloud computing platform. Methods: A cloud computing-based radiation treatment planning system (cc-TPS) was developed for clinical treatment planning. Three de-identified clinical head and neck, lung, and prostate cases were used to evaluate the cloud computing platform. The de-identified clinical data were encrypted with 256-bit Advanced Encryption Standard (AES) algorithm. VMAT and IMRT plans were generated for the three de-identified clinical cases to determine the quality of the treatment plans and computational efficiency. All plans generated from the cc-TPS were compared to those obtained with the PC-based TPS (pc-TPS). The performance evaluation of the cc-TPS was quantified as the speedup factors for Monte Carlo (MC) dose calculations and large-scale plan optimizations, as well as the performance ratios (PRs) of the amount of performance improvement compared to the pc-TPS. Results: Speedup factors were improved up to 14.0-fold dependent on the clinical cases and plan types. The computation times for VMAT and IMRT plans with the cc-TPS were reduced by 91.1% and 89.4%, respectively, on average of the clinical cases compared to those with pc-TPS. The PRs were mostly better for VMAT plans (1.0 ≤ PRs ≤ 10.6 for the head and neck case, 1.2 ≤ PRs ≤ 13.3 for lung case, and 1.0 ≤ PRs ≤ 10.3 for prostate cancer cases) than for IMRT plans. The isodose curves of plans on both cc-TPS and pc-TPS were identical for each of the clinical cases. Conclusion: A cloud-based treatment planning has been setup and our results demonstrate the computation efficiency of treatment planning with the cc-TPS can be dramatically improved while maintaining the same plan quality to that obtained with the pc-TPS. This work was supported in part by the National Cancer Institute (1

  14. 3D Monte-Carlo transport calculations of whole slab reactor cores: validation of deterministic neutronic calculation routes

    International Nuclear Information System (INIS)

    Palau, J.M.

    2005-01-01

    This paper presents how Monte-Carlo calculations (French TRIPOLI4 poly-kinetic code with an appropriate pre-processing and post-processing software called OVNI) are used in the case of 3-dimensional heterogeneous benchmarks (slab reactor cores) to reduce model biases and enable a thorough and detailed analysis of the performances of deterministic methods and their associated data libraries with respect to key neutron parameters (reactivity, local power). Outstanding examples of application of these tools are presented regarding the new numerical methods implemented in the French lattice code APOLLO2 (advanced self-shielding models, new IDT characteristics method implemented within the discrete-ordinates flux solver model) and the JEFF3.1 nuclear data library (checked against JEF2.2 previous file). In particular we have pointed out, by performing multigroup/point-wise TRIPOLI4 (assembly and core) calculations, the efficiency (in terms of accuracy and computation time) of the new IDT method developed in APOLLO2. In addition, by performing 3-dimensional TRIPOLI4 calculations of the whole slab core (few millions of elementary volumes), the high quality of the new JEFF3.1 nuclear data files and revised evaluations (U235, U238, Hf) for reactivity prediction of slab cores critical experiments has been stressed. As a feedback of the whole validation process, improvements in terms of nuclear data (mainly Hf capture cross-sections) and numerical methods (advanced quadrature formulas accounting validation results, validation of new self-shielding models, parallelization) are suggested to improve even more the APOLLO2-CRONOS2 standard calculation route. (author)

  15. 3D Monte-Carlo transport calculations of whole slab reactor cores: validation of deterministic neutronic calculation routes

    Energy Technology Data Exchange (ETDEWEB)

    Palau, J M [CEA Cadarache, Service de Physique des Reacteurs et du Cycle, Lab. de Projets Nucleaires, 13 - Saint-Paul-lez-Durance (France)

    2005-07-01

    This paper presents how Monte-Carlo calculations (French TRIPOLI4 poly-kinetic code with an appropriate pre-processing and post-processing software called OVNI) are used in the case of 3-dimensional heterogeneous benchmarks (slab reactor cores) to reduce model biases and enable a thorough and detailed analysis of the performances of deterministic methods and their associated data libraries with respect to key neutron parameters (reactivity, local power). Outstanding examples of application of these tools are presented regarding the new numerical methods implemented in the French lattice code APOLLO2 (advanced self-shielding models, new IDT characteristics method implemented within the discrete-ordinates flux solver model) and the JEFF3.1 nuclear data library (checked against JEF2.2 previous file). In particular we have pointed out, by performing multigroup/point-wise TRIPOLI4 (assembly and core) calculations, the efficiency (in terms of accuracy and computation time) of the new IDT method developed in APOLLO2. In addition, by performing 3-dimensional TRIPOLI4 calculations of the whole slab core (few millions of elementary volumes), the high quality of the new JEFF3.1 nuclear data files and revised evaluations (U{sup 235}, U{sup 238}, Hf) for reactivity prediction of slab cores critical experiments has been stressed. As a feedback of the whole validation process, improvements in terms of nuclear data (mainly Hf capture cross-sections) and numerical methods (advanced quadrature formulas accounting validation results, validation of new self-shielding models, parallelization) are suggested to improve even more the APOLLO2-CRONOS2 standard calculation route. (author)

  16. Validation of an online risk calculator for the prediction of anastomotic leak after colon cancer surgery and preliminary exploration of artificial intelligence-based analytics.

    Science.gov (United States)

    Sammour, T; Cohen, L; Karunatillake, A I; Lewis, M; Lawrence, M J; Hunter, A; Moore, J W; Thomas, M L

    2017-11-01

    Recently published data support the use of a web-based risk calculator ( www.anastomoticleak.com ) for the prediction of anastomotic leak after colectomy. The aim of this study was to externally validate this calculator on a larger dataset. Consecutive adult patients undergoing elective or emergency colectomy for colon cancer at a single institution over a 9-year period were identified using the Binational Colorectal Cancer Audit database. Patients with a rectosigmoid cancer, an R2 resection, or a diverting ostomy were excluded. The primary outcome was anastomotic leak within 90 days as defined by previously published criteria. Area under receiver operating characteristic curve (AUROC) was derived and compared with that of the American College of Surgeons National Surgical Quality Improvement Program ® (ACS NSQIP) calculator and the colon leakage score (CLS) calculator for left colectomy. Commercially available artificial intelligence-based analytics software was used to further interrogate the prediction algorithm. A total of 626 patients were identified. Four hundred and fifty-six patients met the inclusion criteria, and 402 had complete data available for all the calculator variables (126 had a left colectomy). Laparoscopic surgery was performed in 39.6% and emergency surgery in 14.7%. The anastomotic leak rate was 7.2%, with 31.0% requiring reoperation. The anastomoticleak.com calculator was significantly predictive of leak and performed better than the ACS NSQIP calculator (AUROC 0.73 vs 0.58) and the CLS calculator (AUROC 0.96 vs 0.80) for left colectomy. Artificial intelligence-predictive analysis supported these findings and identified an improved prediction model. The anastomotic leak risk calculator is significantly predictive of anastomotic leak after colon cancer resection. Wider investigation of artificial intelligence-based analytics for risk prediction is warranted.

  17. Feasibility study on embedded transport core calculations

    International Nuclear Information System (INIS)

    Ivanov, B.; Zikatanov, L.; Ivanov, K.

    2007-01-01

    The main objective of this study is to develop an advanced core calculation methodology based on embedded diffusion and transport calculations. The scheme proposed in this work is based on an embedded diffusion or SP3 pin-by-pin local fuel assembly calculation within the framework of the Nodal Expansion Method (NEM) diffusion core calculation. The SP3 method has gained popularity in the last 10 years as an advanced method for neutronics calculation. NEM is a multi-group nodal diffusion code developed, maintained and continuously improved at the Pennsylvania State University. The developed calculation scheme is a non-linear iteration process, which involves cross-section homogenization, on-line discontinuity factor generation, and boundary condition evaluation from the global solution passed to the local calculation. In order to accomplish the local calculation, a new code has been developed based on the Finite Element Method (FEM), which is capable of performing both diffusion and SP3 calculations. The new code will be used within the framework of the NEM code in order to perform embedded pin-by-pin diffusion and SP3 calculations on a fuel assembly basis. The development of the diffusion and SP3 FEM code is presented first, followed by its application to several problems. A description of the proposed embedded scheme is provided next, as well as the preliminary results obtained for the C3 MOX benchmark. The results from the embedded calculations are compared with direct pin-by-pin whole-core calculations in terms of accuracy and efficiency, followed by conclusions about the feasibility of the proposed embedded approach. (authors)

  18. Approaches to reducing photon dose calculation errors near metal implants

    Energy Technology Data Exchange (ETDEWEB)

    Huang, Jessie Y.; Followill, David S.; Howell, Rebecca M.; Mirkovic, Dragan; Kry, Stephen F., E-mail: sfkry@mdanderson.org [Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Boulevard, Houston, Texas 77030 and Graduate School of Biomedical Sciences, The University of Texas Health Science Center Houston, Houston, Texas 77030 (United States); Liu, Xinming [Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Boulevard, Houston, Texas 77030 and Graduate School of Biomedical Sciences, The University of Texas Health Science Center Houston, Houston, Texas 77030 (United States); Stingo, Francesco C. [Department of Biostatistics, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Boulevard, Houston, Texas 77030 and Graduate School of Biomedical Sciences, The University of Texas Health Science Center Houston, Houston, Texas 77030 (United States)

    2016-09-15

    Purpose: Dose calculation errors near metal implants are caused by limitations of the dose calculation algorithm in modeling tissue/metal interface effects as well as density assignment errors caused by imaging artifacts. The purpose of this study was to investigate two strategies for reducing dose calculation errors near metal implants: implementation of metal-based energy deposition kernels in the convolution/superposition (C/S) dose calculation method and use of metal artifact reduction methods for computed tomography (CT) imaging. Methods: Both error reduction strategies were investigated using a simple geometric slab phantom with a rectangular metal insert (composed of titanium or Cerrobend), as well as two anthropomorphic phantoms (one with spinal hardware and one with dental fillings), designed to mimic relevant clinical scenarios. To assess the dosimetric impact of metal kernels, the authors implemented titanium and silver kernels in a commercial collapsed cone C/S algorithm. To assess the impact of CT metal artifact reduction methods, the authors performed dose calculations using baseline imaging techniques (uncorrected 120 kVp imaging) and three commercial metal artifact reduction methods: Philips Healthcare’s O-MAR, GE Healthcare’s monochromatic gemstone spectral imaging (GSI) using dual-energy CT, and GSI with metal artifact reduction software (MARS) applied. For the simple geometric phantom, radiochromic film was used to measure dose upstream and downstream of metal inserts. For the anthropomorphic phantoms, ion chambers and radiochromic film were used to quantify the benefit of the error reduction strategies. Results: Metal kernels did not universally improve accuracy but rather resulted in better accuracy upstream of metal implants and decreased accuracy directly downstream. For the clinical cases (spinal hardware and dental fillings), metal kernels had very little impact on the dose calculation accuracy (<1.0%). Of the commercial CT artifact

  19. Approaches to reducing photon dose calculation errors near metal implants

    International Nuclear Information System (INIS)

    Huang, Jessie Y.; Followill, David S.; Howell, Rebecca M.; Mirkovic, Dragan; Kry, Stephen F.; Liu, Xinming; Stingo, Francesco C.

    2016-01-01

    Purpose: Dose calculation errors near metal implants are caused by limitations of the dose calculation algorithm in modeling tissue/metal interface effects as well as density assignment errors caused by imaging artifacts. The purpose of this study was to investigate two strategies for reducing dose calculation errors near metal implants: implementation of metal-based energy deposition kernels in the convolution/superposition (C/S) dose calculation method and use of metal artifact reduction methods for computed tomography (CT) imaging. Methods: Both error reduction strategies were investigated using a simple geometric slab phantom with a rectangular metal insert (composed of titanium or Cerrobend), as well as two anthropomorphic phantoms (one with spinal hardware and one with dental fillings), designed to mimic relevant clinical scenarios. To assess the dosimetric impact of metal kernels, the authors implemented titanium and silver kernels in a commercial collapsed cone C/S algorithm. To assess the impact of CT metal artifact reduction methods, the authors performed dose calculations using baseline imaging techniques (uncorrected 120 kVp imaging) and three commercial metal artifact reduction methods: Philips Healthcare’s O-MAR, GE Healthcare’s monochromatic gemstone spectral imaging (GSI) using dual-energy CT, and GSI with metal artifact reduction software (MARS) applied. For the simple geometric phantom, radiochromic film was used to measure dose upstream and downstream of metal inserts. For the anthropomorphic phantoms, ion chambers and radiochromic film were used to quantify the benefit of the error reduction strategies. Results: Metal kernels did not universally improve accuracy but rather resulted in better accuracy upstream of metal implants and decreased accuracy directly downstream. For the clinical cases (spinal hardware and dental fillings), metal kernels had very little impact on the dose calculation accuracy (<1.0%). Of the commercial CT artifact

  20. Readmission After Craniotomy for Tumor: A National Surgical Quality Improvement Program Analysis.

    Science.gov (United States)

    Dasenbrock, Hormuzdiyar H; Yan, Sandra C; Smith, Timothy R; Valdes, Pablo A; Gormley, William B; Claus, Elizabeth B; Dunn, Ian F

    2017-04-01

    Although readmission has become a common quality indicator, few national studies have examined this metric in patients undergoing cranial surgery. To utilize the prospective National Surgical Quality Improvement Program 2011-2013 registry to evaluate the predictors of unplanned 30-d readmission and postdischarge mortality after cranial tumor resection. Multivariable logistic regression was applied to screen predictors, which included patient age, sex, tumor location and histology, American Society of Anesthesiologists class, functional status, comorbidities, and complications from the index hospitalization. Of the 9565 patients included, 10.7% (n = 1026) had an unplanned readmission. Independent predictors of unplanned readmission were male sex, infratentorial location, American Society of Anesthesiologists class 3 designation, dependent functional status, a bleeding disorder, and morbid obesity (all P ≤ .03). Readmission was not associated with operative time, length of hospitalization, discharge disposition, or complications from the index admission. The most common reasons for readmission were surgical site infections (17.0%), infectious complications (11.0%), venous thromboembolism (10.0%), and seizures (9.4%). The 30-d mortality rate was 3.2% (n = 367), of which the majority (69.7%, n = 223) occurred postdischarge. Independent predictors of postdischarge mortality were greater age, metastatic histology, dependent functional status, hypertension, discharge to institutional care, and postdischarge neurological or cardiopulmonary complications (all P < …). Readmissions were common after cranial tumor resection and often attributable to new postdischarge complications rather than exacerbations of complications from the initial hospitalization. Moreover, the majority of 30-d deaths occurred after discharge from the index hospitalization. The preponderance of postdischarge mortality and complications requiring readmission highlights the importance of posthospitalization

  1. Natural Propagation and Habitat Improvement Idaho: Lolo Creek and Upper Lochsa, Clearwater National Forest.

    Energy Technology Data Exchange (ETDEWEB)

    Espinosa, F.A. Jr.; Lee, Kristine M.

    1991-01-01

    In 1983, the Clearwater National Forest and the Bonneville Power Administration (BPA) entered into a contractual agreement to improve anadromous fish habitat in selected tributaries of the Clearwater River Basin. This agreement was drawn under the auspices of the Northwest Power Act of 1980 and the Columbia River basin Fish and Wildlife Program (section 700). The Program was completed in 1990 and this document constitutes the ''Final Report'' that details all project activities, costs, accomplishments, and responses. The overall goal of the Program was to enhance spawning, rearing, and riparian habitats of Lolo Creek and major tributaries of the Lochsa River so that their production systems could reach full capability and help speed the recovery of salmon and steelhead within the basin.

  2. Natural propagation and habitat improvement Idaho: Lolo Creek and Upper Lochsa, Clearwater National Forest

    International Nuclear Information System (INIS)

    Espinosa, F.A. Jr.; Lee, K.M.

    1991-01-01

    In 1983, the Clearwater National Forest and the Bonneville Power Administration (BPA) entered into a contractual agreement to improve anadromous fish habitat in selected tributaries of the Clearwater River Basin. This agreement was drawn under the auspices of the Northwest Power Act of 1980 and the Columbia River basin Fish and Wildlife Program (section 700). The Program was completed in 1990 and this document constitutes the ''Final Report'' that details all project activities, costs, accomplishments, and responses. The overall goal of the Program was to enhance spawning, rearing, and riparian habitats of Lolo Creek and major tributaries of the Lochsa River so that their production systems could reach full capability and help speed the recovery of salmon and steelhead within the basin

  3. Transfer Area Mechanical Handling Calculation

    International Nuclear Information System (INIS)

    Dianda, B.

    2004-01-01

    This calculation is intended to support the License Application (LA) submittal of December 2004, in accordance with the directive given by DOE correspondence received on the 27th of January 2004 entitled: ''Authorization for Bechtel SAIC Company L.L.C. to Include a Bare Fuel Handling Facility and Increased Aging Capacity in the License Application, Contract Number DE-AC28-01RW12101'' (Arthur, W.J., III 2004). This correspondence was appended by further correspondence received on the 19th of February 2004 entitled: ''Technical Direction to Bechtel SAIC Company L.L.C. for Surface Facility Improvements, Contract Number DE-AC28-01RW12101; TDL No. 04-024'' (BSC 2004a). These documents give the authorization for a Fuel Handling Facility to be included in the baseline. The purpose of this calculation is to establish preliminary bounding equipment envelopes and weights for the Fuel Handling Facility (FHF) transfer area equipment. This calculation provides preliminary information only to support development of facility layouts and preliminary load calculations. The limitations of this preliminary calculation lie within the assumptions of section 5, as this calculation is part of an evolutionary design process. It is intended that this calculation will be superseded as the design advances to reflect information necessary to support the License Application. The design choices outlined within this calculation represent a demonstration of feasibility and may or may not be included in the completed design. This calculation provides preliminary weight, dimensional envelope, and equipment position in the building for the purposes of defining interface variables. This calculation identifies and sizes major equipment and assemblies that dictate overall equipment dimensions and facility interfaces. Sizing of components is based on the selection of commercially available products, where applicable. This is not a specific recommendation for the future use of these components or their

  4. Point kernels and superposition methods for scatter dose calculations in brachytherapy

    International Nuclear Information System (INIS)

    Carlsson, A.K.

    2000-01-01

    Point kernels have been generated and applied for calculation of scatter dose distributions around monoenergetic point sources for photon energies ranging from 28 to 662 keV. Three different approaches for dose calculations have been compared: a single-kernel superposition method, a single-kernel superposition method where the point kernels are approximated as isotropic and a novel 'successive-scattering' superposition method for improved modelling of the dose from multiply scattered photons. An extended version of the EGS4 Monte Carlo code was used for generating the kernels and for benchmarking the absorbed dose distributions calculated with the superposition methods. It is shown that dose calculation by superposition at and below 100 keV can be simplified by using isotropic point kernels. Compared to the assumption of full in-scattering made by algorithms currently in clinical use, the single-kernel superposition method improves dose calculations in a half-phantom consisting of air and water. Further improvements are obtained using the successive-scattering superposition method, which reduces the overestimates of dose close to the phantom surface usually associated with kernel superposition methods at brachytherapy photon energies. It is also shown that scatter dose point kernels can be parametrized to biexponential functions, making them suitable for use with an effective implementation of the collapsed cone superposition algorithm. (author)
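
    The biexponential parametrization mentioned at the end of the abstract can be illustrated with a small fitting sketch; the radial grid and the synthetic kernel values below are placeholders standing in for Monte Carlo-generated scatter kernels.

      # Fit a scatter dose point kernel to a biexponential form (illustrative data).
      import numpy as np
      from scipy.optimize import curve_fit

      def biexp(r, A1, mu1, A2, mu2):
          return A1 * np.exp(-mu1 * r) + A2 * np.exp(-mu2 * r)

      r = np.linspace(0.5, 10.0, 40)                              # radial distance, placeholder grid
      kernel = 0.8 * np.exp(-0.9 * r) + 0.2 * np.exp(-0.2 * r)    # stand-in for an MC-generated kernel
      params, _ = curve_fit(biexp, r, kernel, p0=[1.0, 1.0, 0.1, 0.1])
      print("fitted (A1, mu1, A2, mu2):", params)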

  5. Qualification of the calculational methods of the fluence in the pressurised water reactors. Improvement of the cross sections treatment by the probability table method

    International Nuclear Information System (INIS)

    Zheng, S.H.

    1994-01-01

    It is indispensable to know the fluence on the nuclear reactor pressure vessel. The cross sections and their treatment play an important role in this problem. In this study, two ''benchmarks'' have been interpreted with the Monte Carlo transport program TRIPOLI to qualify the calculational method and the cross sections used in the calculations. For the treatment of the cross sections, the multigroup method is usually used, but it presents some problems, such as the difficulty of choosing the weighting function and the need for a very large number of energy groups to represent the cross-section fluctuations well. In this thesis, we propose a new method, called the ''Probability Table Method'', to treat the neutron cross sections. For the qualification, a program simulating neutron transport by the Monte Carlo method in one dimension has been written; the comparison of the multigroup results and the probability table results shows the advantages of this new method. The probability table has also been introduced into the TRIPOLI program; the calculational results for the iron deep-penetration benchmark are improved when compared with the experimental results. It is therefore of interest to use this new method in shielding and neutronics calculations. (author). 42 refs., 109 figs., 36 tabs
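
    The idea behind the probability table method can be sketched as follows: within one energy group the fluctuating cross section is represented by a few discrete bands with associated probabilities, and a value is sampled per collision. The band values and probabilities below are purely illustrative, not taken from the thesis.

      # Sampling a total cross section from a (toy) probability table.
      import numpy as np

      rng   = np.random.default_rng(0)
      prob  = np.array([0.2, 0.5, 0.3])       # band probabilities (sum to 1)
      sigma = np.array([5.0, 20.0, 120.0])    # cross-section values of the bands (barns)

      samples = rng.choice(sigma, p=prob, size=100000)
      print("sampled mean:", samples.mean(), " table mean:", (prob * sigma).sum())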

  6. Hybrid numerical calculation method for bend waveguides

    OpenAIRE

    Garnier , Lucas; Saavedra , C.; Castro-Beltran , Rigoberto; Lucio , José Luis; Bêche , Bruno

    2017-01-01

    The knowledge of how light behaves in a waveguide with a radius of curvature becomes more and more important because of the development of integrated photonics, which includes ring micro-resonators, phasars, and other devices with a radius of curvature. This work presents a numerical calculation method to determine the eigenvalues and eigenvectors of curved waveguides. This method is a hybrid method which first uses a conformal transformation of the complex plane gene...

  7. A national evaluation of a dissemination and implementation initiative to enhance primary care practice capacity and improve cardiovascular disease care: the ESCALATES study protocol

    OpenAIRE

    Cohen, Deborah J.; Balasubramanian, Bijal A.; Gordon, Leah; Marino, Miguel; Ono, Sarah; Solberg, Leif I.; Crabtree, Benjamin F.; Stange, Kurt C.; Davis, Melinda; Miller, William L.; Damschroder, Laura J.; McConnell, K. John; Creswell, John

    2016-01-01

    Background The Agency for Healthcare Research and Quality (AHRQ) launched the EvidenceNOW Initiative to rapidly disseminate and implement evidence-based cardiovascular disease (CVD) preventive care in smaller primary care practices. AHRQ funded eight grantees (seven regional Cooperatives and one independent national evaluation) to participate in EvidenceNOW. The national evaluation examines quality improvement efforts and outcomes for more than 1500 small primary care practices (restricted to...

  8. Biomass conservation potential of pottery/ceramic lined Mamta Stove: An improved stove promoted under National Programme on Improved Cookstoves in India

    Energy Technology Data Exchange (ETDEWEB)

    George, R.; Yadla, V.L. [M.S. Univ. of Baroda, Vadodara (India). Home Management Dept.

    1995-10-01

    To combat biomass scarcity and ensure a cleaner cooking environment with less drudgery, among other things, a variety of improved stoves are promoted under the National Programme on Improved Cookstoves (NPIC). The Mamta Stove (MS) is one such improved stove. An in-depth study was undertaken covering a sample of twenty-five rural families with the primary objective of assessing the fuel saving potential of the MS under field conditions through the Kitchen Performance Test (KPT). The conventional stove (CS) used in almost all the families was a shielded horse-shoe shaped stove, with a negligible proportion using a three-stone open fire. Nearly 88% depended only on zero-private-cost fuels. The mean number of persons for whom the stoves were used on the days of field measurements was 5.6 for the CS and 5.7 for the MS, with an SD of 1.16, and the standard adult equivalent (SAE) was approximately 4. Cooking pots included a concave roasting pan, a deep frying pan and flat-bottomed pots. The mean daily fuel consumption on the CS and MS was estimated to be 4.88 kg and 3.75 kg respectively, thereby resulting in a fuel saving of about 24% on the MS. The paper discusses at length the design features of the CS and MS, meal pattern, cooking habits, the need for user training, consumerism in the area of cooking and stove technology, the economics of switching over to the MS, and the policy implications of commercialization of a hitherto subsidized stove program. Further, salient characteristics of high and low cooking fuel consumers on the MS are presented to highlight their profiles.

  9. An improved algorithm for automatic detection of saccades in eye movement data and for calculating saccade parameters.

    Science.gov (United States)

    Behrens, F; Mackeben, M; Schröder-Preikschat, W

    2010-08-01

    This method for analysing time series of eye movements is a saccade-detection algorithm that is based on an earlier algorithm. It achieves substantial improvements by using an adaptive-threshold model instead of fixed thresholds and by using the eye-movement acceleration signal. This has four advantages: (1) Adaptive thresholds are calculated automatically from the preceding acceleration data for detecting the beginning of a saccade, and thresholds are modified during the saccade. (2) The monotonicity of the position signal during the saccade, together with the acceleration with respect to the thresholds, is used to reliably determine the end of the saccade. (3) This allows differentiation between saccades following the main-sequence and non-main-sequence saccades. (4) Artifacts of various kinds can be detected and eliminated. The algorithm is demonstrated by applying it to human eye movement data (obtained by EOG) recorded while driving a car. A second demonstration of the algorithm detects microsleep episodes in eye movement data.
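
    A stripped-down sketch of the adaptive-threshold idea (a threshold derived from the preceding acceleration data) is given below; the window length, threshold multiplier and synthetic eye trace are illustrative choices, not the authors' parameters.

      # Toy adaptive-threshold saccade-onset detector on a synthetic position trace.
      import numpy as np

      def detect_saccade_onsets(position, dt, window=200, k=3.0):
          velocity = np.gradient(position, dt)
          accel = np.gradient(velocity, dt)
          onsets = []
          for i in range(window, len(accel)):
              baseline = accel[i - window:i]                 # preceding acceleration data
              threshold = baseline.mean() + k * baseline.std()
              if accel[i] > threshold and (not onsets or i - onsets[-1] > window):
                  onsets.append(i)
          return onsets

      t = np.arange(0.0, 2.0, 0.002)                         # 500 Hz sampling
      pos = np.where(t < 1.0, 0.0, 10.0 * (1 - np.exp(-(t - 1.0) / 0.02)))  # one step-like saccade
      print(detect_saccade_onsets(pos, dt=0.002))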

  10. Wielandt acceleration for MCNP5 Monte Carlo eigenvalue calculations

    International Nuclear Information System (INIS)

    Brown, F.

    2007-01-01

    Monte Carlo criticality calculations use the power iteration method to determine the eigenvalue (k_eff) and eigenfunction (fission source distribution) of the fundamental mode. A recently proposed method for accelerating convergence of the Monte Carlo power iteration using Wielandt's method has been implemented in a test version of MCNP5. The method is shown to provide dramatic improvements in convergence rates and to greatly reduce the possibility of false convergence assessment. The method is effective and efficient, improving the Monte Carlo figure-of-merit for many problems. In addition, the method should eliminate most of the underprediction bias in confidence intervals for Monte Carlo criticality calculations. (authors)
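
    For reference, the textbook form of Wielandt's shift for the k-eigenvalue power iteration can be written as follows (this is the generic formulation, not a description of the MCNP5-specific implementation):

      M\,\phi^{(n+1)} = \frac{1}{k^{(n)}}\,F\,\phi^{(n)}
      \;\;\longrightarrow\;\;
      \Big(M - \frac{1}{k_e}\,F\Big)\,\phi^{(n+1)} = \Big(\frac{1}{k^{(n)}} - \frac{1}{k_e}\Big)\,F\,\phi^{(n)},
      \qquad k_e > k_{\mathrm{eff}},

    with the error decay governed by the shifted dominance ratio

      \rho_{\mathrm{shifted}} = \frac{1/k_1 - 1/k_e}{1/k_2 - 1/k_e} \;<\; \frac{k_2}{k_1},

    so choosing k_e close to (but above) the fundamental k_1 accelerates source convergence, at the cost of following more fission chains per generation.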

  11. A study on making a long-term improvement in the national energy efficiency and GHG control plans by the AHP approach

    International Nuclear Information System (INIS)

    Lee, Seong Kon; Yoon, Yong Jin; Kim, Jong Wook

    2007-01-01

    Owing to the expiration of the national 10-year plan and the establishment of an efficient energy and resource technology R and D system, the Korean government needs to make a strategic long-term national energy and resource technology R and D plan (NERP) to cope with the forthcoming 10-year period. The new NERP aims to improve energy intensity, reduce the emissions of greenhouse gases within the United Nations Framework Convention on Climate Change (UNFCCC), and contribute to the construction of an advanced economic system. We determine the priorities in technology development for the energy efficiency and greenhouse gas control plans (EGCP), which are parts of the new NERP, by using the AHP approach for the first time. We suggest a scientific procedure to determine the priorities in technology development by using AHP
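
    The core AHP step referred to above (deriving priority weights from pairwise comparisons) can be sketched as follows; the 3x3 judgement matrix is purely illustrative and has nothing to do with the actual technology areas ranked in the study.

      # AHP priority vector from a pairwise-comparison matrix, with a consistency check.
      import numpy as np

      A = np.array([[1.0,  3.0, 5.0],
                    [1/3., 1.0, 2.0],
                    [1/5., 1/2., 1.0]])          # reciprocal pairwise judgements (illustrative)

      eigvals, eigvecs = np.linalg.eig(A)
      imax = np.argmax(eigvals.real)
      w = np.abs(eigvecs[:, imax].real)
      w /= w.sum()                                # priority weights

      n  = A.shape[0]
      ci = (eigvals.real[imax] - n) / (n - 1)     # consistency index
      ri = 0.58                                   # Saaty's random index for n = 3
      print("priorities:", w, " consistency ratio:", ci / ri)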

  12. Improved physical fitness among older female participants in a nationally disseminated, community-based exercise program.

    Science.gov (United States)

    Seguin, Rebecca A; Heidkamp-Young, Eleanor; Kuder, Julia; Nelson, Miriam E

    2012-04-01

    Strength training (ST) is an important health behavior for aging women; it helps maintain strength and function and reduces risk for chronic diseases. This study assessed change in physical fitness following participation in a ST program implemented and evaluated by community leaders. The StrongWomen Program is a nationally disseminated, research-based, community ST program active in 40 states. The Senior Fitness Test is used to assess upper and lower body strength, upper and lower body flexibility, aerobic fitness, and agility; data are collected prior to and following program participation. For these analyses, five states provided deidentified data for 367 female participants, mean age 63 (±11) years. Attendance in approximately 10 weeks of twice-weekly classes was 69.4%. Paired t tests were used to analyze pre-post change. Significant improvements were observed (p < …) within each age-group and compared with published, age-based norms. This study demonstrates that it is feasible for community leaders to conduct pre-post physical fitness evaluations with participants and that participants experienced improvements across several important domains of physical fitness.
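
    The pre/post analysis mentioned above boils down to a paired t-test on each fitness measure; a toy version with made-up scores is shown below.

      # Paired t-test on hypothetical pre/post fitness scores (illustrative only).
      import numpy as np
      from scipy import stats

      pre  = np.array([12, 10, 14,  9, 11, 13,  8, 10], dtype=float)
      post = np.array([14, 12, 15, 11, 12, 15, 10, 12], dtype=float)
      t, p = stats.ttest_rel(post, pre)
      print(f"mean change = {np.mean(post - pre):.2f}, t = {t:.2f}, p = {p:.4f}")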

  13. Implications of improved Higgs mass calculations for supersymmetric models.

    Science.gov (United States)

    Buchmueller, O; Dolan, M J; Ellis, J; Hahn, T; Heinemeyer, S; Hollik, W; Marrouche, J; Olive, K A; Rzehak, H; de Vries, K J; Weiglein, G

    We discuss the allowed parameter spaces of supersymmetric scenarios in light of improved Higgs mass predictions provided by FeynHiggs 2.10.0. The Higgs mass predictions combine Feynman-diagrammatic results with a resummation of leading and subleading logarithmic corrections from the stop/top sector, which yield a significant improvement in the region of large stop masses. Scans in the pMSSM parameter space show that, for given values of the soft supersymmetry-breaking parameters, the new logarithmic contributions beyond the two-loop order implemented in FeynHiggs tend to give larger values of the light CP-even Higgs mass, M_h, in the region of large stop masses than previous predictions that were based on a fixed-order Feynman-diagrammatic result, though the differences are generally consistent with the previous estimates of theoretical uncertainties. We re-analyse the parameter spaces of the CMSSM, NUHM1 and NUHM2, taking into account also the constraints from CMS and LHCb measurements of BR(B_s → μ+μ-) and ATLAS searches for E_T events using 20/fb of LHC data at 8 TeV. Within the CMSSM, the Higgs mass constraint disfavours [Formula: see text], though not in the NUHM1 or NUHM2.

  14. Implications of improved Higgs mass calculations for supersymmetric models

    Energy Technology Data Exchange (ETDEWEB)

    Buchmueller, O. [Imperial College, London (United Kingdom). High Energy Physics Group; Dolan, M.J. [SLAC National Accelerator Laboratory, Menlo Park, CA (United States). Theory Group; Ellis, J. [King's College, London (United Kingdom). Theoretical Particle Physics and Cosmology Group; and others

    2014-03-15

    We discuss the allowed parameter spaces of supersymmetric scenarios in light of improved Higgs mass predictions provided by FeynHiggs 2.10.0. The Higgs mass predictions combine Feynman-diagrammatic results with a resummation of leading and subleading logarithmic corrections from the stop/top sector, which yield a significant improvement in the region of large stop masses. Scans in the pMSSM parameter space show that, for given values of the soft supersymmetry-breaking parameters, the new logarithmic contributions beyond the two-loop order implemented in FeynHiggs tend to give larger values of the light CP-even Higgs mass, M{sub h}, in the region of large stop masses than previous predictions that were based on a fixed-order Feynman-diagrammatic result, though the differences are generally consistent with the previous estimates of theoretical uncertainties. We re-analyze the parameter spaces of the CMSSM, NUHM1 and NUHM2, taking into account also the constraints from CMS and LHCb measurements of BR(B{sub s}→μ{sup +}μ{sup -}) and ATLAS searches for E{sub T} events using 20/fb of LHC data at 8 TeV. Within the CMSSM, the Higgs mass constraint disfavours tan β

  15. A Case Study of Culturally Relevant School-Based Programming for First Nations Youth: Improved Relationships, Confidence and Leadership, and School Success

    Science.gov (United States)

    Crooks, Claire V.; Burleigh, Dawn; Snowshoe, Angela; Lapp, Andrea; Hughes, Ray; Sisco, Ashley

    2015-01-01

    Schools are expected to promote social and emotional learning skills among youth; however, there is a lack of culturally-relevant programming available. The Fourth R: Uniting Our Nations programs for Aboriginal youth include strengths-based programs designed to promote healthy relationships and cultural connectedness, and improve school success…

  16. Efficient methods for time-absorption (α) eigenvalue calculations

    International Nuclear Information System (INIS)

    Hill, T.R.

    1983-01-01

    The time-absorption eigenvalue (α) calculation is one of the options found in most discrete-ordinates transport codes. Several methods have been developed at Los Alamos to improve the efficiency of this calculation. Two procedures, based on coarse-mesh rebalance, to accelerate the α eigenvalue search are derived. A hybrid scheme to automatically choose the more-effective rebalance method is described. The α rebalance scheme permits some simple modifications to the iteration strategy that eliminates many unnecessary calculations required in the standard search procedure. For several fast supercritical test problems, these methods resulted in convergence with one-fifth the number of iterations required for the conventional eigenvalue search procedure
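
    For orientation, the time-absorption (α) eigenvalue problem the abstract refers to has the standard textbook transport form in which α/v acts as an additional absorption term:

      \Big[\,\boldsymbol{\Omega}\cdot\nabla + \Sigma_t(\mathbf{r},E) + \frac{\alpha}{v(E)}\,\Big]\,\psi
      = \int\!\!\int \Sigma_s(\mathbf{r},E'\!\to\!E,\boldsymbol{\Omega}'\!\cdot\!\boldsymbol{\Omega})\,\psi\,d\Omega'\,dE'
      + \frac{\chi(E)}{4\pi}\int\!\!\int \nu\Sigma_f(\mathbf{r},E')\,\psi\,d\Omega'\,dE',

    and the search iterates on α (positive for a supercritical system, negative for a subcritical one) until this balance is satisfied.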

  17. Effect of the improvement of the HITRAN database on the radiative transfer calculation

    International Nuclear Information System (INIS)

    Feng Xuan; Zhao Fengsheng; Gao Wenhua

    2007-01-01

    The line parameters of HITRAN 2004 have been updated compared with the older editions (the 2000 edition and the 1996 edition). In order to assess the effect of these modifications on high-spectral-resolution radiative transfer calculations, comparisons of optical depth and radiance spectra between the different editions are given. Four infrared spectral regions are selected; they cover the three bands of the Atmospheric Infrared Sounder (AIRS) and one band of the Geosynchronous Imaging Fourier Transform Spectrometer (GIFTS). The comparison shows that the relative differences between HITRAN 2000 and 2004, and between HITRAN 1996 and 2004, are decreasing. However, the maximum discrepancy between the two latest editions in some spectral intervals is over 1%. It is therefore important either to estimate correctly the calculation error associated with the line parameters or to use the new edition of HITRAN

  18. Advances in supercell calculation methods and comparison with measurements

    Energy Technology Data Exchange (ETDEWEB)

    Arsenault, B [Atomic Energy of Canada Limited, Mississauga, Ontario (Canada); Baril, R; Hotte, G [Hydro-Quebec, Central Nucleaire Gentilly, Montreal, Quebec (Canada)

    1996-07-01

    In the last few years, modelling techniques have been developed in new supercell computer codes. These techniques have been used to model the CANDU reactivity devices. One technique is based on one- and two-dimensional transport calculations with the WIMS-AECL lattice code, followed by super-homogenization and three-dimensional flux calculations in a modified version of the MULTICELL code. The second technique is based on two- and three-dimensional transport calculations in DRAGON. The code calculates the lattice properties by solving the transport equation in a two-dimensional geometry, followed by supercell calculations in three dimensions. These two calculation schemes have been used to calculate the incremental macroscopic properties of CANDU reactivity devices. The supercell size has also been modified to define incremental properties over a larger region. The results show improved agreement between measured and calculated reactivity worths of the zone controllers and adjusters. However, at the same time the agreement between measured and simulated flux distributions deteriorated somewhat. (author)

  19. Ab initio theory and calculations of X-ray spectra

    International Nuclear Information System (INIS)

    Rehr, J.J.; Kas, J.J.; Prange, M.P.; Sorini, A.P.; Takimoto, Y.; Vila, F.

    2009-01-01

    There has been dramatic progress in recent years both in the calculation and interpretation of various x-ray spectroscopies. However, current theoretical calculations often use a number of simplified models to account for many-body effects, in lieu of first principles calculations. In an effort to overcome these limitations we describe in this article a number of recent advances in theory and in theoretical codes which offer the prospect of parameter free calculations that include the dominant many-body effects. These advances are based on ab initio calculations of the dielectric and vibrational response of a system. Calculations of the dielectric function over a broad spectrum yield system dependent self-energies and mean-free paths, as well as intrinsic losses due to multielectron excitations. Calculations of the dynamical matrix yield vibrational damping in terms of multiple-scattering Debye-Waller factors. Our ab initio methods for determining these many-body effects have led to new, improved, and broadly applicable x-ray and electron spectroscopy codes. (authors)

  20. An improved fast multipole method for electrostatic potential calculations in a class of coarse-grained molecular simulations

    International Nuclear Information System (INIS)

    Poursina, Mohammad; Anderson, Kurt S.

    2014-01-01

    This paper presents a novel algorithm to approximate the long-range electrostatic potential field in Cartesian coordinates applicable to 3D coarse-grained simulations of biopolymers. In such models, coarse-grained clusters are formed via treating groups of atoms as rigid and/or flexible bodies connected together via kinematic joints. Therefore, multibody dynamic techniques are used to form and solve the equations of motion of such coarse-grained systems. In this article, the approximations for the potential fields due to the interaction between a highly negatively/positively charged pseudo-atom and charged particles, as well as the interaction between clusters of charged particles, are presented. These approximations are expressed in terms of physical and geometrical properties of the bodies such as the entire charge, the location of the center of charge, and the pseudo-inertia tensor about the center of charge of the clusters. Further, a novel substructuring scheme is introduced to implement the presented far-field potential evaluations in a binary tree framework as opposed to the existing quadtree and octree strategies of implementing the fast multipole method. Using the presented Lagrangian grids, the electrostatic potential is recursively calculated via sweeping two passes: assembly and disassembly. In the assembly pass, adjacent charged bodies are combined together to form new clusters. Then, the potential field of each cluster due to its interaction with faraway resulting clusters is recursively calculated in the disassembly pass. The method is highly compatible with multibody dynamic schemes to model coarse-grained biopolymers. Since the proposed method takes advantage of constant physical and geometrical properties of rigid clusters, an improvement in the overall computational cost is observed compared to the traditional application of the fast multipole method.

  1. An improved fast multipole method for electrostatic potential calculations in a class of coarse-grained molecular simulations

    Science.gov (United States)

    Poursina, Mohammad; Anderson, Kurt S.

    2014-08-01

    This paper presents a novel algorithm to approximate the long-range electrostatic potential field in Cartesian coordinates applicable to 3D coarse-grained simulations of biopolymers. In such models, coarse-grained clusters are formed via treating groups of atoms as rigid and/or flexible bodies connected together via kinematic joints. Therefore, multibody dynamic techniques are used to form and solve the equations of motion of such coarse-grained systems. In this article, the approximations for the potential fields due to the interaction between a highly negatively/positively charged pseudo-atom and charged particles, as well as the interaction between clusters of charged particles, are presented. These approximations are expressed in terms of physical and geometrical properties of the bodies such as the entire charge, the location of the center of charge, and the pseudo-inertia tensor about the center of charge of the clusters. Further, a novel substructuring scheme is introduced to implement the presented far-field potential evaluations in a binary tree framework as opposed to the existing quadtree and octree strategies of implementing the fast multipole method. Using the presented Lagrangian grids, the electrostatic potential is recursively calculated via sweeping two passes: assembly and disassembly. In the assembly pass, adjacent charged bodies are combined together to form new clusters. Then, the potential field of each cluster due to its interaction with faraway resulting clusters is recursively calculated in the disassembly pass. The method is highly compatible with multibody dynamic schemes to model coarse-grained biopolymers. Since the proposed method takes advantage of constant physical and geometrical properties of rigid clusters, an improvement in the overall computational cost is observed compared to the traditional application of the fast multipole method.
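
    The zeroth-order ingredient of the far-field approximation described above (a faraway cluster represented by its total charge placed at its center of charge) can be sketched as follows; the higher-order terms built from the pseudo-inertia tensor are omitted, and units and the Coulomb constant are dropped for brevity.

      # Monopole-style far-field approximation for a cluster of charges (illustrative).
      import numpy as np

      rng = np.random.default_rng(1)
      q = rng.uniform(0.5, 1.5, size=20)                              # charges in one cluster
      r = rng.normal(loc=0.0, scale=0.5, size=(20, 3))                # positions around the origin

      Q   = q.sum()                                                   # total cluster charge
      r_c = (q[:, None] * r).sum(axis=0) / Q                          # center of charge

      p = np.array([25.0, 0.0, 0.0])                                  # faraway evaluation point
      exact  = (q / np.linalg.norm(p - r, axis=1)).sum()
      approx = Q / np.linalg.norm(p - r_c)
      print("exact:", exact, " far-field approx:", approx)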

  2. Comparison of self-consistent calculations of the static polarizability of atoms and molecules

    International Nuclear Information System (INIS)

    Moullet, I.; Martins, J.L.

    1990-01-01

    The static dipole polarizabilities and other ground-state properties of H, H2, He, Na, and Na2 are calculated using five different self-consistent schemes: Hartree-Fock, local spin density approximation, Hartree-Fock plus local density correlation, self-interaction-corrected local spin density approximation, and Hartree-Fock plus self-interaction-corrected local density correlation. The inclusion of the self-interaction-corrected local spin density approximation in the Hartree-Fock method dramatically improves the calculated dissociation energies of molecules but has a small effect on the calculated polarizabilities. Correcting the local spin density calculations for self-interaction effects improves the calculated polarizability in the cases where the local spin density results are mediocre, and has only a small effect in the cases where the local spin density values are in reasonable agreement with experiment

  3. The resonance self-shielding calculation with regularized random ladders

    International Nuclear Information System (INIS)

    Ribon, P.

    1986-01-01

    The straightforward method for the calculation of resonance self-shielding is to generate one or several resonance ladders and to process them as resolved resonances. The main drawback of the Monte Carlo methods used to generate the ladders is the difficulty of reducing the dispersion of data and results. Several methods are examined, and it is shown how one of them (a regularized sampling method) improves the accuracy. Analytical methods to compute the effective cross-section have recently appeared: they are basically exempt from dispersion, but are inevitably approximate. The accuracy of the most sophisticated one is checked. There is a neutron energy range which is improperly considered as statistical. An examination is presented of what happens when it is treated as statistical, and of how it is possible to improve the accuracy of calculations in this range. To illustrate the results, calculations have been performed in a simple case: the nucleus {sup 238}U, at 300 K, between 4250 and 4750 eV. (author)

  4. Increasing nursing students' understanding and accuracy with medical dose calculations: A collaborative approach.

    Science.gov (United States)

    Mackie, Jane E; Bruce, Catherine D

    2016-05-01

    Accurate calculation of medication dosages can be challenging for nursing students. Specific interventions related to the types of errors made by nursing students may improve the learning of this important skill. The objective of this study was to determine areas of challenge for students in performing medication dosage calculations in order to design interventions to improve this skill. Strengths and weaknesses in the teaching and learning of medication dosage calculations were assessed. These data were used to create online interventions which were then measured for their impact on student ability to perform medication dosage calculations. The setting of the study is one university in Canada. The qualitative research participants were 8 nursing students from years 1-3 and 8 faculty members. Quantitative results are based on test data from the same second year clinical course during the academic years 2012 and 2013. Students and faculty participated in one-to-one interviews; responses were recorded and coded for themes. Tests were implemented and scored, then data were assessed to classify the types and number of errors. Students identified conceptual understanding deficits, anxiety, low self-efficacy, and numeracy skills as primary challenges in medication dosage calculations. Faculty identified long division as a particular content challenge, and a lack of online resources for students to practice calculations. Lessons and online resources designed as an intervention to target mathematical concepts and skills led to improved results and increases in overall pass rates for second year students on medication dosage calculation tests. This study suggests that with concerted effort and a multi-modal approach to supporting nursing students, their abilities to calculate dosages can be improved. The positive results in this study also point to the promise of cross-discipline collaborations between nursing and education.

  5. Human Trafficking and National Morality

    Directory of Open Access Journals (Sweden)

    William R. DI PIETRO

    2015-12-01

    The paper proposes that national morality is an important variable for explaining national anti-trafficking policy. It uses cross-country regression analysis to see whether or not, empirically, national morality is a determinant of anti-trafficking policy. The findings of the paper are consistent with the notion that improved levels of national morality lead to better national anti-trafficking policy. National morality is found to be statistically relevant for national anti-trafficking policy when controlling for the extent of democracy, the share of the private sector in the economy, and the degree of globalization.

  6. Improved tissue assignment using dual-energy computed tomography in low-dose rate prostate brachytherapy for Monte Carlo dose calculation

    Energy Technology Data Exchange (ETDEWEB)

    Côté, Nicolas [Département de Physique, Université de Montréal, Pavillon Roger-Gaudry (D-428), 2900 Boulevard Édouard-Montpetit, Montréal, Québec H3T 1J4 (Canada); Bedwani, Stéphane [Département de Radio-Oncologie, Centre Hospitalier de l’Université de Montréal (CHUM), 1560 Rue Sherbrooke Est, Montréal, Québec H2L 4M1 (Canada); Carrier, Jean-François, E-mail: jean-francois.carrier.chum@ssss.gouv.qc.ca [Département de Physique, Université de Montréal, Pavillon Roger-Gaudry (D-428), 2900 Boulevard Édouard-Montpetit, Montréal, Québec H3T 1J4, Canada and Département de Radio-Oncologie, Centre Hospitalier de l’Université de Montréal (CHUM), 1560 Rue Sherbrooke Est, Montréal, Québec H2L 4M1 (Canada)

    2016-05-15

    Purpose: An improvement in tissue assignment for low-dose rate brachytherapy (LDRB) patients using more accurate Monte Carlo (MC) dose calculation was accomplished with a metallic artifact reduction (MAR) method specific to dual-energy computed tomography (DECT). Methods: The proposed MAR algorithm followed a four-step procedure. The first step involved applying a weighted blend of both DECT scans (I {sub H/L}) to generate a new image (I {sub Mix}). This action minimized Hounsfield unit (HU) variations surrounding the brachytherapy seeds. In the second step, the mean HU of the prostate in I {sub Mix} was calculated and shifted toward the mean HU of the two original DECT images (I {sub H/L}). The third step involved smoothing the newly shifted I {sub Mix} and the two original I {sub H/L}, followed by a subtraction of both, generating an image that represented the metallic artifact (I {sub A,(H/L)}) of reduced noise levels. The final step consisted of subtracting the original I {sub H/L} from the newly generated I {sub A,(H/L)} and obtaining a final image corrected for metallic artifacts. Following the completion of the algorithm, a DECT stoichiometric method was used to extract the relative electronic density (ρ{sub e}) and effective atomic number (Z {sub eff}) at each voxel of the corrected scans. Tissue assignment could then be determined with these two newly acquired physical parameters. Each voxel was assigned the tissue bearing the closest resemblance in terms of ρ{sub e} and Z {sub eff}, comparing with values from the ICRU 42 database. A MC study was then performed to compare the dosimetric impacts of alternative MAR algorithms. Results: An improvement in tissue assignment was observed with the DECT MAR algorithm, compared to the single-energy computed tomography (SECT) approach. In a phantom study, tissue misassignment was found to reach 0.05% of voxels using the DECT approach, compared with 0.40% using the SECT method. Comparison of the DECT and SECT D
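
    One plausible reading of the four-step procedure summarised above is sketched below with numpy; the blend weight, smoothing filter and prostate mask are placeholders, and the sign conventions are a guess at the published algorithm rather than a faithful reproduction of it.

      # Rough sketch of a DECT-based metal-artifact-reduction pipeline (assumptions noted above).
      import numpy as np
      from scipy.ndimage import gaussian_filter

      def dect_mar(I_H, I_L, prostate_mask, w=0.5, sigma=2.0):
          # Step 1: weighted blend of the high- and low-energy scans.
          I_mix = w * I_H + (1.0 - w) * I_L
          corrected = {}
          for name, I in (("H", I_H), ("L", I_L)):
              # Step 2: shift the prostate mean HU of the blend toward the original scan.
              shift = I[prostate_mask].mean() - I_mix[prostate_mask].mean()
              I_mix_shifted = I_mix + shift
              # Step 3: smooth both images and subtract to estimate a low-noise artifact image.
              I_art = gaussian_filter(I, sigma) - gaussian_filter(I_mix_shifted, sigma)
              # Step 4: remove the estimated artifact from the original scan.
              corrected[name] = I - I_art
          return corrected["H"], corrected["L"]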

  7. Off-gas treatment carbon footprint calculator : form and function

    Energy Technology Data Exchange (ETDEWEB)

    Kessell, L. [Good EarthKeeping Organization Inc., Corona, CA (United States); Squire, J.; Crosby, K. [Haley and Aldrich Inc., Boston, MA (United States)

    2008-07-01

    Carbon footprinting is the measurement of the impact on the environment in terms of the amount of greenhouse gases produced, measured in units of carbon dioxide released directly and indirectly by an individual, organization, process, event or product. This presentation discussed an off-gas treatment carbon footprint calculator. The presentation provided a review of off-gas treatment technologies and presented a carbon footprint model. The model included: form and function; parameters; assumptions; calculations; and off-gas treatment applications. Parameters of the model included the greenhouse gases listed in the Kyoto Protocol to the United Nations Framework Convention on Climate Change, such as carbon dioxide, methane, nitrous oxide, sulfur hexafluoride, hydrofluorocarbons, and perfluorocarbons. Assumptions of the model included stationary combustion emissions; mobile combustion emissions; indirect emissions; physical or chemical processing emissions; fugitive emissions; and de minimis emissions. The presentation also examined resource conservation and discussed three greenhouse gas footprint case studies. It was concluded that the model involved a calculator with standard calculations and clearly defined assumptions and boundaries. tabs., figs.
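
    The roll-up such a calculator performs can be reduced to converting each Kyoto-basket gas to CO2-equivalents with a global warming potential (GWP) and summing across emission categories; the GWP values below are the widely used IPCC AR4 100-year figures and the emission numbers are made up.

      # Toy GHG roll-up to a single CO2-equivalent footprint (illustrative numbers).
      EMISSIONS_KG = {
          "CO2": {"stationary": 1200.0, "mobile": 300.0, "indirect_power": 2500.0},
          "CH4": {"fugitive": 4.0},
          "N2O": {"stationary": 0.6},
      }
      GWP_100 = {"CO2": 1.0, "CH4": 25.0, "N2O": 298.0}   # IPCC AR4 100-year values

      footprint = sum(GWP_100[gas] * sum(cats.values()) for gas, cats in EMISSIONS_KG.items())
      print(f"carbon footprint: {footprint:.0f} kg CO2e")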

  8. JULIA: calculation software for the design of primary barrier shielding against X-rays using barite; JULIA: software de projeção de cálculos para blindagem de barreiras primárias à raios-X usando barita

    Energy Technology Data Exchange (ETDEWEB)

    Silva, Júlia R.A.S. da; Vieira, José W., E-mail: j.rafaela14@gmail.com, E-mail: jose.wilson@recife.ifpe.edu.br [Instituto Federal de Educação, Ciência e Tecnologia de Pernambuco (IFPE), Recife - PE (Brazil); Lima, Fernando R. A., E-mail: falima@cnen.gov.br [Centro Regional de Ciências Nucleares do Nordeste (CRCN-NE/CNEN-PE), Recife, PE (Brazil)

    2017-07-01

    The objective was to develop software to calculate the thicknesses required to attenuate X-rays at tube voltages of 60 kV, 80 kV, 110 kV and 150 kV. The conventional methodological parameters for structural shielding calculations established by the NCRP (National Council on Radiation Protection and Measurements) are presented. Descriptive and exploratory methods allowed the construction of JULIA. In this sense, and based on the results obtained, the tool presented is useful for professionals who wish to design structural shielding in diagnostic radiology and/or therapy. The implementation of the calculations in the computational tool provides accessibility, time savings and estimates close to reality. This heuristic exercise represents an improvement of the calculations for the estimation of primary barriers with barite.
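
    The kind of primary-barrier calculation the NCRP methodology prescribes can be sketched in two steps: compute the required transmission factor from the design goal, workload and geometry, then invert an Archer-type transmission curve for the thickness. The Archer fitting parameters below are placeholders; real barite coefficients at a given tube voltage would have to come from published fits.

      # Primary-barrier transmission and thickness sketch (placeholder parameters).
      import math

      def required_transmission(P, d, K1, N, U, T):
          """P: design goal (mGy/wk), d: distance (m), K1: unshielded kerma per patient at 1 m (mGy),
          N: patients/wk, U: use factor, T: occupancy factor."""
          return P * d**2 / (K1 * N * U * T)

      def archer_thickness(B, alpha, beta, gamma):
          """Invert B(x) = [(1 + beta/alpha)*exp(alpha*gamma*x) - beta/alpha]**(-1/gamma) for x."""
          return math.log((B**(-gamma) + beta / alpha) / (1.0 + beta / alpha)) / (alpha * gamma)

      B = required_transmission(P=0.02, d=3.0, K1=2.3, N=120, U=0.25, T=1.0)
      x = archer_thickness(B, alpha=2.5, beta=15.0, gamma=0.75)   # placeholder barite fit parameters
      print(f"required transmission B = {B:.4f}, thickness = {x:.2f} (in the units of the fit)")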

  9. Testing of the analytical anisotropic algorithm for photon dose calculation

    International Nuclear Information System (INIS)

    Esch, Ann van; Tillikainen, Laura; Pyykkonen, Jukka; Tenhunen, Mikko; Helminen, Hannu; Siljamaeki, Sami; Alakuijala, Jyrki; Paiusco, Marta; Iori, Mauro; Huyskens, Dominique P.

    2006-01-01

    The analytical anisotropic algorithm (AAA) was implemented in the Eclipse (Varian Medical Systems) treatment planning system to replace the single pencil beam (SPB) algorithm for the calculation of dose distributions for photon beams. AAA was developed to improve the dose calculation accuracy, especially in heterogeneous media. The total dose deposition is calculated as the superposition of the dose deposited by two photon sources (primary and secondary) and by an electron contamination source. The photon dose is calculated as a three-dimensional convolution of Monte-Carlo precalculated scatter kernels, scaled according to the electron density matrix. For the configuration of AAA, an optimization algorithm determines the parameters characterizing the multiple source model by optimizing the agreement between the calculated and measured depth dose curves and profiles for the basic beam data. We have combined the acceptance tests obtained in three different departments for 6, 15, and 18 MV photon beams. The accuracy of AAA was tested for different field sizes (symmetric and asymmetric) for open fields, wedged fields, and static and dynamic multileaf collimation fields. Depth dose behavior at different source-to-phantom distances was investigated. Measurements were performed on homogeneous, water equivalent phantoms, on simple phantoms containing cork inhomogeneities, and on the thorax of an anthropomorphic phantom. Comparisons were made among measurements, AAA, and SPB calculations. The optimization procedure for the configuration of the algorithm was successful in reproducing the basic beam data with an overall accuracy of 3%, 1 mm in the build-up region, and 1%, 1 mm elsewhere. Testing of the algorithm in more clinical setups showed comparable results for depth dose curves, profiles, and monitor units of symmetric open and wedged beams below d_max. The electron contamination model was found to be suboptimal to model the dose around d_max, especially for physical

  10. DIAGNOSING NATIONAL AND ORGANIZATIONAL CULTURE DIFFERENCES: A RESEARCH IN HOTEL ENTERPRISES

    OpenAIRE

    AKDENİZ, Defne; AYTEMİZ SEYMEN, Oya

    2013-01-01

    This study aimed to test whether national culture and organizational cultures were isomorphic in accommodation establishments, through Hofstede’s cultural dimensions. Based on data from a survey of 142 employees from multinational hotels in Istanbul, the existence and degree of difference between national and organizational culture were tested. The new culture scores were calculated by calculation formulas derived from the mean scores of each culture dimension. The most important result of th...

  11. Three-dimensional transport calculations for a PWR core; Calcul de coeur R.E.P. en transport 3D

    Energy Technology Data Exchange (ETDEWEB)

    Richebois, E

    2000-07-01

    The objective of this work is to define improved 3-D core calculation methods based on transport theory. These methods can be particularly useful and lead to more precise computations in areas of the core where anisotropy and steep flux gradients occur, especially near interfaces and boundary conditions and in regions of high heterogeneity (bundles with absorber rods). In order to apply transport theory, a new method for calculating reflector constants has been developed, since traditional methods were only suited to 2-group diffusion core calculations and could not be extrapolated to transport calculations. In this thesis work, the new method for obtaining reflector constants is derived regardless of the number of energy groups and of the operator used. The core calculation results using the reflector constants thus obtained have been validated on EDF's Saint Laurent B1 power reactor with a MOX loading. The advantages of a 3-D core transport calculation scheme over diffusion methods have been highlighted; there are a considerable number of significant effects and potential advantages to be gained, for instance in rod worth calculations. These preliminary results, obtained for one particular cycle, will have to be confirmed by more systematic analysis. Accidents like MSLB (main steam line break) and LOCA (loss of coolant accident) should also be investigated; they constitute challenging situations where anisotropy is high and/or flux gradients are steep. This method is now being validated for other EDF PWRs, as well as for experimental reactors and other types of commercial reactors. (author)

  13. National seminar on tree improvement, January 8, 1981

    Energy Technology Data Exchange (ETDEWEB)

    1981-01-01

    Twenty-one papers are presented from this seminar held at Kumarapumal Farm Science Centre, Tiruchira. An introductory paper gives a resume of tree improvement work at Tamil Nadu University, and this is followed by papers on the improvement of eucalypts (11), Casuarina, teak (3) and other species (3 papers on sandal, cashew and gamma irradiation of amla (Emblica officinalis) seeds).

  14. Realistic methods for calculating the releases and consequences of a large LOCA

    International Nuclear Information System (INIS)

    Stephenson, W.; Dutton, L.M.C.; Handy, B.J.; Smedley, C.

    1992-01-01

    This report describes a calculational route to predict realistic radiological consequences for a successfully terminated large loss-of-coolant accident (LOCA) at a pressurized-water reactor (PWR). All steps in the calculational route are considered. For each one, a brief comment is made on the significant differences between the methods of calculation that were identified in the benchmark studies, and recommendations are made for the methods and data for carrying out realistic calculations. These are based on the best supportable methods and data, and the technical basis for each recommendation is given. Where the lack of well-validated methods or data means that the most realistic method that can be justified is considered to be very conservative, the need for further research is identified. The behaviour of inorganic iodine and the removal of aerosols from the atmosphere of the reactor building are identified as areas of particular importance. Where the retention of radioactivity is sensitive to design features, these are identified and, for the most important features, the impact of different designs on the release of activity is indicated. The predictions of the proposed model are calculated for each stage and compared with the releases of activity predicted by the licensing methods that were used in the earlier benchmark studies. The conservative nature of the latter is confirmed. Methods and data are also presented for calculating the resulting doses to members of the public, drawing on the National Radiological Protection Board and on work carried out by several national bodies in the UK. Other, equally acceptable, models are used in other countries of the Community and some examples are given

  15. Four years of experience with the use of calculated isotopic correlations in establishing input balances at the La Hague plant

    International Nuclear Information System (INIS)

    Aries, M.; Patigny, P.; Bouchard, J.; Giacometti, A.; Girieud, R.

    1983-01-01

    For more than four years the La Hague reprocessing plant has been using calculated isotopic correlations to establish and check its input balances. The masses of uranium and plutonium entering the plant are determined by the gravimetric balance method, which uses the burnup obtained by calculated isotopic correlation as well as the Pu/U ratio measured at the dissolver after cross-checking with the values obtained by correlation. Further, a verification of all the parameters needed to establish these balances - whether physical or chemical in origin - is carried out systematically by means of internal coherence checks which make it possible to detect any anomalies in the dissolution data. The calculated isotopic correlations were established from the interpretation of analyses of numerous representative samples of irradiated fuel and of experimental results on separated-isotope irradiation in water-reactor spectra. The accuracy achieved was improved by allowing, in the neutron calculations, for effects inherent in the first reactor core and by selecting a set of calculation functions which attenuates (by compensation effects) the various perturbations in the irradiation history. The results obtained at La Hague with calculated isotopic correlations on nearly 600 t of reprocessed UO2, because of their large number and above all their high quality, suggest that the method could be extended to other reprocessing plants. This could be done by the operator himself or by national or international control bodies within the framework of a safeguards arrangement. (author)

  16. Source term calculations - Ringhals 2 PWR

    International Nuclear Information System (INIS)

    Johansson, L.L.

    1998-02-01

    This project was performed within the fifth and final phase of sub-project RAK-2.1 of the Nordic Co-operative Reactor Safety Program, NKS. RAK-2.1 has also included studies of reflooding of a degraded core, recriticality and late-phase melt progression. Earlier source term calculations for Swedish nuclear power plants are based on the integral code MAAP. A need was recognised to compare these calculations with calculations done with mechanistic codes. In the present work SCDAP/RELAP5 and CONTAIN were used. Only limited results could be obtained within the frame of RAK-2.1, since many problems were encountered using the SCDAP/RELAP5 code. The main obstacle was the extremely long execution times of the MOD3.1 version, but also some dubious fission product calculations. However, some interesting results were obtained for the studied sequence, a total loss of AC power. The report describes the modelling approach for SCDAP/RELAP5 and CONTAIN, and discusses results for the transient including the event of a surge line creep rupture. The study will probably be completed later, provided that an improved SCDAP/RELAP5 code version becomes available. (au)

  17. National emissions from tourism: An overlooked policy challenge?

    International Nuclear Information System (INIS)

    Gössling, Stefan

    2013-01-01

    Tourism has been recognized as a significant greenhouse gas (GHG) emissions sector on a global scale. Yet, only a few studies assess tourism's share in national emissions. This paper compares and analyses existing inventories of national emissions from tourism. Studies are difficult to compare because they use different system boundaries and allocation principles, omitting or including lifecycle emissions and GHGs other than CO2. By outlining and analysing these differences, the paper estimates the contribution made by tourism to national emissions, and its greenhouse gas intensity in comparison to other economic sectors. Results indicate that while emissions from tourism are significant in all countries studied, they may, in some countries, exceed 'official' emissions as calculated on the basis of guidelines for national emission inventories under the Kyoto Protocol. This is a result of the fact that bunker fuels are not considered in national GHG inventories, leading to underestimates of the energy and GHG intensity of tourism economies. While further growth in tourism emissions can be expected in all countries studied, energy-related vulnerabilities are already considerable in many of these. Climate policy for tourism, on the other hand, is largely non-existent, calling for immediate action to consider this sector in national legislation. - Highlights: • Emissions from tourism are equivalent to 5–150% of 'official' national emissions. • Inconsistent methods are used to calculate national tourism emissions. • Tourism is an energy-intense economic sector compared to other sectors. • Emissions from tourism are growing rapidly. • National policy is not concerned with tourism-related emissions.

  18. TTS-Polttopuu - cost calculation model for fuelwood

    International Nuclear Information System (INIS)

    Naett, H.; Ryynaenen, S.

    1998-01-01

    The TTS Institute's Forestry Department has developed a computer-based cost calculation model, 'TTS-Polttopuu', for the calculation of unit costs and resource needs in harvesting systems for wood chips and split firewood. The model makes it possible to determine the productivity and device cost per operating hour for each working stage of the harvesting system. It also enables the user to find out how changes in the productivity and cost bases of different harvesting chains influence the unit cost of the whole system. The harvesting chain includes the cutting of delimbed and non-delimbed fuelwood, forest haulage, road transportation, and chipping and chopping of longwood at storage. This stand-alone software was originally developed to serve research needs, but it also serves the needs of forestry and agricultural education, training and extension, as well as individual firewood producers. The system requirements for this cost calculation model are at least a 486-level processor with the Windows 95/98 operating system, 16 MB of memory (RAM) and 5 MB of available hard-disk space. This development work was carried out in conjunction with the nation-wide BIOENERGY Research Programme. (orig.)
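
    The arithmetic at the heart of such a model is straightforward: each working stage contributes its hourly machine and labour cost divided by its productivity, and the stage unit costs are summed over the harvesting chain. The sketch below illustrates only that generic bookkeeping in Python; the stage names, productivities and hourly rates are invented placeholders, not TTS-Polttopuu data.

```python
# Illustrative unit-cost calculation for a fuelwood harvesting chain.
# Stage data (productivity in solid m3 per effective hour, cost in EUR per hour)
# are hypothetical placeholders, not figures from TTS-Polttopuu.

STAGES = {
    "cutting":             {"productivity_m3_per_h": 2.5,  "cost_eur_per_h": 35.0},
    "forest_haulage":      {"productivity_m3_per_h": 6.0,  "cost_eur_per_h": 55.0},
    "road_transport":      {"productivity_m3_per_h": 12.0, "cost_eur_per_h": 70.0},
    "chipping_at_storage": {"productivity_m3_per_h": 20.0, "cost_eur_per_h": 90.0},
}

def stage_unit_cost(productivity_m3_per_h: float, cost_eur_per_h: float) -> float:
    """Unit cost of one working stage, in EUR per solid cubic metre."""
    return cost_eur_per_h / productivity_m3_per_h

def chain_unit_cost(stages: dict) -> float:
    """Total unit cost of the whole harvesting chain (EUR/m3)."""
    return sum(stage_unit_cost(**s) for s in stages.values())

if __name__ == "__main__":
    for name, s in STAGES.items():
        print(f"{name:22s} {stage_unit_cost(**s):6.2f} EUR/m3")
    print(f"{'whole chain':22s} {chain_unit_cost(STAGES):6.2f} EUR/m3")
```

    Changing one stage's productivity or hourly cost then immediately shows its effect on the unit cost of the whole chain, which is the kind of sensitivity question the model is described as answering.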

  19. Typical calculation and analysis of carbon emissions in thermal power plants

    Science.gov (United States)

    Gai, Zhi-jie; Zhao, Jian-gang; Zhang, Gang

    2018-03-01

    On December 19, 2017, the National Development and Reform Commission issued the national carbon emissions trading market construction plan (power generation industry), which officially launched the construction of the carbon emissions trading market. The plan promotes a phased advance in carbon market construction, taking the power industry, with its large carbon footprint, as a breakthrough, so it is extremely urgent for power generation plants to master their carbon emissions. Taking a coal-fired power plant as an example, the paper introduces the calculation process for carbon emissions and derives the fuel activity level, fuel emission factor and carbon emissions of the plant. Following this approach, power plants can determine their own carbon emissions, build up knowledge of their carbon inventory, and become familiar with the calculation method underlying power-industry carbon emissions data, which can help them position themselves accurately in the upcoming carbon emissions trading market.
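
    The abstract does not reproduce the formulas, but fuel-based CO2 accounting of this kind generally follows the activity-level-times-emission-factor pattern used in national and sectoral guidelines: fuel consumption times net calorific value gives the activity level, which is multiplied by the carbon content per unit energy, the carbon oxidation rate and the molar ratio 44/12. The sketch below shows that generic pattern only; the numerical inputs are placeholders, not data from the plant studied in the paper.

```python
# Generic fuel-based CO2 estimate: emissions = activity level x emission factor.
# All numbers below are illustrative placeholders, not the plant data from the paper.

def fuel_activity_level_tj(fuel_burned_t: float, ncv_gj_per_t: float) -> float:
    """Activity level in TJ: fuel mass times net calorific value."""
    return fuel_burned_t * ncv_gj_per_t / 1000.0          # GJ -> TJ

def co2_emissions_t(activity_tj: float,
                    carbon_content_t_c_per_tj: float,
                    oxidation_factor: float) -> float:
    """CO2 in tonnes: carbon content x oxidation rate x 44/12."""
    return activity_tj * carbon_content_t_c_per_tj * oxidation_factor * 44.0 / 12.0

if __name__ == "__main__":
    activity = fuel_activity_level_tj(fuel_burned_t=1_000_000, ncv_gj_per_t=20.9)
    emissions = co2_emissions_t(activity, carbon_content_t_c_per_tj=26.2,
                                oxidation_factor=0.98)
    print(f"activity level : {activity:,.0f} TJ")
    print(f"CO2 emissions  : {emissions:,.0f} t")
```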

  20. Audit calculation for the LOCA methodology for KSNP

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Un Chul; Park, Chang Hwan; Choi, Yong Won; Yoo, Jun Soo [Seoul National Univ., Seoul (Korea, Republic of)

    2006-11-15

    The objective of this research is to perform the regulatory audit calculation for the LOCA methodology for KSNP. For the LBLOCA calculation, several uncertainty variables and new ranges for them are added to those of the previous KINS-REM to improve the applicability of KINS-REM to KSNP LOCA. These results are then applied to the LBLOCA audit calculation using a statistical method. For the SBLOCA calculation, after selecting BATHSY9.1.b, which is not used by KHNP, the results of RELAP5/MOD3.3 and RELAP5/MOD3.3ef-sEM for KSNP SBLOCA are compared to evaluate the conservativeness and applicability of the RELAP5/MOD3.3ef-sEM code for KSNP SBLOCA. The results of this research can be used to support the activities of KINS in reviewing the LOCA methodology for KSNP proposed by KHNP.

  1. Non-Intrusive Computational Method and Uncertainty Quantification Tool for isolator operability calculations, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Computational fluid dynamics (CFD) simulations are extensively used by NASA for hypersonic aerothermodynamics calculations. The physical models used in CFD codes and...

  2. An improved 8 GeV beam transport system for the Fermi National Accelerator Laboratory

    International Nuclear Information System (INIS)

    Syphers, M.J.

    1987-06-01

    A new 8 GeV beam transport system between the Booster and Main Ring synchrotrons at the Fermi National Accelerator Laboratory is presented. The system was developed in an effort to improve the transverse phase space area occupied by the proton beam upon injection into the Main Ring accelerator. Problems with the original system are described and general methods of beamline design are formulated. Errors in the transverse properties of a beamline at the injection point of the second synchrotron and their effects on the region in transverse phase space occupied by a beam of particles are discussed. Results from the commissioning phase of the project are presented as well as measurements of the degree of phase space dilution generated by the transfer of 8 GeV protons from the Booster synchrotron to the Main Ring synchrotron

  3. Matching fully differential NNLO calculations and parton showers

    International Nuclear Information System (INIS)

    Alioli, Simone; Bauer, Christian W.; Berggren, Calvin; Walsh, Jonathan R.; Zuberi, Saba

    2013-11-01

    We present a general method to match fully differential next-to-next-to-leading order (NNLO) calculations to parton shower programs. We discuss in detail the perturbative accuracy criteria a complete NNLO+PS matching has to satisfy. Our method is based on consistently improving a given NNLO calculation with the leading-logarithmic (LL) resummation in a chosen jet resolution variable. The resulting NNLO+LL calculation is cast in the form of an event generator for physical events that can be directly interfaced with a parton shower routine, and we give an explicit construction of the input "Monte Carlo cross sections" satisfying all required criteria. We also show how other proposed approaches naturally arise as special cases in our method.

  4. Do Forwarders Improve Sustainability Efficiency? Evidence from a European DEA Malmquist Index Calculation

    Directory of Open Access Journals (Sweden)

    Matthias Klumpp

    2017-05-01

    Sustainability performance and efficiency is an important topic in transportation and for forwarders. This is shown, for example, by the fact that major logistics service providers (LSPs) publish sustainability reports, often within the annual legal business report. However, in-depth research is missing regarding the efficiency of forwarders with respect to the established triple bottom line approach for sustainability, including the economic, social, and ecological performance areas. This is especially true for a dynamic time-series perspective, as usually only static analyses for one point in time are presented (in most cases single business years). Therefore, the operations research technique of a data envelopment analysis (DEA) Malmquist index calculation is used in order to provide a longitudinal calculation of efficiency, incorporating multiple objectives regarding the triple bottom line approach, for European forwarders. Several indicators are tested, including total revenues and assets as input types, profit (EBIT) and dividend volume (economic dimension), employment and gender equality in management (social), and carbon-equivalent emissions (environmental) as output types.
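
    As a reminder of the mechanics rather than a reproduction of the study, the Malmquist productivity index between two periods is assembled from four distance-function (efficiency) evaluations: each period's observation scored against each period's frontier. Given those scores from whatever DEA solver is used, the index and its efficiency-change and frontier-shift components follow directly, as in the sketch below (the scores shown are invented).

```python
import math

def malmquist_index(d_t_xt: float, d_t_xt1: float,
                    d_t1_xt: float, d_t1_xt1: float) -> dict:
    """Output-oriented Malmquist index from four distance-function scores.

    d_a_xb is the efficiency of the period-b observation measured against the
    period-a frontier; the values are assumed to come from a DEA solver.
    """
    efficiency_change = d_t1_xt1 / d_t_xt
    frontier_shift = math.sqrt((d_t_xt1 / d_t1_xt1) * (d_t_xt / d_t1_xt))
    return {
        "malmquist": efficiency_change * frontier_shift,
        "efficiency_change": efficiency_change,
        "frontier_shift": frontier_shift,
    }

# Hypothetical scores for one forwarder in periods t and t+1:
print(malmquist_index(d_t_xt=0.82, d_t_xt1=0.95, d_t1_xt=0.78, d_t1_xt1=0.90))
```

    A value above one indicates a productivity improvement between the two periods; the decomposition separates catching up to the frontier from movement of the frontier itself.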

  5. Development and improvement of four submodels for accident consequence calculations (phase B of DRS). Final report. Pt. 2

    International Nuclear Information System (INIS)

    Jacobi, W.; Paretzke, H.G.; Jacob, P.; Meckbach, R.

    1989-11-01

    To improve the external dose model of the German risk study, dose equivalents in 22 organs of anthropomorphic phantoms have been calculated for exposure to radionuclides in the air and on the ground. The angular and energy dependence of the photon fluence, the surface roughness of the ground and the migration of radionuclides in soil have been taken into account. For cloud radiation the organ doses in the new calculations are lower than in phase A, particularly for the red marrow and the bones. For exposures to deposited radionuclides the new results are higher, especially for the lungs and the thyroid (≅ 40%) and the gonads (≅ 60%). Due to the inclusion of the contribution of daughter nuclides, the doses from Te-132 and Ba-140 are higher by an order of magnitude. The migration of important radionuclides in soil has been newly modelled. The resulting reduction of doses over the first 70 years after deposition is smaller by a factor of 1.5. To determine the shielding by houses and urban environments, Monte Carlo simulations of the photon transport have been performed. It was found that for cloud radiation the exposure outdoors in urban areas, in large buildings and in basements had been over-estimated in Phase A. The shielding of radiation from surface contamination differs for wet and dry deposition. The relatively high dry deposition on trees can lead to exposures in suburban areas about twice those over lawns. Living rooms are in general better shielded than previously assumed. (orig./HP) [de

  6. Development and validation of calculation schemes dedicated to the interpretation of small reactivity effects for nuclear data improvement

    International Nuclear Information System (INIS)

    Gruel, A.

    2011-01-01

    Reactivity measurements by the oscillation technique, such as those performed in the Minerve reactor, give access to various neutronic parameters of materials, fuels or specific isotopes. Usually, the expected reactivity effects are small, about ten pcm at most. The modelling of these experiments must therefore be very precise to obtain reliable feedback on the parameters of interest. In particular, calculation biases should be precisely identified, quantified and reduced to get accurate information on nuclear data. The goal of this thesis is to develop a reference calculation scheme, with well-quantified uncertainties, for in-pile oscillation experiments. Several methods for calculating small reactivity effects are presented, based on deterministic and/or stochastic calculation codes. These methods are compared on a numerical benchmark against a reference calculation. Three applications of these methods are presented here: a purely deterministic calculation with the exact perturbation theory formalism is used for the experimental validation of fission product cross sections, in the framework of reactivity-loss studies for irradiated fuel; a hybrid method, based on a stochastic calculation and exact perturbation theory, is used for the readjustment of nuclear data, here for 241Am; and a third method, based on a perturbative Monte Carlo calculation, is used in a design study. (author) [fr
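
    For readers unfamiliar with the exact perturbation theory formalism mentioned above, the standard expression pairs the unperturbed adjoint flux with the perturbed forward flux, so that a small reactivity effect is obtained without subtracting two nearly equal eigenvalues. In one common notation, which may differ from the thesis, it reads:

```latex
% Exact (not first-order) perturbation theory expression for a reactivity change,
% in one common normalization: A = removal/transport operator, F = fission
% production operator, k = multiplication factor; primes denote the perturbed
% state, \phi^{\dagger} the unperturbed adjoint flux, \phi' the perturbed flux.
\Delta\rho = \frac{1}{k} - \frac{1}{k'}
           = \frac{\left\langle \phi^{\dagger},
                   \left( \tfrac{1}{k'}\,\Delta F - \Delta A \right) \phi' \right\rangle}
                  {\left\langle \phi^{\dagger},\, F \,\phi' \right\rangle},
\qquad \Delta A = A' - A, \quad \Delta F = F' - F .
```

    Because the perturbed flux appears explicitly, the expression is exact for finite perturbations, which is what makes it suitable for interpreting oscillation effects of only a few pcm.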

  7. Metrological and treatment planning improvements on external beam radiotherapy. Detector size effect and dose calculation in low-density media (in Spanish)

    International Nuclear Information System (INIS)

    Garcia-Vicente, Feliciano

    2004-01-01

    The objective of this thesis is to improve the accuracy of measurement and calculation for radiation therapy fields. It basically deals with two questions: the detector size effect and dose calculation in heterogeneities. The author analyzes both the metrological and computational effects and their clinical implications by simulating radiotherapy treatments in a treatment planning system. The detector size effect leads to a smoothing of the radiation profile, increasing the penumbra (20%-80%) and beam fringe (50%-90%) values, with the consequent clinical effect of over-irradiation of the organs at risk close to the planning target volume (PTV). In this thesis the problem is analyzed and mathematical solutions are found, based on profile deconvolution or on the use of radiation detectors of adequate size. On the other hand, the author analyzes dose computation in heterogeneous media with superposition algorithms versus classical algorithms. The conclusion derived from this thesis is that in sites such as lung and breast, the classical algorithms lead to a significant underdosage of the PTV with an important decrease of tumor control probability (TCP). On this basis, the author does not recommend the clinical use of these algorithms for the aforementioned tumor sites.
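
    The detector size effect discussed above is conveniently modelled as a convolution of the true dose profile with the detector's spatial response. The sketch below (illustrative only; the beam and detector parameters are invented) convolves an idealized field edge with a Gaussian beam blur and a rectangular detector kernel and reports how the 20%-80% penumbra grows, which is the over-smoothing the thesis quantifies and then removes by deconvolution or by using a smaller detector.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d, uniform_filter1d

# Illustrative 1-D field edge: parameters are invented, not measured beam data.
dx = 0.1                          # mm per sample
x = np.arange(-40.0, 40.0, dx)    # off-axis position, mm
ideal = 0.5 * (1 - np.tanh(x))    # "true" profile with a sharp edge at x = 0

# Intrinsic beam penumbra (Gaussian blur) and finite detector size (boxcar kernel).
beam = gaussian_filter1d(ideal, sigma=2.0 / dx)            # ~2 mm beam blur
detector_size_mm = 6.0                                      # e.g. a large-volume chamber
measured = uniform_filter1d(beam, size=int(round(detector_size_mm / dx)))

def penumbra_20_80(profile: np.ndarray) -> float:
    """Distance between the 80% and 20% levels of a monotonic edge, in mm."""
    x80 = np.interp(0.8, profile[::-1], x[::-1])
    x20 = np.interp(0.2, profile[::-1], x[::-1])
    return abs(x20 - x80)

print(f"beam-only penumbra : {penumbra_20_80(beam):.2f} mm")
print(f"with 6 mm detector : {penumbra_20_80(measured):.2f} mm")
```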

  8. Improvement of the skeleton tables for calculation of the critical heat load

    International Nuclear Information System (INIS)

    Gotovskij, M.A.; Kvetnyj, M.A.

    2002-01-01

    The paper analyses the drawbacks of the skeleton tables of critical heat flux used in thermal-hydraulic calculation codes. It demonstrates the need to account for the specific nature of the dryout mechanism and of the boiling crisis at low mass flow rates and at small subcoolings below the saturation temperature. Attention is drawn to the need for a detailed account of the natural limitations on the range of applicability of the skeleton tables [ru

  9. Validation of Dose Calculation Codes for Clearance

    International Nuclear Information System (INIS)

    Menon, S.; Wirendal, B.; Bjerler, J.; Studsvik; Teunckens, L.

    2003-01-01

    Various international and national bodies, such as the International Atomic Energy Agency, the European Commission and the US Nuclear Regulatory Commission, have put forward proposals or guidance documents to regulate the 'clearance' from regulatory control of very low level radioactive material, in order to allow its recycling as a material management practice. All these proposals are based on predicted scenarios for the subsequent utilization of the released materials. The calculation models used in these scenarios tend to utilize conservative data regarding exposure times and dose uptake, as well as other assumptions, as a safeguard against uncertainties. None of these models has ever been validated by comparison with the actual, real-life practice of recycling. An international project was organized in order to validate some of the assumptions made in these calculation models and, thereby, better assess the radiological consequences of recycling on a practical, large scale.

  10. Optimization of the Spent Fuel Attribute Tester using radiation transport calculations

    International Nuclear Information System (INIS)

    Laub, T.W.; Dupree, S.A.; Arlt, R.

    1993-01-01

    The International Atomic Energy Agency uses the Spent Fuel Attribute Tester (SFAT) to measure gamma signatures from fuel assemblies stored in spent fuel pools. It consists of a shielded, collimated NaI(Tl) detector attached to an air-filled pipe. The purpose of the present study was to define design changes, within operational constraints, that would improve the target assembly 137Cs signal relative to the background signals from adjacent assemblies. This improvement is essential to reducing the measurement time during an inspection to an acceptable level. Monte Carlo calculations of the entire geometry were impractical; therefore, a hybrid method was developed that combined one-dimensional discrete ordinates models of the spent fuel pool, three-dimensional Monte Carlo calculations of the SFAT, and detector response calculations. The method compared well with measurements taken with the existing baseline SFAT. Calculations predicted significant improvements in signal-to-noise ratio. Recommended changes included shortening the pipe and increasing its wall thickness, placing low-Z filters in the crystal line of sight, reducing the thickness of shielding around the collimator aperture and adding shielding around the crystal, and reducing the diameter of the crystal. An instrument incorporating these design changes is being fabricated in Finland and will be tested this year.

  11. Methods for Melting Temperature Calculation

    Science.gov (United States)

    Hong, Qi-Jun

    Melting temperature calculation has important applications in the theoretical study of phase diagrams and in computational materials screening. In this thesis, we present two new methods, i.e., the improved Widom's particle insertion method and the small-cell coexistence method, which we developed in order to capture melting temperatures both accurately and quickly. We propose a scheme that drastically improves the efficiency of Widom's particle insertion method by efficiently sampling cavities while calculating the integrals providing the chemical potentials of a physical system. This idea enables us to calculate chemical potentials of liquids directly from first principles without the help of any reference system, which is necessary in the commonly used thermodynamic integration method. As an example, we apply our scheme, combined with the density functional formalism, to the calculation of the chemical potential of liquid copper. The calculated chemical potential is further used to locate the melting temperature. The calculated results closely agree with experiments. We propose the small-cell coexistence method based on the statistical analysis of small-size coexistence MD simulations. It eliminates the risk of a metastable superheated solid in the fast-heating method, while also significantly reducing the computer cost relative to the traditional large-scale coexistence method. Using empirical potentials, we validate the method and systematically study the finite-size effect on the calculated melting points. The method converges to the exact result in the limit of a large system size. An accuracy within 100 K in melting temperature is usually achieved when the simulation contains more than 100 atoms. DFT examples of tantalum, high-pressure sodium, and the ionic material NaCl are shown to demonstrate the accuracy and flexibility of the method in its practical applications. The method serves as a promising approach for large-scale automated material screening in which
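
    As background to the first of these methods, the textbook Widom estimator obtains the excess chemical potential from the Boltzmann-weighted energy of inserting a ghost particle into stored configurations, mu_ex = -kT ln<exp(-dU/kT)>; the thesis improves on this by sampling cavities preferentially, which the plain sketch below does not attempt. The configuration used here is random rather than equilibrated, so the printed number only demonstrates the estimator, not a physical result.

```python
import numpy as np

rng = np.random.default_rng(0)

# Reduced Lennard-Jones units (sigma = epsilon = 1); illustrative state point.
N, rho, T = 108, 0.3, 2.0
L = (N / rho) ** (1.0 / 3.0)        # cubic box edge
beta = 1.0 / T

def insertion_energy(coords: np.ndarray, ghost: np.ndarray) -> float:
    """LJ energy of a ghost particle with all real particles (minimum image)."""
    d = coords - ghost
    d -= L * np.round(d / L)                     # periodic minimum-image convention
    r2 = np.maximum(np.sum(d * d, axis=1), 1e-12)
    inv6 = 1.0 / r2**3
    return float(np.sum(4.0 * (inv6 * inv6 - inv6)))

# Placeholder configuration; a real calculation would loop over equilibrated
# MC/MD snapshots and average over them as well.
coords = rng.random((N, 3)) * L

n_trials = 20_000
boltzmann = np.array([np.exp(-beta * insertion_energy(coords, rng.random(3) * L))
                      for _ in range(n_trials)])
mu_excess = -T * np.log(boltzmann.mean())        # mu_ex = -kT ln <exp(-beta dU)>
print(f"Widom estimate of mu_excess: {mu_excess:.3f} (reduced units)")
```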

  12. Time step length versus efficiency of Monte Carlo burnup calculations

    International Nuclear Information System (INIS)

    Dufek, Jan; Valtavirta, Ville

    2014-01-01

    Highlights: • Time step length largely affects efficiency of MC burnup calculations. • Efficiency of MC burnup calculations improves with decreasing time step length. • Results were obtained from SIE-based Monte Carlo burnup calculations. - Abstract: We demonstrate that efficiency of Monte Carlo burnup calculations can be largely affected by the selected time step length. This study employs the stochastic implicit Euler based coupling scheme for Monte Carlo burnup calculations that performs a number of inner iteration steps within each time step. In a series of calculations, we vary the time step length and the number of inner iteration steps; the results suggest that Monte Carlo burnup calculations get more efficient as the time step length is reduced. More time steps must be simulated as they get shorter; however, this is more than compensated by the decrease in computing cost per time step needed for achieving a certain accuracy

  13. Monte Carlo neutron and gamma-ray calculations

    International Nuclear Information System (INIS)

    Mendelsohn, Edgar

    1987-01-01

    Kerma in tissue and the activation produced in sulfur and cobalt due to prompt neutrons from the Hiroshima and Nagasaki bombs were calculated out to 2000 m from the hypocenter in 100 m increments. As neutron sources, weapon output spectra calculated by investigators from the Los Alamos National Laboratory (LANL) were used. Other parameters, such as burst height and air and ground densities and compositions, were obtained from recent sources. The LLNL Monte Carlo transport code TART was used for these calculations. TART accesses the well-established 1985 ENDL cross-section library, which has built-in reaction cross sections. The zoning for this problem was a full two-dimensional geometry with a ceiling height of 1100 m and a ground thickness of 30 cm. For the Hiroshima calculations (including sulfur activation) an untilted source was used. However, a special sulfur activation problem using a source tilted 15 deg was run, for which the ratios to the untilted case are reported. The TART code uses a technique for solving the transport equation that is different from that of the ORNL DOT code; it also draws on a specially evaluated cross-section library (ENDL) and uses a larger group structure than DOT. One of the purposes of this work was to instill confidence in the DOT calculations that will be used directly in the dose reassessment of A-bomb survivors. The TART results were compared with values calculated with the DOT code by investigators from ORNL and found to be in good agreement for the most part. However, the sulfur activation comparison is disappointing. Because the sulfur activation is caused by higher-energy neutrons (which should have experienced fewer collisions than those causing cobalt activation, for example), better agreement than what is reported here would be expected.

  14. The national atlas as a metaphor for improved use of a national geospatial data infrastructure

    NARCIS (Netherlands)

    Aditya Kurniawan Muhammad, T.

    2007-01-01

    Geospatial Data infrastructures have been developed worldwide. Geoportals have been created as an interface to allow users or the community to discover and use geospatial data offered by providers of these initiatives. This study focuses on the development of a web national atlas as an alternative

  15. Improving short-term air quality predictions over the U.S. using chemical data assimilation

    Science.gov (United States)

    Kumar, R.; Delle Monache, L.; Alessandrini, S.; Saide, P.; Lin, H. C.; Liu, Z.; Pfister, G.; Edwards, D. P.; Baker, B.; Tang, Y.; Lee, P.; Djalalova, I.; Wilczak, J. M.

    2017-12-01

    State and local air quality forecasters across the United States use air quality forecasts from the National Air Quality Forecasting Capability (NAQFC) at the National Oceanic and Atmospheric Administration (NOAA) as one of the key tools to protect the public from adverse air pollution related health effects by dispensing timely information about air pollution episodes. This project, funded by the National Aeronautics and Space Administration (NASA), aims to enhance the decision-making process by improving the accuracy of NAQFC short-term predictions of ground-level particulate matter of less than 2.5 µm in diameter (PM2.5) by exploiting NASA Earth Science data with chemical data assimilation. The NAQFC is based on the Community Multiscale Air Quality (CMAQ) model. To improve the initialization of PM2.5 in CMAQ, we developed a new capability in the community Gridpoint Statistical Interpolation (GSI) system to assimilate Terra/Aqua Moderate Resolution Imaging Spectroradiometer (MODIS) aerosol optical depth (AOD) retrievals in CMAQ. Specifically, we developed new capabilities within GSI to read/write CMAQ data, a forward operator that calculates AOD at 550 nm from the CMAQ aerosol chemical composition, and an adjoint of the forward operator that translates the changes in AOD to aerosol chemical composition. A generalized background error covariance program called "GEN_BE" has been extended to calculate background error covariance using CMAQ output. The background error variances are generated using a combination of both emissions and meteorological perturbations to better capture sources of uncertainties in PM2.5 simulations. The newly developed CMAQ-GSI system is used to perform daily 24-h PM2.5 forecasts with and without data assimilation from 15 July to 14 August 2014, and the resulting forecasts are compared against AirNow PM2.5 measurements at 550 stations across the U.S. We find that the assimilation of MODIS AOD retrievals improves initialization of the CMAQ model
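
    The essence of such an AOD forward operator is a column integral of the extinction implied by the model's speciated aerosol mass: each species' concentration in a layer is multiplied by a (possibly humidity-dependent) mass extinction efficiency and by the layer thickness, and the layer contributions are summed. The sketch below shows only that generic structure; the species list, efficiencies and profile values are placeholders and do not reproduce the GSI/CMAQ operator.

```python
import numpy as np

# Hypothetical mass extinction efficiencies at 550 nm (m^2 per g of dry aerosol);
# real operators use species-, size- and humidity-dependent optics.
MEE_M2_PER_G = {"sulfate": 3.0, "organic_carbon": 3.5, "black_carbon": 9.0, "dust": 0.6}

def column_aod(conc_ug_m3, layer_thickness_m, rh_growth=None):
    """AOD = sum over layers and species of (extinction coefficient x layer depth)."""
    nlev = len(layer_thickness_m)
    growth = np.ones(nlev) if rh_growth is None else np.asarray(rh_growth)
    aod = 0.0
    for species, mee in MEE_M2_PER_G.items():
        conc = np.asarray(conc_ug_m3[species])             # ug/m3 in each layer
        beta_ext = mee * conc * 1e-6 * growth               # extinction, 1/m
        aod += float(np.sum(beta_ext * layer_thickness_m))
    return aod

# Toy 10-layer profile: layer thicknesses grow with height, concentrations decay.
dz = np.array([100, 150, 200, 300, 400, 500, 700, 900, 1200, 1500], dtype=float)
decay = np.exp(-np.arange(10) / 3.0)
profile = {species: 5.0 * decay for species in MEE_M2_PER_G}
print(f"column AOD at 550 nm: {column_aod(profile, dz):.3f}")
```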

  16. Burnup calculation methodology in the serpent 2 Monte Carlo code

    International Nuclear Information System (INIS)

    Leppaenen, J.; Isotalo, A.

    2012-01-01

    This paper presents two topics related to the burnup calculation capabilities in the Serpent 2 Monte Carlo code: advanced time-integration methods and improved memory management, accomplished by the use of different optimization modes. The development of the introduced methods is an important part of re-writing the Serpent source code, carried out for the purpose of extending the burnup calculation capabilities from 2D assembly-level calculations to large 3D reactor-scale problems. The progress is demonstrated by repeating a PWR test case, originally carried out in 2009 for the validation of the newly-implemented burnup calculation routines in Serpent 1. (authors)

  17. Historical Improvement in Speed Skating Economy.

    Science.gov (United States)

    Noordhof, Dionne A; van Tok, Elmy; Joosten, Florentine S J G M; Hettinga, Florentina J; Hoozemans, Marco J M; Foster, Carl; de Koning, Jos J

    2017-02-01

    Half the improvement in 1500-m speed-skating world records can be explained by technological innovations and the other half by athletic improvement. It is hypothesized that improved skating economy accounts for much of the athletic improvement. Purpose: to determine skating economy in contemporary athletes and to evaluate the change in economy over the years. Contemporary skaters of the Dutch national junior team (n = 8) skated 3 bouts of 6 laps at submaximal velocity, from which skating economy was calculated (in mL O2/kg/km). A literature search provided historic data on skating velocity and submaximal VO2 (in mL/kg/min), from which skating economy was determined. The association between year and skating economy was determined using linear-regression analysis. Correcting the change in economy for technological innovations gave an estimate of the association between year and economy due to athletic improvement. A mean (± SD) skating economy of 73.4 ± 6.4 mL O2/kg/km was found in contemporary athletes. Skating economy improved significantly over the historical time frame (-0.57 mL O2/kg/km per year, 95% confidence interval [-0.84, -0.31]). In the final regression model for the klapskate era, with altitude as confounder, skating economy improved by a nonsignificant -0.58 mL O2/kg/km per year ([-1.19, 0.035]). Skating economy was 73.4 ± 6.4 mL O2/kg/km in contemporary athletes and improved over the past ~50 y. The association between year and skating economy due to athletic improvement, for the klapskate era, approached significance, suggesting a possible improvement in economy over these years.
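
    Skating economy as used here is simply submaximal oxygen uptake divided by skating velocity, which converts mL O2/kg/min into mL O2/kg/km; the historical trend is then the slope of a linear regression of economy on calendar year. The sketch below shows both steps with invented numbers; they are not the study's data.

```python
import numpy as np

def skating_economy(vo2_ml_kg_min: float, velocity_m_s: float) -> float:
    """Economy in mL O2 per kg per km: VO2 divided by distance covered per minute."""
    km_per_min = velocity_m_s * 60.0 / 1000.0
    return vo2_ml_kg_min / km_per_min

# Hypothetical historical series: (year, submaximal VO2, skating velocity).
records = [
    (1970, 45.0, 9.0),
    (1985, 44.0, 9.5),
    (2000, 43.0, 10.2),
    (2015, 42.0, 10.8),
]
years = np.array([r[0] for r in records], dtype=float)
economy = np.array([skating_economy(vo2, v) for _, vo2, v in records])

slope, intercept = np.polyfit(years, economy, 1)
print("economy (mL O2/kg/km):", np.round(economy, 1))
print(f"trend: {slope:.2f} mL O2/kg/km per year")
```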

  18. [OCCUPATIONAL HEALTH RISK ASSESSMENT AND MANAGEMENT IN WORKERS IN IMPROVEMENT OF NATIONAL POLICY IN OCCUPATIONAL HYGIENE AND SAFETY].

    Science.gov (United States)

    Shur, P Z; Zaĭtseva, N V; Alekseev, V B; Shliapnikov, D M

    2015-01-01

    In accordance with international documents in the field of occupational safety and hygiene, the assessment and minimization of occupational risks is a key instrument for maintaining workers' health. One of the main ways to achieve this is the minimization of occupational risks, and the instrument for implementing it is the methodology of occupational risk analysis. In the Russian Federation the preconditions for a system of occupational risk assessment and management have been established. In accordance with ILO Conventions, the prevention of accidents and injuries to health arising from or related to work, and the minimization of the causes of hazards inherent in the working environment, as far as is reasonably and practically feasible, can be proposed as the target of national (state) policy in the field of occupational safety. The global trend of using the methodology of assessment and management of occupational risks to the life and health of citizens requires the improvement of national policies in the field of occupational hygiene and safety. Achieving an acceptable level of occupational risk can be considered one of the main tasks in the formation of national policy in the field of occupational hygiene and safety.

  19. An improved water budget for the El Yunque National Forest, Puerto Rico, as determined by the Water Supply Stress Index Model

    Science.gov (United States)

    Liangxia Zhang; Ge Sun; Erika Cohen; Steven McNulty; Peter Caldwell; Suzanne Krieger; Jason Christian; Decheng Zhou; Kai Duan; Keren J. Cepero-Pérez

    2018-01-01

    Quantifying the forest water budget is fundamental to making science-based forest management decisions. This study aimed at developing an improved water budget for the El Yunque National Forest (ENF) in Puerto Rico, one of the wettest forests in the United States. We modified an existing monthly scale water balance model, Water Supply Stress Index (WaSSI), to reflect...

  20. FENDL neutronics benchmark: Specifications for the calculational neutronics and shielding benchmark

    International Nuclear Information System (INIS)

    Sawan, M.E.

    1994-12-01

    During the IAEA Advisory Group Meeting on "Improved Evaluations and Integral Data Testing for FENDL", held in Garching near Munich, Germany, during the period 12-16 September 1994, Working Group II on "Experimental and Calculational Benchmarks on Fusion Neutronics for ITER" recommended that a calculational benchmark representative of the ITER design should be developed. This report describes the neutronics and shielding calculational benchmark available for scientists interested in performing analysis for this benchmark. (author)

  1. Intercomparison and closure calculations using measurements of aerosol species and optical properties during the Yosemite Aerosol Characterization Study

    Science.gov (United States)

    Malm, William C.; Day, Derek E.; Carrico, Christian; Kreidenweis, Sonia M.; Collett, Jeffrey L.; McMeeking, Gavin; Lee, Taehyoung; Carrillo, Jacqueline; Schichtel, Bret

    2005-07-01

    Physical and optical properties of inorganic aerosols have been extensively studied, but less is known about carbonaceous aerosols, especially as they relate to non-urban settings such as our nation's national parks and wilderness areas. Therefore, an aerosol characterization study was conceived and implemented at one national park that is highly impacted by carbonaceous aerosols, Yosemite. The primary objective of the study was to characterize the physical, chemical, and optical properties of a carbon-dominated aerosol, including the ratio of total organic matter weight to organic carbon, organic mass scattering efficiencies, and the hygroscopic characteristics of a carbon-laden ambient aerosol, while a secondary objective was to evaluate a variety of semi-continuous monitoring systems. Inorganic ions were characterized using 24-hour samples collected with the URG and Interagency Monitoring of Protected Visual Environments (IMPROVE) monitoring systems and the micro-orifice uniform deposit impactor (MOUDI), as well as with the semi-continuous particle-into-liquid sampler (PILS) technology. Likewise, carbonaceous material was collected over 24-hour periods using IMPROVE technology along with thermal optical reflectance (TOR) analysis, while semi-continuous total carbon concentrations were measured using the Rupprecht and Patashnick (R&P) instrument. Dry aerosol number size distributions were measured using a differential mobility analyzer (DMA) and optical particle counter, scattering coefficients at near-ambient conditions were measured with nephelometers fitted with PM10 and PM2.5 inlets, and "dry" PM2.5 scattering was measured after passing ambient air through Perma Pure Nafion® dryers. In general, the 24-hour "bulk" measurements of the various aerosol species compared more favorably with each other than with the semi-continuous data. Semi-continuous sulfate measurements correlated well with the 24-hour measurements, but were biased low by

  2. Impact of a required pharmaceutical calculations course on mathematics ability and knowledge retention.

    Science.gov (United States)

    Hegener, Michael A; Buring, Shauna M; Papas, Elizabeth

    2013-08-12

    To assess doctor of pharmacy (PharmD) students' mathematics ability by content area before and after completing a required pharmaceutical calculations course and to analyze changes in scores. A mathematics skills assessment was administered to 2 cohorts of pharmacy students (class of 2013 and 2014) before and after completing a pharmaceutical calculations course. The posttest was administered to the second cohort 6 months after completing the course to assess knowledge retention. Both cohorts performed significantly better on the posttest (cohort 1, 13% higher scores; cohort 2, 15.9% higher scores). Significant improvement on posttest scores was observed in 6 of the 10 content areas for cohorts 1 and 2. Both cohorts scored lower in percentage calculations on the posttest than on the pretest. A required, 1-credit-hour pharmaceutical calculations course improved PharmD students' overall ability to perform fundamental and application-based calculations.

  3. Improving immediate newborn care practices in Philippine hospitals: impact of a national quality of care initiative 2008-2015.

    Science.gov (United States)

    Silvestre, Maria Asuncion A; Mannava, Priya; Corsino, Marie Ann; Capili, Donna S; Calibo, Anthony P; Tan, Cynthia Fernandez; Murray, John C S; Kitong, Jacqueline; Sobel, Howard L

    2018-03-31

    To determine whether intrapartum and newborn care practices improved in 11 large hospitals between 2008 and 2015. Secondary data analysis of observational assessments conducted in 11 hospitals in 2008 and 2015. Eleven large government hospitals from five regions in the Philippines. One hundred and seven randomly sampled postpartum mother-baby pairs in 2008 and 106 randomly sampled postpartum mothers prior to discharge from hospitals after delivery. A national initiative to improve quality of newborn care starting in 2009 through development of a standard package of intrapartum and newborn care services, practice-based training, formation of multidisciplinary hospital working groups, and regular assessments and meetings in hospitals to identify actions to improve practices, policies and environments. Quality improvement was supported by policy development, health financing packages, health facility standards, capacity building and health communication. Sixteen intrapartum and newborn care practices. Between 2008 and 2015, initiation of drying within 5 s of birth, delayed cord clamping, dry cord care, uninterrupted skin-to-skin contact, timing and duration of the initial breastfeed, and bathing deferred until 6 h after birth all vastly improved (P<0.001). The proportion of newborns receiving hygienic cord handling and the hepatitis B birth dose decreased by 11-12%. Except for reduced induction of labor, inappropriate maternal care practices persisted. Newborn care practices have vastly improved through an approach focused on improving hospital policies, environments and health worker practices. Maternal care practices remain outdated largely due to the ineffective didactic training approaches adopted for maternal care.

  4. Reliability Calculations

    DEFF Research Database (Denmark)

    Petersen, Kurt Erling

    1986-01-01

    Risk and reliability analysis is increasingly being used in evaluations of plant safety and plant reliability. The analysis can be performed either during the design process or during the operation time, with the purpose of improving the safety or the reliability. Due to plant complexity and safety and availability requirements, sophisticated tools, which are flexible and efficient, are needed. Such tools have been developed in the last 20 years and they have to be continuously refined to meet the growing requirements. Two different areas of application were analysed. In structural reliability, probabilistic approaches have been introduced in some cases for the calculation of the reliability of structures or components. A new computer program has been developed based upon numerical integration in several variables. In systems reliability, Monte Carlo simulation programs are used especially in analysis of very
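
    A minimal example of the simulation approach mentioned above, under assumed distributions: crude Monte Carlo estimation of a structural failure probability P(g < 0) for a resistance-minus-load limit state. The distributions and sample size are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative limit state g = R - S with assumed distributions:
# resistance R ~ lognormal, load S ~ normal (not taken from any real structure).
n = 1_000_000
R = rng.lognormal(mean=np.log(300.0), sigma=0.10, size=n)   # kN
S = rng.normal(loc=180.0, scale=40.0, size=n)               # kN

failures = np.count_nonzero(R - S < 0.0)
pf = failures / n
std_err = np.sqrt(pf * (1.0 - pf) / n)                      # binomial standard error
print(f"estimated failure probability: {pf:.2e} +/- {std_err:.1e}")
```

    For very small failure probabilities, numerical integration or variance-reduction techniques such as importance sampling are usually preferred over crude sampling.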

  5. Approach to Improve Speed of Sound Calculation within PC-SAFT Framework

    DEFF Research Database (Denmark)

    Liang, Xiaodong; Maribo-Mogensen, Bjørn; Thomsen, Kaj

    2012-01-01

    An extensive comparison of SRK, CPA and PC-SAFT for speed of sound in normal alkanes has been performed. The results reveal that PC-SAFT captures the curvature of speed of sound better than cubic EoS but the accuracy is not satisfactory. Two approaches have been proposed to improve PC-SAFT's accuracy for speed of sound: (i) putting speed of sound data into parameter estimation; (ii) putting speed of sound data into both universal constants regression and parameter estimation. The results have shown that the second approach can significantly improve the speed of sound (3.2%) prediction while keeping acceptable accuracy for the primary properties, i.e. vapor pressure (2.1%) and liquid density (1.5%). The two approaches have also been applied to methanol, and both give very good results.
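
    The reason speed of sound is such a demanding test of an equation of state is that it probes second derivatives of the Helmholtz energy. Independently of the particular EoS, the thermodynamic route is

```latex
% Thermodynamic route to the speed of sound from an equation of state:
% rho is the mass density; any EoS enters only through (dP/drho)_T, C_p and C_v,
% i.e. through second derivatives of the Helmholtz energy.
w = \sqrt{\left( \frac{\partial P}{\partial \rho} \right)_{\!S}}
  = \sqrt{\frac{C_p}{C_v} \left( \frac{\partial P}{\partial \rho} \right)_{\!T}}
```

    so the two modifications described above amount to forcing these derivative properties, and not only vapor pressures and liquid densities, to be reproduced during parameter estimation (and, in the second approach, during regression of the universal constants).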

  6. A bio-economic model to improve profitability in a large national beef cattle population

    Directory of Open Access Journals (Sweden)

    Javier López-Paredes

    2017-12-01

    A bio-economic model was developed for estimating economic values for use in improving profitability in a large national beef cattle population from birth to slaughter. Results were divided into fattening costs, production costs and income. Economic values were derived for 17 traits for two regions, mature weight (-0.43 € and -0.38 €/+1 kg of live weight), age at first calving (-0.13 € and -0.11 €/+1d), calving interval (-1.06 € and -1.02 €/+1d), age at last calving (0.03 € and 0.03 €/+1d), mortality 0-48 h (-5.86 € and -5.63 €/1% calves per cow and year), pre-weaning mortality (-5.96 € and -5.73 €/+1% calves per cow and year), fattening mortality (-8.23 € and -7.88 €/+1% calves per cow and year), adult mortality (-8.92 € and -7.34 €/+1% adult cows per cow and year), pre-weaning average daily gain (2.56 € and 2.84 €/+10g/d), fattening young animals average daily gain (2.65 € and 3.00 €/+10g/d), culled cow in fattening average daily gain (0.25 € and 0.16 €/+10g/d), culled cow dressing carcass percentage (3.09 € and 2.42 €/+1%), culled cow price (4.59 € and 3.59 €/+0.06 €/kg), carcass conformation score (16.39 € and 15.3 €/+1 SEUROP class), dressing carcass rate of calf (18.22 € and 18.23 €/+1%), carcass growth (9.00 € and 10.09 €/+10g of carcass weight/d) and age at slaughter (0.27 € and 0.44 €/+1d). Two sample herds were used to show the economic impact of calving interval and age at first calving shortening in the profit per slaughtered young animal, which was 178 € and 111 € for Herds A and B, respectively. The economic values of functional traits were reduced and production traits were enhanced when fertility traits were improved. The model could be applied in a Spanish national program.
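
    Once economic values of this kind are available, the aggregate economic response to a set of trait changes is their weighted sum, which is how profit comparisons between breeding scenarios are typically built. The sketch below shows that arithmetic for a small subset of traits; the economic values are quoted from the abstract (first region), but the assumed trait changes are purely illustrative.

```python
# Profit response = sum over traits of (economic value x change in trait).
# Economic values below are quoted from the abstract (first region); the assumed
# trait changes are purely illustrative placeholders.
ECONOMIC_VALUES_EUR = {
    "calving_interval_d": -1.06,        # per +1 day
    "age_at_first_calving_d": -0.13,    # per +1 day
    "carcass_growth_10g_d": 9.00,       # per +10 g carcass weight/day
    "fattening_mortality_pct": -8.23,   # per +1 % calves per cow and year
}

assumed_changes = {
    "calving_interval_d": -20.0,        # calving interval 20 days shorter
    "age_at_first_calving_d": -30.0,    # first calving 30 days earlier
    "carcass_growth_10g_d": 2.0,        # +20 g/day carcass growth
    "fattening_mortality_pct": -0.5,    # 0.5 percentage points fewer losses
}

profit_response = sum(ECONOMIC_VALUES_EUR[t] * assumed_changes[t]
                      for t in assumed_changes)
print(f"illustrative aggregate profit response: {profit_response:+.2f} EUR")
```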

  7. A bio-economic model to improve profitability in a large national beef cattle population

    Energy Technology Data Exchange (ETDEWEB)

    López-Paredes, J.; Jiménez-Montero, J.A.; Pérez-Cabal, M.A.; González-Recio, O.; Alenda, R.

    2017-07-01

    A bio-economic model was developed for estimating economic values for use in improving profitability in a large national beef cattle population from birth to slaughter. Results were divided into fattening costs, production costs and income. Economic values were derived for 17 traits for two regions, mature weight (-0.43 € and -0.38 €/+1 kg of live weight), age at first calving (-0.13 € and -0.11 €/+1d), calving interval (-1.06 € and -1.02 €/+1d), age at last calving (0.03 € and 0.03 €/+1d), mortality 0-48 h (-5.86 € and -5.63 €/1% calves per cow and year), pre-weaning mortality (-5.96 € and -5.73 €/+1% calves per cow and year), fattening mortality (-8.23 € and -7.88 €/+1% calves per cow and year), adult mortality (-8.92 € and -7.34 €/+1% adult cows per cow and year), pre-weaning average daily gain (2.56 € and 2.84 €/+10g/d), fattening young animals average daily gain (2.65 € and 3.00 €/+10g/d), culled cow in fattening average daily gain (0.25 € and 0.16 €/+10g/d), culled cow dressing carcass percentage (3.09 € and 2.42 €/+1%), culled cow price (4.59 € and 3.59 €/+0.06 €/kg), carcass conformation score (16.39 € and 15.3 €/+1 SEUROP class), dressing carcass rate of calf (18.22 € and 18.23 €/+1%), carcass growth (9.00 € and 10.09 €/+10g of carcass weight/d) and age at slaughter (0.27 € and 0.44 €/+1d). Two sample herds were used to show the economic impact of calving interval and age at first calving shortening in the profit per slaughtered young animal, which was 178 € and 111 € for Herds A and B, respectively. The economic values of functional traits were reduced and production traits were enhanced when fertility traits were improved. The model could be applied in a Spanish national program.

  8. A bio-economic model to improve profitability in a large national beef cattle population

    International Nuclear Information System (INIS)

    López-Paredes, J.; Jiménez-Montero, J.A.; Pérez-Cabal, M.A.; González-Recio, O.; Alenda, R.

    2017-01-01

    A bio-economic model was developed for estimating economic values for use in improving profitability in a large national beef cattle population from birth to slaughter. Results were divided into fattening costs, production costs and income. Economic values were derived for 17 traits for two regions, mature weight (-0.43 € and -0.38 €/+1 kg of live weight), age at first calving (-0.13 € and -0.11 €/+1d), calving interval (-1.06 € and -1.02 €/+1d), age at last calving (0.03 € and 0.03 €/+1d), mortality 0-48 h (-5.86 € and -5.63 €/1% calves per cow and year), pre-weaning mortality (-5.96 € and -5.73 €/+1% calves per cow and year), fattening mortality (-8.23 € and -7.88 €/+1% calves per cow and year), adult mortality (-8.92 € and -7.34 €/+1% adult cows per cow and year), pre-weaning average daily gain (2.56 € and 2.84 €/+10g/d), fattening young animals average daily gain (2.65 € and 3.00 €/+10g/d), culled cow in fattening average daily gain (0.25 € and 0.16 €/+10g/d), culled cow dressing carcass percentage (3.09 € and 2.42 €/+1%), culled cow price (4.59 € and 3.59 €/+0.06 €/kg), carcass conformation score (16.39 € and 15.3 €/+1 SEUROP class), dressing carcass rate of calf (18.22 € and 18.23 €/+1%), carcass growth (9.00 € and 10.09 €/+10g of carcass weight/d) and age at slaughter (0.27 € and 0.44 €/+1d). Two sample herds were used to show the economic impact of calving interval and age at first calving shortening in the profit per slaughtered young animal, which was 178 € and 111 € for Herds A and B, respectively. The economic values of functional traits were reduced and production traits were enhanced when fertility traits were improved. The model could be applied in a Spanish national program.

  9. Improved Patient Size Estimates for Accurate Dose Calculations in Abdomen Computed Tomography

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Chang-Lae [Yonsei University, Wonju (Korea, Republic of)

    2017-07-15

    The radiation dose of CT (computed tomography) is generally represented by the CTDI (CT dose index). CTDI, however, does not accurately predict the actual patient dose for different human body sizes because it relies on cylinder-shaped head (diameter: 16 cm) and body (diameter: 32 cm) phantoms. The purpose of this study was to eliminate the drawbacks of the conventional CTDI and to provide more accurate radiation dose information. Projection radiographs were obtained from water cylinder phantoms of various sizes, and the sizes of the water cylinder phantoms were calculated and verified using attenuation profiles. The effective diameter was also calculated using the attenuation of the abdominal projection radiographs of 10 patients. When the results of the attenuation-based method and the geometry-based method were compared with the results of the reconstructed-axial-CT-image-based method, the effective diameter from the attenuation-based method was found to be similar to that from the reconstructed-axial-CT-image-based method, with a difference of less than 3.8%, whereas the geometry-based method showed a difference of less than 11.4%. This paper proposes a new method of accurately computing the radiation dose of CT based on patient size. This method computes and provides the exact patient dose before the CT scan, and can therefore be effectively used for imaging and dose control.
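
    Although the attenuation-based estimator is the paper's own contribution, the widely used water-equivalent-diameter idea gives a feel for the calculation: the mean CT number inside the patient contour converts the contour area into an equivalent area of water, whose diameter then drives the size-specific dose estimate. The sketch below implements that generic axial-image version on a synthetic elliptical phantom, not the projection-radiograph method proposed in the paper.

```python
import numpy as np

def water_equivalent_diameter(ct_image_hu: np.ndarray, pixel_mm: float,
                              mask: np.ndarray) -> float:
    """Water-equivalent diameter (cm) from an axial CT image.

    D_w = 2 * sqrt(A_w / pi), with A_w = (mean_HU / 1000 + 1) * patient area.
    """
    mean_hu = ct_image_hu[mask].mean()
    area_mm2 = mask.sum() * pixel_mm**2
    a_w_mm2 = (mean_hu / 1000.0 + 1.0) * area_mm2
    return 2.0 * np.sqrt(a_w_mm2 / np.pi) / 10.0    # mm -> cm

# Synthetic axial slice: a soft-tissue ellipse (about 40 HU) surrounded by air.
ny, nx, pixel_mm = 512, 512, 0.8
y, x = np.mgrid[:ny, :nx]
ellipse = ((x - nx / 2) / 180.0) ** 2 + ((y - ny / 2) / 120.0) ** 2 <= 1.0
image = np.full((ny, nx), -1000.0)
image[ellipse] = 40.0

print(f"water-equivalent diameter: "
      f"{water_equivalent_diameter(image, pixel_mm, ellipse):.1f} cm")
```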

  10. Risk assessment calculations using MEPAS, an accepted screening methodology, and an uncertainty analysis for the reranking of Waste Area Groupings at Oak Ridge National Laboratory, Oak Ridge, Tennessee

    International Nuclear Information System (INIS)

    Shevenell, L.; Hoffman, F.O.; MacIntosh, D.

    1992-03-01

    The Waste Area Groupings (WAGs) at the Oak Ridge National Laboratory (ORNL) were reranked with respect to on-site and off-site human health risks using two different methods. Risks associated with selected contaminants from each WAG for occupants of WAG 2 or an off-site area were calculated using a modified formulation of the Multimedia Environmental Pollutant Assessment System (MEPAS) and a method suitable for screening, referred to in this report as the ORNL/ESD method (the method developed by the Environmental Sciences Division at ORNL). Each method resulted in a different ranking of the WAGs. The rankings from the two methods are compared in this report. All risk assessment calculations, except the original MEPAS calculations, indicated that WAGs 1; 2, 6, and 7 (WAGs 2, 6 and 7 treated as one combined WAG); and 4 pose the greatest potential threat to human health. However, the overall rankings of the WAGs using constant parameter values in the different methods were inconclusive, because uncertainty in parameter values can change the calculated risk associated with particular pathways and, hence, the final rankings. Uncertainty analysis, using uncertainties on all model parameters, was used to reduce biases associated with parameter selection and to more reliably rank waste sites according to the potential risks associated with site contaminants. The uncertainty analysis indicates that the WAGs should be considered for further investigation, or remediation, in the following order: (1) WAG 1; (2) WAGs 2, 6, and 7 (combined), and WAG 4; (3) WAGs 3, 5, and 9; and (4) WAG 8

  11. Improved algorithms for the calculation of resolved resonance cross sections with applications to the structural Doppler effect in fast reactors

    International Nuclear Information System (INIS)

    Hwang, R.N.; Toppel, B.J.; Henryson, H. II.

    1980-10-01

    Motivated by the need for an economical yet rigorous tool to address the computation of the structural-material Doppler effect, an extremely efficient, improved RABANL capability has been developed. It utilizes the fact that the Doppler-broadened line shape functions become essentially identical to the natural line shape functions, or Lorentzian limits, beyond about 100 Doppler widths from the resonance energy, or when the natural width exceeds about 200 Doppler widths. The computational efficiency has been further enhanced by preprocessing, or screening, a significant number of selected resonances during library preparation into composition- and temperature-independent smooth background cross sections. The resonances which are suitable for such preprocessing are those which are either very broad or very weak. The former contribute very little to the Doppler effect and their self-shielding effect can readily be averaged into slowly varying background cross-section data, while the latter contribute very little to either the Doppler or the self-shielding effects. To illustrate the accuracy and efficiency of the improved RABANL algorithms and resonance screening techniques, calculations have been performed for two systems, the first with a composition typical of the STF converter region and the second typical of an LMFBR core composition. Excellent agreement has been found for RABANL compared to the reference Monte Carlo solution obtained using the code VIM, and improved results have also been obtained for the narrow resonance approximation in the ultra-fine-group option of MC2-2
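
    The limits exploited above are easy to see from the symmetric Doppler-broadened line-shape function psi, which in one common convention is a scaled real part of the Faddeeva function; as the ratio xi of natural width to Doppler width grows, psi collapses onto the natural Lorentzian 1/(1+x^2). The short check below uses SciPy's Faddeeva routine; the convention (factors of 2, definitions of xi and x) may differ from the one used in RABANL.

```python
import numpy as np
from scipy.special import wofz

def psi(x, xi):
    """Symmetric Doppler-broadened line shape, one common convention.

    x = 2(E - E0)/Gamma, xi = Gamma / Doppler width.
    psi -> 1/(1 + x^2) (natural Lorentzian) as xi -> infinity.
    """
    z = 0.5 * xi * (x + 1j)
    return 0.5 * np.sqrt(np.pi) * xi * np.real(wofz(z))

x = np.array([0.0, 1.0, 3.0])
lorentzian = 1.0 / (1.0 + x**2)
for xi in (0.5, 5.0, 200.0):
    print(f"xi={xi:6.1f}  psi={np.round(psi(x, xi), 4)}  "
          f"lorentzian={np.round(lorentzian, 4)}")
```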

  12. Matching fully differential NNLO calculations and parton showers

    Energy Technology Data Exchange (ETDEWEB)

    Alioli, Simone; Bauer, Christian W.; Berggren, Calvin; Walsh, Jonathan R.; Zuberi, Saba [California Univ., Berkeley, CA (United States). Ernest Orlando Lawrence Berkeley National Laboratory; Tackmann, Frank J. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany)

    2013-11-15

    We present a general method to match fully differential next-to-next-to-leading order (NNLO) calculations to parton shower programs. We discuss in detail the perturbative accuracy criteria a complete NNLO+PS matching has to satisfy. Our method is based on consistently improving a given NNLO calculation with the leading-logarithmic (LL) resummation in a chosen jet resolution variable. The resulting NNLO+LL calculation is cast in the form of an event generator for physical events that can be directly interfaced with a parton shower routine, and we give an explicit construction of the input "Monte Carlo cross sections" satisfying all required criteria. We also show how other proposed approaches naturally arise as special cases in our method.

  13. An Accurate Technique for Calculation of Radiation From Printed Reflectarrays

    DEFF Research Database (Denmark)

    Zhou, Min; Sorensen, Stig B.; Jorgensen, Erik

    2011-01-01

    The accuracy of various techniques for calculating the radiation from printed reflectarrays is examined, and an improved technique based on the equivalent currents approach is proposed. The equivalent currents are found from a continuous plane wave spectrum calculated by use of the spectral dyadic...... Green's function. This ensures a correct relation between the equivalent electric and magnetic currents and thus allows an accurate calculation of the radiation over the entire far-field sphere. A comparison to DTU-ESA Facility measurements of a reference offset reflectarray designed and manufactured...

  14. A national study of nurse leadership and supports for quality improvement in rural hospitals.

    Science.gov (United States)

    Paez, Kathryn; Schur, Claudia; Zhao, Lan; Lucado, Jennifer

    2013-01-01

    This study assessed the perceptions and actions of rural hospital nurse executives with regard to patient safety and quality improvement (QI). A national sample of rural hospital nurse executives (n = 300) completed a survey measuring 4 domains related to patient safety and QI: (a) patient "Safety Culture," (b) adequacy of QI "Resources," (c) "Barriers" related to QI, and (d) "Nurse Leader Engagement" in activities supporting QI. Perceptions of Safety Culture were strong but 47% of the Resources needed to carry out QI were inadequate, 29% of Barriers were moderate to major, and 25% of Nurse Leader Engagement activities were performed infrequently. Nurse Leader Engagement in quality-related activities was less frequent among nurses in isolated and small rural town hospitals compared with large rural city hospitals. To further QI, rural nurse executives may need to use their communications and actions to raise the visibility of QI.

  15. Monitoring of Greenhouse Gases in the Netherlands: uncertainty and priorities for improvement ; Proceedings of a national workshop held in Bilthoven, 1 September 1999

    NARCIS (Netherlands)

    Amstel AR van; Olivier JGJ; Ruyssenaars PG; WIMEK; LAE

    2004-01-01

    A workshop was organised in the Netherlands on 1 September 1999 to improve the National System for Monitoring Greenhouse Gas Emissions. These are the proceedings, including discussion papers, presentations of speakers, reports of discussions and conclusions. It was the task of this workshop to

  16. Neutron Thermal Cross Sections, Westcott Factors, Resonance Integrals, Maxwellian Averaged Cross Sections and Astrophysical Reaction Rates Calculated from the ENDF/B-VII.1, JEFF-3.1.2, JENDL-4.0, ROSFOND-2010, CENDL-3.1 and EAF-2010 Evaluated Data Libraries

    Science.gov (United States)

    Pritychenko, B.; Mughabghab, S. F.

    2012-12-01

    We present calculations of neutron thermal cross sections, Westcott factors, resonance integrals, Maxwellian-averaged cross sections and astrophysical reaction rates for 843 ENDF materials using data from the major evaluated nuclear libraries and European activation file. Extensive analysis of newly-evaluated neutron reaction cross sections, neutron covariances, and improvements in data processing techniques motivated us to calculate nuclear industry and neutron physics quantities, produce s-process Maxwellian-averaged cross sections and astrophysical reaction rates, systematically calculate uncertainties, and provide additional insights on currently available neutron-induced reaction data. Nuclear reaction calculations are discussed and new results are presented. Due to space limitations, the present paper contains only calculated Maxwellian-averaged cross sections and their uncertainties. The complete data sets for all results are published in the Brookhaven National Laboratory report.
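
    For orientation, the quantity tabulated in the report is the standard Maxwellian-averaged cross section, MACS(kT) = (2/sqrt(pi)) (kT)^-2 Int sigma(E) E exp(-E/kT) dE. The sketch below evaluates it numerically for a toy 1/v cross section (whose MACS equals sigma(kT)), purely as a sanity check; it does not use data from the evaluated libraries discussed in this record.

```python
# Hedged numerical sketch of a Maxwellian-averaged cross section (MACS).
# The 1/v toy shape is only a consistency check: for a 1/v cross section the MACS
# at temperature kT equals sigma evaluated at E = kT.
import numpy as np
from scipy.integrate import quad

def macs(sigma, kT):
    """Maxwellian-averaged cross section of sigma(E); E and kT in the same units."""
    integral, _ = quad(lambda E: sigma(E) * E * np.exp(-E / kT), 0.0, np.inf)
    return 2.0 / np.sqrt(np.pi) * integral / kT**2

kT = 0.030                                          # 30 keV in MeV, the usual s-process point
sigma_1_over_v = lambda E: 0.5 * np.sqrt(kT / E)    # toy 1/v cross section, 0.5 b at E = kT
print(f"MACS(30 keV) = {macs(sigma_1_over_v, kT):.4f} b  (expect 0.5 b for a 1/v shape)")
```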

  17. Extensions to the coupling coefficient calculations for muon telescopes

    International Nuclear Information System (INIS)

    Baker, C.P.; Humble, J.E.; Duldig, M.L.

    1989-01-01

    The calculation of coupling coefficients for muon telescopes has previously used interpolation from a limited set of asymptotic directions of arrival of primary particles. Furthermore, these calculations have not incorporated curvature of the atmosphere and thus diverge from the true response at zenith angles greater than about 75 degrees. The necessary extensions to calculate coupling coefficients at arbitrary zenith angles are given, including an improved method of incorporating the asymptotic directions of the primary particles. It is shown, using this method, that certain coupling coefficients are highly sensitive to small changes in asymptotic directions for some telescope configurations. 10 refs., 1 fig., 3 tabs

  18. Extensions to the coupling coefficient calculations for muon telescopes

    Energy Technology Data Exchange (ETDEWEB)

    Baker, C P; Humble, J E [Tasmania Univ., Sandy Bay (Australia). Dept. of Physics; Duldig, M L [Dept. of the Arts, Sport, the Environment, Tourism and Territories, Hobart (Australia). Antarctic Div.

    1989-01-01

    The calculation of coupling coefficients for muon telescopes has previously used interpolation from a limited set of asymptotic directions of arrival of primary particles. Furthermore, these calculations have not incorporated curvature of the atmosphere and thus diverge from the true response at zenith angles greater than about 75 degrees. The necessary extensions to calculate coupling coefficients at arbitrary zenith angles are given, including an improved method of incorporating the asymptotic directions of the primary particles. It is shown, using this method, that certain coupling coefficients are highly sensitive to small changes in asymptotic directions for some telescope configurations. 10 refs., 1 fig., 3 tabs.

  19. Short-term variations in core surface flow resolved from an improved method of calculating observatory monthly means

    Science.gov (United States)

    Olsen, Nils; Whaler, Kathryn A.; Finlay, Christopher C.

    2014-05-01

    Monthly means of the magnetic field measurements taken by ground observatories are a useful data source for studying temporal changes of the core magnetic field and the underlying core flow. However, the usual way of calculating monthly means as the arithmetic mean of all days (geomagnetic quiet as well as disturbed) and all local times (day and night) may result in contributions from external (magnetospheric and ionospheric) origin in the (ordinary, omm) monthly means. Such contamination makes monthly means less favourable for core studies. We calculated revised monthly means (rmm), and their uncertainties, from observatory hourly means using robust means and after removal of external field predictions, using an improved method for characterising the magnetospheric ring current. The utility of the new method for calculating observatory monthly means is demonstrated by inverting their first differences for core surface advective flows. The flow is assumed steady over three consecutive months to ensure uniqueness; the effects of more rapid changes should be attenuated by the weakly conducting mantle. Observatory data are inverted directly for a regularised core flow, rather than deriving it from a secular variation spherical harmonic model. The main field is specified by the CHAOS-4 model. Data from up to 128 observatories between 1997 and 2013 were used to calculate 185 flow models from the omm and rmm, for each possible set of three consecutive months. The full 3x3 (non-diagonal) data covariance matrix was used, and two-norm (least squares) minimisation performed. We are able to fit the data to the target (weighted) misfit of 1, for both omm and rmm inversions, provided we incorporate the full data covariance matrix, and produce consistent, plausible flows. Fits are better for rmm flows. The flows exhibit noticeable changes over timescales of a few months. However, they follow rapid excursions in the omm that we suspect result from external field contamination
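
    The revised monthly means rest on a robust averaging of hourly values after removal of external-field predictions. The sketch below shows one common robust estimator (an iteratively reweighted Huber mean) applied to synthetic hourly data; it is an assumption-laden stand-in, not the authors' exact weighting scheme or ring-current correction.

```python
# A minimal sketch of a robust (Huber-weighted) monthly mean of observatory hourly
# values, in the spirit of the "revised monthly means" described above.  hourly_values
# would be the hourly means of one field component after subtracting an external-field
# model prediction; the weighting constants are conventional choices, not the paper's.
import numpy as np

def huber_mean(values, c=1.5, n_iter=20):
    """Iteratively reweighted mean; residuals beyond c * (MAD-scaled sigma) are downweighted."""
    x = np.asarray(values, dtype=float)
    mu = np.median(x)
    for _ in range(n_iter):
        scale = max(1.4826 * np.median(np.abs(x - mu)), 1e-12)  # MAD -> sigma estimate
        r = (x - mu) / scale
        w = np.where(np.abs(r) <= c, 1.0, c / np.abs(r))        # Huber weights
        mu = np.sum(w * x) / np.sum(w)
    return mu

rng = np.random.default_rng(0)
hourly = 48000.0 + rng.normal(0.0, 3.0, 720)   # roughly one month of quiet hourly values (nT)
hourly[:30] += 80.0                            # a storm-time excursion contaminating the month
print("arithmetic mean:", hourly.mean(), "  robust mean:", huber_mean(hourly))
```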

  20. National Ignition Facility subsystem design requirements NIF site improvements SSDR 1.2.1

    International Nuclear Information System (INIS)

    Kempel, P.; Hands, J.

    1996-01-01

    This Subsystem Design Requirements (SSDR) document establishes the performance, design, and verification requirements associated with the NIF Project Site at Lawrence Livermore National Laboratory (LLNL) at Livermore, California. It identifies generic design conditions for all NIF Project facilities, including siting requirements associated with natural phenomena, and contains specific requirements for furnishing site-related infrastructure utilities and services to the NIF Project conventional facilities and experimental hardware systems. Three candidate sites were identified as potential locations for the NIF Project. However, LLNL has been identified by DOE as the preferred site because of closely related laser experimentation underway at LLNL, the ability to use existing interrelated infrastructure, and other reasons. Selection of a site other than LLNL will entail the acquisition of site improvements and infrastructure additional to those described in this document. This SSDR addresses only the improvements associated with the NIF Project site located at LLNL, including new work and relocation or demolition of existing facilities that interfere with the construction of new facilities. If the Record of Decision for the PEIS on Stockpile Stewardship and Management were to select another site, this SSDR would be revised to reflect the characteristics of the selected site. Other facilities and infrastructure needed to support operation of the NIF, such as those listed below, are existing and available at the LLNL site, and are not included in this SSDR. Office Building. Target Receiving and Inspection. General Assembly Building. Electro- Mechanical Shop. Warehousing and General Storage. Shipping and Receiving. General Stores. Medical Facilities. Cafeteria services. Service Station and Garage. Fire Station. Security and Badging Services

  1. Advances in neutronics calculation of fast neutron reactors - Demonstration on Super-Phenix reactor

    International Nuclear Information System (INIS)

    Czernecki, Sebastien

    1998-01-01

    The European fast reactor neutronics calculation system, ERANOS, has integrated recent improvements both in nuclear data, with the use of the adjusted nuclear library ERALIB 1 derived from the JEF2.2 library, and in calculation methods, with the use of the new European cell code ECCO and the deterministic code TGV/VARIANT. The latter performs full 3-D reactor calculations in transport theory with a variational method. The aim of this work is to create and validate a new calculational scheme for fast-spectrum systems offering a good compromise between accuracy and running time. The new scheme is based on these improvements plus a special procedure accounting for control rod heterogeneity, which uses a reactivity-equivalence homogenization. The new scheme has been validated by means of experiment/calculation comparisons, using the extensive start-up program measurements performed in the Super-Phenix reactor. The validation also uses recent measurements performed in the Phenix reactor. The results are very satisfactory and show a significant improvement for almost all core parameters, especially for the critical mass, control rod worth and radial subassembly power distribution. A detailed analysis of the discrepancies between the old scheme and the new one for the latter parameter makes it possible to understand the separate effects of methods and nuclear data on the radial power distribution shape. (author) [fr

  2. Establishment of calculated panel reactive antibody and its potential benefits in improving the kidney allocation strategy in Taiwan

    Directory of Open Access Journals (Sweden)

    Ssu-Wen Shen

    2017-12-01

    Background/Purpose: Renal transplant candidates who are highly sensitized to human leukocyte antigens (HLAs) tend to wait longer to find a matched donor and have poor outcomes. Most organ-sharing programs prioritize highly sensitized patients in the allocation scoring system. The HLA sensitization status is traditionally evaluated by the panel-reactive antibody (PRA) assay. However, this assay is method dependent and does not consider the ethnic differences in HLA frequencies. A calculated PRA (cPRA), based on a population's HLA frequency and patients' unacceptable antigens (UAs), correctly estimates the percentage of donors suitable for candidates. The Taiwan Organ Registry and Sharing Center does not prioritize sensitized patients. We propose that the incorporation of the cPRA and UAs into the renal allocation program will improve the local kidney allocation policy. Methods: We established a cPRA calculator using 6146 Taiwanese HLA-A, -B, -C, -DR, and -DQ phenotypes. We performed simulated allocation based on the concept of acceptable mismatch for 76 candidates with cPRA values exceeding 80%. Results: We analyzed 138 waitlisted renal transplant candidates at our hospital, and we determined that the concordance rate of the cPRA and PRA for highly sensitized (%PRA > 80%) candidates was 92.5%, which decreased to 20% for those with %PRA < 80%. We matched 76 highly sensitized patients based on acceptable mismatch with the HLA phenotypes of 93 cadaver donors. Forty-six patients (61%) found at least one suitable donor. Conclusion: The application of the cPRA and acceptable mismatch can benefit highly sensitized patients and reduce positive lymphocyte cytotoxicity crossmatch. Keywords: Kidney transplantation, Human leukocyte antigen, CPRA
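
    The cPRA concept itself reduces to a simple frequency calculation: the percentage of a reference donor panel carrying at least one of the candidate's unacceptable antigens. The sketch below uses a tiny hypothetical panel and hypothetical UAs, not the 6146 Taiwanese phenotypes behind the calculator described here.

```python
# Minimal sketch of a calculated PRA (cPRA): the fraction of a reference donor
# phenotype panel that carries at least one of a candidate's unacceptable antigens
# (UAs).  The phenotypes and UAs below are hypothetical placeholders.
def cpra(unacceptable_antigens, donor_phenotypes):
    """Percentage of donors expected to be incompatible with the candidate."""
    ua = set(unacceptable_antigens)
    incompatible = sum(1 for phenotype in donor_phenotypes if ua & set(phenotype))
    return 100.0 * incompatible / len(donor_phenotypes)

donors = [                                    # hypothetical reference panel
    {"A2", "A11", "B46", "B60", "DR9"},
    {"A24", "A33", "B58", "B61", "DR12"},
    {"A2", "A24", "B13", "B46", "DR15"},
    {"A11", "A26", "B51", "B62", "DR4"},
]
print("cPRA =", cpra({"A2", "B58"}, donors), "%")   # donors 1, 2 and 3 carry a UA -> 75%
```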

  3. Procedures for Calculating Residential Dehumidification Loads

    Energy Technology Data Exchange (ETDEWEB)

    Winkler, Jon [National Renewable Energy Lab. (NREL), Golden, CO (United States); Booten, Chuck [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2016-06-01

    Residential building codes and voluntary labeling programs are continually increasing the energy efficiency requirements of residential buildings. Improving a building's thermal enclosure and installing energy-efficient appliances and lighting can result in significant reductions in sensible cooling loads, leading to smaller air conditioners and shorter cooling seasons. However, due to fresh-air ventilation requirements and internal gains, latent cooling loads are not reduced by the same proportion. Thus, it is becoming more challenging for conventional cooling equipment to control indoor humidity at part-load cooling conditions, and using conventional cooling equipment in a non-conventional building poses a potential risk of high indoor humidity. The objective of this project was to investigate the impact the chosen design condition has on the calculated part-load cooling moisture load, and to compare calculated moisture loads and the required dehumidification capacity to whole-building simulations. Procedures for sizing whole-house supplemental dehumidification equipment have yet to be formalized; however, minor modifications to current Air Conditioning Contractors of America (ACCA) Manual J load calculation procedures are appropriate for calculating residential part-load cooling moisture loads. Though ASHRAE 1% dew-point (DP) design conditions are commonly used to determine the dehumidification requirements for commercial buildings, an appropriate DP design condition for residential buildings has not been investigated. Two methods for sizing supplemental dehumidification equipment were developed and tested. The first method closely followed Manual J cooling load calculations, whereas the second method made more conservative assumptions impacting both sensible and latent loads.
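
    The part-load moisture load at the heart of such a procedure can be sketched with the common HVAC latent-load relation Q_latent ~ 0.68 * CFM * dW (grains/lb), converted to a moisture-removal capacity. The numbers below are illustrative placeholders, not Manual J design values.

```python
# Hedged sketch of a residential latent (moisture) load and the corresponding
# dehumidification capacity, using the common HVAC relation
#   Q_latent [Btu/h] ~= 0.68 * CFM * dW [grains of moisture per lb of dry air].
# The ventilation rate, humidity-ratio difference and internal gain are placeholders.
LATENT_HEAT_BTU_PER_LB = 1060.0     # approximate heat of vaporization of water
LB_PER_US_PINT_WATER = 1.04

def latent_load_btu_h(ventilation_cfm, delta_w_grains, internal_latent_btu_h):
    return 0.68 * ventilation_cfm * delta_w_grains + internal_latent_btu_h

def required_capacity_pints_per_day(latent_btu_h):
    lb_per_hour = latent_btu_h / LATENT_HEAT_BTU_PER_LB
    return lb_per_hour * 24.0 / LB_PER_US_PINT_WATER

q_lat = latent_load_btu_h(ventilation_cfm=75.0,          # mechanical ventilation airflow
                          delta_w_grains=30.0,           # outdoor-indoor humidity ratio difference
                          internal_latent_btu_h=1200.0)  # occupants, cooking, etc.
print(f"latent load = {q_lat:.0f} Btu/h")
print(f"capacity    = {required_capacity_pints_per_day(q_lat):.1f} pints/day")
```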

  4. Calculation of induced activity in the V-230 reactor

    International Nuclear Information System (INIS)

    Bouhahhane, A.; Farkas, G.

    2013-01-01

    In this paper, we focused on the calculation of the neutron-induced activity of nuclear reactor components for decommissioning purposes. The results confirm that the most important radionuclides in the dismantling of reactor components are 55Fe (1st decade), 60Co (10-50 y) and 63Ni (during the whole process). Another aim of this paper was to point out the possibility of improving the accuracy of the calculations by using continuous-energy Monte Carlo methods. (authors)
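
    For orientation, a single-group activation estimate of the kind such calculations refine is sketched below for 59Co(n,g)60Co. The cross section, flux and time intervals are illustrative, and the record's own results rely on continuous-energy Monte Carlo rather than this hand formula.

```python
# A single-group, hedged activation estimate for 59Co(n,g)60Co,
#   A = N * sigma * phi * (1 - exp(-lambda*t_irr)) * exp(-lambda*t_cool).
# Nuclear data and irradiation conditions are approximate, illustrative values only,
# and target burnup is neglected.
import numpy as np

N_A = 6.022e23
SIGMA_CO59 = 37.0e-24           # thermal (n,gamma) cross section, cm^2 (approximate)
HALF_LIFE_S = 5.27 * 3.156e7    # 60Co half-life of about 5.27 y, in seconds
DECAY_CONST = np.log(2.0) / HALF_LIFE_S

def co60_activity_bq(mass_g, flux, t_irr_s, t_cool_s):
    n_atoms = mass_g * N_A / 58.93                    # 59Co atoms in the sample
    buildup = 1.0 - np.exp(-DECAY_CONST * t_irr_s)    # approach to saturation
    decay = np.exp(-DECAY_CONST * t_cool_s)           # decay after shutdown
    return n_atoms * SIGMA_CO59 * flux * buildup * decay

year = 3.156e7
print("A(60Co) =", co60_activity_bq(mass_g=1.0, flux=1.0e13,
                                    t_irr_s=30 * year, t_cool_s=10 * year), "Bq")
```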

  5. Hybrid reduced order modeling for assembly calculations

    Energy Technology Data Exchange (ETDEWEB)

    Bang, Youngsuk, E-mail: ysbang00@fnctech.com [FNC Technology, Co. Ltd., Yongin-si (Korea, Republic of); Abdel-Khalik, Hany S., E-mail: abdelkhalik@purdue.edu [Purdue University, West Lafayette, IN (United States); Jessee, Matthew A., E-mail: jesseema@ornl.gov [Oak Ridge National Laboratory, Oak Ridge, TN (United States); Mertyurek, Ugur, E-mail: mertyurek@ornl.gov [Oak Ridge National Laboratory, Oak Ridge, TN (United States)

    2015-12-15

    Highlights: • Reducing computational cost in engineering calculations. • Reduced order modeling algorithm for multi-physics problems such as assembly calculations. • Non-intrusive algorithm with random sampling. • Pattern recognition in the components with high sensitivity and large variation. - Abstract: While the accuracy of assembly calculations has considerably improved due to the increase in computer power, enabling a more refined description of the phase space and the use of more sophisticated numerical algorithms, the computational cost continues to increase, which limits the full utilization of their effectiveness for routine engineering analysis. Reduced order modeling is a mathematical vehicle that scales down the dimensionality of large-scale numerical problems to enable their repeated execution on the small computing environments often available to end users. This is done by capturing the most dominant underlying relationships between the model's inputs and outputs. Previous works demonstrated the use of reduced order modeling for a single-physics code, such as a radiation transport calculation. This manuscript extends those works to coupled code systems as currently employed in assembly calculations. Numerical tests are conducted using realistic SCALE assembly models with resonance self-shielding, neutron transport, and nuclide transmutation/depletion models representing the components of the coupled code system.
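
    The random-sampling, non-intrusive idea highlighted above can be illustrated with a toy black-box model: a handful of randomly perturbed runs reveals the low-dimensional subspace in which the responses live, after which fresh runs are captured by a few coefficients. The sketch below uses an arbitrary low-rank linear map, not the SCALE assembly sequence used in the paper.

```python
# A hedged, toy illustration of the random-sampling ("range finding") idea behind
# non-intrusive reduced order modeling: run a black-box model at a few random input
# perturbations, take an SVD of the collected responses, and verify that fresh model
# runs are captured by the low-dimensional response subspace.
import numpy as np

rng = np.random.default_rng(1)
n_in, n_out, true_rank = 400, 300, 5
A = rng.normal(size=(n_out, true_rank)) @ rng.normal(size=(true_rank, n_in))
black_box = lambda x: A @ x                     # we may only evaluate this, not inspect A

# 1) Sample the model at random input perturbations (rank + oversampling runs).
n_samples = true_rank + 10
Y = np.column_stack([black_box(rng.normal(size=n_in)) for _ in range(n_samples)])

# 2) Dominant response subspace from an SVD of the snapshots.
U, s, _ = np.linalg.svd(Y, full_matrices=False)
r = int(np.sum(s > 1e-10 * s[0]))               # effective rank discovered from the data
Ur = U[:, :r]

# 3) Check: a fresh, unseen model run is reproduced by its r-dimensional projection.
y_new = black_box(rng.normal(size=n_in))
err = np.linalg.norm(y_new - Ur @ (Ur.T @ y_new)) / np.linalg.norm(y_new)
print(f"discovered rank = {r}, projection error of a new run = {err:.1e}")
```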

  6. Hybrid reduced order modeling for assembly calculations

    International Nuclear Information System (INIS)

    Bang, Youngsuk; Abdel-Khalik, Hany S.; Jessee, Matthew A.; Mertyurek, Ugur

    2015-01-01

    Highlights: • Reducing computational cost in engineering calculations. • Reduced order modeling algorithm for multi-physics problems such as assembly calculations. • Non-intrusive algorithm with random sampling. • Pattern recognition in the components with high sensitivity and large variation. - Abstract: While the accuracy of assembly calculations has considerably improved due to the increase in computer power, enabling a more refined description of the phase space and the use of more sophisticated numerical algorithms, the computational cost continues to increase, which limits the full utilization of their effectiveness for routine engineering analysis. Reduced order modeling is a mathematical vehicle that scales down the dimensionality of large-scale numerical problems to enable their repeated execution on the small computing environments often available to end users. This is done by capturing the most dominant underlying relationships between the model's inputs and outputs. Previous works demonstrated the use of reduced order modeling for a single-physics code, such as a radiation transport calculation. This manuscript extends those works to coupled code systems as currently employed in assembly calculations. Numerical tests are conducted using realistic SCALE assembly models with resonance self-shielding, neutron transport, and nuclide transmutation/depletion models representing the components of the coupled code system.

  7. Using Participatory Approach to Improve Availability of Spatial Data for Local Government

    Science.gov (United States)

    Kliment, T.; Cetl, V.; Tomič, H.; Lisiak, J.; Kliment, M.

    2016-09-01

    Nowadays, the availability of authoritative geospatial features for various data themes is growing at the global, regional and national levels. The reasons are the existence of legislative frameworks for public sector information and the related spatial data infrastructure implementations, as well as the emergence of initiatives such as open data and big data, which ensure that online geospatial information is made available to the digital single market, entrepreneurs and public bodies at both the national and local level. However, authoritative reference spatial data linking the geographic representation of properties to their owners are still missing in appropriate quantity and quality, even though such data represent a fundamental input for local governments regarding the register of buildings used for property tax calculations, identification of illegal buildings, etc. We propose a methodology to improve this situation by applying the principles of participatory GIS and VGI to collect observations, update authoritative datasets and verify newly developed datasets of building areas used to calculate the property tax rates issued to their owners. The case study was performed within the district of the City of Požega in eastern Croatia in the summer of 2015 and resulted in a total of 16072 updated and newly identified objects made available online for quality verification by citizens using open source geospatial technologies.

  8. Internal dose conversion factors for calculation of dose to the public

    International Nuclear Information System (INIS)

    1988-07-01

    This publication contains 50-year committed dose equivalent factors, in tabular form. The document is intended to be used as the primary reference by the US Department of Energy (DOE) and its contractors for calculating radiation dose equivalents for members of the public, resulting from ingestion or inhalation of radioactive materials. Its application is intended specifically for such materials released to the environment during routine DOE operations, except in those instances where compliance with 40 CFR 61 (National Emission Standards for Hazardous Air Pollutants) requires otherwise. However, the calculated values may be equally applicable to unusual releases or to occupational exposures. The use of these committed dose equivalent tables should ensure that doses to members of the public from internal exposures are calculated in a consistent manner at all DOE facilities
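
    Applying such tables is a simple fold of intakes with the tabulated factors: committed dose = sum of intake times dose conversion factor over all nuclides and pathways. The sketch below uses placeholder factors, not values from the DOE tables described here.

```python
# Minimal sketch of how tabulated internal dose conversion factors are applied:
# the 50-year committed dose is the activity taken in (by ingestion or inhalation)
# times the pathway- and nuclide-specific factor.  The factors below are hypothetical
# placeholders, not values from the tables in this publication.
def committed_dose_sv(intakes_bq, dcf_sv_per_bq):
    """Sum of intake (Bq) x dose conversion factor (Sv/Bq) over all nuclides/pathways."""
    return sum(intakes_bq[key] * dcf_sv_per_bq[key] for key in intakes_bq)

intakes = {("Cs-137", "ingestion"): 500.0,       # Bq taken in over the year (hypothetical)
           ("Sr-90", "inhalation"): 20.0}
dcf = {("Cs-137", "ingestion"): 1.3e-8,          # Sv/Bq, placeholder magnitudes only
       ("Sr-90", "inhalation"): 1.6e-7}
print(f"committed effective dose = {committed_dose_sv(intakes, dcf):.2e} Sv")
```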

  9. Coil protection calculator for TFTR

    International Nuclear Information System (INIS)

    Marsala, R.J.; Woolley, R.D.

    1987-01-01

    A new coil protection calculator (CPC) is presented in this paper. It is now being developed for TFTR's magnetic field coils and will replace the existing coil fault detector. The existing fault detector sacrifices TFTR operating capability for simplicity. The new CPC will permit operation up to the actual coil limits by accurately and continuously computing coil parameters in real time. The improvement will allow TFTR to operate with higher plasma currents and will permit the optimization of pulse repetition rates

  10. Calculation and interpretation of In-Situ measurements of initial radiations at Hiroshima and Nagasaki

    International Nuclear Information System (INIS)

    Loewe, W.E.

    1983-01-01

    Cobalt activation calculations will be reviewed, and similar comparisons of sulfur activation interior to electrical insulators on power transmission lines will be discussed. The relationship between neutron tissue kermas one to two kilometers from the hypocenter and the particular activations of cobalt and sulfur is reviewed. At present, measured and calculated quantities agree within the associated uncertainties, which are substantial. Additional work to shrink these uncertainties will be discussed. Particular cobalt activation topics will include: the sensitivity to thermal neutrons outside the pillar; calculated values using the actual Nagasaki concrete composition; and calculational advances to improve modelling of the actual configuration. Particular sulfur activation topics will include: absolute comparisons of measured and calculated ratios of dpm/g of 32P at all measured ranges, based on approximate experimental values for insulator attenuation and source radiations; the relationship between sulfur activation within a kilometer of the hypocenter and kermas at two kilometers; and calculational advances to improve modelling of the actual configuration

  11. Combining Basic Business Math and Electronic Calculators.

    Science.gov (United States)

    Merchant, Ronald

    As a means of alleviating math anxiety among business students and of improving their business machine skills, Spokane Falls Community College offers a course in which basic business math skills are mastered through the use of desk top calculators. The self-paced course, which accommodates varying student skill levels, requires students to: (1)…

  12. Radiation effect calculation means of the Crisis Technical Center of the Nuclear Safety and Protection Institut

    International Nuclear Information System (INIS)

    Crabol, B.; Manesse, D.; Robeau, D.

    1989-07-01

    The calculation tools available to the Crisis Technical Center (CTC) for the analysis and evaluation of the radiological consequences of a nuclear accident are presented. The CTC calculation unit relies on local resources, and on the national meteorological service, to collect the data needed to evaluate the atmospheric dispersion of releases. For the radiation dose calculations, plotters and software allowing the analysis of all release kinetics and all meteorological conditions are available. The work carried out by the CTC calculation unit makes the calculation tools easy to apply and the results easy to obtain. Images from databases are provided to complement the results obtained [fr

  13. Construction of voxel head phantom and application to BNCT dose calculation

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Choon Sik; Lee, Choon Ik; Lee, Jai Ki [Hanyang Univ., Seoul (Korea, Republic of)

    2001-06-15

    A voxel head phantom, which overcomes the limitations of mathematical phantoms in depicting anatomical details, was constructed and an example dose calculation for BNCT was performed. The repeated-structure algorithm of the general-purpose Monte Carlo code MCNP4B was applied for the voxel Monte Carlo calculation. A simple binary voxel phantom and a combinatorial geometry phantom composed of two materials were constructed to validate the voxel Monte Carlo calculation system. The tomographic images of the VHP man provided by the NLM (National Library of Medicine) were segmented and indexed to construct the voxel head phantom. Comparison of doses for broad parallel gamma and neutron beams in the AP and PA directions showed a decrease of brain dose due to the attenuation of neutrons in the eyeballs in the case of the voxel head phantom. A spherical tumor volume with a diameter of 5 cm was defined in the center of the brain for the BNCT dose calculation, for which an accurate 3-dimensional dose calculation is essential. As a result of the BNCT dose calculation for downward neutron beams of 10 keV and 40 keV, the tumor dose is about doubled when the boron concentration ratio between the tumor and the normal tissue is 30 µg/g to 3 µg/g. This study established the voxel Monte Carlo calculation system and suggested the feasibility of precise dose calculation in therapeutic radiology.

  14. Suicide rates in the national and expatriate population in Dubai, United Arab Emirates.

    Science.gov (United States)

    Dervic, Kanita; Amiri, Leena; Niederkrotenthaler, Thomas; Yousef, Said; Salem, Mohamed O; Voracek, Martin; Sonneck, Gernot

    2012-11-01

    Reports on suicide from the Gulf region are scarce. Dubai is a city with a large expatriate population. However, total and gender-specific suicide rates for the national and expatriate populations are not known. To investigate total and gender-specific suicide rates in the national and expatriate population in Dubai and to elicit socio-demographic characteristics of suicide victims. Registered suicides in Dubai from 2003 to 2009, and aggregated socio-demographic data of suicide victims were analysed. Suicide rates per 100,000 population were calculated. Suicide rate among expatriates (6.3/100,000) was seven times higher than the rate among the nationals (0.9/100,000). In both groups, male suicide rate was more than three times higher than the female rate. Approximately three out of four expatriate suicides were committed by Indians. The majority of suicide victims were male, older than 30 years, expatriate, single and employed, with an education of secondary school level and below. Further research on risk factors for and protective factors against suicide, particularly among the expatriate population, is needed. Epidemiological monitoring of suicide trends at the national level and improvement of UAE suicide statistics would provide useful information for developing suicide prevention strategies.

  15. Profiling Student Use of Calculators in the Learning of High School Mathematics

    Science.gov (United States)

    Crowe, Cheryll E.; Ma, Xin

    2010-01-01

    Using data from the 2005 National Assessment of Educational Progress, students' use of calculators in the learning of high school mathematics was profiled based on their family background, curriculum background, and advanced mathematics coursework. A statistical method new to educational research--classification and regression trees--was applied…

  16. Updated Collisional Ionization Equilibrium Calculated for Optically Thin Plasmas

    Science.gov (United States)

    Savin, Daniel Wolf; Bryans, P.; Badnell, N. R.; Gorczyca, T. W.; Laming, J. M.; Mitthumsiri, W.

    2010-03-01

    Reliably interpreting spectra from electron-ionized cosmic plasmas requires accurate ionization balance calculations for the plasma in question. However, much of the atomic data needed for these calculations have not been generated using modern theoretical methods and their reliability is often highly suspect. We have carried out state-of-the-art calculations of dielectronic recombination (DR) rate coefficients for the hydrogenic through Na-like ions of all elements from He to Zn as well as for Al-like to Ar-like ions of Fe. We have also carried out state-of-the-art radiative recombination (RR) rate coefficient calculations for the bare through Na-like ions of all elements from H to Zn. Using our data and the recommended electron impact ionization data of Dere (2007), we present improved collisional ionization equilibrium calculations (Bryans et al. 2006, 2009). We compare our calculated fractional ionic abundances using these data with those presented by Mazzotta et al. (1998) for all elements from H to Ni. This work is supported in part by the NASA APRA and SHP SR&T programs.
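
    The ionization balance underlying such calculations is, in its coronal (electron-ionized, optically thin) form, a set of balances between adjacent charge states, n_{i+1}/n_i = C_i/alpha_{i+1}. The sketch below solves it for toy rate coefficients; it does not use the DR/RR data computed in this work.

```python
# Hedged sketch of a collisional ionization equilibrium (coronal) balance: in steady
# state, ionization out of charge state i balances recombination out of state i+1,
#   n_{i+1} / n_i = C_i(T) / alpha_{i+1}(T),
# so the fractional abundances follow from these ratios alone.  The rate coefficients
# below are arbitrary toy numbers.
import numpy as np

def ion_fractions(ionization_rates, recombination_rates):
    """ionization_rates[i]: i -> i+1;  recombination_rates[i]: i+1 -> i  (cm^3/s)."""
    ratios = np.asarray(ionization_rates) / np.asarray(recombination_rates)
    n = np.concatenate(([1.0], np.cumprod(ratios)))   # abundances relative to the lowest state
    return n / n.sum()

C = [2.0e-9, 8.0e-10, 1.0e-11]       # toy ionization rate coefficients
alpha = [3.0e-11, 5.0e-11, 2.0e-10]  # toy recombination (RR + DR) rate coefficients
print("ionization fractions:", ion_fractions(C, alpha))
```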

  17. Cost calculation model concerning small-scale production of chips and split firewood

    International Nuclear Information System (INIS)

    Ryynaenen, S.; Naett, H.; Valkonen, J.

    1995-01-01

    The TTS-Institute's Forestry Department has developed a computer-based cost calculation model for the production of wood chips and split firewood. This development work was carried out in conjunction with the nation-wide BIOENERGY research programme. The calculation model eases and speeds up the calculation of unit costs and resource needs in harvesting systems for wood chips and split firewood. The model also enables the user to find out how changes in the productivity and cost bases of different harvesting chains influence the unit costs of the system as a whole. The undertaking was composed of the following parts: clarification and modification of productivity bases for application in the model as mathematical models, clarification of machine and device cost bases, design of the structure and functions of the calculation model, construction and testing of the model's 0-version, model calculations concerning typical chains, review of calculation bases, and charting of development needs focusing on the model. The calculation model was developed to serve research needs, but with further development it could be useful as a tool in forestry and agricultural extension work, in related schools and colleges, and in the hands of firewood producers. (author)

  18. The Beginning Lecture and the Improvement of “Experiments in Innovative Chemistry” as an Entry Subject at the Department of Biochemistry and Applied Chemistry in National College of Technology

    Science.gov (United States)

    Tsuda, Yusuke; Nakashima, Hiroyuki; Tsuji, Yutaka; Watanabe, Katsuhiro; Ooka, Hisako

    The beginning lecture and the improvement of “Experiments in Innovative Chemistry” as an entry subject in the Department of Biochemistry and Applied Chemistry at Kurume National College of Technology have been carried out over the last three years. Every experiment was selected to sustain the young students' interest. Questionnaires were administered after the first two years' programs were finished, and some of the projects were improved. This subject has a good reputation among students and teachers, and seems to be very effective for first-year students of the national college of technology.

  19. Seismic hazard studies for the high flux beam reactor at Brookhaven National Laboratory

    International Nuclear Information System (INIS)

    Costantino, C.J.; Heymsfield, E.; Park, Y.J.; Hofmayer, C.H.

    1991-01-01

    This paper presents the results of a calculation to determine the site specific seismic hazard appropriate for the deep soil site at Brookhaven National Laboratory (BNL) which is to be used in the risk assessment studies being conducted for the High Flux Beam Reactor (HFBR). The calculations use as input the seismic hazard defined for the bedrock outcrop by a study conducted at Lawrence Livermore National Laboratory (LLNL). Variability in site soil properties were included in the calculations to obtain the seismic hazard at the ground surface and compare these results with those using the generic amplification factors from the LLNL study

  20. Recent progress and developments in LWR-PV calculational methodology

    International Nuclear Information System (INIS)

    Maerker, R.E.; Broadhead, B.L.; Williams, M.L.

    1984-01-01

    New and improved techniques for calculating beltline surveillance activities and pressure vessel fluences with reduced uncertainties have recently been developed. These techniques involve the combining of monitored in-core power data with diffusion theory calculated pin-by-pin data to yield absolute source distributions in R-THETA and R-Z geometries suitable for discrete ordinate transport calculations. Effects of finite core height, whenever necessary, can be considered by the use of a three-dimensional fluence rate synthesis procedure. The effects of a time-dependent spatial source distribution may be readily evaluated by applying the concept of the adjoint function, and simplifying the procedure to such a degree that only one forward and one adjoint calculation are required to yield all the dosimeter activities for all beltline surveillance locations at once. The addition of several more adjoint calculations using various fluence rates as responses is all that is needed to determine all the pressure vessel group fluences for all beltline locations for an arbitrary source distribution
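
    The adjoint-function simplification mentioned above can be made concrete with a toy discretized problem: one adjoint solve per dosimeter response turns every new source distribution into a single inner product. The 5x5 operator below is an arbitrary stand-in, not a real transport matrix or the methodology of this record.

```python
# A hedged toy demonstration of the adjoint-function idea: for a discretized transport
# problem A phi = S, the dosimeter activity (detector response)
#   R = <sigma_d, phi>  equals  <phi_adj, S>  with  A^T phi_adj = sigma_d,
# so one adjoint solve per dosimeter covers any number of source distributions.
import numpy as np

rng = np.random.default_rng(3)
A = np.eye(5) * 5.0 + rng.normal(scale=0.3, size=(5, 5))   # toy, well-conditioned operator
sigma_d = rng.uniform(size=5)                              # dosimeter response function

phi_adj = np.linalg.solve(A.T, sigma_d)          # one adjoint solve, reused below

for cycle in range(3):                           # three different source distributions
    S = rng.uniform(size=5)
    forward = sigma_d @ np.linalg.solve(A, S)    # forward route: solve, then fold with sigma_d
    adjoint = phi_adj @ S                        # adjoint route: just an inner product
    print(f"cycle {cycle}:  forward = {forward:.6f}   adjoint = {adjoint:.6f}")
```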

  1. Three dimensions transport calculations for PWR core

    International Nuclear Information System (INIS)

    Richebois, E.

    2000-01-01

    The objective of this work is to define improved 3-D core calculation methods based on transport theory. These methods can be particularly useful and lead to more precise computations in areas of the core where anisotropy and steep flux gradients occur, especially near interfaces and boundaries and in regions of high heterogeneity (bundles with absorber rods). In order to apply transport theory, a new method for calculating reflector constants has been developed, since traditional methods were only suited to 2-group diffusion core calculations and could not be extrapolated to transport calculations. In this thesis work, the new method for obtaining reflector constants is derived regardless of the number of energy groups and of the operator used. The core calculation results using the reflector constants thus obtained have been validated on EDF's power reactor Saint-Laurent B1 with MOX loading. The advantages of a 3-D core transport calculation scheme over diffusion methods have been highlighted; there are a considerable number of significant effects and potential advantages to be gained, for instance in rod worth calculations. These preliminary results, obtained for one particular cycle, will have to be confirmed by more systematic analysis. Accidents such as MSLB (main steam line break) and LOCA (loss of coolant accident) should also be investigated; they constitute challenging situations where anisotropy is high and/or flux gradients are steep. This method is now being validated for other EDF PWRs, as well as for experimental reactors and other types of commercial reactors. (author)

  2. Do patient surveys work? The influence of a national survey programme on local quality-improvement initiatives.

    Science.gov (United States)

    Reeves, R; Seccombe, I

    2008-12-01

    To assess current attitudes towards the national patient survey programme in England, establish the extent to which survey results are used and identify barriers and incentives for using them. Qualitative interviews with hospital staff responsible for implementing the patient surveys (survey leads). National Health Service (NHS) hospital organisations (trusts) in England. Twenty-four patient survey leads for NHS trusts. Perceptions of the patient surveys were mainly positive and were reported to be improving. Interviewees welcomed the surveys' regular repetition and thought the questionnaires, survey methods and reporting of results, particularly inter-organisational benchmark charts, were of a good standard. The survey results were widely used in action planning and were thought to support organisational patient-centredness. There was variation in the extent to which trusts disseminated survey findings to patients, the public, staff and their board members. The most common barrier to using results was difficulty engaging clinicians because survey findings were not sufficiently specific to specialties, departments or wards. Limited statistical expertise and concerns that the surveys only covered a short time frame also contributed to some scepticism. Other perceived barriers included a lack of knowledge of effective interventions, and limited time and resources. Actual and potential incentives for using survey findings included giving the results higher weightings in the performance management system, financial targets, Payment by Results (PbR), Patient Choice, a patient-centred culture, leadership by senior members of the organisation, and boosting staff morale by disseminating positive survey findings. The national patient surveys are viewed positively, their repetition being an important factor in their success. The results could be used more effectively if they were more specific to smaller units.

  3. Theoretical Aspects of the Technological Leadership of National Economies

    Directory of Open Access Journals (Sweden)

    Dovgal Elena A.

    2016-05-01

    The main theoretical approaches to forming and defining the essence of the technological leadership of national economies are considered. At present, the global nature of the technological development factor is the main engine of development for the economies of most countries. As a result of technological development, all elements of the productive forces change and improve, and innovations or new developments appear. Different approaches to defining the essence of technological leadership are considered, the stages of its development are traced, and the theoretical concepts of different scientific schools regarding the impact of technological development on the overall economic process of national economies are analyzed. The characteristics of technological modes are summarized and their influence on the formation of the technological leadership of countries is revealed. The main approaches to identifying the technological leadership of various countries on the basis of international indices are considered. The prospect for further research is a comprehensive study of the determinants that reflect technological leadership and provide a general assessment of the qualitative characteristics of the formation of its components.

  4. Improved estimation of the variance in Monte Carlo criticality calculations

    International Nuclear Information System (INIS)

    Hoogenboom, J. Eduard

    2008-01-01

    Results for the effective multiplication factor in Monte Carlo criticality calculations are often obtained from averages over a number of cycles or batches after convergence of the fission source distribution to the fundamental mode. The standard deviation of the effective multiplication factor is then also obtained from the k_eff results over these cycles. As the number of cycles will be rather small, the estimate of the variance or standard deviation in k_eff will not be very reliable, certainly not for the first few cycles after source convergence. In this paper the statistics for k_eff are based on the generation of new fission neutron weights during each history in a cycle. It is shown that this gives much more reliable results for the standard deviation even after a small number of cycles. Attention is also paid to the variance of the variance (VoV) and the standard deviation of the standard deviation. A derivation is given of how to obtain an unbiased estimate of the VoV, even for a small number of samples. (authors)
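
    For contrast, the conventional cycle-based bookkeeping whose small-sample unreliability motivates the paper is sketched below, including a commonly used estimator of the relative variance of the variance; this is not the per-history, weight-based estimator proposed by the author, and the cycle values are synthetic.

```python
# Hedged sketch of conventional cycle-based statistics for k_eff: the mean over active
# cycles, the standard deviation of that mean, and a commonly used estimator of the
# relative variance of the variance (VoV) of the mean.
import numpy as np

def cycle_statistics(keff_cycles):
    x = np.asarray(keff_cycles, dtype=float)
    n = x.size
    mean = x.mean()
    std_of_mean = x.std(ddof=1) / np.sqrt(n)           # from the unbiased sample variance
    d = x - mean
    vov = np.sum(d**4) / np.sum(d**2) ** 2 - 1.0 / n   # common relative-VoV estimator
    return mean, std_of_mean, vov

rng = np.random.default_rng(7)
keff = rng.normal(1.0002, 0.0008, size=30)             # 30 synthetic active cycles
m, s, v = cycle_statistics(keff)
print(f"k_eff = {m:.5f} +/- {s:.5f}   (relative VoV = {v:.3f})")
```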

  5. Improved estimation of the variance in Monte Carlo criticality calculations

    Energy Technology Data Exchange (ETDEWEB)

    Hoogenboom, J. Eduard [Delft University of Technology, Delft (Netherlands)

    2008-07-01

    Results for the effective multiplication factor in Monte Carlo criticality calculations are often obtained from averages over a number of cycles or batches after convergence of the fission source distribution to the fundamental mode. The standard deviation of the effective multiplication factor is then also obtained from the k_eff results over these cycles. As the number of cycles will be rather small, the estimate of the variance or standard deviation in k_eff will not be very reliable, certainly not for the first few cycles after source convergence. In this paper the statistics for k_eff are based on the generation of new fission neutron weights during each history in a cycle. It is shown that this gives much more reliable results for the standard deviation even after a small number of cycles. Attention is also paid to the variance of the variance (VoV) and the standard deviation of the standard deviation. A derivation is given of how to obtain an unbiased estimate of the VoV, even for a small number of samples. (authors)

  6. Highly Accurate Calculations of the Phase Diagram of Cold Lithium

    Science.gov (United States)

    Shulenburger, Luke; Baczewski, Andrew

    The phase diagram of lithium is particularly complicated, exhibiting many different solid phases under the modest application of pressure. Experimental efforts to identify these phases using diamond anvil cells have been complemented by ab initio theory, primarily using density functional theory (DFT). Due to the multiplicity of crystal structures whose enthalpy is nearly degenerate and the uncertainty introduced by density functional approximations, we apply the highly accurate many-body diffusion Monte Carlo (DMC) method to the study of the solid phases at low temperature. These calculations span many different phases, including several with low symmetry, demonstrating the viability of DMC as a method for calculating phase diagrams for complex solids. Our results can be used as a benchmark to test the accuracy of various density functionals. This can strengthen confidence in DFT based predictions of more complex phenomena such as the anomalous melting behavior predicted for lithium at high pressures. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. DOE's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  7. NATIONAL EMISSION STANDARDS FOR HAZARDOUS AIR POLLUTANTS (NESHAP) SUBPART H RADIONUCLIDES POTENTIAL TO EMIT CALCULATIONS

    Energy Technology Data Exchange (ETDEWEB)

    EARLEY JN

    2008-07-23

    This document provides an update of the status of stacks on the Hanford Site and the potential radionuclide emissions, i.e., emissions that could occur with no control devices in place. This review shows the calculations that determined whether the total effective dose equivalent (TEDE) received by the maximum public receptor as a result of potential emissions from any one of these stacks would exceed 0.1 millirem/year. Such stacks require continuous monitoring of the effluent, or other monitoring, to meet the requirements of Washington Administrative code (WAC) 246-247-035(1)(a)(ii) and WAC 246-247-075(1), -(2), and -(6). This revised update reviews the potential-to-emit (PTE) calculations of 31 stacks for Fluor Hanford, Inc. Of those 31 stacks, 11 have the potential to cause a TEDE greater than 0.1 mrem/year.
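
    The screening logic amounts to folding each stack's unabated emissions with unit dose factors for the maximally exposed individual and comparing the summed TEDE with the 0.1 mrem/yr monitoring threshold. The sketch below uses hypothetical emission rates and dose factors; the actual assessments rely on approved dose models (such as CAP88-type codes) rather than hand-picked factors.

```python
# Hedged sketch of the potential-to-emit screening logic described above: multiply each
# stack's unabated annual emissions by a nuclide-specific unit dose factor for the
# maximally exposed individual and compare the summed TEDE against the 0.1 mrem/yr
# threshold that triggers continuous monitoring.  All numbers are hypothetical.
MONITORING_THRESHOLD_MREM = 0.1

def stack_tede_mrem(potential_emissions_ci, dose_factors_mrem_per_ci):
    """Sum of unabated emissions (Ci/yr) times unit dose factor (mrem/Ci) at the receptor."""
    return sum(potential_emissions_ci[n] * dose_factors_mrem_per_ci[n]
               for n in potential_emissions_ci)

emissions = {"Cs-137": 2.0e-3, "Pu-239": 4.0e-6}     # Ci/yr, hypothetical
factors = {"Cs-137": 15.0, "Pu-239": 3.0e4}          # mrem/Ci at the receptor, hypothetical
tede = stack_tede_mrem(emissions, factors)
print(f"potential TEDE = {tede:.3f} mrem/yr ->",
      "continuous monitoring required" if tede > MONITORING_THRESHOLD_MREM else "below threshold")
```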

  8. NATIONAL EMISSION STANDARDS FOR HAZARDOUS AIR POLLUTANTS (NESHAP) SUBPART H; RADIONUCLIDES POTENTIAL-TO-EMIT CALCULATIONS

    International Nuclear Information System (INIS)

    EARLEY JN

    2008-01-01

    This document provides an update of the status of stacks on the Hanford Site and the potential radionuclide emissions, i.e., emissions that could occur with no control devices in place. This review shows the calculations that determined whether the total effective dose equivalent (TEDE) received by the maximum public receptor as a result of potential emissions from any one of these stacks would exceed 0.1 millirem/year. Such stacks require continuous monitoring of the effluent, or other monitoring, to meet the requirements of Washington Administrative code (WAC) 246-247-035(1)(a)(ii) and WAC 246-247-075(1), -(2), and -(6). This revised update reviews the potential-to-emit (PTE) calculations of 31 stacks for Fluor Hanford, Inc. Of those 31 stacks, 11 have the potential to cause a TEDE greater than 0.1 mrem/year

  9. Evaluation of AMPX-KENO benchmark calculations for high-density spent fuel storage racks

    International Nuclear Information System (INIS)

    Turner, S.E.; Gurley, M.K.

    1981-01-01

    The AMPX-KENO computer code package is commonly used to evaluate criticality in high-density spent fuel storage rack designs. Consequently, it is important to know the reliability that can be placed on such calculations and whether or not the results are conservative. This paper evaluates a series of AMPX-KENO calculations which have been made on selected critical experiments. The results are compared with similar analyses reported in the literature by the Oak Ridge National Laboratory and B&W. 8 refs

  10. Physics of societal issues calculations on national security, environment, and energy

    CERN Document Server

    Hafemeister, David

    2014-01-01

    This book provides the reader with essential tools needed to analyze complex societal issues and demonstrates the transition from physics to modern-day laws and treaties. This second edition features new equation-oriented material and extensive data sets drawing upon current information from experts in their fields. Problems to challenge the reader and extend discussion are presented on three timely issues:   •        National Security: Weapons, Offense, Defense, Verification, Nuclear Proliferation, Terrorism •        Environment: Air/Water, Nuclear, Climate Change, EM Fields/Epidemiology •        Energy: Current Energy Situation, Buildings, Solar Buildings, Renewable  Energy, Enhanced End-Use Efficiency, Transportation, Economics   Praise for the first edition: "This insight is needed in Congress and the Executive Branch. Hafemeister, a former Congressional fellow with wide Washington experience, has written a book for physicists, chemists and engineers who want to learn science...

  11. Physics of societal issues calculations on national security, environment, and energy

    CERN Document Server

    Hafemeister, David

    2007-01-01

    Why this book on the Physics of Societal Issues? The subdivisions of physics - nuclear physics, particle physics, condensed-matter physics, biophysics - have their textbooks, while the subdivision of physics and society lacks an equation-oriented text on the physics of arms, energy and the environment. Physics of Societal Issues is intended for undergraduate and doctoral students who may work on applied topics, or who simply want to know why things are the way they are. Decisions guiding policies on nuclear arms, energy and the environment often seem mysterious and contradictory. What is the science behind the deployment of MIRVed ICBMs, the quest for space-based beam weapons, the fear of powerline EM fields, the wholesale acceptance of SUVs, the issues of climatic change, and the failure of the pre-embargo market to produce buildings and appliances that now save over 50 power plants? Physics of Societal Issues is three "mini-texts" in one: National Security (5 chapters): Weapons, offense, defense, verificat...

  12. Recent improvement in organization and in tutorial practices in the National Institute of Nuclear Sciences and Techniques

    International Nuclear Information System (INIS)

    Maziere, D.

    2002-01-01

    The National Institute of Nuclear Sciences and Techniques has recently improved its organization and its tutorial practices to increase the efficiency of its training. In 2001 it obtained ISO 9001 certification, aimed at better satisfying its customers. Moreover, external contributors and the INSTN staff in charge of pedagogy are strongly encouraged to vary their tutorial methods and are offered training in these new teaching techniques. For the coming years there is no shortage of avenues for increasing efficiency: better listening to customers, block-release training, e-learning, and increasing European commitments. Nevertheless, a relevant evaluation of efficiency remains an unresolved issue, and this can never be done by the training institution alone. (author)

  13. Dynamical gluon masses in perturbative calculations at the loop level

    International Nuclear Information System (INIS)

    Machado, Fatima A.; Natale, Adriano A.

    2013-01-01

    Full text: In the phenomenology of strong interactions one always has to deal to some extent with the interplay between perturbative and non-perturbative QCD. On one hand, the former has quite well-developed tools, provided by asymptotic freedom. On the other hand, concerning the latter, we nowadays envisage the following scenario: 1) there is strong evidence for a dynamically massive gluon propagator and an infrared-finite coupling constant; 2) there is extensive and successful use of an infrared-finite coupling constant in phenomenological calculations at tree level; 3) the infrared-finite coupling improves the convergence of the perturbative series; 4) the dynamical gluon mass provides a natural infrared cutoff in physical processes at tree level. Given this scenario, it is natural to ask how these non-perturbative results can be used in perturbative calculations of physical observables at the loop level. Recent papers discuss how off-shell gauge- and renormalization-group-invariant Green functions can be computed with the use of the Pinch Technique (PT), with IR divergences removed by the dynamical gluon mass, and using a well-defined effective charge. In this work we improve the authors' earlier results, which evaluated one-loop corrections to some two- and three-point functions of SU(3) pure Yang-Mills theory, by investigating the dressing of quantities that could account for an extension of loop calculations to the infrared domain of the theory in a way applicable to phenomenological calculations. One of these improvements is keeping the gluon propagator transverse in such a scheme. (author)

  14. Physician involvement enhances coding accuracy to ensure national standards: an initiative to improve awareness among new junior trainees.

    Science.gov (United States)

    Nallasivan, S; Gillott, T; Kamath, S; Blow, L; Goddard, V

    2011-06-01

    Record Keeping Standards is a development led by the Royal College of Physicians of London (RCP) Health Informatics Unit and funded by the National Health Service (NHS) Connecting for Health. A supplementary report produced by the RCP makes a number of recommendations based on a study held at an acute hospital trust. We audited the medical notes and coding to assess the accuracy, documentation by the junior doctors and also to correlate our findings with the RCP audit. Northern Lincolnshire & Goole Hospitals NHS Foundation Trust has 114,000 'finished consultant episodes' per year. A total of 100 consecutive medical (50) and rheumatology (50) discharges from Diana Princess of Wales Hospital from August-October 2009 were reviewed. The results showed an improvement in coding accuracy (10% errors), comparable to the RCP audit but with 5% documentation errors. Physician involvement needs enhancing to improve the effectiveness and to ensure clinical safety.

  15. Biota Modeling in EPA's Preliminary Remediation Goal and Dose Compliance Concentration Calculators for Use in EPA Superfund Risk Assessment: Explanation of Intake Rate Derivation, Transfer Factor Compilation, and Mass Loading Factor Sources

    International Nuclear Information System (INIS)

    Manning, Karessa L.; Dolislager, Fredrick G.; Bellamy, Michael B.

    2016-01-01

    The Preliminary Remediation Goal (PRG) and Dose Compliance Concentration (DCC) calculators are screening-level tools that set forth the Environmental Protection Agency's (EPA) recommended approaches, based upon currently available information with respect to risk assessment, for response actions at Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA) sites, commonly known as Superfund. The screening levels derived by the PRG and DCC calculators are used to identify isotopes contributing the highest risk and dose as well as to establish preliminary remediation goals. Each calculator has a residential gardening scenario and subsistence farmer exposure scenarios that require modeling of the transfer of contaminants from soil and water into various types of biota (crops and animal products). New publications of human intake rates of biota; farm animal intakes of water, soil, and fodder; and soil-to-plant interactions require that updates be implemented into the PRG and DCC exposure scenarios. Recent improvements have been made in the biota modeling for these calculators, including newly derived biota intake rates, more comprehensive soil mass loading factors (MLFs), and more comprehensive soil-to-tissue transfer factors (TFs) for animals and soil-to-plant transfer factors (BVs). New biota have been added in both the produce and animal products categories that greatly improve the accuracy and utility of the PRG and DCC calculators and encompass greater geographic diversity on a national and international scale.
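
    The produce pathway in such biota modeling chains a few multiplicative factors: plant concentration from soil via a soil-to-plant transfer factor plus a mass loading term for adhering soil, then dose via an intake rate and a dose coefficient. The sketch below uses hypothetical placeholder values, not parameters from the PRG or DCC calculators.

```python
# Hedged sketch of a produce-ingestion pathway of the kind the PRG/DCC biota models
# parameterize:
#   plant concentration = soil concentration * (soil-to-plant transfer factor BV
#                         + mass loading factor MLF for adhering/resuspended soil),
#   annual dose         = concentration * intake rate * ingestion dose coefficient.
# All numbers below are hypothetical placeholders.
def produce_concentration(soil_pci_per_g, bv, mlf):
    return soil_pci_per_g * (bv + mlf)                  # pCi per g of produce

def annual_ingestion_dose_mrem(conc_pci_per_g, intake_g_per_yr, dcf_mrem_per_pci):
    return conc_pci_per_g * intake_g_per_yr * dcf_mrem_per_pci

conc = produce_concentration(soil_pci_per_g=1.0,        # hypothetical soil activity
                             bv=0.04,                   # soil-to-plant transfer factor
                             mlf=0.001)                 # soil mass loading on produce
dose = annual_ingestion_dose_mrem(conc,
                                  intake_g_per_yr=60_000.0,  # hypothetical garden produce intake
                                  dcf_mrem_per_pci=5.0e-5)   # hypothetical dose coefficient
print(f"produce concentration = {conc:.4f} pCi/g, annual dose = {dose:.3f} mrem")
```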

  16. Full charge-density calculation of the surface energy of metals

    DEFF Research Database (Denmark)

    Vitos, Levente; Kollár, J..; Skriver, Hans Lomholt

    1994-01-01

    We have calculated the surface energy and the work function of the 4d metals by means of an energy functional based on a self-consistent, spherically symmetric atomic-sphere potential. In this approach the kinetic energy is calculated completely within the atomic-sphere approximation (ASA) by means of a spherically symmetrized charge density, while the Coulomb and exchange-correlation contributions are calculated by means of the complete, nonspherically symmetric charge density within nonoverlapping, space-filling Wigner-Seitz cells. The functional is used to assess the convergence and the accuracy of the linear-muffin-tin-orbitals (LMTO) method and the ASA in surface calculations. We find that the full charge-density functional improves the agreement with recent full-potential LMTO calculations to a level where the average deviation in surface energy over the 4d series is down to 10%.

  17. National Environmental Policy Act (NEPA) compliance at Sandia National Laboratories/New Mexico (SNL/NM)

    International Nuclear Information System (INIS)

    Wolff, T.A.

    1998-08-01

    This report on National Environmental Policy Act (NEPA) compliance at Sandia National Laboratories/New Mexico (SNL/NM) chronicles past and current compliance activities and includes a recommended strategy that can be implemented for continued improvement. This report provides a list of important references. Attachment 1 contains the table of contents for SAND95-1648, National Environmental Policy Act (NEPA) Compliance Guide Sandia National Laboratories (Hansen, 1995). Attachment 2 contains a list of published environmental assessments (EAs) and environmental impact statements (EISs) prepared by SNL/NM. Attachment 3 contains abstracts of NEPA compliance papers authored by SNL/NM and its contractors

  18. National Environmental Policy Act (NEPA) compliance at Sandia National Laboratories/New Mexico (SNL/NM)

    Energy Technology Data Exchange (ETDEWEB)

    Wolff, T.A. [Sandia National Labs., Albuquerque, NM (United States). Community Involvement and Issues Management Dept.; Hansen, R.P. [Hansen Environmental Consultants, Englewood, CO (United States)

    1998-08-01

    This report on National Environmental Policy Act (NEPA) compliance at Sandia National Laboratories/New Mexico (SNL/NM) chronicles past and current compliance activities and includes a recommended strategy that can be implemented for continued improvement. This report provides a list of important references. Attachment 1 contains the table of contents for SAND95-1648, National Environmental Policy Act (NEPA) Compliance Guide Sandia National Laboratories (Hansen, 1995). Attachment 2 contains a list of published environmental assessments (EAs) and environmental impact statements (EISs) prepared by SNL/NM. Attachment 3 contains abstracts of NEPA compliance papers authored by SNL/NM and its contractors.

  19. Method to Calculate Accurate Top Event Probability in a Seismic PSA

    Energy Technology Data Exchange (ETDEWEB)

    Jung, Woo Sik [Sejong Univ., Seoul (Korea, Republic of)

    2014-05-15

    ACUBE (Advanced Cutset Upper Bound Estimator) calculates the top event probability and importance measures from cutsets by dividing the cutsets into major and minor groups according to their cutset probability: cutsets with higher probability are placed in the major group and converted into a Binary Decision Diagram (BDD), while the remainder form the minor group. ACUBE calculates the top event probability and importance measures of the major (higher probability) group exactly from the BDD, evaluates the minor (lower probability) group with an approximation such as the minimal cut upper bound (MCUB), and then combines the two results. This approach reduces the conservatism introduced when the top event probability and importance measures are approximated directly from the full set of cutsets. By applying the ACUBE algorithm to seismic PSA cutsets, the accuracy of the top event probability and importance measures can be significantly improved. This study shows that careful attention should be paid, and an appropriate method provided, to avoid significant overestimation of the top event probability. Owing to these strengths, ACUBE has become a vital tool for calculating a more accurate core damage frequency (CDF) from seismic PSA cutsets than the conventional probability calculation method.
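
    A minimal sketch of the split-and-combine idea follows, assuming independent basic events: the highest-probability cutsets are evaluated exactly by inclusion-exclusion (standing in for the BDD step), the remaining cutsets are evaluated with the MCUB approximation, and the two partial results are combined as if the groups were independent. The grouping threshold, the independence assumption, and the combination rule are illustrative simplifications, not ACUBE's actual implementation.

    ```python
    # Illustrative sketch of a major/minor cutset split (not the ACUBE implementation).
    # Basic events are assumed independent; a cutset probability is the product of its
    # basic event probabilities.
    from itertools import combinations
    from math import prod

    def cutset_prob(cutset, p):
        return prod(p[e] for e in cutset)

    def exact_union_prob(cutsets, p):
        """Exact P(union of cutsets) by inclusion-exclusion (stand-in for the BDD step)."""
        total = 0.0
        for k in range(1, len(cutsets) + 1):
            for combo in combinations(cutsets, k):
                events = set().union(*combo)           # union of the events in this combination
                total += (-1) ** (k + 1) * prod(p[e] for e in events)
        return total

    def mcub(cutsets, p):
        """Minimal cut upper bound: 1 - prod(1 - P(cutset_i))."""
        return 1.0 - prod(1.0 - cutset_prob(c, p) for c in cutsets)

    def top_event_prob(cutsets, p, n_major=20):
        """Split cutsets by probability; exact for the major group, MCUB for the minor group."""
        ranked = sorted(cutsets, key=lambda c: cutset_prob(c, p), reverse=True)
        major, minor = ranked[:n_major], ranked[n_major:]
        p_major = exact_union_prob(major, p) if major else 0.0
        p_minor = mcub(minor, p) if minor else 0.0
        return 1.0 - (1.0 - p_major) * (1.0 - p_minor)  # combine, assuming group independence

    if __name__ == "__main__":
        p = {"A": 0.3, "B": 0.2, "C": 0.1, "D": 0.05}   # hypothetical basic-event probabilities
        cutsets = [frozenset("AB"), frozenset("AC"), frozenset("BD"), frozenset("CD")]
        print(f"top event probability ~ {top_event_prob(cutsets, p, n_major=2):.4f}")
    ```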

  20. Improved Formulations for Air-Surface Exchanges Related to National Security Needs: Dry Deposition Models

    Energy Technology Data Exchange (ETDEWEB)

    Droppo, James G.

    2006-07-01

    The Department of Homeland Security and others rely on results from atmospheric dispersion models for threat evaluation, event management, and post-event analyses. The ability to simulate dry deposition rates is a crucial part of our emergency preparedness capabilities. Deposited materials pose potential hazards from radioactive shine, inhalation, and ingestion pathways. A reliable characterization of these potential exposures is critical for management and mitigation of these hazards. A review of the current status of dry deposition formulations used in these atmospheric dispersion models was conducted. The formulations for dry deposition of particulate materials from an event such as a radiological attack involving a Radiological Dispersal Device (RDD) are considered. The results of this effort are applicable to current emergency preparedness capabilities such as are deployed in the Interagency Modeling and Atmospheric Assessment Center (IMAAC), other similar national/regional emergency response systems, and standalone emergency response models. The review concludes that dry deposition formulations need to consider the full range of particle sizes including: 1) the accumulation mode range (0.1 to 1 micron diameter) and its minimum in deposition velocity, 2) smaller particles (less than 0.01 micron diameter) deposited mainly by molecular diffusion, 3) 10 to 50 micron diameter particles deposited mainly by impaction and gravitational settling, and 4) larger particles (greater than 100 micron diameter) deposited mainly by gravitational settling. The effects of the local turbulence intensity, particle characteristics, and surface element properties must also be addressed in the formulations. Specific areas for improvements in the dry deposition formulations are 1) capability of simulating near-field dry deposition patterns, 2) capability of addressing the full range of potential particle properties, 3) incorporation of particle surface retention/rebound processes, and
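
    For the size regimes listed above, the gravitational-settling contribution to the deposition velocity can be sketched with the Stokes settling formula and a Cunningham slip correction, which already reproduces the steep rise in deposition velocity above roughly 10 microns. The particle density, air properties, and the omission of the diffusion, impaction, and surface-resistance terms in the sketch below are simplifying assumptions for illustration, not the reviewed model formulations.

    ```python
    # Gravitational settling velocity vs. particle size (illustrative only; real dry
    # deposition models add diffusion, impaction, and surface resistance terms).
    import math

    RHO_P = 1000.0      # particle density, kg/m^3 (assumed unit-density particle)
    MU_AIR = 1.81e-5    # dynamic viscosity of air, Pa*s (about 20 C)
    MFP = 6.6e-8        # mean free path of air molecules, m (about standard conditions)
    G = 9.81            # gravitational acceleration, m/s^2

    def cunningham(d_m: float) -> float:
        """Cunningham slip correction factor for particle diameter d_m (m)."""
        kn = 2.0 * MFP / d_m
        return 1.0 + kn * (1.257 + 0.4 * math.exp(-1.1 / kn))

    def settling_velocity(d_um: float) -> float:
        """Stokes settling velocity (m/s) for a diameter given in microns."""
        d = d_um * 1e-6
        return RHO_P * d * d * G * cunningham(d) / (18.0 * MU_AIR)

    if __name__ == "__main__":
        for d_um in (0.01, 0.1, 1.0, 10.0, 50.0, 100.0):
            print(f"{d_um:7.2f} um -> settling velocity {settling_velocity(d_um):.2e} m/s")
    ```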

  1. Electron cyclotron heating calculations for ATF

    International Nuclear Information System (INIS)

    Goldfinger, R.C.; Batchelor, D.B.

    1986-03-01

    The RAYS geometrical optics code has been used to calculate electron cyclotron wave propagation and heating in the Advanced Toroidal Facility (ATF) device under construction at Oak Ridge National Laboratory (ORNL). The intent of this work is to predict the outcome of various heating scenarios and to give guidance in designing an optimum heating system. Particular attention is paid to the effects of wave polarization and antenna location. We investigate first and second harmonic cyclotron heating with the parameters predicted for steady-state ATF operation. We also simulate the effect of wall reflections by calculating a uniform, isotropic flux of power radiating from the wall. These results, combined with the first-pass calculations, give a qualitative picture of the heat deposition profiles. From these results we identify the compromises that represent the optimum heating strategies for the ATF model considered here. Our basic conclusions are that second harmonic heating with the extraordinary mode (X-mode) gives the best result, with fundamental ordinary mode (O-mode) heating being slightly less efficient. Assuming the antenna location is restricted to the low magnetic field side, the antenna should be placed at phi = 0° (the toroidal angle where the helical coils are at the sides) for fundamental heating and at phi = 15° (where the helical coils are at the top and bottom) for second harmonic heating. These recommendations come directly from the ray tracing results as well as from a theoretical identification of the relevant factors affecting the heating.

  2. On-site worker-risk calculations using MACCS

    International Nuclear Information System (INIS)

    Peterson, V.L.

    1993-01-01

    We have revised the latest version of MACCS for use with the calculation of doses and health risks to on-site workers for postulated accidents at the Rocky Flats Plant (RFP) in Colorado. The modifications fall into two areas: (1) an improved estimate of the shielding offered by buildings to workers that remain indoors; and (2) an improved treatment of building-wake effects, which affects both indoor and outdoor workers. Because the postulated accident can be anywhere on the plant site, user-friendly software has been developed to create those portions of the (revised) MACCS input data files that are specific to the accident site.

  3. Geographical heterogeneity and inequality of access to improved drinking water supply and sanitation in Nepal.

    Science.gov (United States)

    He, Wen-Jun; Lai, Ying-Si; Karmacharya, Biraj M; Dai, Bo-Feng; Hao, Yuan-Tao; Xu, Dong Roman

    2018-04-02

    Per United Nations' Sustainable Development Goals, Nepal is aspiring to achieve universal and equitable access to safe and affordable drinking water and provide access to adequate and equitable sanitation for all by 2030. For these goals to be accomplished, it is important to understand the country's geographical heterogeneity and inequality of access to its drinking-water supply and sanitation (WSS) so that resource allocation and disease control can be optimized. We aimed 1) to estimate spatial heterogeneity of access to improved WSS among the overall Nepalese population at a high resolution; 2) to explore inequality within and between relevant Nepalese administrative levels; and 3) to identify the specific administrative areas in greatest need of policy attention. We extracted cluster-sample data on the use of the water supply and sanitation that included 10,826 surveyed households from the 2011 Nepal Demographic and Health Survey, then used a Gaussian kernel density estimation with adaptive bandwidths to estimate the distribution of access to improved WSS conditions over a grid at 1 × 1 km. The Gini coefficient was calculated for the measurement of inequality in the distribution of improved WSS; the Theil L measure and Theil T index were applied to account for the decomposition of inequality. 57% of Nepalese had access to improved sanitation (range: 18.1% in Mahottari to 100% in Kathmandu) and 92% to drinking-water (range: 41.7% in Doti to 100% in Bara). The most unequal districts in Gini coefficient among improved sanitation were Saptari, Sindhuli, Banke, Bajura and Achham (range: 0.276 to 0.316); and Sankhuwasabha, Arghakhanchi, Gulmi, Bhojpur, Kathmandu (range: 0.110 to 0.137) among improved drinking-water. Both the Theil L and Theil T showed that within-province inequality was substantially greater than between-province inequality; while within-district inequality was less than between-district inequality. The inequality of several districts was
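
    The Gini coefficient used in the study has a standard sample form; the sketch below computes it from a list of access proportions using the sorted-rank identity for the mean absolute difference. The example district values are hypothetical, and the sketch ignores the survey weighting and kernel density estimation used in the actual analysis.

    ```python
    # Gini coefficient from a list of access values (unweighted mean-absolute-difference form).
    # Example values are hypothetical; the study works from kernel-smoothed survey data.

    def gini(values):
        x = sorted(values)
        n = len(x)
        mean = sum(x) / n
        if mean == 0:
            return 0.0
        # Mean absolute difference via the sorted-rank identity, normalized by 2*mean.
        weighted_sum = sum((2 * i - n + 1) * xi for i, xi in enumerate(x))
        return weighted_sum / (n * n * mean)

    if __name__ == "__main__":
        improved_sanitation_share = [0.18, 0.35, 0.57, 0.72, 0.88, 1.00]  # hypothetical districts
        print(f"Gini coefficient: {gini(improved_sanitation_share):.3f}")
    ```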

  4. Higher order methods for burnup calculations with Bateman solutions

    International Nuclear Information System (INIS)

    Isotalo, A.E.; Aarnio, P.A.

    2011-01-01

    Highlights: → Average microscopic reaction rates need to be estimated at each step. → Traditional predictor-corrector methods use zeroth and first order predictions. → Increasing predictor order greatly improves results. → Increasing corrector order does not improve results. - Abstract: A group of methods for burnup calculations solves the changes in material compositions by evaluating an explicit solution to the Bateman equations with constant microscopic reaction rates. This requires predicting representative averages for the one-group cross-sections and flux during each step, which is usually done using zeroth and first order predictions for their time development in a predictor-corrector calculation. In this paper we present the results of using linear, rather than constant, extrapolation on the predictor and quadratic, rather than linear, interpolation on the corrector. Both of these are done by using data from the previous step, and thus do not affect the stepwise running time. The methods were tested by implementing them into the reactor physics code Serpent and comparing the results from four test cases to accurate reference results obtained with very short steps. Linear extrapolation greatly improved results for thermal spectra and should be preferred over the constant one currently used in all Bateman solution based burnup calculations. The effects of using quadratic interpolation on the corrector were, on the other hand, predominantly negative, although not enough so to conclusively decide between the linear and quadratic variants.
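
    The difference between a constant and a linear predictor can be illustrated with a toy depletion step for a single nuclide whose one-group reaction rate drifts slowly: the predictor either holds the beginning-of-step rate constant or extrapolates linearly from the previous step, and the corrector then re-evaluates the rate at the predicted end-of-step composition. The single-nuclide model, the rate values, and the mid-step extrapolation below are invented for illustration and are far simpler than a real Bateman solver.

    ```python
    # Toy predictor-corrector depletion step for one nuclide, dN/dt = -rate(t) * N.
    # Compares a constant (zeroth-order) predictor with linear extrapolation from the
    # previous step; the values and single-nuclide model are illustrative only.
    import math

    def step(n0, rate_avg, dt):
        """Analytic solution for one nuclide with a constant effective reaction rate."""
        return n0 * math.exp(-rate_avg * dt)

    def predictor_corrector(n0, rate_prev, rate_bos, rate_of, dt, linear_predictor=True):
        """One depletion step.

        rate_prev : one-group reaction rate from the previous step (used for extrapolation)
        rate_bos  : rate evaluated at the beginning of the step
        rate_of   : callable giving the rate for a given composition (stand-in for a
                    transport solve); used for the corrector evaluation
        """
        # Predictor: representative rate over the step.
        if linear_predictor:
            rate_pred = rate_bos + 0.5 * (rate_bos - rate_prev)   # extrapolate toward mid-step
        else:
            rate_pred = rate_bos                                   # constant prediction
        n_pred = step(n0, rate_pred, dt)

        # Corrector: re-evaluate the rate at the predicted end-of-step composition and
        # average it with the beginning-of-step value.
        rate_eos = rate_of(n_pred)
        return step(n0, 0.5 * (rate_bos + rate_eos), dt)

    if __name__ == "__main__":
        # Hypothetical rate that increases slightly as the nuclide depletes.
        rate_of = lambda n: 0.05 * (1.0 + 0.2 * (1.0 - n))
        n = predictor_corrector(1.0, rate_prev=0.048, rate_bos=0.050, rate_of=rate_of,
                                dt=10.0, linear_predictor=True)
        print(f"end-of-step number density: {n:.5f}")
    ```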

  5. Calculating the Fee-Based Services of Library Institutions: Theoretical Foundations and Practical Challenges

    Directory of Open Access Journals (Sweden)

    Sysіuk Svitlana V.

    2017-05-01

    The article highlights features of the provision of fee-based services by library institutions, identifies problems related to the legal and regulatory framework for their calculation, and examines methods of implementation. The objective of the study is to develop recommendations for improving the calculation of fee-based library services. The theoretical foundations are systematized, and the need to develop a Provision on the procedure for fee-based services by library institutions is substantiated. Such a Provision would protect a library institution from errors in setting the fee for a paid service and would serve as an information source explaining how the fee is formed. The appropriateness of applying market pricing based on supply and demand is also substantiated. Developing and improving accounting and calculation, taking into consideration both industry-specific and market conditions, would optimize the costs and revenues generated by the provision of fee-based services. In addition, combining calculation tools with the development of an internal accounting system and its methodology provides another, equally effective way of improving the efficiency of library institutions' activity.

  6. Hybrid reduced order modeling for assembly calculations

    Energy Technology Data Exchange (ETDEWEB)

    Bang, Y.; Abdel-Khalik, H. S. [North Carolina State University, Raleigh, NC (United States); Jessee, M. A.; Mertyurek, U. [Oak Ridge National Laboratory, Oak Ridge, TN (United States)

    2013-07-01

    While the accuracy of assembly calculations has considerably improved due to the increase in computer power, enabling more refined descriptions of the phase space and the use of more sophisticated numerical algorithms, the computational cost continues to increase, which limits the full utilization of their effectiveness for routine engineering analysis. Reduced order modeling is a mathematical vehicle that scales down the dimensionality of large-scale numerical problems to enable their repeated execution in small computing environments, often available to end users. This is done by capturing the most dominant underlying relationships between the model's inputs and outputs. Previous works demonstrated the use of reduced order modeling for a single physics code, such as a radiation transport calculation. This manuscript extends those works to coupled code systems as currently employed in assembly calculations. Numerical tests are conducted using realistic SCALE assembly models with resonance self-shielding, neutron transport, and nuclide transmutation/depletion models representing the components of the coupled code system. (authors)
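
    One common way to capture the dominant input-output relationships mentioned above is to sample the full model, collect the responses as snapshots, extract a low-rank basis with an SVD, and fit a cheap surrogate in the reduced coordinates. The sketch below shows that generic snapshot/SVD construction on a synthetic low-rank linear model; it is not the coupled SCALE machinery or the authors' specific algorithm.

    ```python
    # Generic snapshot-based reduced order model (ROM) sketch using NumPy.
    # A synthetic low-rank linear map stands in for an expensive assembly calculation.
    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_out, n_samples, rank = 50, 200, 60, 5

    # Hypothetical "full model": an expensive black box mapping inputs to responses.
    A_true = rng.standard_normal((n_out, rank)) @ rng.standard_normal((rank, n_in))
    full_model = lambda x: A_true @ x

    # 1) Sample the input space and collect response snapshots.
    X = rng.standard_normal((n_in, n_samples))
    Y = np.column_stack([full_model(X[:, j]) for j in range(n_samples)])

    # 2) Extract the dominant response subspace with an SVD of the snapshot matrix.
    U, s, _ = np.linalg.svd(Y, full_matrices=False)
    U_r = U[:, :rank]                       # reduced basis for the active output subspace

    # 3) Fit a small surrogate for the reduced coordinates (plain least squares here).
    coeffs, *_ = np.linalg.lstsq(X.T, (U_r.T @ Y).T, rcond=None)

    def rom(x):
        """Cheap surrogate: regress the reduced coordinates, then lift with the basis."""
        return U_r @ (coeffs.T @ x)

    x_test = rng.standard_normal(n_in)
    err = np.linalg.norm(rom(x_test) - full_model(x_test)) / np.linalg.norm(full_model(x_test))
    print(f"relative ROM error on a test input: {err:.2e}")
    ```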

  7. New resonance cross section calculational algorithms

    International Nuclear Information System (INIS)

    Mathews, D.R.

    1978-01-01

    Improved resonance cross section calculational algorithms were developed and tested for inclusion in a fast reactor version of the MICROX code. The resonance energy portion of the MICROX code solves the neutron slowing-down equations for a two-region lattice cell on a very detailed energy grid (about 14,500 energies). In the MICROX algorithms, the exact P0 elastic scattering kernels are replaced by synthetic (approximate) elastic scattering kernels which permit the use of an efficient and numerically stable recursion relation solution of the slowing-down equation. In the work described here, the MICROX algorithms were modified as follows: an additional delta function term was included in the P0 synthetic scattering kernel. The additional delta function term allows one more moment of the exact elastic scattering kernel to be preserved without much extra computational effort. With the improved synthetic scattering kernel, the flux returns more closely to the exact flux below a resonance than with the original MICROX kernel. The slowing-down calculation was extended to a true B1 hyperfine energy grid calculation in each region by using P1 synthetic scattering kernels and transport-corrected P0 collision probabilities to couple the two regions. 1 figure, 6 tables

  8. Quality assurance of PTS thermal hydraulic calculations at BNL

    International Nuclear Information System (INIS)

    Rohatgi, U.S.; Pu, J.; Jo, J.; Saha, P.

    1983-01-01

    Rapid cooling of the reactor pressure vessel at high pressure has the potential of challenging the vessel integrity. This phenomenon is called overcooling or Pressurized Thermal Shock (PTS). The Nuclear Regulatory Commission (NRC) has selected three plants representing the three types of PWRs in use for detailed PTS study: Oconee-1 (B&W), Calvert Cliffs (C.E.), and H.B. Robinson (Westinghouse). The Brookhaven National Laboratory (BNL) has been requested by the NRC to review and compare the input decks developed at LANL and INEL, and to compare and explain the differences between the common calculations performed at these two laboratories. However, for the transients that will be computed by only one laboratory, a consistency check will be performed. So far only the Oconee-1 calculations have been reviewed at BNL, and the results are presented here.

  9. Develoment of pressure drop calculation modules for a wire-wrapped LMR subassembly

    International Nuclear Information System (INIS)

    Kim, Young Gyun; Lim, Hyun Jin; Kim, Won Seok; Kim, Young Il

    2000-06-01

    Pressure drop calculation modules for a wire-wrapped LMR subassembly have been developed. This report summarizes present information on the pressure drop calculation modules for the inlet hole, lower part, and upper part of a wire-wrapped LMR subassembly, which were developed using simple sudden-expansion and sudden-contraction formulas. A case calculation study was done using design data for a KALIMER driver fuel subassembly. The total pressure drop in the driver fuel subassembly, excluding the bundle region, was calculated as 0.13 MPa, which is within the reasonable pressure drop range. The developed modules will be integrated into the total subassembly pressure drop calculation code with further improvements.
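
    The sudden-expansion and sudden-contraction formulas referred to above are standard form-loss relations: the expansion loss follows the Borda-Carnot coefficient K = (1 - A1/A2)^2 referenced to the upstream velocity, while the contraction loss is commonly approximated with K = 0.5(1 - A2/A1) referenced to the downstream velocity. The sketch below applies these textbook relations to made-up inlet-hole geometry and flow values; it is not the KALIMER pressure drop module itself.

    ```python
    # Textbook sudden-expansion / sudden-contraction pressure losses (illustrative values,
    # not the KALIMER subassembly module).
    RHO = 850.0   # sodium density, kg/m^3 (approximate, assumed)

    def dp_sudden_expansion(v1: float, a1: float, a2: float) -> float:
        """Borda-Carnot loss: K = (1 - A1/A2)^2, referenced to the upstream velocity v1."""
        k = (1.0 - a1 / a2) ** 2
        return 0.5 * k * RHO * v1 * v1

    def dp_sudden_contraction(v2: float, a1: float, a2: float) -> float:
        """Approximate contraction loss: K ~ 0.5*(1 - A2/A1), referenced to the downstream velocity v2."""
        k = 0.5 * (1.0 - a2 / a1)
        return 0.5 * k * RHO * v2 * v2

    if __name__ == "__main__":
        # Hypothetical inlet-hole geometry: small hole opening into a larger plenum.
        a_hole, a_plenum = 2.0e-4, 1.0e-3      # m^2
        m_dot = 3.0                            # kg/s, assumed subassembly flow
        v_hole = m_dot / (RHO * a_hole)
        dp = dp_sudden_expansion(v_hole, a_hole, a_plenum) + \
             dp_sudden_contraction(v_hole, a_plenum, a_hole)
        print(f"combined expansion + contraction loss: {dp / 1.0e3:.2f} kPa")
    ```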

  10. Traffic safety information in South Africa : how to improve the National Accident Register. Submitted to the National Department of Transport, Republic of South Africa and the Ministry of Transport, Public Works and Water Management, The Netherlands.

    NARCIS (Netherlands)

    Sluis, J. van der (ed.)

    2001-01-01

    This report describes a project that was carried out to investigate ways and means to improve the problems experienced with the South African National Accident Register (NAR) system, and to determine a long term strategy on road safety information in South Africa. Within the framework of the Road

  11. Estimating global, regional and national rotavirus deaths in children aged <5 years: Current approaches, new analyses and proposed improvements.

    Directory of Open Access Journals (Sweden)

    Andrew Clark

    Rotavirus is a leading cause of diarrhoeal mortality in children but there is considerable disagreement about how many deaths occur each year. We compared CHERG, GBD and WHO/CDC estimates of age under 5 years (U5) rotavirus deaths at the global, regional and national level using a standard year (2013) and standard list of 186 countries. The global estimates were 157,398 (CHERG), 122,322 (GBD) and 215,757 (WHO/CDC). The three groups used different methods: (i) to select data points for rotavirus-positive proportions; (ii) to extrapolate data points to individual countries; (iii) to account for rotavirus vaccine coverage; (iv) to convert rotavirus-positive proportions to rotavirus attributable fractions; and (v) to calculate uncertainty ranges. We conducted new analyses to inform future estimates. We found that acute watery diarrhoea was associated with 87% (95% CI 83-90%) of U5 diarrhoea hospitalisations based on data from 84 hospital sites in 9 countries, and 65% (95% CI 57-74%) of U5 diarrhoea deaths based on verbal autopsy reports from 9 country sites. We reanalysed data from the Global Enteric Multicenter Study (GEMS) and found 44% (55% in Asia, and 32% in Africa) rotavirus-positivity among U5 acute watery diarrhoea hospitalisations, and 28% rotavirus-positivity among U5 acute watery diarrhoea deaths. 97% (95% CI 95-98%) of the U5 diarrhoea hospitalisations that tested positive for rotavirus were entirely attributable to rotavirus. For all clinical syndromes combined the rotavirus attributable fraction was 34% (95% CI 31-36%). This increased by a factor of 1.08 (95% CI 1.02-1.14) when the GEMS results were reanalysed using a more sensitive molecular test. We developed consensus on seven proposals for improving the quality and transparency of future rotavirus mortality estimates.

  12. Implementation of an Integrated Approach to the National HIV/AIDS Strategy for Improving Human Immunodeficiency Virus Care for Youths.

    Science.gov (United States)

    Fortenberry, J Dennis; Koenig, Linda J; Kapogiannis, Bill G; Jeffries, Carrie L; Ellen, Jonathan M; Wilson, Craig M

    2017-07-01

    Youths aged 13 to 24 years living with human immunodeficiency virus (HIV) are less likely than adults to receive the health and prevention benefits of HIV treatments, with only a small proportion having achieved sustained viral suppression. These age-related disparities in the HIV continuum of care are owing in part to the unique developmental issues of adolescents and young adults as well as the complexity and fragmentation of HIV care and related services. This article summarizes a national, multiagency, multilevel approach to HIV care for newly diagnosed youths designed to bridge some of this fragmentation by addressing National HIV/AIDS Strategy goals for people living with HIV. Three federal agencies developed memoranda of understanding to sequentially implement 3 protocols addressing key National HIV/AIDS Strategy goals. The goals were addressed in the Adolescent Trials Network, with protocols implemented in 12 to 15 sites across the United States. Outcome data were collected from recently diagnosed youths referred to the program. Outcomes and measures included cross-agency collaboration, youth-friendly linkage-to-care services, community mobilization to address structural barriers to care, cooperation among services, the proportion of all men who have sex with men who were tested, and rates of linkage to prevention services. The program addressed National HIV/AIDS Strategy goals 2 through 4, including steps within each goal. A total of 3986 HIV-positive youths were referred for care, with more than 75% linked to care within 6 weeks of referral and almost 90% of those youths engaged in subsequent HIV care. Community mobilization efforts implemented and completed structural change objectives to address local barriers to care. Age and racial/ethnic group disparities were addressed through targeted training for culturally competent, youth-friendly care and intensive motivational interviewing training. A national program to address the National HIV/AIDS Strategy specifically for youths can

  13. Using Network Centrality Measures to Improve National Journal Classification Lists

    DEFF Research Database (Denmark)

    Zuccala, Alesia Ann; Robinson-Garcia, Nicolas; Repiso, Rafael

    2017-01-01

    In countries like Denmark and Spain classified journal lists are now being produced and used in the calculation of nationwide performance indicators. As a result, Danish and Spanish scholars are advised to contribute to journals of high 'authority' (as in the former) or those within a high class...

  14. MPACT Subgroup Self-Shielding Efficiency Improvements

    International Nuclear Information System (INIS)

    Stimpson, Shane; Liu, Yuxuan; Collins, Benjamin S.; Clarno, Kevin T.

    2016-01-01

    Recent developments to improve the efficiency of the MOC solvers in MPACT have yielded effective kernels that loop over several energy groups at once, rather than looping over one group at a time. These kernels have produced roughly a 2x speedup in the MOC sweeping time during the eigenvalue calculation. However, the self-shielding subgroup calculation, which typically requires substantial solve time, had not been reevaluated to take advantage of these new kernels. The improvements covered in this report start by integrating the multigroup kernel concepts into the subgroup calculation, which are then used as the basis for further extensions. The next improvement covered is what is currently being termed "Lumped Parameter MOC". Because the subgroup calculation is a purely fixed source problem and multiple sweeps are performed only to update the boundary angular fluxes, the sweep procedure can be condensed to allow for the instantaneous propagation of the flux across a spatial domain, without the need to sweep along all segments in a ray. Once the boundary angular fluxes are considered to be converged, an additional sweep that tallies the scalar flux is completed. The last improvement investigated is the possible reduction of the number of azimuthal angles per octant in the shielding sweep. Typically 16 azimuthal angles per octant are used for the self-shielding and eigenvalue calculations, but it is possible that the self-shielding sweeps are less sensitive to the number of angles than the full eigenvalue calculation.
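
    The multigroup-kernel idea, visiting each segment once and updating a whole block of energy groups rather than sweeping one group at a time, can be illustrated with a trivial attenuation update written both ways. The flat-source segment update, the array shapes, and the random data below are schematic assumptions, not MPACT's actual MOC kernel.

    ```python
    # Schematic comparison of a one-group-at-a-time MOC segment update vs. a kernel that
    # processes a block of energy groups at once (vectorized). The flat-source update
    # psi_out = psi_in*exp(-sigma*L) + q/sigma*(1 - exp(-sigma*L)) is a simplification.
    import numpy as np

    rng = np.random.default_rng(1)
    n_segments, n_groups = 2_000, 47
    sigma = rng.uniform(0.1, 2.0, (n_segments, n_groups))   # total cross sections
    q     = rng.uniform(0.0, 1.0, (n_segments, n_groups))   # flat sources
    length = rng.uniform(0.1, 1.0, n_segments)               # segment lengths

    def sweep_one_group_at_a_time(psi_in):
        psi = psi_in.copy()
        for g in range(n_groups):                            # outer loop over groups
            for s in range(n_segments):
                att = np.exp(-sigma[s, g] * length[s])
                psi[g] = psi[g] * att + q[s, g] / sigma[s, g] * (1.0 - att)
        return psi

    def sweep_group_block(psi_in):
        psi = psi_in.copy()
        for s in range(n_segments):                          # single loop; all groups per segment
            att = np.exp(-sigma[s] * length[s])              # vector over the group block
            psi = psi * att + q[s] / sigma[s] * (1.0 - att)
        return psi

    psi0 = np.ones(n_groups)
    assert np.allclose(sweep_one_group_at_a_time(psi0), sweep_group_block(psi0))
    print("group-blocked sweep matches the group-by-group sweep")
    ```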

  15. Detriment calculations resulting from occupational radiation exposures in Egypt

    International Nuclear Information System (INIS)

    Abdel-Ghani, A.H.

    2000-01-01

    The nominal probability coefficient has been applied to evaluate the detriment resulting from the annual occupational exposures of workers to radiation sources and radioactive material, calculated for workers in medical practices, industrial applications, atomic energy activities, and exploration and mining of radioactive ores and phosphates. The aim of detriment calculations is to provide foresight on the future occurrence of stochastic effects among the exposed workers. The calculated detriment falls into three classes. The first includes workers in diagnostic radiology and atomic energy activities, who received the highest doses and consequently represent the highest detriment. The second class comprises workers in radiotherapy and nuclear medicine, whose detriment is about four times lower than that of the first class. The third concerns workers in industrial applications and in exploration and mining of radioactive ores and phosphates, whose detriment is about ten times lower than that of the second class. The occupational radiation doses are those endorsed by the United Nations Scientific Committee on the Effects of Atomic Radiation (UNSCEAR) for the period January 1995 to December 1998.
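
    Detriment estimated from a nominal probability coefficient is essentially the product of the collective effective dose and the coefficient. The sketch below applies the ICRP 60 worker coefficient of about 5.6E-2 per person-Sv, assumed here as an era-appropriate value rather than taken from the paper, to hypothetical collective doses for the worker groups mentioned above.

    ```python
    # Illustrative detriment calculation: collective dose x nominal probability coefficient.
    # The coefficient (ICRP 60 worker total detriment, ~5.6e-2 per person-Sv) and the
    # collective doses below are assumptions for demonstration, not the paper's data.

    NOMINAL_COEFF_WORKERS = 5.6e-2   # attributable stochastic effects per person-Sv (assumed)

    # Hypothetical annual collective effective doses (person-Sv) by worker group.
    collective_dose_person_sv = {
        "diagnostic radiology": 12.0,
        "atomic energy activities": 10.0,
        "radiotherapy / nuclear medicine": 3.0,
        "industrial / mining applications": 0.3,
    }

    for group, dose in collective_dose_person_sv.items():
        detriment = dose * NOMINAL_COEFF_WORKERS   # expected number of attributable effects
        print(f"{group:35s} {dose:6.1f} person-Sv -> detriment {detriment:.3f}")
    ```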

  16. Quality Improvement and Performance Management Benefits of Public Health Accreditation: National Evaluation Findings.

    Science.gov (United States)

    Siegfried, Alexa; Heffernan, Megan; Kennedy, Mallory; Meit, Michael

    To identify the quality improvement (QI) and performance management benefits reported by public health departments as a result of participating in the national, voluntary program for public health accreditation implemented by the Public Health Accreditation Board (PHAB). We gathered quantitative data via Web-based surveys of all applicant and accredited public health departments when they completed 3 different milestones in the PHAB accreditation process. Leadership from 324 unique state, local, and tribal public health departments in the United States. Public health departments that have achieved PHAB accreditation reported the following QI and performance management benefits: improved awareness and focus on QI efforts; increased QI training among staff; perceived increases in QI knowledge among staff; implemented new QI strategies; implemented strategies to evaluate effectiveness and quality; used information from QI processes to inform decision making; and perceived achievement of a QI culture. The reported implementation of QI strategies and use of information from QI processes to inform decision making was greater among recently accredited health departments than among health departments that had registered their intent to apply but not yet undergone the PHAB accreditation process. Respondents from health departments that had been accredited for 1 year reported higher levels of staff QI training and perceived increases in QI knowledge than those that were recently accredited. PHAB accreditation has stimulated QI and performance management activities within public health departments. Health departments that pursue PHAB accreditation are likely to report immediate increases in QI and performance management activities as a result of undergoing the PHAB accreditation process, and these benefits are likely to be reported at a higher level, even 1 year after the accreditation decision.

  17. Benchmark calculation of subchannel analysis codes

    International Nuclear Information System (INIS)

    1996-02-01

    In order to evaluate the analysis capabilities of various subchannel codes used in thermal-hydraulic design of light water reactors, benchmark calculations were performed. The selected benchmark problems and major findings obtained by the calculations were as follows: (1) For single-phase flow mixing experiments between two channels, the calculated water temperature distributions along the flow direction agreed with the experimental results when the turbulent mixing coefficients were tuned properly. However, the effect of gap width observed in the experiments could not be predicted by the subchannel codes. (2) For two-phase flow mixing experiments between two channels, in high water flow rate cases the calculated distributions of air and water flows in each channel agreed well with the experimental results. In low water flow cases, on the other hand, the air mixing rates were underestimated. (3) For two-phase flow mixing experiments among multiple channels, the calculated mass velocities at the channel exit under steady-state conditions agreed with experimental values within about 10%. However, the predictive errors of exit qualities were as high as 30%. (4) For critical heat flux (CHF) experiments, two different results were obtained. One code indicated that the CHFs calculated using the KfK or EPRI correlations agreed well with the experimental results, while another code suggested that the CHFs were well predicted by using the WSC-2 correlation or the Weisman-Pei mechanistic model. (5) For droplet entrainment and deposition experiments, it was indicated that the predictive capability was significantly increased by improving correlations. On the other hand, a remarkable discrepancy between the codes was observed: one code underestimated the droplet flow rate and overestimated the liquid film flow rate in high quality cases, while another code overestimated the droplet flow rate and underestimated the liquid film flow rate in low quality cases. (J.P.N.)

  18. Improvement of Modeling Scheme of the Safety Injection Tank with Fluidic Device for Realistic LBLOCA Calculation

    International Nuclear Information System (INIS)

    Bang, Young Seok; Cheong, Aeju; Woo, Sweng Woong

    2014-01-01

    Confirmation of the performance of the SIT with FD should be based on thermal-hydraulic analysis of the LBLOCA, and an adequate, physical model simulating the SIT/FD should be used in the LBLOCA calculation. To develop such a physical model of the SIT/FD, simulation of the major phenomena, including the flow distribution set by the standpipe and FD, should be justified by full-scale experiment and/or plant preoperational testing. The authors' previous study indicated that an approximation of the SIT/FD phenomena could be obtained with a typical system transient code, MARS-KS, using the 'accumulator' component model, but that additional improvement of the modeling scheme for the FD and standpipe flow paths was needed for a reasonable prediction. One problem was the depressurization behavior after switchover to the low flow injection phase. In addition, the potential release of nitrogen gas from the SIT to the downstream pipe, and then to the reactor core through the FD and standpipe flow paths, has been a concern. The intrusion of noncondensible gas may affect the LBLOCA thermal response. Therefore, a more reliable SIT/FD model has been requested to obtain a more accurate prediction and confidence in the LBLOCA evaluation. The present paper discusses an improvement of the modeling scheme relative to the previous study, and the effect of the present modeling scheme on the LBLOCA cladding thermal response is compared with the existing modeling. The present study discussed the modeling scheme of the SIT with FD for a realistic simulation of the LBLOCA of APR1400. Currently, the SIT blowdown test is best simulated by a modeling scheme using a 'pipe' component with dynamic area reduction. The LBLOCA analysis adopting this modeling scheme showed a PCT increase of 23 K compared to the case of the 'accumulator' component model, due to the flow rate decrease at the transition to the low flow injection phase and the intrusion of nitrogen gas into the core. Accordingly, the effect of SIT/FD modeling

  19. The MiAge Calculator: a DNA methylation-based mitotic age calculator of human tissue types.

    Science.gov (United States)

    Youn, Ahrim; Wang, Shuang

    2018-01-01

    Cell division is important in human aging and cancer. The estimation of the number of cell divisions (mitotic age) of a given tissue type in individuals is of great interest as it allows not only the study of biological aging (using a new molecular aging target) but also the stratification of prospective cancer risk. Here, we introduce the MiAge Calculator, a mitotic age calculator based on a novel statistical framework, the MiAge model. MiAge is designed to quantitatively estimate mitotic age (total number of lifetime cell divisions) of a tissue using the stochastic replication errors accumulated in the epigenetic inheritance process during cell divisions. With the MiAge model, the MiAge Calculator was built using the training data of DNA methylation measures of 4,020 tumor and adjacent normal tissue samples from eight TCGA cancer types and was tested using the testing data of DNA methylation measures of 2,221 tumor and adjacent normal tissue samples of five other TCGA cancer types. We showed that within each of the thirteen cancer types studied, the estimated mitotic age is universally accelerated in tumor tissues compared to adjacent normal tissues. Across the thirteen cancer types, we showed that worse cancer survivals are associated with more accelerated mitotic age in tumor tissues. Importantly, we demonstrated the utility of mitotic age by showing that the integration of mitotic age and clinical information leads to improved survival prediction in six out of the thirteen cancer types studied. The MiAge Calculator is available at http://www.columbia.edu/~sw2206/softwares.htm.

  20. Developing a Service Improvement System for the National Dutch Railways

    NARCIS (Netherlands)

    Verhoef, Peter C.; Heijnsbroek, Martin; Bosma, Joost

    2017-01-01

    Customer satisfaction is essential for public and railway services, because firms in these industries have contracts with governments requiring them to achieve specific customer satisfaction targets. In this paper, we describe a National Dutch Railways project in which we identify the major