WorldWideScience

Sample records for benchmark dose method

  1. Introduction to benchmark dose methods and U.S. EPA's benchmark dose software (BMDS) version 2.1.1

    International Nuclear Information System (INIS)

    Traditionally, the No-Observed-Adverse-Effect-Level (NOAEL) approach has been used to determine the point of departure (POD) from animal toxicology data for use in human health risk assessments. However, this approach is subject to substantial, well-documented limitations, such as strict dependence on the dose selection, dose spacing, and sample size of the study from which the critical effect has been identified. The NOAEL approach also fails to take into consideration the shape of the dose-response curve and other related information. The benchmark dose (BMD) method, originally proposed as an alternative to the NOAEL methodology in the 1980s, addresses many of these limitations: it is less dependent on dose selection and spacing, and it takes into account the shape of the dose-response curve. In addition, the estimation of the 95% lower confidence limit on the BMD (the BMDL) results in a POD that appropriately accounts for study quality (i.e., sample size). With the recent advent of user-friendly BMD software programs, including the U.S. Environmental Protection Agency's (U.S. EPA) Benchmark Dose Software (BMDS), the BMD approach has become the method of choice for many health organizations worldwide. This paper discusses the BMD methods and corresponding software (i.e., BMDS version 2.1.1) developed by the U.S. EPA, and includes a comparison with recently released European Food Safety Authority (EFSA) BMD guidance.
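
    As a concrete illustration of the workflow described above, the sketch below fits a log-logistic model to a made-up quantal dataset and reads off the dose giving 10% extra risk (the BMD10). A parametric bootstrap stands in for the profile-likelihood confidence limit that BMDS actually computes; all data, starting values, and model choices here are hypothetical.

        import numpy as np
        from scipy.optimize import minimize
        from scipy.special import expit, logit

        # Made-up dose groups: dose, animals tested, animals responding
        dose = np.array([0.0, 5.0, 25.0, 50.0])
        n    = np.array([50,  50,   50,   50])
        k    = np.array([2,    6,   18,   33])

        def prob(params, d):
            # P(d) = g + (1-g)*expit(a + b*log d); g is the background rate
            g, a, b = expit(params[0]), params[1], params[2]
            p = np.full_like(d, g, dtype=float)
            pos = d > 0
            p[pos] = g + (1.0 - g) * expit(a + b * np.log(d[pos]))
            return p

        def nll(params, d, n, k):
            # binomial negative log-likelihood over dose groups
            p = np.clip(prob(params, d), 1e-9, 1 - 1e-9)
            return -np.sum(k * np.log(p) + (n - k) * np.log(1 - p))

        def fit(d, n, k):
            return minimize(nll, x0=[-3.0, -5.0, 1.0], args=(d, n, k),
                            method="Nelder-Mead").x

        def bmd(params, bmr=0.10):
            # extra risk ER(d) = expit(a + b*log d) equals BMR at the BMD
            _, a, b = params
            return np.exp((logit(bmr) - a) / b)

        est = fit(dose, n, k)
        print("BMD10 =", bmd(est))

        # Parametric bootstrap as a stand-in for BMDS's profile-likelihood BMDL
        rng = np.random.default_rng(1)
        p_hat = prob(est, dose)
        boot = [bmd(fit(dose, n, rng.binomial(n, p_hat))) for _ in range(500)]
        print("BMDL10 (5th percentile) =", np.percentile(boot, 5))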

  2. Bayesian Benchmark Dose Analysis

    OpenAIRE

    Fang, Qijun; Piegorsch, Walter W.; Barnes, Katherine Y.

    2014-01-01

    An important objective in environmental risk assessment is estimation of minimum exposure levels, called Benchmark Doses (BMDs), that induce a pre-specified Benchmark Response (BMR) in a target population. Established inferential approaches for BMD analysis typically involve one-sided, frequentist confidence limits, leading in practice to what are called Benchmark Dose Lower Limits (BMDLs). Appeal to Bayesian modeling and credible limits for building BMDLs is far less developed, however. Indee...
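
    To make the contrast with frequentist BMDLs concrete, here is a minimal sketch of the Bayesian route: sample the posterior of the same hypothetical quantal log-logistic model with a random-walk Metropolis chain (flat priors assumed, tuning constants arbitrary) and take the 5th percentile of the induced BMD posterior as a one-sided lower credible limit. This illustrates the general idea only, not the authors' implementation.

        import numpy as np
        from scipy.special import expit, logit

        # Same hypothetical quantal data as in the frequentist sketch above
        dose = np.array([0.0, 5.0, 25.0, 50.0])
        n = np.array([50, 50, 50, 50])
        k = np.array([2, 6, 18, 33])

        def log_post(theta):
            # log-likelihood of the log-logistic model; flat priors on theta
            g, a, b = expit(theta[0]), theta[1], theta[2]
            safe_d = np.where(dose > 0, dose, 1.0)        # avoid log(0) at control
            p = np.where(dose > 0, g + (1 - g) * expit(a + b * np.log(safe_d)), g)
            p = np.clip(p, 1e-9, 1 - 1e-9)
            return np.sum(k * np.log(p) + (n - k) * np.log(1 - p))

        rng = np.random.default_rng(0)
        theta = np.array([-3.0, -5.0, 1.0])
        lp, draws = log_post(theta), []
        for i in range(20000):                            # random-walk Metropolis
            prop = theta + rng.normal(0.0, 0.15, size=3)
            lp_prop = log_post(prop)
            if np.log(rng.uniform()) < lp_prop - lp:
                theta, lp = prop, lp_prop
            if i >= 5000 and i % 10 == 0:                 # thin after burn-in
                draws.append(theta.copy())

        draws = np.array(draws)
        bmd_post = np.exp((logit(0.10) - draws[:, 1]) / draws[:, 2])  # BMR = 10%
        print("posterior median BMD:", np.median(bmd_post))
        print("Bayesian BMDL (5th posterior percentile):", np.percentile(bmd_post, 5))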

  3. Dose Rate Experiment at JET for Benchmarking the Calculation Direct One Step Method

    International Nuclear Information System (INIS)

    Neutrons produced by D-D and D-T plasmas induce the activation of tokamak materials and components. The development of reliable methods to assess dose rates is a key issue for the maintenance and operation of nuclear machines, in normal and off-normal conditions. In the frame of the EFDA Fusion Technology work programme, a computational tool based upon the MCNP Monte Carlo code has been developed to predict the dose rate after shutdown: it is called the Direct One Step Method (D1S). The D1S is an innovative approach in which the decay gammas are coupled to the neutrons, as in the prompt case, and transported in one single step in the same run. Benchmarking this new tool with experimental data taken in a complex geometry like that of a tokamak is a fundamental step in testing the reliability of the D1S method. A dedicated benchmark experiment was proposed for the 2005-2006 experimental campaign of JET. Two irradiation positions were selected for the benchmark: one inner position inside the vessel, not far from the plasma, called the 2 Upper irradiation end (IE2), where the neutron fluence is relatively high, and a second position just outside a vertical port in an external position (EX), where the neutron flux is lower and the dose rate to be measured is not very far from the residual background. Passive detectors are used for in-vessel measurements: high-sensitivity thermoluminescent dosimeters (TLDs) GR-200A (natural LiF), which ensure measurements down to environmental dose levels. An active detector of Geiger-Müller (GM) type is used for the out-of-vessel dose rate measurement. Before their use the detectors were calibrated, in terms of air kerma, in a secondary gamma-ray standard (Cs-137 and Co-60) facility. The background measurement was carried out from July to September 2005 in the outside position EX using the GM tube, and in September 2005 inside the vacuum vessel using TLD detectors located in the 2 Upper irradiation end IE2. In the present work ...

  4. Improvement and benchmarking of the new shutdown dose estimation method by Monte Carlo code

    International Nuclear Information System (INIS)

    In the ITER (International Thermonuclear Experimental Reactor) project, calculations of the dose rate after shutdown are very important and their results are critical for the machine design. A new method has been proposed which makes use of MCNP also for the decay gamma-ray transport calculations. The objective is to have an easy tool giving results affected by low uncertainty due to modeling or to simplifications in the flux shape assumptions. Further improvements to this method are presented here. This methodology has been developed, in the ITER frame, for a limited case in which the radioactivity comes only from the vacuum vessel (made of stainless steel) up to a few days after ITER shutdown. Further improvement is required to make it applicable to more general cases (at different times and/or with different materials). Some benchmark results are shown. Discrepancies between the different methods are due mainly to the different cross sections used. Agreement with the available ad hoc experiment is very good. (orig.)

  5. The reference dose for subchronic exposure of pigs to cadmium leading to early renal damage by benchmark dose method.

    Science.gov (United States)

    Wu, Xiaosheng; Wei, Shuai; Wei, Yimin; Guo, Boli; Yang, Mingqi; Zhao, Duoyong; Liu, Xiaoling; Cai, Xianfeng

    2012-08-01

    Pigs were exposed to cadmium (Cd), in the form of CdCl2, at concentrations ranging from 0 to 32 mg Cd/kg feed for 100 days. Urinary cadmium (U-Cd) and blood cadmium (B-Cd) levels were determined as indicators of Cd exposure. Urinary levels of β2-microglobulin (β2-MG), α1-microglobulin (α1-MG), N-acetyl-β-D-glucosaminidase (NAG), cadmium-metallothionein (Cd-MT), and retinol binding protein (RBP) were determined as biomarkers of tubular dysfunction. U-Cd concentrations increased linearly with time and dose, whereas B-Cd reached two peaks, at 40 days and 100 days, in the group exposed to 32 mg Cd/kg. HyperMTuria (elevated urinary metallothionein) and hyperNAGuria (elevated urinary NAG) emerged from 80 days onwards in the group exposed to 32 mg Cd/kg feed, followed by hyperβ2-MGuria and hyperRBPuria from 100 days onwards. The relationships between the Cd exposure dose and the biomarkers of exposure (as well as the biomarkers of effect) were examined, and significant correlations were found between them (except for α1-MG). Dose-response relationships between Cd exposure dose and biomarkers of tubular dysfunction were studied, and the critical Cd exposure dose was calculated by the benchmark dose (BMD) method. The BMD10/BMDL10 was estimated to be 1.34/0.67, 1.21/0.88, 2.75/1.00, and 3.73/3.08 mg Cd/kg feed based on urinary RBP, NAG, Cd-MT, and β2-MG, respectively. The calculated tolerable weekly intake of Cd for humans was 1.4 μg/kg body weight, based on a safety factor of 100. This value is lower than the values currently set by several different countries, indicating a need for further studies on the effects of Cd and a re-evaluation of the human health risk assessment for this metal. PMID:22610606

  6. Entropy-based benchmarking methods

    OpenAIRE

    Temurshoev, Umed

    2012-01-01

    We argue that benchmarking sign-volatile series should be based on the principle of movement and sign preservation, which states that a benchmarked series should reproduce the movement and signs of the original series. We show that the widely used variants of the Denton (1971) method and the growth preservation method of Causey and Trager (1981) may violate this principle, while its requirements are explicitly taken into account in the proposed entropy-based benchmarking methods. Our illustrati...

  7. Effects of exposure imprecision on estimation of the benchmark dose

    DEFF Research Database (Denmark)

    Budtz-Jørgensen, Esben; Keiding, Niels; Grandjean, Philippe

    2004-01-01

    In regression analysis, failure to adjust for imprecision in the exposure variable is likely to lead to underestimation of the exposure effect. However, the consequences of exposure error for determination of safe doses of toxic substances have so far not received much attention. The benchmark approach is one of the most widely used methods for development of exposure limits. An important advantage of this approach is that it can be applied to observational data. However, in this type of data, exposure markers are seldom measured without error. It is shown that, if the exposure error is ignored, then the benchmark approach produces results that are biased toward higher and less protective levels. It is therefore important to take exposure measurement error into account when calculating benchmark doses. Methods that allow this adjustment are described and illustrated in data from an epidemiological study.

  8. Method and system for benchmarking computers

    Science.gov (United States)

    Gustafson, John L.

    1993-09-14

    A testing system and method for benchmarking computer systems. The system includes a store containing a scalable set of tasks to be performed to produce a solution in ever-increasing degrees of resolution as a larger number of the tasks are performed. A timing and control module allots to each computer a fixed benchmarking interval in which to perform the stored tasks. Means are provided for determining, after completion of the benchmarking interval, the degree of progress through the scalable set of tasks and for producing a benchmarking rating relating to the degree of progress for each computer.
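
    The patented scheme inverts the usual benchmark design: the time budget is fixed and the score is the amount of scalable work completed. Below is a minimal sketch of that control loop, with a hypothetical midpoint-rule integration standing in for the stored task set; the structure, not the workload, is the point.

        import time

        def run_benchmark(task, interval_s=10.0):
            """Run ever-finer subtasks until a fixed benchmarking interval
            expires; the rating is the resolution (degree of progress) reached."""
            deadline = time.perf_counter() + interval_s
            resolution = 0
            while time.perf_counter() < deadline:
                resolution += 1
                task(resolution)   # solve the stored problem at the next resolution
            return resolution      # more progress in the same interval = faster machine

        # Hypothetical scalable task: midpoint-rule integration of x^2 on [0, 1],
        # with the number of subintervals growing with the requested resolution
        def integrate(resolution):
            steps = resolution * 1000
            h = 1.0 / steps
            return sum(((i + 0.5) * h) ** 2 for i in range(steps)) * h

        print("benchmark rating:", run_benchmark(integrate, interval_s=2.0))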

  9. Benchmarking Learning and Teaching: Developing a Method

    Science.gov (United States)

    Henderson-Smart, Cheryl; Winning, Tracey; Gerzina, Tania; King, Shalinie; Hyde, Sarah

    2006-01-01

    Purpose: To develop a method for benchmarking teaching and learning in response to an institutional need to validate a new program in Dentistry at the University of Sydney, Australia. Design/methodology/approach: After a collaborative partner, University of Adelaide, was identified, the areas of teaching and learning to be benchmarked, PBL…

  10. Numerical methods: Analytical benchmarking in transport theory

    International Nuclear Information System (INIS)

    Numerical methods applied to reactor technology have reached a high degree of maturity. Certainly one- and two-dimensional neutron transport calculations have become routine, with several programs available on personal computer and the most widely used programs adapted to workstation and minicomputer computational environments. With the introduction of massive parallelism and as experience with multitasking increases, even more improvement in the development of transport algorithms can be expected. Benchmarking an algorithm is usually not a very pleasant experience for the code developer. Proper algorithmic verification by benchmarking involves the following considerations: (1) conservation of particles, (2) confirmation of intuitive physical behavior, and (3) reproduction of analytical benchmark results. By using today's computational advantages, new basic numerical methods have been developed that allow a wider class of benchmark problems to be considered

  11. TH-E-BRE-01: A 3D Solver of Linear Boltzmann Transport Equation Based On a New Angular Discretization Method with Positivity for Photon Dose Calculation Benchmarked with Geant4

    International Nuclear Information System (INIS)

    Purpose: The Linear Boltzmann Transport Equation (LBTE), solved through the statistical Monte Carlo (MC) method, provides accurate dose calculation in radiotherapy. This work investigates an alternative way to accurately solve the LBTE using a deterministic numerical method, due to its possible advantage in computational speed over MC. Methods: Instead of using traditional spherical harmonics to approximate the angular scattering kernel, our deterministic numerical method directly computes angular scattering weights, based on a new angular discretization method that utilizes a linear finite element method on a local triangulation of the unit angular sphere. As a result, our angular discretization method has the unique advantage of positivity, i.e., it maintains all scattering weights nonnegative at all times, which is physically correct. Moreover, our method is local in angular space, and therefore handles anisotropic scattering well, such as forward-peaked scattering. To be compatible with image-guided radiotherapy, the spatial variables are discretized on a structured grid with the standard diamond scheme. After discretization, an improved source-iteration method is utilized to solve the linear system without saving the linear system to memory. The accuracy of our 3D solver is validated using analytic solutions and benchmarked with Geant4, a popular MC solver. Results: The differences between Geant4 solutions and our solutions were less than 1.5% for various testing cases that mimic practical cases. More details are available in the supporting document. Conclusion: We have developed a 3D LBTE solver based on a new angular discretization method that guarantees the positivity of scattering weights for physical correctness, and it has been benchmarked with Geant4 for photon dose calculation
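
    The abstract names two standard deterministic ingredients: a diamond spatial scheme and source iteration. The toy below applies both to a 1D slab with isotropic scattering, using discrete ordinates rather than the paper's finite-element angular triangulation, purely to illustrate how a source-iteration transport solver is structured; all cross sections and dimensions are made up.

        import numpy as np

        # Minimal 1D slab source iteration with diamond differencing (S8 ordinates)
        J, L = 100, 10.0                        # spatial cells, slab width (cm)
        h = L / J
        sig_t, sig_s, q = 1.0, 0.5, 1.0         # total/scattering cross sections, flat source
        mu, w = np.polynomial.legendre.leggauss(8)  # discrete ordinates and weights

        phi = np.zeros(J)                       # scalar flux iterate
        for sweep in range(1000):
            S = 0.5 * (sig_s * phi + q)         # isotropic emission per unit mu
            phi_new = np.zeros(J)
            for m, wt in zip(mu, w):
                psi_in = 0.0                    # vacuum boundary conditions
                a = abs(m) / h
                order = range(J) if m > 0 else range(J - 1, -1, -1)
                for j in order:                 # sweep in the direction of travel
                    psi_out = (S[j] + (a - 0.5 * sig_t) * psi_in) / (a + 0.5 * sig_t)
                    phi_new[j] += wt * 0.5 * (psi_in + psi_out)  # diamond cell average
                    psi_in = psi_out
            converged = np.max(np.abs(phi_new - phi)) < 1e-8
            phi = phi_new
            if converged:
                break
        print("midplane scalar flux:", phi[J // 2])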

  12. On the Extrapolation with the Denton Proportional Benchmarking Method

    OpenAIRE

    Marco Marini; Tommaso Di Fonzo

    2012-01-01

    Statistical offices often have recourse to benchmarking methods for compiling quarterly national accounts (QNA). Benchmarking methods employ quarterly indicator series (i) to distribute annual, more reliable series of national accounts and (ii) to extrapolate the most recent quarters not yet covered by annual benchmarks. The Proportional First Differences (PFD) benchmarking method proposed by Denton (1971) is a widely used solution for distribution, but in extrapolation it may suffer when the...
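
    For reference, the distribution step of Denton's PFD method is a small equality-constrained least-squares problem: minimize the squared first differences of the benchmarked-to-indicator ratio, subject to the annual sums matching the benchmarks. A sketch with hypothetical quarterly data follows, solved via the KKT system; the extrapolation step discussed in the paper (quarters beyond the last benchmark) is omitted.

        import numpy as np

        def denton_pfd(indicator, annual):
            """Denton (1971) proportional first differences benchmarking:
            find quarterly x minimizing sum_t (x_t/i_t - x_{t-1}/i_{t-1})^2
            subject to the annual sums of x equaling the benchmarks."""
            i = np.asarray(indicator, dtype=float)
            a = np.asarray(annual, dtype=float)
            T, Y = len(i), len(a)
            assert T == 4 * Y, "expects complete years of quarterly data"
            D = np.eye(T - 1, T, k=1) - np.eye(T - 1, T)   # first-difference operator
            W = np.diag(1.0 / i)
            M = W.T @ D.T @ D @ W                          # quadratic form of the PFD criterion
            C = np.kron(np.eye(Y), np.ones((1, 4)))        # quarterly-to-annual aggregation
            # KKT system of the equality-constrained quadratic program
            kkt = np.block([[2 * M, C.T], [C, np.zeros((Y, Y))]])
            rhs = np.concatenate([np.zeros(T), a])
            return np.linalg.solve(kkt, rhs)[:T]

        # Hypothetical quarterly indicator and two annual benchmarks
        indicator = np.array([100, 104, 109, 112, 118, 121, 125, 131], dtype=float)
        annual = np.array([450.0, 520.0])
        print(denton_pfd(indicator, annual))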

  13. Quality Assurance Testing of Version 1.3 of U.S. EPA Benchmark Dose Software (Presentation)

    Science.gov (United States)

    EPA's benchmark dose software (BMDS) is used to evaluate chemical dose-response data in support of Agency risk assessments, and must therefore be dependable. Quality assurance testing methods developed for BMDS were designed to assess model dependability with respect to curve-fitt...

  14. Performance Benchmarking of Fast Multipole Methods

    KAUST Repository

    Al-Harthi, Noha A.

    2013-06-01

    The current trends in computer architecture are shifting towards smaller byte/flop ratios, while available parallelism is increasing at all levels of granularity – vector length, core count, and MPI process. Intel's Xeon Phi coprocessor, NVIDIA's Kepler GPU, and IBM's BlueGene/Q all have a byte/flop ratio close to 0.2, which makes it very difficult for most algorithms to extract a high percentage of the theoretical peak flop/s from these architectures. Popular algorithms in scientific computing such as FFT are continuously evolving to keep up with this trend in hardware. In the meantime it is also necessary to invest in novel algorithms that are more suitable for computer architectures of the future. The fast multipole method (FMM) was originally developed as a fast algorithm for approximating the N-body interactions that appear in astrophysics, molecular dynamics, and vortex-based fluid dynamics simulations. The FMM possesses a unique combination of being an efficient O(N) algorithm while having an operational intensity that is higher than a matrix-matrix multiplication. In fact, the FMM can reduce the byte/flop requirement to around 0.01, which means that it will remain compute bound until 2020 even if the current trend in microprocessors continues. Despite these advantages, there have not been any benchmarks of FMM codes on modern architectures such as Xeon Phi, Kepler, and BlueGene/Q. This study aims to provide a comprehensive benchmark of a state-of-the-art FMM code, "exaFMM", on the latest architectures, in hopes of providing a useful reference for deciding when the FMM will become useful as the computational engine in a given application code. It may also serve as a warning for certain problem size domains where the FMM will exhibit insignificant performance improvements. Such issues depend strongly on the asymptotic constants rather than the asymptotics themselves, and are therefore strongly implementation- and hardware-dependent.

  15. A unified framework for benchmark dose estimation applied to mixed models and model averaging

    DEFF Research Database (Denmark)

    Ritz, Christian; Gerhard, Daniel; Hothorn, Ludwig A.

    2013-01-01

    This article develops a framework for benchmark dose estimation that allows intrinsically nonlinear dose-response models to be used for continuous data in much the same way as is already possible for quantal data. This means that the same dose-response model equations may be applied to both...

  16. Benchmarking of Remote Sensing Segmentation Methods

    Czech Academy of Sciences Publication Activity Database

    Mikeš, Stanislav; Haindl, Michal; Scarpa, G.; Gaetano, R.

    2015-01-01

    Vol. 8, No. 5 (2015), pp. 2240-2248. ISSN 1939-1404. R&D Projects: GA ČR(CZ) GA14-10911S. Institutional support: RVO:67985556. Keywords: benchmark * remote sensing segmentation * unsupervised segmentation * supervised segmentation. Subject RIV: BD - Theory of Information. Impact factor: 3.026, year: 2014. http://library.utia.cas.cz/separaty/2015/RO/haindl-0445995.pdf

  17. Issues in benchmarking human reliability analysis methods : a literature review.

    Energy Technology Data Exchange (ETDEWEB)

    Lois, Erasmia (US Nuclear Regulatory Commission); Forester, John Alan; Tran, Tuan Q. (Idaho National Laboratory, Idaho Falls, ID); Hendrickson, Stacey M. Langfitt; Boring, Ronald L. (Idaho National Laboratory, Idaho Falls, ID)

    2008-04-01

    There is a diversity of human reliability analysis (HRA) methods available for use in assessing human performance within probabilistic risk assessment (PRA). Due to the significant differences in the methods, including the scope, approach, and underlying models, there is a need for an empirical comparison investigating the validity and reliability of the methods. To accomplish this empirical comparison, a benchmarking study is currently underway that compares HRA methods with each other and against operator performance in simulator studies. In order to account for as many effects as possible in the construction of this benchmarking study, a literature review was conducted, reviewing past benchmarking studies in the areas of psychology and risk assessment. A number of lessons learned through these studies are presented in order to aid in the design of future HRA benchmarking endeavors.

  18. Issues in benchmarking human reliability analysis methods: A literature review

    International Nuclear Information System (INIS)

    There is a diversity of human reliability analysis (HRA) methods available for use in assessing human performance within probabilistic risk assessments (PRA). Due to the significant differences in the methods, including the scope, approach, and underlying models, there is a need for an empirical comparison investigating the validity and reliability of the methods. To accomplish this empirical comparison, a benchmarking study comparing and evaluating HRA methods in assessing operator performance in simulator experiments is currently underway. In order to account for as many effects as possible in the construction of this benchmarking study, a literature review was conducted, reviewing past benchmarking studies in the areas of psychology and risk assessment. A number of lessons learned through these studies are presented in order to aid in the design of future HRA benchmarking endeavors.

  19. Measurement Methods in the field of benchmarking

    Directory of Open Access Journals (Sweden)

    István Szűts

    2004-05-01

    Full Text Available In benchmarking we often come across parameters that are difficult to measure while executing comparisons or analyzing performance, yet they have to be compared and measured so as to be able to choose the best practices. The situation is similar in the case of complex, multidimensional evaluation, when the relative importance and order of the different dimensions and parameters to be evaluated have to be determined, or when the range of similar performance indicators has to be reduced for simpler comparisons. In such cases we can use the ordinal or interval scales of measurement elaborated by S.S. Stevens.

  20. Benchmark calculations of neutron dose rates at transport and storage casks

    International Nuclear Information System (INIS)

    The application of numerical calculation methods to demonstrate sufficient radiation shielding of radioactive waste transport and storage casks requires validation based on appropriate measurements for both gamma and neutron sources. The comparison of measured data with calculations using the Monte Carlo program MCNP shows deviations, dependent on the loading of the cask, within the standard deviation, which is dominated by the measuring method. When the neutrons scattered by the salt are taken into account (in the case of disposal in salt), MCNP tends to underestimate the nominal values, but still within twice the standard deviation. This accuracy is not reached with MAVRIC. Based on AHE (active handling experiments) data, benchmark calculations were performed that can be used as reference values. The total accuracy results from the accuracy of the source term and of the neutron dose rate measurement, with a deviation of 15%.

  1. A heterogeneous analytical benchmark for particle transport methods development

    International Nuclear Information System (INIS)

    A heterogeneous analytical benchmark has been designed to provide a quality control measure for large-scale neutral particle computational software. Assurance that particle transport methods are efficiently implemented and that current codes are adequately maintained for reactor and weapons applications is a major task facing today's transport code developers. An analytical benchmark, as used here, refers to a highly accurate evaluation of an analytical solution to the neutral particle transport equation. Because of the requirement of an analytical solution, however, only relatively limited transport scenarios can be treated. To some this may seem to be a major disadvantage of analytical benchmarks. However, to the code developer, simplicity by no means diminishes the usefulness of these benchmarks since comprehensive transport codes must perform adequately for simple as well as comprehensive transport scenarios

  2. Aircraft Engine Gas Path Diagnostic Methods: Public Benchmarking Results

    Science.gov (United States)

    Simon, Donald L.; Borguet, Sebastien; Leonard, Olivier; Zhang, Xiaodong (Frank)

    2013-01-01

    Recent technology reviews have identified the need for objective assessments of aircraft engine health management (EHM) technologies. To help address this issue, a gas path diagnostic benchmark problem has been created and made publicly available. This software tool, referred to as the Propulsion Diagnostic Method Evaluation Strategy (ProDiMES), has been constructed based on feedback provided by the aircraft EHM community. It provides a standard benchmark problem enabling users to develop, evaluate and compare diagnostic methods. This paper will present an overview of ProDiMES along with a description of four gas path diagnostic methods developed and applied to the problem. These methods, which include analytical and empirical diagnostic techniques, will be described and associated blind-test-case metric results will be presented and compared. Lessons learned along with recommendations for improving the public benchmarking processes will also be presented and discussed.

  3. Immunotoxicity of perfluorinated alkylates: calculation of benchmark doses based on serum concentrations in children

    DEFF Research Database (Denmark)

    Grandjean, Philippe; Budtz-Joergensen, Esben

    2013-01-01

    Data from follow-up of a Faroese birth cohort were used. Serum-PFC concentrations were measured at age 5 years, and serum antibody concentrations against tetanus and diphtheria toxoids were obtained at age 7 years. Benchmark dose results were calculated in terms of serum concentrations for 431 children with...

  4. A biosegmentation benchmark for evaluation of bioimage analysis methods

    Directory of Open Access Journals (Sweden)

    Kvilekval Kristian

    2009-11-01

    Full Text Available Background: We present a biosegmentation benchmark that includes infrastructure, datasets with associated ground truth, and validation methods for biological image analysis. The primary motivation for creating this resource comes from the fact that it is very difficult, if not impossible, for an end-user to choose from the wide range of segmentation methods available in the literature for a particular bioimaging problem. No single algorithm is likely to be equally effective on a diverse set of images, and each method has its own strengths and limitations. We hope that our benchmark resource will be of considerable help both to bioimaging researchers looking for novel image processing methods and to image processing researchers exploring the application of their methods to biology. Results: Our benchmark consists of different classes of images and ground truth data, ranging in scale from subcellular and cellular to tissue level, each of which poses its own set of challenges to image analysis. The associated ground truth data can be used to evaluate the effectiveness of different methods, to improve methods, and to compare results. Standard evaluation methods and some analysis tools are integrated into a database framework that is available online at http://bioimage.ucsb.edu/biosegmentation/. Conclusion: This online benchmark will facilitate the integration and comparison of image analysis methods for bioimages. While the primary focus is on biological images, we believe that the dataset and infrastructure will be of interest to researchers and developers working with biological image analysis, image segmentation and object tracking in general.
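
    Benchmarks of this kind ultimately reduce to scoring a candidate segmentation against ground truth. The record does not prescribe a single metric, so the sketch below uses two generic overlap scores (Dice and IoU) on toy binary masks, as one example of the kind of validation such a framework automates.

        import numpy as np

        def dice_and_iou(pred, truth):
            """Overlap metrics commonly used to score a segmentation against
            ground truth; both inputs are binary masks of the same shape."""
            pred, truth = pred.astype(bool), truth.astype(bool)
            inter = np.logical_and(pred, truth).sum()
            union = np.logical_or(pred, truth).sum()
            dice = 2.0 * inter / (pred.sum() + truth.sum())
            iou = inter / union
            return dice, iou

        # Toy 2D masks standing in for a cell segmentation and its ground truth
        pred = np.zeros((8, 8), dtype=bool);  pred[2:6, 2:6] = True
        truth = np.zeros((8, 8), dtype=bool); truth[3:7, 3:7] = True
        print(dice_and_iou(pred, truth))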

  5. Application of Benchmark Dose (BMD) in Estimating Biological Exposure Limit (BEL) to Cadmium

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Objective: To estimate the biological exposure limit (BEL) using the benchmark dose (BMD) approach, based on two sets of data from occupational epidemiology. Methods: Cadmium-exposed workers were selected from a cadmium smelting factory and a zinc product factory. Doctors, nurses and shop assistants living in the same area served as a control group. Urinary cadmium (UCd) was used as an exposure biomarker and urinary β2-microglobulin (B2M), N-acetyl-β-D-glucosaminidase (NAG) and albumin (ALB) as effect biomarkers. All urine parameters were adjusted by urinary creatinine. The BMDS software (Version 1.3.2, U.S. EPA) was used to calculate BMDs. Results: The cut-off point (abnormal value) was determined as the upper 95% limit of each effect biomarker in the control group. There was a significant dose-response relationship between the effect biomarkers (urinary B2M, NAG, and ALB) and the exposure biomarker (UCd). The BEL value was 5 μg/g creatinine with UB2M as the effect biomarker, consistent with the WHO recommendation, and 3 μg/g creatinine with UNAG as the effect biomarker. The more sensitive the biomarker used, the larger the occupationally exposed population that will be protected. Conclusion: BMD can be used to estimate the biological exposure limit (BEL). UNAG is a sensitive biomarker for estimating the BEL after cadmium exposure.
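
    A small sketch of the cut-off step described above: the abnormal-value threshold is taken as a one-sided upper 95% limit of the control group's biomarker distribution. The log-normal assumption and every number below are illustrative, not the study's data.

        import numpy as np

        # Hypothetical creatinine-adjusted urinary NAG values in unexposed controls
        control = np.array([2.1, 3.4, 1.8, 2.9, 4.2, 2.5, 3.0, 1.9, 2.2, 3.6])

        # One-sided upper 95% limit, assuming the biomarker is log-normal
        logs = np.log(control)
        cutoff = np.exp(logs.mean() + 1.645 * logs.std(ddof=1))
        print("abnormal-value cut-off (U/g creatinine):", round(cutoff, 2))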

  6. Benchmarking Benchmarks

    NARCIS (Netherlands)

    D.C. Blitz (David)

    2011-01-01

    Benchmarking benchmarks is a bundle of six studies that are inspired by the prevalence of benchmarking in academic finance research as well as in investment practice. Three studies examine if current benchmark asset pricing models adequately describe the cross-section of stock returns. ...

  7. Benchmark of GW methods for azabenzenes

    OpenAIRE

    Marom, Noa; Caruso, Fabio; Ren, Xinguo; Rubio Secades, Ángel; Scheffler, Matthias; Rinke, Patrick

    2012-01-01

    Many-body perturbation theory in the GW approximation is a useful method for describing electronic properties associated with charged excitations. A hierarchy of GW methods exists, starting from non-self-consistent G0W0, through partial self-consistency in the eigenvalues (ev-scGW) and in the Green function (scGW0), to fully self-consistent GW (scGW). Here, we assess the performance of these methods for benzene, pyridine, and the diazines. The quasiparticle spectra are compared to photoemissi...

  8. BENCHMARKING UPGRADED HOTSPOT DOSE CALCULATIONS AGAINST MACCS2 RESULTS

    Energy Technology Data Exchange (ETDEWEB)

    Brotherton, Kevin

    2009-04-30

    The radiological consequence of interest for a documented safety analysis (DSA) is the centerline Total Effective Dose Equivalent (TEDE) incurred by the Maximally Exposed Offsite Individual (MOI) evaluated at the 95th percentile consequence level. An upgraded version of HotSpot (Version 2.07) has been developed with the capabilities to read site meteorological data and perform the necessary statistical calculations to determine the 95th percentile consequence result. These capabilities should allow HotSpot to join MACCS2 (Version 1.13.1) and GENII (Version 1.485) as radiological consequence toolbox codes in the Department of Energy (DOE) Safety Software Central Registry. Using the same meteorological data file, scenarios involving a one-curie release of 239Pu were modeled in both HotSpot and MACCS2. Several sets of release conditions were modeled, and the results compared. In each case, input parameter specifications for each code were chosen to match one another as much as the codes would allow. The results from the two codes are in excellent agreement. Slight differences observed in results are explained by algorithm differences.

  9. Benchmarking of methods for genomic taxonomy

    DEFF Research Database (Denmark)

    Larsen, Mette Voldby; Cosentino, Salvatore; Lukjancenko, Oksana;

    2014-01-01

    One of the first issues that emerges when a prokaryotic organism of interest is encountered is the question of what it is--that is, which species it is. The 16S rRNA gene formed the basis of the first method for sequence-based taxonomy and has had a tremendous impact on the field of microbiology... (ii) Reads2Type, which searches for species-specific 50-mers in either the 16S rRNA gene or the gyrB gene (for the Enterobacteraceae family); (iii) the ribosomal multilocus sequence typing (rMLST) method, which samples up to 53 ribosomal genes; (iv) TaxonomyFinder, which is based on species...

  10. Generic Hockey-Stick Model for Estimating Benchmark Dose and Potency: Performance Relative to BMDS and Application to Anthraquinone

    OpenAIRE

    Kenneth T. Bogen

    2010-01-01

    Benchmark Dose Model software (BMDS), developed by the U.S. Environmental Protection Agency, involves a growing suite of models and decision rules now widely applied to assess noncancer and cancer risk, yet its statistical performance has never been examined systematically. As typically applied, BMDS also ignores the possibility of reduced risk at low doses (“hormesis”). A simpler, proposed Generic Hockey-Stick (GHS) model also estimates benchmark dose and potency, and additionally characteri...

  11. Modeling the emetic potencies of food-borne trichothecenes by benchmark dose methodology.

    Science.gov (United States)

    Male, Denis; Wu, Wenda; Mitchell, Nicole J; Bursian, Steven; Pestka, James J; Wu, Felicia

    2016-08-01

    Trichothecene mycotoxins commonly co-contaminate cereal products. They cause immunosuppression, anorexia, and emesis in multiple species, and dietary exposure to these toxins often occurs in mixtures. Hence, if it were possible to determine their relative toxicities and assign toxic equivalency factors (TEFs) to each trichothecene, risk management and regulation of these mycotoxins could become more comprehensive and simple. We used a mink emesis model to compare the toxicities of deoxynivalenol (DON), 3-acetyldeoxynivalenol, 15-acetyldeoxynivalenol, nivalenol (NIV), fusarenon-X (FX), HT-2 toxin, and T-2 toxin. These toxins were administered to mink via gavage and intraperitoneal (IP) injection. The United States Environmental Protection Agency (EPA) benchmark dose software was used to determine benchmark doses for each trichothecene. The relative potencies of these toxins were calculated as the ratios of their benchmark doses to that of DON. Our results showed that mink were more sensitive to orally administered toxins than to toxins administered by IP injection. T-2 and HT-2 toxins caused the greatest emetic responses, followed by FX, and then by DON, its acetylated derivatives, and NIV. Although these results provide key information on comparative toxicities, there is still a need for more animal-based studies focusing on various endpoints and combined effects of trichothecenes before TEFs can be established. PMID:27292944
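
    The relative-potency step is simple arithmetic once the benchmark doses are in hand. The sketch below mirrors the abstract's definition (each toxin's BMD divided by DON's) with placeholder doses, not the paper's estimates; under this convention a ratio below 1 marks a more potent emetic than DON.

        # BMD ratios relative to DON; all doses are placeholders for illustration
        bmd_mg_per_kg_bw = {"DON": 0.50, "3-ADON": 0.60, "15-ADON": 0.55,
                            "NIV": 0.45, "FX": 0.20, "HT-2": 0.06, "T-2": 0.05}
        ratios = {tox: b / bmd_mg_per_kg_bw["DON"]
                  for tox, b in bmd_mg_per_kg_bw.items()}
        for tox, r in sorted(ratios.items(), key=lambda kv: kv[1]):
            print(f"{tox:8s} BMD ratio vs DON: {r:.2f}")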

  12. Benchmarking Methods in the Regulation of Electricity Distribution System Operators

    OpenAIRE

    Janda, Karel; Krska, Stepan

    2014-01-01

    This paper examines the regulation of distribution system operators (DSOs), with a focus on the Czech electricity market. It presents an international benchmarking study based on data from 15 regional DSOs, including two Czech operators. The study examines the application of yardstick methods using data envelopment analysis (DEA) and stochastic frontier analysis (SFA). We find that the cost efficiency of each of the Czech DSOs is different, which indicates the suitability of introducing individual e...

  13. An international pooled analysis for obtaining a benchmark dose for environmental lead exposure in children

    DEFF Research Database (Denmark)

    Budtz-Jørgensen, Esben; Bellinger, David; Lanphear, Bruce; Grandjean, Philippe

    2013-01-01

    Lead is a recognized neurotoxicant, but estimating effects at the lowest measurable levels is difficult. An international pooled analysis of data from seven cohort studies reported an inverse and supra-linear relationship between blood lead concentrations and IQ scores in children. The lack of a clear threshold presents a challenge to the identification of an acceptable level of exposure. The benchmark dose (BMD) is defined as the dose that leads to a specific known loss. As an alternative to elusive thresholds, the BMD is being used increasingly by regulatory authorities. Using the pooled data, fitted models yielded lower confidence limits (BMDLs) of about 0.1-1.0 μg/dL for the dose leading to a loss of one IQ point. We conclude that current allowable blood lead concentrations need to be lowered and further prevention efforts are needed to protect children from lead toxicity.

  14. Uncertainties in Monte Carlo-based absorbed dose calculations for an experimental benchmark

    International Nuclear Information System (INIS)

    There is a need to verify the accuracy of general purpose Monte Carlo codes like EGSnrc, which are commonly employed for investigations of dosimetric problems in radiation therapy. A number of experimental benchmarks have been published to compare calculated values of absorbed dose to experimentally determined values. However, there is a lack of absolute benchmarks, i.e. benchmarks without involved normalization which may cause some quantities to be cancelled. Therefore, at the Physikalisch-Technische Bundesanstalt a benchmark experiment was performed, which aimed at the absolute verification of radiation transport calculations for dosimetry in radiation therapy. A thimble-type ionization chamber in a solid phantom was irradiated by high-energy bremsstrahlung and the mean absorbed dose in the sensitive volume was measured per incident electron of the target. The characteristics of the accelerator and experimental setup were precisely determined and the results of a corresponding Monte Carlo simulation with EGSnrc are presented within this study. For a meaningful comparison, an analysis of the uncertainty of the Monte Carlo simulation is necessary. In this study uncertainties with regard to the simulation geometry, the radiation source, transport options of the Monte Carlo code and specific interaction cross sections are investigated, applying the general methodology of the Guide to the expression of uncertainty in measurement. Besides studying the general influence of changes in transport options of the EGSnrc code, uncertainties are analyzed by estimating the sensitivity coefficients of various input quantities in a first step. Secondly, standard uncertainties are assigned to each quantity which are known from the experiment, e.g. uncertainties for geometric dimensions. Data for more fundamental quantities such as photon cross sections and the I-value of electron stopping powers are taken from literature. The significant uncertainty contributions are identified as

  15. Uncertainties in Monte Carlo-based absorbed dose calculations for an experimental benchmark

    Science.gov (United States)

    Renner, F.; Wulff, J.; Kapsch, R.-P.; Zink, K.

    2015-10-01

    There is a need to verify the accuracy of general purpose Monte Carlo codes like EGSnrc, which are commonly employed for investigations of dosimetric problems in radiation therapy. A number of experimental benchmarks have been published to compare calculated values of absorbed dose to experimentally determined values. However, there is a lack of absolute benchmarks, i.e. benchmarks without involved normalization which may cause some quantities to be cancelled. Therefore, at the Physikalisch-Technische Bundesanstalt a benchmark experiment was performed, which aimed at the absolute verification of radiation transport calculations for dosimetry in radiation therapy. A thimble-type ionization chamber in a solid phantom was irradiated by high-energy bremsstrahlung and the mean absorbed dose in the sensitive volume was measured per incident electron of the target. The characteristics of the accelerator and experimental setup were precisely determined and the results of a corresponding Monte Carlo simulation with EGSnrc are presented within this study. For a meaningful comparison, an analysis of the uncertainty of the Monte Carlo simulation is necessary. In this study uncertainties with regard to the simulation geometry, the radiation source, transport options of the Monte Carlo code and specific interaction cross sections are investigated, applying the general methodology of the Guide to the expression of uncertainty in measurement. Besides studying the general influence of changes in transport options of the EGSnrc code, uncertainties are analyzed by estimating the sensitivity coefficients of various input quantities in a first step. Secondly, standard uncertainties are assigned to each quantity which are known from the experiment, e.g. uncertainties for geometric dimensions. Data for more fundamental quantities such as photon cross sections and the I-value of electron stopping powers are taken from literature. The significant uncertainty contributions are identified as
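
    The uncertainty budget described here follows the GUM law of propagation: each input's standard uncertainty is scaled by its sensitivity coefficient, and uncorrelated contributions add in quadrature. A minimal sketch with placeholder numbers:

        import numpy as np

        def combined_std_uncertainty(sensitivities, std_uncertainties):
            """GUM law of propagation for uncorrelated inputs:
            u_c(y)^2 = sum_i (c_i * u_i)^2, with c_i = df/dx_i."""
            c = np.asarray(sensitivities, dtype=float)
            u = np.asarray(std_uncertainties, dtype=float)
            return float(np.sqrt(np.sum((c * u) ** 2)))

        # Placeholder sensitivity coefficients and standard uncertainties for
        # three inputs (e.g., a geometric dimension, a cross section, the I-value)
        print(combined_std_uncertainty([1.0, 0.4, -0.7], [0.010, 0.020, 0.005]))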

  16. An adaptive nonparametric method in benchmark analysis for bioassay and environmental studies

    OpenAIRE

    Bhattacharya, Rabi; Lin, Lizhen

    2010-01-01

    We present a novel nonparametric method for bioassay and benchmark analysis in risk assessment, which averages isotonic MLEs based on disjoint subgroups of dosages. The asymptotic theory for the methodology is derived, showing that the MISEs (mean integrated squared errors) of the estimates of both the dose-response curve F and its inverse F^{-1} achieve the optimal rate O(N^{-4/5}). Also, we compute the asymptotic distribution of the estimate \hat{\zeta}_p of the effective dosage \zeta_p = F^{-1}(p), which is shown ...
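
    A simplified sketch of the core idea, using a single isotonic fit rather than the paper's average over disjoint dosage subgroups: estimate the monotone dose-response curve F and invert it by interpolation to obtain the effective dosage zeta_p = F^{-1}(p). The data are hypothetical, and scikit-learn's isotonic regression stands in for the authors' MLE machinery.

        import numpy as np
        from sklearn.isotonic import IsotonicRegression

        # Hypothetical dosages and observed response fractions
        dose = np.array([0.0, 1.0, 2.0, 4.0, 8.0, 16.0])
        resp = np.array([0.02, 0.05, 0.04, 0.20, 0.45, 0.80])

        iso = IsotonicRegression(increasing=True)
        F_hat = iso.fit_transform(dose, resp)      # monotone estimate of F

        def effective_dosage(p):
            """zeta_p = F^{-1}(p), inverted by interpolation over the isotonic
            fit; flat segments are collapsed so the x-sequence is increasing."""
            vals, idx = np.unique(F_hat, return_index=True)
            return np.interp(p, vals, dose[idx])

        print("estimated ED50:", effective_dosage(0.5))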

  17. Methodical Fundamentals of Consumer Cooperatives Trade Enterprises Benchmarking Management

    OpenAIRE

    Dvirko Yuriy V.

    2012-01-01

    The article deals with the organizational and methodical fundamentals of benchmarking management for trade enterprises of Ukrainian consumer cooperatives. The authors offer their view of the essence, objects, aims, principles, tasks and models of benchmarking management under the conditions of modern business.

  18. Netherlands contribution to the EC project: Benchmark exercise on dose estimation in a regulatory context

    International Nuclear Information System (INIS)

    On request of the Netherlands government, FEL-TNO is developing a decision support system, with the acronym RAMBOS, for the assessment of the off-site consequences of an accident with hazardous materials. This is a user-friendly interactive computer program which uses very sophisticated graphical means. RAMBOS supports the emergency planning organization in two ways. Firstly, the risk to the residents in the surroundings of the accident is quantified in terms of severity and magnitude (number of casualties, etc.). Secondly, the consequences of countermeasures, such as sheltering and evacuation, are predicted. By evaluating several countermeasures the user can determine an optimum policy to reduce the impact of the accident. Within the framework of the EC project 'Benchmark exercise on dose estimation in a regulatory context', calculations were carried out with the RAMBOS system on request of the Ministry of Housing, Physical Planning and Environment. This report contains the results of these calculations. 3 refs.; 2 figs.; 10 tabs

  19. SU-E-I-32: Benchmarking Head CT Doses: A Pooled Vs. Protocol Specific Analysis of Radiation Doses in Adult Head CT Examinations

    International Nuclear Information System (INIS)

    Purpose: The aim of this study was to collect CT dose index data from adult head exams to establish benchmarks based on either: (a) values pooled from all head exams or (b) values for specific protocols. One part of this was to investigate differences in scan frequency and CT dose index data for inpatients versus outpatients. Methods: We collected CT dose index data (CTDIvol) from adult head CT examinations performed on nine scanners at our medical facilities from Jan 1st to Dec 31st, 2014. Four of these scanners were used for inpatients; the other five were used for outpatients. All scanners used Tube Current Modulation. We used X-ray dose management software to mine dose index data and evaluate CTDIvol for 15807 inpatients and 4263 outpatients undergoing Routine Brain, Sinus, Facial/Mandible, Temporal Bone, CTA Brain and CTA Brain-Neck protocols, and combined across all protocols. Results: For inpatients, Routine Brain series represented 84% of total scans performed. For outpatients, Sinus scans represented the largest fraction (36%). The CTDIvol (mean ± SD) across all head protocols was 39 ± 30 mGy (min-max: 3.3–540 mGy). The CTDIvol for Routine Brain was 51 ± 6.2 mGy (min-max: 36–84 mGy). The values for Sinus were 24 ± 3.2 mGy (min-max: 13–44 mGy) and for Facial/Mandible were 22 ± 4.3 mGy (min-max: 14–46 mGy). The mean CTDIvol for inpatients and outpatients was similar across protocols with one exception (CTA Brain-Neck). Conclusion: There is substantial dose variation when results from all protocols are pooled together; this is primarily a function of the differences in technical factors of the protocols themselves. When protocols are analyzed separately, there is much less variability. While analyzing pooled data affords some utility, reviewing protocols segregated by clinical indication provides greater opportunity for optimization and establishing useful benchmarks.

  20. SU-E-I-32: Benchmarking Head CT Doses: A Pooled Vs. Protocol Specific Analysis of Radiation Doses in Adult Head CT Examinations

    Energy Technology Data Exchange (ETDEWEB)

    Fujii, K [Graduate School of Medicine, Nagoya University, Nagoya, JP (Japan); UCLA School of Medicine, Los Angeles, CA (United States); Bostani, M; Cagnon, C; McNitt-Gray, M [UCLA School of Medicine, Los Angeles, CA (United States)

    2015-06-15

    Purpose: The aim of this study was to collect CT dose index data from adult head exams to establish benchmarks based on either: (a) values pooled from all head exams or (b) values for specific protocols. One part of this was to investigate differences in scan frequency and CT dose index data for inpatients versus outpatients. Methods: We collected CT dose index data (CTDIvol) from adult head CT examinations performed on nine scanners at our medical facilities from Jan 1st to Dec 31st, 2014. Four of these scanners were used for inpatients; the other five were used for outpatients. All scanners used Tube Current Modulation. We used X-ray dose management software to mine dose index data and evaluate CTDIvol for 15807 inpatients and 4263 outpatients undergoing Routine Brain, Sinus, Facial/Mandible, Temporal Bone, CTA Brain and CTA Brain-Neck protocols, and combined across all protocols. Results: For inpatients, Routine Brain series represented 84% of total scans performed. For outpatients, Sinus scans represented the largest fraction (36%). The CTDIvol (mean ± SD) across all head protocols was 39 ± 30 mGy (min-max: 3.3–540 mGy). The CTDIvol for Routine Brain was 51 ± 6.2 mGy (min-max: 36–84 mGy). The values for Sinus were 24 ± 3.2 mGy (min-max: 13–44 mGy) and for Facial/Mandible were 22 ± 4.3 mGy (min-max: 14–46 mGy). The mean CTDIvol for inpatients and outpatients was similar across protocols with one exception (CTA Brain-Neck). Conclusion: There is substantial dose variation when results from all protocols are pooled together; this is primarily a function of the differences in technical factors of the protocols themselves. When protocols are analyzed separately, there is much less variability. While analyzing pooled data affords some utility, reviewing protocols segregated by clinical indication provides greater opportunity for optimization and establishing useful benchmarks.

  1. Calculations of EURACOS iron benchmark experiment using the HYBRID method

    International Nuclear Information System (INIS)

    In this paper, the HYBRID method is used in the calculations of the iron benchmark experiment at the EURACOS-II device. The saturation activities of the 32S(n,p)32P reaction at different depths in an iron block are computed with ENDF/B-IV data to compare with the measurements. At the outer layers of the iron block, the HYBRID calculation gives increasingly higher results than the VITAMIN-C multigroup calculation. With the adjustment of the two- to one-dimensional ratios, the HYBRID results agree with the measurements to within 10% at most penetration depths, a considerable improvement over the VITAMIN-C multigroup results. The development of a collapsing method for the HYBRID cross sections provides a more direct and practical way of using the HYBRID method in the two-dimensional calculations. It is observed that half of the window effect is smeared in the collapsing treatment, but it still provides a better cross-section set than the VITAMIN-C cross sections for the deep-penetration calculations

  2. CompaRNA: a server for continuous benchmarking of automated methods for RNA secondary structure prediction

    OpenAIRE

    Puton, T.; Kozlowski, L. P.; Rother, K. M.; Bujnicki, J. M.

    2013-01-01

    We present a continuous benchmarking approach for the assessment of RNA secondary structure prediction methods implemented in the CompaRNA web server. As of 3 October 2012, the performance of 28 single-sequence and 13 comparative methods has been evaluated on RNA sequences/structures released weekly by the Protein Data Bank. We also provide a static benchmark generated on RNA 2D structures derived from the RNAstrand database. Benchmarks on both data sets offer insight into the relative perfor...

  3. Is the assumption of normality or log-normality for continuous response data critical for benchmark dose estimation?

    International Nuclear Information System (INIS)

    Continuous responses (e.g. body weight) are widely used in risk assessment for determining the benchmark dose (BMD), which is used to derive a U.S. EPA reference dose. One critical question that is not often addressed in dose-response assessments is whether to model the continuous data as normally or log-normally distributed. Additionally, if log-normality is assumed and only summarized response data (i.e., mean ± standard deviation) are available, as is usual in the peer-reviewed literature, the BMD can only be approximated. In this study, using the "hybrid" method and the relative deviation approach, we first evaluate six representative continuous dose-response datasets reporting individual animal responses to investigate the impact on BMD/BMDL estimates of (1) the distribution assumption and (2) the use of summarized versus individual animal data when a log-normal distribution is assumed. We also conduct simulation studies evaluating model fits to various known distributions to investigate whether the distribution assumption influences BMD/BMDL estimates. Our results indicate that BMDs estimated using the hybrid method are more sensitive to the distribution assumption than counterpart BMDs estimated using the relative deviation approach. The choice of distribution assumption has limited impact on the BMD/BMDL estimates when the within-dose-group variance is small, while the log-normality assumption is a better choice for the relative deviation method when data are more skewed, because of its appropriateness in describing the relationship between mean and standard deviation. Additionally, the results suggest that the use of summarized data versus individual response data to characterize log-normal distributions has minimal impact on BMD estimates. - Highlights: • We investigate to what extent the distribution assumption can affect BMD estimates. • Both real data analysis and a simulation study are conducted. • BMDs estimated using the hybrid method are more...
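
    For readers unfamiliar with the "hybrid" method referenced above, the sketch below shows its mechanics for a normally distributed, decreasing continuous endpoint: fix a cutoff at the P0 tail of the control distribution, then solve for the dose at which the extra risk of crossing that cutoff equals the benchmark response. All parameter values are illustrative, and a linear mean function is assumed for simplicity.

        import numpy as np
        from scipy.stats import norm
        from scipy.optimize import brentq

        # Hybrid BMD for a decreasing endpoint (e.g., body weight); all numbers
        # are made up: mean(d) = mu0 + beta*d with constant standard deviation
        mu0, beta, sigma = 100.0, -0.8, 8.0
        P0, BMR = 0.01, 0.10                 # background rate and benchmark response

        cut = norm.ppf(P0, loc=mu0, scale=sigma)   # cutoff: 1st percentile of controls

        def extra_risk(d):
            # probability of falling below the cutoff at dose d, rescaled as
            # extra risk over background: (P(d) - P0) / (1 - P0)
            p = norm.cdf(cut, loc=mu0 + beta * d, scale=sigma)
            return (p - P0) / (1 - P0)

        bmd = brentq(lambda d: extra_risk(d) - BMR, 1e-6, 100.0)
        print("hybrid BMD:", bmd)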

  4. Comparison of Benchmarking Methods with and without a Survey Error Model

    OpenAIRE

    Chen, Zhao-Guo; Ho Wu, Ka

    2006-01-01

    For a target socio-economic variable, two sources of data with different precisions and collection frequencies may be available. Typically, the less frequent data (e.g., an annual report or census) are more reliable and are considered as benchmarks. The process of using them to adjust the more frequent and less reliable data (e.g., repeated monthly surveys) is called benchmarking. In this paper, we show the relationship among three types of benchmarking methods in the literature, namely the De...

  5. A design of benchmarking method for assessing performance of e-Government systems

    OpenAIRE

    Mushi, Cleopa John

    2008-01-01

    This paper is initial work towards developing an e-Government benchmarking model that is user-centric. To achieve this goal, public service delivery is discussed first, including the transition to online public service delivery and the need for providing public services using electronic media. Two major e-Government benchmarking methods are critically discussed, and the need to develop a standardized benchmarking model that is user-centric is presented. To properly articulate user requir...

  6. Piloting a Process Maturity Model as an e-Learning Benchmarking Method

    Science.gov (United States)

    Petch, Jim; Calverley, Gayle; Dexter, Hilary; Cappelli, Tim

    2007-01-01

    As part of a national e-learning benchmarking initiative of the UK Higher Education Academy, the University of Manchester is carrying out a pilot study of a method to benchmark e-learning in an institution. The pilot was designed to evaluate the operational viability of a method based on the e-Learning Maturity Model developed at the University of…

  7. Benchmarking Methods and Data Sets for Ligand Enrichment Assessment in Virtual Screening

    OpenAIRE

    Xia, Jie; Tilahun, Ermias Lemma; Reid, Terry-Elinor; Zhang, Liangren; Wang, Xiang Simon

    2014-01-01

    Retrospective small-scale virtual screening (VS) based on benchmarking data sets has been widely used to estimate ligand enrichments of VS approaches in the prospective (i.e. real-world) efforts. However, the intrinsic differences of benchmarking sets to the real screening chemical libraries can cause biased assessment. Herein, we summarize the history of benchmarking methods as well as data sets and highlight three main types of biases found in benchmarking sets, i.e. “analogue bias”, “artif...

  8. Using the fuzzy linear regression method to benchmark the energy efficiency of commercial buildings

    International Nuclear Information System (INIS)

    Highlights: ► The fuzzy linear regression method is used for developing benchmarking systems. ► The systems can be used to benchmark the energy efficiency of commercial buildings. ► The resulting benchmarking model can be used by public users. ► The resulting benchmarking model can capture the fuzzy nature of input-output data. -- Abstract: Benchmarking systems from a sample of reference buildings need to be developed to conduct benchmarking processes for the energy efficiency of commercial buildings. However, not all benchmarking systems can be adopted by public users (i.e., other non-reference building owners) because of the different methods used in developing such systems. An approach for benchmarking the energy efficiency of commercial buildings using statistical regression analysis to normalize other factors, such as management performance, was developed in a previous work. However, the field data given by experts can be regarded as a distribution of possibility. Thus, the previous work may not be adequate to handle such fuzzy input-output data, and a number of fuzzy structures cannot be fully captured by statistical regression analysis. This paper therefore proposes the use of fuzzy linear regression analysis to develop a benchmarking process whose resulting model can be used by public users. An illustrative example is given as well.
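
    One common formulation of fuzzy linear regression, Tanaka's possibilistic model (which may differ from the authors' variant), fits symmetric triangular coefficients by linear programming: minimize the total spread subject to every observation lying inside the model's band. A sketch with toy building data, all values hypothetical:

        import numpy as np
        from scipy.optimize import linprog

        def fuzzy_linear_regression(x, y, h=0.5):
            """Tanaka-style possibilistic regression: coefficients (a_j, c_j)
            with centers a and nonnegative spreads c, chosen so each y_i lies
            inside the (1-h)-level band; total spread is minimized via an LP."""
            X = np.column_stack([np.ones(len(x)), x])   # add intercept column
            n, p = X.shape
            A = np.abs(X)
            # decision vector z = [a (free), c (>= 0)]
            cost = np.concatenate([np.zeros(p), A.sum(axis=0)])
            # y_i <= a'x_i + (1-h) c'|x_i|  ->  -X a - (1-h) A c <= -y
            # y_i >= a'x_i - (1-h) c'|x_i|  ->   X a - (1-h) A c <=  y
            G = np.block([[-X, -(1 - h) * A], [X, -(1 - h) * A]])
            b = np.concatenate([-y, y])
            bounds = [(None, None)] * p + [(0, None)] * p
            res = linprog(cost, A_ub=G, b_ub=b, bounds=bounds, method="highs")
            return res.x[:p], res.x[p:]                 # centers, spreads

        # Toy data: annual energy use vs. floor area (illustrative numbers only)
        area = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
        energy = np.array([2.1, 3.9, 6.2, 7.8, 10.3])
        centers, spreads = fuzzy_linear_regression(area, energy)
        print("centers:", centers, "spreads:", spreads)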

  9. Three anisotropic benchmark problems for adaptive finite element methods

    Czech Academy of Sciences Publication Activity Database

    Šolín, Pavel; Čertík, O.; Korous, L.

    2013-01-01

    Vol. 219, No. 13 (2013), pp. 7286-7295. ISSN 0096-3003. R&D Projects: GA AV ČR IAA100760702. Institutional support: RVO:61388998. Keywords: benchmark problem * anisotropic solution * boundary layer. Subject RIV: BA - General Mathematics. Impact factor: 1.600, year: 2013

  10. SMORN-III benchmark test on reactor noise analysis methods

    International Nuclear Information System (INIS)

    A computational benchmark test was performed in conjunction with the Third Specialists Meeting on Reactor Noise (SMORN-III), which was held in Tokyo, Japan, in October 1981. This report summarizes the results of the test as well as the work done in preparation for it. (author)

  11. Methodical aspects of benchmarking using in Consumer Cooperatives trade enterprises activity

    Directory of Open Access Journals (Sweden)

    Yu.V. Dvirko

    2013-03-01

    Full Text Available The aim of the article is to substantiate the main types of benchmarking used in the activity of trade enterprises of Consumer Cooperatives, to highlight the main advantages and drawbacks of benchmarking, and to present the authors' view on the expediency of using the highlighted forms of benchmarking organization in the activity of trade enterprises of Consumer Cooperatives in Ukraine. The results of the analysis: Under modern conditions of the development of economic relations and business globalization, big companies, enterprises and organizations realize the necessity of thorough and profound research into the best achievements of market participants, with their further use in their own activity. Benchmarking is the process of borrowing competitive advantages and increasing the competitiveness of Consumer Cooperatives trade enterprises by researching, learning and adapting the best methods of realizing business processes, with the purpose of increasing their operating effectiveness and better satisfying societal needs. The main goals of benchmarking in Consumer Cooperatives are the following: increasing the level of needs satisfaction through higher product quality, shorter goods transportation terms and better service quality; strengthening enterprise potential, competitiveness and image; and generating and implementing new ideas and innovative decisions in trade enterprise activity. The advantages of using benchmarking in the activity of Consumer Cooperatives trade enterprises are the following: adapting the parameters of enterprise functioning to market demands; gradually identifying and removing inadequacies that obstruct enterprise development; borrowing the best methods of further enterprise development; gaining competitive advantages; technological innovation; and employee motivation. The authors' classification of benchmarking is represented by the following components: by cycle durability: strategic, operative...

  12. Benchmark Experiment of Dose Rate Distributions Around the Gamma Knife Medical Apparatus

    International Nuclear Information System (INIS)

    Dose rate measurements around a gamma knife apparatus were performed by using an ionization chamber. Analyses have been performed by using the Monte Carlo code MCNP-5. The nuclear library used for the dose rate distribution of 60Co was MCPLIB04. The calculation model was prepared with a high degree of fidelity, such as the position of each Cobalt source and shielding materials. Comparisons between measured results and calculated ones were performed, and a very good agreement was observed. It is concluded that the Monte Carlo calculation method with its related nuclear data library is very effective for such a complicated radiation oncology apparatus.

  13. Benchmark Experiment of Dose Rate Distributions Around the Gamma Knife Medical Apparatus

    Science.gov (United States)

    Oishi, K.; Kosako, K.; Kobayashi, Y.; Sonoki, I.

    2014-06-01

    Dose rate measurements around a gamma knife apparatus were performed by using an ionization chamber. Analyses have been performed by using the Monte Carlo code MCNP-5. The nuclear library used for the dose rate distribution of 60Co was MCPLIB04. The calculation model was prepared with a high degree of fidelity, such as the position of each Cobalt source and shielding materials. Comparisons between measured results and calculated ones were performed, and a very good agreement was observed. It is concluded that the Monte Carlo calculation method with its related nuclear data library is very effective for such a complicated radiation oncology apparatus.

  14. Benchmark Experiment of Dose Rate Distributions Around the Gamma Knife Medical Apparatus

    Energy Technology Data Exchange (ETDEWEB)

    Oishi, K., E-mail: koji_oishi@shimz.co.jp [Institute of Technology, Shimizu Corporation, Tokyo (Japan); Kosako, K. [Institute of Technology, Shimizu Corporation, Tokyo (Japan); Kobayashi, Y.; Sonoki, I. [Giken Kogyo Co., Ltd., Tokyo (Japan)

    2014-06-15

    Dose rate measurements around a gamma knife apparatus were performed by using an ionization chamber. Analyses have been performed by using the Monte Carlo code MCNP-5. The nuclear library used for the dose rate distribution of {sup 60}Co was MCPLIB04. The calculation model was prepared with a high degree of fidelity, such as the position of each Cobalt source and shielding materials. Comparisons between measured results and calculated ones were performed, and a very good agreement was observed. It is concluded that the Monte Carlo calculation method with its related nuclear data library is very effective for such a complicated radiation oncology apparatus.

  15. Benchmarking Gas Path Diagnostic Methods: A Public Approach

    Science.gov (United States)

    Simon, Donald L.; Bird, Jeff; Davison, Craig; Volponi, Al; Iverson, R. Eugene

    2008-01-01

    Recent technology reviews have identified the need for objective assessments of engine health management (EHM) technology. The need is two-fold: technology developers require relevant data and problems to design and validate new algorithms and techniques while engine system integrators and operators need practical tools to direct development and then evaluate the effectiveness of proposed solutions. This paper presents a publicly available gas path diagnostic benchmark problem that has been developed by the Propulsion and Power Systems Panel of The Technical Cooperation Program (TTCP) to help address these needs. The problem is coded in MATLAB (The MathWorks, Inc.) and coupled with a non-linear turbofan engine simulation to produce "snap-shot" measurements, with relevant noise levels, as if collected from a fleet of engines over their lifetime of use. Each engine within the fleet will experience unique operating and deterioration profiles, and may encounter randomly occurring relevant gas path faults including sensor, actuator and component faults. The challenge to the EHM community is to develop gas path diagnostic algorithms to reliably perform fault detection and isolation. An example solution to the benchmark problem is provided along with associated evaluation metrics. A plan is presented to disseminate this benchmark problem to the engine health management technical community and invite technology solutions.

  16. Generic Hockey-Stick Model for Estimating Benchmark Dose and Potency: Performance Relative to BMDS and Application to Anthraquinone.

    Science.gov (United States)

    Bogen, Kenneth T

    2011-01-01

    Benchmark Dose Model software (BMDS), developed by the U.S. Environmental Protection Agency, involves a growing suite of models and decision rules now widely applied to assess noncancer and cancer risk, yet its statistical performance has never been examined systematically. As typically applied, BMDS also ignores the possibility of reduced risk at low doses ("hormesis"). A simpler, proposed Generic Hockey-Stick (GHS) model also estimates benchmark dose and potency, and additionally characterizes and tests objectively for hormetic trend. Using 100 simulated dichotomous-data sets (5 dose groups, 50 animals/group) sampled from each of seven risk functions, GHS estimators performed about as well as or better than BMDS estimators, and a surprising observation was that BMDS mis-specified all six non-hormetic sampled risk functions most or all of the time. When applied to data on rodent tumors induced by the genotoxic chemical carcinogen anthraquinone (AQ), the GHS model yielded significantly negative estimates of the net potency exhibited by the combined rodent data, suggesting that, consistent with the anti-leukemogenic properties of AQ and structurally similar quinones, environmental AQ exposures are not likely to increase net cancer risk. In addition to its simplicity and flexibility, the GHS approach offers a unified, consistent approach to quantifying environmental chemical risk. PMID:21731536
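
    As a hedged sketch of the hockey-stick idea (not Bogen's GHS code), the fragment below fits P(d) = p0 + s*max(d - t, 0) to dichotomous data by maximum likelihood and reads off the benchmark dose for a 10% extra risk; the dose groups, counts, starting values and the BMR choice are invented.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    doses = np.array([0.0, 1.0, 2.0, 4.0, 8.0])   # hypothetical dose groups
    n = np.array([50, 50, 50, 50, 50])            # animals per group
    x = np.array([2, 3, 6, 14, 30])               # responders per group

    def risk(d, p0, s, t):
        """Hockey-stick: flat background below threshold t, linear rise above."""
        return np.clip(p0 + s * np.maximum(d - t, 0.0), 1e-9, 1 - 1e-9)

    def negloglik(theta):
        p = risk(doses, *theta)
        return -np.sum(x * np.log(p) + (n - x) * np.log(1 - p))

    fit = minimize(negloglik, x0=[0.05, 0.05, 1.0],
                   bounds=[(1e-6, 0.5), (1e-6, 5.0), (0.0, doses.max())])
    p0, s, t = fit.x
    bmr = 0.10                       # benchmark response, as extra risk
    bmd = t + bmr * (1 - p0) / s     # solve s*(BMD - t)/(1 - p0) = bmr
    print(f"threshold = {t:.2f}, BMD(10% extra risk) = {bmd:.2f}")
    ```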

  17. High integrity reference trajectory for benchmarking land navigation data fusion methods

    OpenAIRE

    Betaille, David; CHAPELON, Antoine; Lusetti, Benoît; KAIS, Mikaël; MILLESCAMPS, Damien

    2007-01-01

    In the framework of a joint initiative of several French laboratories investigating land navigation, the authors have designed an architecture and test protocol for benchmarking data fusion methods applied to a collection of sensors covering the complete range of quality. Special attention has been given to sensor data timestamping, since the benchmarking is based on the comparison of computed trajectories with the reference trajectory, so called because its compu...

  18. Review of California and National Methods for Energy Performance Benchmarking of Commercial Buildings

    Energy Technology Data Exchange (ETDEWEB)

    Matson, Nance E.; Piette, Mary Ann

    2005-09-05

    This benchmarking review has been developed to support the benchmarking planning and tool development under discussion by the California Energy Commission (CEC), Lawrence Berkeley National Laboratory (LBNL) and others in response to the Governor's Executive Order S-20-04 (2004). The Executive Order sets a goal of benchmarking and improving the energy efficiency of California's existing commercial building stock, and requires the CEC to propose "a simple building efficiency benchmarking system for all commercial buildings in the state". This report summarizes and compares two currently available commercial building energy-benchmarking tools. One tool is the U.S. Environmental Protection Agency's Energy Star National Energy Performance Rating System, a national regression-based benchmarking model (referred to in this report as Energy Star). The second is Lawrence Berkeley National Laboratory's Cal-Arch, a California-based distributional model (referred to as Cal-Arch). Before Cal-Arch was developed in 2002, several other benchmarking tools were available to California consumers, but none were based solely on California data. The Energy Star and Cal-Arch benchmarking tools both provide California with unique and useful methods to benchmark the energy performance of California's buildings. Rather than determine which model is "better", the purpose of this report is to understand and compare the underlying data, information systems, assumptions, and outcomes of each model.

  19. Framework for benchmarking online retailing performance using fuzzy AHP and TOPSIS method

    Directory of Open Access Journals (Sweden)

    M. Ahsan Akhtar Hasin

    2012-08-01

    Full Text Available Due to the increasing penetration of internet connectivity, on-line retail is growing from the pioneer phase to increasing integration within people's lives and companies' normal business practices. In this increasingly competitive environment, on-line retail service providers require a systematic and structured approach to gain a cutting edge over rivals. Thus, the use of benchmarking has become indispensable for accomplishing the superior performance needed to support on-line retail service providers. This paper uses the fuzzy analytic hierarchy process (FAHP) approach to support a generic on-line retail benchmarking process. Critical success factors for on-line retail service have been identified from a structured questionnaire and the literature, and prioritized using fuzzy AHP. Using these critical success factors, the performance level of ORENET, an on-line retail service provider, is benchmarked along with four other on-line service providers using the TOPSIS method. Based on the benchmark, their relative ranking is also illustrated.
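
    Since the abstract describes a standard TOPSIS ranking step, a minimal sketch is given below; the five providers, four criteria and the weight vector (standing in for the fuzzy-AHP output) are invented for illustration.

    ```python
    import numpy as np

    # rows: 5 hypothetical on-line retail providers; columns: 4 criteria scores
    X = np.array([[7, 8, 6, 9],
                  [6, 7, 8, 7],
                  [9, 6, 7, 8],
                  [5, 9, 6, 6],
                  [8, 7, 9, 7]], dtype=float)
    w = np.array([0.4, 0.3, 0.2, 0.1])     # criteria weights, e.g. from fuzzy AHP

    V = w * X / np.linalg.norm(X, axis=0)  # weighted, vector-normalized matrix
    ideal, anti = V.max(axis=0), V.min(axis=0)   # all criteria taken as benefits
    d_pos = np.linalg.norm(V - ideal, axis=1)    # distance to ideal solution
    d_neg = np.linalg.norm(V - anti, axis=1)     # distance to anti-ideal solution
    closeness = d_neg / (d_pos + d_neg)          # relative closeness in [0, 1]
    print("ranking, best first:", np.argsort(-closeness))
    ```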

  20. Dose estimation by biological methods

    International Nuclear Information System (INIS)

    Human beings are exposed to strong artificial radiation sources mainly in two forms: the first concerns occupationally exposed personnel (POE) and the second concerns persons who require radiological treatment. A third, less common form is exposure through accidents. In all these conditions it is very important to estimate the absorbed dose. Classical biological dosimetry is based on dicentric analysis. The present work is part of research to validate the fluorescence in situ hybridization (FISH) technique, which allows the analysis of chromosome aberrations. (Author)

  1. Biological dosimetry - Dose estimation method using biomakers

    International Nuclear Information System (INIS)

    The estimation of individual radiation dose is an important step in radiation risk assessment. In the case of a radiation incident or accident, physical dosimetry methods sometimes cannot be used for calculating the individual radiation dose, and a complementary method such as biological dosimetry becomes necessary. This method is based on quantitative biomarkers specifically induced by ionizing radiation, such as dicentric chromosomes, translocations, micronuclei... in human peripheral blood lymphocytes. The basis of the biological dosimetry method is the close relationship between these biomarkers and the absorbed dose or dose rate; in vitro and in vivo effects are similar, so it is possible to generate the calibration dose-effect curve in vitro for in vivo assessment. Possibilities and perspectives for performing the biological dosimetry method in the radiation protection area are presented in this report. (author)
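
    As a concrete illustration of how such a calibration curve is used, the sketch below inverts the customary linear-quadratic dicentric yield curve Y(D) = c + aD + bD^2 to estimate an acute low-LET dose from a scored dicentric frequency; the coefficients are placeholders, not a real laboratory calibration.

    ```python
    import math

    c, a, b = 0.001, 0.03, 0.06   # background, Gy^-1, Gy^-2 (hypothetical values)

    def estimate_dose(dicentrics, cells):
        """Invert Y = c + a*D + b*D**2 for the observed yield Y = dicentrics/cells."""
        y = dicentrics / cells
        disc = a * a + 4.0 * b * (y - c)
        return (-a + math.sqrt(disc)) / (2.0 * b)

    print(f"estimated dose: {estimate_dose(120, 500):.2f} Gy")
    ```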

  2. Benchmarking with the multigroup diffusion high-order response matrix method

    International Nuclear Information System (INIS)

    The benchmarking capabilities of the high-order response matrix eigenvalue method, which was developed more than a decade ago, are demonstrated by means of the numerical analysis of a variety of two-dimensional Cartesian geometry light-water reactor test problems. These problems are typical of those generally used for the benchmarking of coarse-mesh (nodal) diffusion methods and the numerical results show that the high-order response matrix eigenvalue method is well suited to be used as an alternative to fine-mesh finite-difference and refined mesh nodal methods for the purpose of generating reference solutions to such problems. (author)

  3. Simplified CCSD(T)-F12 methods: theory and benchmarks.

    Science.gov (United States)

    Knizia, Gerald; Adler, Thomas B; Werner, Hans-Joachim

    2009-02-01

    The simple and efficient CCSD(T)-F12x approximations (x = a,b) we proposed in a recent communication [T. B. Adler, G. Knizia, and H.-J. Werner, J. Chem. Phys. 127, 221106 (2007)] are explained in more detail and extended to open-shell systems. Extensive benchmark calculations are presented, which demonstrate great improvements in basis set convergence for a wide variety of applications. These include reaction energies of both open- and closed-shell reactions, atomization energies, electron affinities, ionization potentials, equilibrium geometries, and harmonic vibrational frequencies. For all these quantities, results better than the AV5Z quality are obtained already with AVTZ basis sets, and usually AVDZ treatments reach at least the conventional AVQZ quality. For larger molecules, the additional cost for these improvements is only a few percent of the time for a standard CCSD(T) calculation. For the first time ever, total reaction energies with chemical accuracy are obtained using valence-double-zeta basis sets. PMID:19206955

  4. Methods of bone marrow dose calculation

    International Nuclear Information System (INIS)

    Several methods of bone marrow dose calculation for photon irradiation were analysed. After a critical analysis, the author proposes the adoption, by the Instituto de Radioprotecao e Dosimetria/CNEN, of Rosenstein's method for dose calculations in radiodiagnostic examinations and Kramer's method in the case of occupational irradiation. It was verified by Eckerman and Simpson that, for monoenergetic gamma emitters uniformly distributed within the bone mineral of the skeleton, the dose at the bone surface can be several times higher than the dose in the skeleton. Accordingly, the calculation of tissue-air ratios for bone surfaces, for some irradiation geometries and photon energies, is also proposed for inclusion in Rosenstein's method for organ dose calculation in radiodiagnostic examinations. (Author)

  5. Radiation transport benchmarks for simple geometries with void regions using the spherical harmonics method

    International Nuclear Information System (INIS)

    In 2001, an international cooperation on 3D radiation transport benchmarks for simple geometries with void regions was carried out under the leadership of E. Sartori of OECD/NEA. There were contributions from eight institutions: six contributions used the discrete ordinates method and only two used the spherical harmonics method. The 3D spherical harmonics program FFT3, based on the finite Fourier transformation method, has been improved for this presentation, and benchmark solutions for 2D and 3D simple geometries with void regions obtained by FFT2 and FFT3 are given, showing fairly good accuracy. (authors)

  6. Gamma dose estimation with the thermoluminescence method

    Energy Technology Data Exchange (ETDEWEB)

    Kumamoto, Yoshikazu [National Inst. of Radiological Sciences, Chiba (Japan)

    1994-03-01

    Absorbed dose in radiation accidents can be estimated with the aid of materials which are able to record dose and were exposed during the accident. Quartz in the bricks and tiles used to construct buildings has thermoluminescent properties: such materials emit light when heated after exposure to radiation. Quartz and ruby have been used for the estimation of dose. The requirements for such dosemeters include: (1) a kiln temperature high enough that all thermoluminescent energy accrued from natural radiation is erased; (2) negligible fading of thermoluminescent energy after exposure to radiation; (3) determination of the dose from natural radiation received after manufacture of the materials; (4) knowledge of the geometry of the place from which the materials are collected. Bricks or tiles are crushed in a mortar, sieved into size fractions, washed with HF, HCl, alcohol, acetone and water, and given a known calibration dose. The pre-dose method and the high-temperature method are used. In the former, glow curves with and without the calibration dose are recorded. In the latter, glow peaks at 110 °C with and without the calibration dose are recorded after heating the quartz up to 500 °C. In this report, the method of sample preparation, the measurement procedures and the results of dose estimation in the atomic bombing, an iridium-192 accident and the Chernobyl accident are described. (author).

  7. Application of the hybrid diffusion-transport spatial homogenization method to a high temperature test reactor benchmark problem

    International Nuclear Information System (INIS)

    The recently developed Hybrid Diffusion-Transport Spatial Homogenization (DTH) Method was previously tested on a benchmark problem typical of a boiling water reactor. In this paper, the DTH method is tested in a 1-D benchmark problem based on the Japanese High Temperature Test Reactor (HTTR). This acts as a verification of the method for a reactor that is optically thinner than the original BWR test benchmark. (author)

  8. Methods of assessing total doses integrated across pathways

    International Nuclear Information System (INIS)

    future years. C) Construct: individuals with high rates of consumption or occupancy across all pathways are used to derive rates for each pathway, which are applied in future years. D) Top-Two: high and average consumption and occupancy rates are derived for each pathway; doses can be calculated for all combinations where two pathways are taken at high rates and the remainder at average rates. E) Profiling: a profile is derived by calculating consumption and occupancy rates for each pathway for individuals who exhibit high rates for a single pathway; other profiles may be built by repeating this for other pathways. The total dose is the highest dose for any profile, and that profile becomes known as the critical group. Method A was used as a benchmark, with methods B-E compared according to the previously specified criteria. Overall, the profiling method of total dose calculation was adopted due to its favourable overall comparison with the individual method and the homogeneity of the critical group selected. (authors)
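
    A minimal sketch of the adopted profiling method (method E) follows; the pathways, rates and dose-per-unit factors are invented purely to show the mechanics of selecting the critical group.

    ```python
    pathways = ["fish", "shellfish", "beach occupancy"]
    # profile -> consumption/occupancy rate for each pathway (hypothetical units)
    profiles = {
        "high fish":      [100.0, 10.0, 200.0],
        "high shellfish": [20.0, 40.0, 150.0],
        "high occupancy": [30.0, 5.0, 800.0],
    }
    dose_per_unit = [2e-6, 5e-6, 1e-7]   # Sv per unit rate, hypothetical factors

    # total dose per profile; the highest-dose profile is the critical group
    totals = {name: sum(r * f for r, f in zip(rates, dose_per_unit))
              for name, rates in profiles.items()}
    critical = max(totals, key=totals.get)
    print(totals, "-> critical group:", critical)
    ```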

  9. Benchmark calculations for evaluation methods of gas volumetric leakage rate

    International Nuclear Information System (INIS)

    The containment function of radioactive material transport casks is essential for safe transportation, to prevent radioactive materials from being released into the environment. Regulations such as the IAEA standards determine the limit of radioactivity that may be released. Since it is not practical for leakage tests to measure directly the radioactivity released from a package, gas volumetric leakage rates are proposed in the ANSI N14.5 and ISO standards. In our previous works, gas volumetric leakage rates for several kinds of gas from various leaks were measured and two evaluation methods, 'a simple evaluation method' and 'a strict evaluation method', were proposed based on the results. The simple evaluation method considers the friction loss of laminar flow with the expansion effect. The strict evaluation method considers an exit loss in addition to the friction loss. In this study, four worked examples were completed for an assumed large spent fuel transport cask (Type B package) with wet or dry capacity and at three transport conditions: normal transport with intact fuels or failed fuels, and an accident in transport. The standard leakage rates and criteria for two kinds of leak test were calculated for each example by each evaluation method. The following observations are made based upon the calculations and evaluations: the choked flow model of the ANSI method greatly overestimates the criteria for tests; the laminar flow models of both the ANSI and ISO methods slightly overestimate the criteria for tests; the above two results are within the design margin for ordinary transport conditions, and all methods are useful for the evaluation; for severe conditions such as failed fuel transportation, attention should be paid when applying the choked flow model of the ANSI method. (authors)
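
    The 'simple' laminar-flow evaluation can be illustrated with the textbook isothermal Hagen-Poiseuille result for a single capillary leak; this generic formula is a stand-in for, not a reproduction of, the ANSI N14.5/ISO working equations, which add further terms such as the exit loss of the 'strict' method.

    ```python
    import math

    def laminar_leak_rate(D, L, mu, Pu, Pd, Pref):
        """Volumetric leak rate at reference pressure Pref, SI units (m, Pa, Pa*s).

        Isothermal, compressible laminar flow through a capillary of diameter D
        and length L: Q = pi * D**4 * (Pu**2 - Pd**2) / (256 * mu * L * Pref).
        """
        return math.pi * D**4 * (Pu**2 - Pd**2) / (256.0 * mu * L * Pref)

    # hypothetical leak: 10 um diameter, 5 mm long capillary, helium at ~20 degC
    Q = laminar_leak_rate(D=10e-6, L=5e-3, mu=1.96e-5,
                          Pu=2.0e5, Pd=1.0e5, Pref=1.013e5)
    print(f"leakage rate: {Q:.3e} m^3/s at reference pressure")
    ```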

  10. Benchmarking the inelastic neutron scattering soil carbon method

    Science.gov (United States)

    The herein described inelastic neutron scattering (INS) method of measuring soil carbon was based on a new procedure for extracting the net carbon signal (NCS) from the measured gamma spectra and determination of the average carbon weight percent (AvgCw%) in the upper soil layer (~8 cm). The NCS ext...

  11. Benchmark measurements and simulations of dose perturbations due to metallic spheres in proton beams

    International Nuclear Information System (INIS)

    Monte Carlo simulations are increasingly used for dose calculations in proton therapy due to their inherent accuracy. However, dosimetric deviations have been found with Monte Carlo codes when high density materials are present in the proton beamline. The purpose of this work was to quantify the magnitude of dose perturbation caused by metal objects. We did this by comparing measurements and Monte Carlo predictions of dose perturbations caused by the presence of small metal spheres in several clinical proton therapy beams, as functions of proton beam range and drift space. The Monte Carlo codes MCNPX, GEANT4 and Fast Dose Calculator (FDC) were used. Generally good agreement was found between measurements and Monte Carlo predictions, with the average difference within 5% and the maximum difference within 17%. The modification of the multiple Coulomb scattering model in the MCNPX code yielded an improvement in accuracy and provided the best overall agreement with measurements. Our results confirmed that Monte Carlo codes are well suited for predicting multiple Coulomb scattering in proton therapy beams when short drift spaces are involved. - Highlights: • We compared measurements and Monte Carlo predictions of dose perturbations caused by metal objects in proton beams. • Different Monte Carlo codes were used, including MCNPX, GEANT4 and Fast Dose Calculator. • Good agreement was found between measurements and Monte Carlo simulations. • The modification of the multiple Coulomb scattering model in the MCNPX code yielded improved accuracy. • Our results confirmed that Monte Carlo codes are well suited for predicting multiple Coulomb scattering in proton therapy.

  12. Benchmarking ortholog identification methods using functional genomics data

    OpenAIRE

    Hulsen, T.; Huynen, M.A.; de Vlieg, J; Groenen, P.M.A.

    2006-01-01

    BACKGROUND: The transfer of functional annotations from model organism proteins to human proteins is one of the main applications of comparative genomics. Various methods are used to analyze cross-species orthologous relationships according to an operational definition of orthology. Often the definition of orthology is incorrectly interpreted as a prediction of proteins that are functionally equivalent across species, while in fact it only defines the existence of a common ancestor for a gene...

  13. Some benchmark shielding problems solved by the finite element method

    International Nuclear Information System (INIS)

    Some of the test cases on bulk shields for the two-dimensional codes MARC, TRIMOM and FELICIT are described. These codes use spherical harmonic expansions for neutron directions and a finite element grid over space. MARC was developed primarily as a reactor physics code with a finite element option and it assumes isotropic scattering. TRIMOM is being developed as a general purpose shielding code for anisotropic scatterers. FELICIT is being developed as a module of TRIMOM for cylindrical systems. All three codes employ continuous trial functions at present. Exploratory work on the use of discontinuous trial functions is described. Discontinuous trial functions permit the splicing of methods which use different angular expansions, so that, for example, transport theory can be used where it is necessary and diffusion theory can be used elsewhere. (author)

  14. Semiempirical Quantum-Chemical Orthogonalization-Corrected Methods: Benchmarks for Ground-State Properties.

    Science.gov (United States)

    Dral, Pavlo O; Wu, Xin; Spörkel, Lasse; Koslowski, Axel; Thiel, Walter

    2016-03-01

    The semiempirical orthogonalization-corrected OMx methods (OM1, OM2, and OM3) go beyond the standard MNDO model by including additional interactions in the electronic structure calculation. When augmented with empirical dispersion corrections, the resulting OMx-Dn approaches offer a fast and robust treatment of noncovalent interactions. Here we evaluate the performance of the OMx and OMx-Dn methods for a variety of ground-state properties using a large and diverse collection of benchmark sets from the literature, with a total of 13035 original and derived reference data. Extensive comparisons are made with the results from established semiempirical methods (MNDO, AM1, PM3, PM6, and PM7) that also use the NDDO (neglect of diatomic differential overlap) integral approximation. Statistical evaluations show that the OMx and OMx-Dn methods outperform the other methods for most of the benchmark sets. PMID:26771261

  15. DFT methods for conjugated materials: From benchmarks to functionals

    Science.gov (United States)

    Sears, John; Bredas, Jean-Luc

    2012-02-01

    From a theoretical standpoint, many of the problems of interest in the study of pi-conjugated materials for organic electronics applications pose a particular challenge for many modern density functional theory methods. Systematic errors have been observed, for instance, in the description of charge-transfer excitations at donor/acceptor interfaces, in linear and non-linear polarizabilities, as well as in the geometric and electronic properties of conjugated polymers [1,2]. We will discuss recent results in our lab aimed at: (i) understanding the sources of error for some of these problems; (ii) addressing these errors using tuned long-range corrected functionals; and (iii) using these results to guide the development of state-of-the-art methodologies in a new open-source DFT code. [1] J. S. Sears, T. Korzdorfer, C. R. Zhang, and J. L. Bredas, J. Chem. Phys. 135, 151103 (2011). [2] T. Korzdorfer, J. S. Sears, C. Sutton, and J. L. Bredas, J. Chem. Phys., accepted.

  16. Solution of the WFNDEC 2015 eddy current benchmark with surface integral equation method

    Science.gov (United States)

    Demaldent, Edouard; Miorelli, Roberto; Reboud, Christophe; Theodoulidis, Theodoros

    2016-02-01

    In this paper, a numerical solution of the WFNDEC 2015 eddy current benchmark is presented. In particular, the Surface Integral Equation (SIE) method has been employed for numerically solving the benchmark problem. The SIE method represents an effective and efficient alternative to standard numerical solvers like the Finite Element Method (FEM) when electromagnetic fields need to be calculated in problems involving homogeneous media. The formulation of the SIE method solves the electromagnetic problem by meshing only the surfaces of the media instead of the complete media volume, as done in FEM. The surface meshing makes it possible to describe the problem with a smaller number of unknowns than FEM. This property translates directly into a gain in CPU time efficiency.

  17. Dose mapping simulation using the MCNP code for the Syrian gamma irradiation facility and benchmarking

    International Nuclear Information System (INIS)

    Highlights: • MCNP4C was used to calculate the gamma ray dose rate spatial distribution for the SGIF. • Measurement of the gamma ray dose rate spatial distribution using the chlorobenzene dosimeter was conducted as well. • Good agreement was noticed between the calculated and measured results. • The maximum relative differences were less than 7%, 4% and 4% in the x, y and z directions, respectively. - Abstract: A three dimensional model of the Syrian gamma irradiation facility (SGIF) is developed in this paper to calculate the gamma ray dose rate spatial distribution in the irradiation room at the 60Co source board using the MCNP-4C code. Measurement of the gamma ray dose rate spatial distribution using the chlorobenzene dosimeter is conducted as well to compare the calculated and measured results. Good agreement is noticed between the calculated and measured results, with maximum relative differences of less than 7%, 4% and 4% in the x, y and z directions, respectively. This agreement indicates that the established model is an accurate representation of the SGIF and can be used in the future to perform design calculations for a new irradiation facility

  18. Concordance of Transcriptional and Apical Benchmark Dose Levels for Conazole-Induced Liver Effects in Mice

    Science.gov (United States)

    The ability to anchor chemical class-based gene expression changes to phenotypic lesions and to describe these changes as a function of dose and time informs mode of action determinations and improves quantitative risk assessments. Previous transcription-based microarra...

  19. Validation and benchmarking of calculation methods for photon and neutron transport at cask configurations

    International Nuclear Information System (INIS)

    The reliability of calculation tools for evaluating dose rates appearing behind multi-layered shields is important with regard to the certification of transport and storage casks. Current benchmark databases like SINBAD do not offer such configurations because they were developed for reactor and accelerator purposes. For this reason, a benchmark suite for validating Monte Carlo transport codes has been developed, based on our own experiments, which contain dose rates measured at different distances and levels from a transport and storage cask, and on a public benchmark. The analysed and summarised experiments include a 60Co point source located in a cylindrical cask, a 252Cf line source shielded by iron and polyethylene (PE), and a bare 252Cf source moderated by PE in a concrete labyrinth with different inserted shielding materials to quantify neutron streaming effects on measured dose rates. In detail, not only MCNP (version 5.1.6) but also MAVRIC, included in the SCALE 6.1 package, have been compared for photon and neutron transport. Aiming at low deviations between calculation and measurement requires precise source term specification and exact measurements of the dose rates, which have been evaluated carefully, including known uncertainties. In MAVRIC, different source descriptions with respect to the group structure of the nuclear data library are analysed for the calculation of gamma dose rates, because the energy lines of 60Co can only be modelled in groups. In total, the comparison shows that MCNP fits very well to the measurements, within up to two standard deviations, and that MAVRIC behaves similarly under the prerequisite that the source model can be optimized. (author)

  20. RNAontheBENCH: computational and empirical resources for benchmarking RNAseq quantification and differential expression methods.

    Science.gov (United States)

    Germain, Pierre-Luc; Vitriolo, Alessandro; Adamo, Antonio; Laise, Pasquale; Das, Vivek; Testa, Giuseppe

    2016-06-20

    RNA sequencing (RNAseq) has become the method of choice for transcriptome analysis, yet no consensus exists as to the most appropriate pipeline for its analysis, with current benchmarks suffering important limitations. Here, we address these challenges through a rich benchmarking resource harnessing (i) two RNAseq datasets including ERCC ExFold spike-ins; (ii) Nanostring measurements of a panel of 150 genes on the same samples; (iii) a set of internal, genetically-determined controls; (iv) a reanalysis of the SEQC dataset; and (v) a focus on relative quantification (i.e. across-samples). We use this resource to compare different approaches to each step of RNAseq analysis, from alignment to differential expression testing. We show that methods providing the best absolute quantification do not necessarily provide good relative quantification across samples, that count-based methods are superior for gene-level relative quantification, and that the new generation of pseudo-alignment-based software performs as well as established methods, at a fraction of the computing time. We also assess the impact of library type and size on quantification and differential expression analysis. Finally, we have created a R package and a web platform to enable the simple and streamlined application of this resource to the benchmarking of future methods. PMID:27190234

  1. RNAontheBENCH: computational and empirical resources for benchmarking RNAseq quantification and differential expression methods

    KAUST Repository

    Germain, Pierre-Luc

    2016-06-20

    RNA sequencing (RNAseq) has become the method of choice for transcriptome analysis, yet no consensus exists as to the most appropriate pipeline for its analysis, with current benchmarks suffering important limitations. Here, we address these challenges through a rich benchmarking resource harnessing (i) two RNAseq datasets including ERCC ExFold spike-ins; (ii) Nanostring measurements of a panel of 150 genes on the same samples; (iii) a set of internal, genetically-determined controls; (iv) a reanalysis of the SEQC dataset; and (v) a focus on relative quantification (i.e. across-samples). We use this resource to compare different approaches to each step of RNAseq analysis, from alignment to differential expression testing. We show that methods providing the best absolute quantification do not necessarily provide good relative quantification across samples, that count-based methods are superior for gene-level relative quantification, and that the new generation of pseudo-alignment-based software performs as well as established methods, at a fraction of the computing time. We also assess the impact of library type and size on quantification and differential expression analysis. Finally, we have created a R package and a web platform to enable the simple and streamlined application of this resource to the benchmarking of future methods.

  2. A time-implicit numerical method and benchmarks for the relativistic Vlasov–Ampere equations

    International Nuclear Information System (INIS)

    We present a time-implicit numerical method to solve the relativistic Vlasov–Ampere system of equations on a two dimensional phase space grid. The time-splitting algorithm we use allows the generalization of the work presented here to higher dimensions keeping the linear aspect of the resulting discrete set of equations. The implicit method is benchmarked against linear theory results for the relativistic Landau damping for which analytical expressions using the Maxwell-Jüttner distribution function are derived. We note that, independently from the shape of the distribution function, the relativistic treatment features collective behaviours that do not exist in the nonrelativistic case. The numerical study of the relativistic two-stream instability completes the set of benchmarking tests

  3. A time-implicit numerical method and benchmarks for the relativistic Vlasov–Ampere equations

    Energy Technology Data Exchange (ETDEWEB)

    Carrié, Michael, E-mail: mcarrie2@unl.edu; Shadwick, B. A., E-mail: shadwick@mailaps.org [Department of Physics and Astronomy, University of Nebraska-Lincoln, Lincoln, Nebraska 68588 (United States)

    2016-01-15

    We present a time-implicit numerical method to solve the relativistic Vlasov–Ampere system of equations on a two dimensional phase space grid. The time-splitting algorithm we use allows the generalization of the work presented here to higher dimensions keeping the linear aspect of the resulting discrete set of equations. The implicit method is benchmarked against linear theory results for the relativistic Landau damping for which analytical expressions using the Maxwell-Jüttner distribution function are derived. We note that, independently from the shape of the distribution function, the relativistic treatment features collective behaviours that do not exist in the nonrelativistic case. The numerical study of the relativistic two-stream instability completes the set of benchmarking tests.

  4. Benchmarking and Performance Measurement.

    Science.gov (United States)

    Town, J. Stephen

    This paper defines benchmarking and its relationship to quality management, describes a project which applied the technique in a library context, and explores the relationship between performance measurement and benchmarking. Numerous benchmarking methods contain similar elements: deciding what to benchmark; identifying partners; gathering…

  5. Household Electricity Demand Forecasting -- Benchmarking State-of-the-Art Methods

    OpenAIRE

    Veit, Andreas; Goebel, Christoph; Tidke, Rohit; Doblander, Christoph; Jacobsen, Hans-Arno

    2014-01-01

    The increasing use of renewable energy sources with variable output, such as solar photovoltaic and wind power generation, calls for Smart Grids that effectively manage flexible loads and energy storage. The ability to forecast consumption at different locations in distribution systems will be a key capability of Smart Grids. The goal of this paper is to benchmark state-of-the-art methods for forecasting electricity demand on the household level across different granularities and time scales ...

  6. Comparison of in vitro and in vivo clastogenic potency based on benchmark dose analysis of flow cytometric micronucleus data.

    Science.gov (United States)

    Bemis, Jeffrey C; Wills, John W; Bryce, Steven M; Torous, Dorothea K; Dertinger, Stephen D; Slob, Wout

    2016-05-01

    The application of flow cytometry as a scoring platform for both in vivo and in vitro micronucleus (MN) studies has enabled the efficient generation of high quality datasets suitable for comprehensive assessment of dose-response. Using this information, it is possible to obtain precise estimates of the clastogenic potency of chemicals. We illustrate this by estimating the in vivo and the in vitro potencies of seven model clastogenic agents (melphalan, chlorambucil, thiotepa, 1,3-propane sultone, hydroxyurea, azathioprine and methyl methanesulfonate) by deriving BMDs using freely available BMD software (PROAST). After exposing male rats for 3 days with up to nine dose levels of each individual chemical, peripheral blood samples were collected on Day 4. These chemicals were also evaluated for in vitro MN induction by treating TK6 cells with up to 20 concentrations in quadruplicate. In vitro MN frequencies were determined via flow cytometry using a 96-well plate autosampler. The estimated in vitro and in vivo BMDs were found to correlate to each other. The correlation showed considerable scatter, as may be expected given the complexity of the whole animal model versus the simplicity of the cell culture system. Even so, the existence of the correlation suggests that information on the clastogenic potency of a compound can be derived from either whole animal studies or cell culture-based models of chromosomal damage. We also show that the choice of the benchmark response, i.e. the effect size associated with the BMD, is not essential in establishing the correlation between both systems. Our results support the concept that datasets derived from comprehensive genotoxicity studies can provide quantitative dose-response metrics. Such investigational studies, when supported by additional data, might then contribute directly to product safety investigations, regulatory decision-making and human risk assessment. PMID:26049158

  7. Calculation of the IAEA ADS neutronics benchmark (stage-1) (2D discrete ordinates method)

    International Nuclear Information System (INIS)

    To study the neutronics of the ADS system, a set of computational software based on the discrete ordinates method was selected and established. The set is tested through an IAEA benchmark, and in the test process the understanding and use of this software set are improved. The benchmark is analysed. The calculations include the effective multiplication factor keff, the required strength of the spallation neutron source for 1.5 GW thermal power, the distribution of power density and the spectrum index, and the void effect at the beginning of life (BOL), as well as the spatial and time-dependent density distribution of various nuclides (actinides and fission products) during the burn-up process. The results are given in figures and tables and are consistent with calculations made abroad. The conclusion is that this software set can be applied to optimization design studies for the ADS system

  8. Implementation of convergence judgment method to OECD/NEA benchmark problems

    International Nuclear Information System (INIS)

    To improve the slow convergence of the fission source distribution, fission source acceleration methods using the fission matrix eigenvector have been developed and incorporated into ordinary Monte Carlo calculations. At ICNC2003, a convergence judgment method involving an effective initial acceleration procedure was proposed by the authors in a poster presentation; here, the proposed convergence judgment method is practically applied to the OECD/NEA source convergence benchmark problems. In this application process, some difficulties arise and are investigated to derive a solution to them. (author)

  9. Computational efficiency and accuracy of the fission collision separation method in 3D HTTR benchmark problems

    International Nuclear Information System (INIS)

    A fission collision separation method has recently been developed to significantly improve the computational efficiency of the COMET response coefficient generator. In this work, the accuracy and efficiency of the new response coefficient generation method are tested in 3D HTTR benchmark problems at both the lattice and core levels. In lattice calculations, the surface-to-surface and fission density response coefficients computed by the new method are compared with those directly calculated by the Monte Carlo method. In whole core calculations, the eigenvalues and bundle/pin fission densities predicted by COMET, based on the response coefficient libraries generated by the fission collision separation method, are compared with those based on the interpolation method as well as with the Monte Carlo reference solutions. These comparisons show that the new response coefficient generation method is significantly (about 3 times) faster than the interpolation method, while its accuracy is close to that of the interpolation method. (author)

  10. Determining the sensitivity of Data Envelopment Analysis method used in airport benchmarking

    Directory of Open Access Journals (Sweden)

    Mircea BOSCOIANU

    2013-03-01

    Full Text Available In the last decade there have been some important changes in the airport industry, caused by the liberalization of the air transportation market. Until recently, airports were considered infrastructure elements and were evaluated only by traffic values or their maximum capacity. A gradual orientation towards commercial operation led to the need for other, more efficiency-oriented ways of evaluation. The existing methods for assessing the efficiency of other production units were not suitable for airports due to the specific features and high complexity of airport operations. In recent years some papers have proposed Data Envelopment Analysis as a method for assessing operational efficiency in order to conduct benchmarking. This method offers the possibility of dealing with a large number of variables of different types, which represents its main advantage and also recommends it as a good benchmarking tool for airport management. The goal of this paper is to determine the sensitivity of this method in relation to its inputs and outputs. A Data Envelopment Analysis is conducted for 128 airports worldwide, in both input- and output-oriented measures, and the results are analysed against some input and output variations. Possible weaknesses of using DEA for assessing airport performance are revealed and analysed against the method's advantages.
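
    For readers unfamiliar with the mechanics of DEA, the sketch below solves the input-oriented CCR envelopment problem with a linear-programming solver; the five airports, two inputs and two outputs are invented and far smaller than the paper's 128-airport sample.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # 5 hypothetical airports: inputs (runways, staff), outputs (pax M, cargo Mt)
    X = np.array([[2, 3, 2, 4, 3], [500, 800, 450, 900, 700]], dtype=float)
    Y = np.array([[10, 18, 9, 20, 16], [1.0, 2.5, 0.8, 2.2, 2.0]], dtype=float)
    m, n = X.shape
    s = Y.shape[0]

    for j0 in range(n):
        # variables: [theta, lambda_1..lambda_n]; minimize theta subject to
        # X @ lam <= theta * x0 and Y @ lam >= y0, with lam >= 0
        c = np.r_[1.0, np.zeros(n)]
        A_ub = np.vstack([np.c_[-X[:, [j0]], X],          # X lam - theta x0 <= 0
                          np.c_[np.zeros((s, 1)), -Y]])   # -Y lam <= -y0
        b_ub = np.r_[np.zeros(m), -Y[:, j0]]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                      bounds=[(None, None)] + [(0, None)] * n)
        print(f"airport {j0}: efficiency = {res.x[0]:.3f}")
    ```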

  11. Initial Hybrid Method for Analyzing Software Estimation, Benchmarking and Risk Assessment Using Design of Software

    Directory of Open Access Journals (Sweden)

    J. F. Vijay

    2009-01-01

    Full Text Available Problem statement: Estimation models in software engineering are used to predict some important attributes of future entities such as development effort, software reliability and programmer productivity. Among these models, those estimating software effort have motivated considerable research in recent years. Approach: In this study we discussed available work on effort estimation methods and proposed a hybrid method for the effort estimation process. As an initial approach to the hybrid technology, we developed a simple approach to software effort estimation based on use case models: the "use case points" method. This method is not new, but it has not become popular although it is easy to understand and implement. We therefore investigated this promising method, which was inspired by function point analysis. Results: Reliable estimates can be calculated with our method in a short time with the aid of a spreadsheet. Conclusion: We are planning to extend its applicability to estimating risk and benchmarking measures.
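
    A back-of-the-envelope use case points calculation, using the commonly cited Karner weights, looks as follows; the counts, the technical/environmental factor totals and the 20 h/UCP productivity factor are illustrative assumptions, not values from the study.

    ```python
    actor_weights = {"simple": 1, "average": 2, "complex": 3}
    usecase_weights = {"simple": 5, "average": 10, "complex": 15}

    actors = {"simple": 2, "average": 3, "complex": 1}      # hypothetical counts
    usecases = {"simple": 4, "average": 6, "complex": 2}

    uaw = sum(actor_weights[k] * v for k, v in actors.items())       # actor points
    uucw = sum(usecase_weights[k] * v for k, v in usecases.items())  # use case points
    uucp = uaw + uucw                  # unadjusted use case points

    tcf = 0.6 + 0.01 * 32              # technical complexity factor (TFactor = 32)
    ecf = 1.4 - 0.03 * 15              # environmental factor (EFactor = 15)
    ucp = uucp * tcf * ecf
    effort_hours = ucp * 20            # commonly quoted 20 person-hours per UCP
    print(f"UCP = {ucp:.1f}, estimated effort = {effort_hours:.0f} h")
    ```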

  12. Physical methods for dose determinations in mammography

    International Nuclear Information System (INIS)

    There is a small but significant risk of radiation-induced carcinogenesis associated with mammography. High quality mammography is the best method of early breast cancer detection. Besides image quality as a basic requirement for effective diagnosis, radiation protection principles require the radiation dose to the imaged tissue to be as low as is compatible with the required image quality. Glandular tissue is the most radiosensitive, thus the evaluation of the Mean Glandular Dose (MGD) is the most relevant factor for the estimation of radiation risk as well as for the comparison of the performance of different mammographic machines. MGD was estimated using the entrance surface air kerma at the breast surface, Kf, measured free in air, and appropriate conversion factors. Under evaluation were eight mammographic machines at the Institute of Radiology, Skopje, and the mammographic machines at the health centers in Vevchani, Bitola, Prilep, Negotino and Shtip. The estimated values of MGD do not exceed the European reference level (<2 mGy), but this cannot be generalized to all mammography units in Macedonia until they are examined. In the near future all mammography units will be subject to QC tests and dose measurements. (Author)
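
    The MGD estimate described above reduces to multiplying the measured entrance surface air kerma by tabulated conversion factors (for instance the g, c, s factors of Dance et al.); the numerical values below are placeholders, not results from this survey.

    ```python
    def mean_glandular_dose(K_f, g, c=1.0, s=1.0):
        """MGD (mGy) from entrance surface air kerma K_f (mGy) and conversion factors."""
        return K_f * g * c * s

    # hypothetical exposure: 7 mGy entrance kerma; g = 0.19 for the assumed breast
    # thickness and beam quality; c = s = 1.0 for a standard breast composition
    print(f"MGD = {mean_glandular_dose(7.0, 0.19):.2f} mGy")
    ```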

  13. Reliable B cell epitope predictions: impacts of method development and improved benchmarking.

    Directory of Open Access Journals (Sweden)

    Jens Vindahl Kringelum

    Full Text Available The interaction between antibodies and antigens is one of the most important immune system mechanisms for clearing infectious organisms from the host. Antibodies bind to antigens at sites referred to as B-cell epitopes. Identification of the exact location of B-cell epitopes is essential in several biomedical applications such as rational vaccine design, development of disease diagnostics and immunotherapeutics. However, experimental mapping of epitopes is resource intensive, making in silico methods an appealing complementary approach. To date, the reported performance of methods for in silico mapping of B-cell epitopes has been moderate. Several issues regarding the evaluation data sets may, however, have led to the performance values being underestimated: rarely have all potential epitopes been mapped on an antigen, and antibodies are generally raised against the antigen in a given biological context, not against the antigen monomer. Improper handling of these aspects leads to many artificial false positive predictions and hence to incorrectly low performance values. To demonstrate the impact of proper benchmark definitions, we here present an updated version of the DiscoTope method incorporating a novel spatial neighborhood definition and half-sphere exposure as the surface measure. Compared to other state-of-the-art prediction methods, DiscoTope-2.0 displayed improved performance both in cross-validation and in independent evaluations. Using DiscoTope-2.0, we assessed the impact on performance when using proper benchmark definitions. For 13 proteins in the training data set where sufficient biological information was available to make a proper benchmark redefinition, the average AUC performance was improved from 0.791 to 0.824. Similarly, the average AUC performance on an independent evaluation data set improved from 0.712 to 0.727. Our results thus demonstrate that, given proper benchmark definitions, B-cell epitope prediction methods achieve

  14. The benchmark approach applied to a 28-day toxicity study with Rhodorsil Silane in rats. the impact of increasing the number of dose groups.

    Science.gov (United States)

    Woutersen, R A; Jonker, D; Stevenson, H; te Biesebeek, J D; Slob, W

    2001-07-01

    The OECD study design, aimed at obtaining a no-observed-adverse-effect level (NOAEL), may be suboptimal for deriving a benchmark dose. Therefore the present subacute (28-day) study was carried out to evaluate a multiple dose study design and to compare the results with the common OECD design. Seven groups of 10 female rats each were intragastrically administered corn oil without (controls) or with 50, 150, 300, 450, 600 or 750 mg Rhodorsil Silane/kg body weight/day, once daily (7 days/week) for 4 weeks. From the complete dataset, two subsets were selected, one representing a study design with seven dose groups of five animals (7 x 5 design), the other representing a study design with four dose groups of 10 animals (4 x 10 design). Under the conditions of the present study, the NOAEL for Rhodorsil Silane 198 was assessed at 50 mg/kg body weight/day, based on the data of the 4 x 10 design. The benchmark approach resulted in a benchmark dose of 19 mg/kg body weight/day, based on the data of the 7 x 5 design. Comparison of the results demonstrated that the multiple dose (7 x 5) design led to a more reliable result than the OECD (4 x 10) design, despite the smaller total number of animals. The dose-response analysis showed that at "the NOAEL" the effect on relative spleen weight was larger than 10%, illustrating that at the NOAEL, adverse effects may occur. PMID:11397516

  15. An energy transfer method for 4D Monte Carlo dose calculation.

    Science.gov (United States)

    Siebers, Jeffrey V; Zhong, Hualiang

    2008-09-01

    This article presents a new method for four-dimensional Monte Carlo dose calculations which properly addresses dose mapping for deforming anatomy. The method, called the energy transfer method (ETM), separates the particle transport and particle scoring geometries: particle transport takes place in the typical rectilinear coordinate system of the source image, while energy deposition scoring takes place in a desired reference image via deformable image registration. Dose is the energy deposited per unit mass in the reference image. ETM has been implemented into DOSXYZnrc and compared with a conventional dose interpolation method (DIM) on deformable phantoms. For voxels whose contents merge in the deforming phantom, the doses calculated by ETM are exactly the same as an analytical solution, in contrast to the DIM, which has an average 1.1% dose discrepancy in the beam direction, with a maximum error of 24.9% found in the penumbra of a 6 MV beam. The observed DIM error persists even if voxel subdivision is used. The ETM is computationally efficient and will be useful for 4D dose addition and for benchmarking alternative 4D dose addition algorithms. PMID:18841862
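
    The scoring step of ETM can be pictured with the toy fragment below: each energy deposit computed on the source grid is pushed through the deformation field and accumulated on the reference grid, and dose is energy per voxel mass. The deformation function and the deposit list are stand-ins for a real transport engine and image registration, not the DOSXYZnrc implementation.

    ```python
    import numpy as np

    shape = (4, 4, 4)
    energy_ref = np.zeros(shape)          # energy scored on the reference image
    mass_ref = np.full(shape, 1.0e-3)     # voxel masses in kg (hypothetical)
    voxel_size = 1.0                      # cm, uniform grid

    def deform(pos):
        """Deformable registration, source -> reference (toy: rigid shift)."""
        return pos + np.array([0.0, 0.0, 0.5])

    deposits = [(np.array([1.2, 2.7, 0.4]), 3.1e-13),   # (source position, joules)
                (np.array([2.9, 1.1, 3.2]), 1.8e-13)]

    for pos, e in deposits:
        idx = (deform(pos) // voxel_size).astype(int)
        idx = tuple(np.clip(idx, 0, np.array(shape) - 1))
        energy_ref[idx] += e              # score in the reference geometry

    dose = energy_ref / mass_ref          # Gy = J/kg, per reference voxel
    print("maximum voxel dose:", dose.max(), "Gy")
    ```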

  16. Benchmarking of epithermal methods in the lattice-physics code EPRI-CELL

    International Nuclear Information System (INIS)

    The epithermal cross section shielding methods used in the lattice physics code EPRI-CELL (E-C) have been extensively studied to determine its major approximations and to examine the sensitivity of computed results to these approximations. The study has resulted in several improvements to the original methodology. These include: treatment of the external moderator source with intermediate resonance (IR) theory, development of a new Dancoff factor expression to account for clad interactions, development of a new method for treating resonance interference, and application of a generalized least squares method to compute best-estimate values for the Bell factor and group-dependent IR parameters. The modified E-C code with its new ENDF/B-V cross section library is tested for several numerical benchmark problems. Integral parameters computed by E-C are compared with those obtained with point-cross section Monte Carlo calculations, and E-C fine group cross sections are benchmarked against point-cross section discrete ordinates calculations. It is found that the code modifications improve agreement between E-C and the more sophisticated methods. E-C shows excellent agreement on the integral parameters and usually agrees within a few percent on fine-group, shielded cross sections

  17. Piping benchmark problems. Volume 1. Dynamic analysis uniform support motion response spectrum method

    Energy Technology Data Exchange (ETDEWEB)

    Bezler, P.; Hartzman, M.; Reich, M.

    1980-08-01

    A set of benchmark problems and solutions have been developed for verifying the adequacy of computer programs used for dynamic analysis and design of nuclear piping systems by the Response Spectrum Method. The problems range from simple to complex configurations which are assumed to experience linear elastic behavior. The dynamic loading is represented by uniform support motion, assumed to be induced by seismic excitation in three spatial directions. The solutions consist of frequencies, participation factors, nodal displacement components and internal force and moment components. Solutions to associated anchor point motion static problems are not included.

  18. Fast neutron fluence calculation benchmark analysis based on 3D MC-SN bidirectional coupling method

    International Nuclear Information System (INIS)

    The Monte Carlo (MC)-discrete ordinates (SN) bidirectional coupling method is an efficient approach to the shielding calculation of large, complex nuclear facilities. A test calculation was carried out by applying the MC-SN bidirectional coupling method to the shielding calculation of a large PWR nuclear facility. Based on the characteristics of the NUREG/CR-6115 PWR benchmark model issued by the NRC, a 3D Monte Carlo code was employed to accurately simulate the structure from the core to the thermal shield, along with a dedicated model of the calculation parts located in the pressure vessel, while TORT was used for the calculation from the thermal shield to the second downcomer region. The transformation between the particle probability distribution of MC and the angular flux density of SN was realized by an interface program to achieve the coupled calculation. The calculation results were compared with the MCNP and DORT solutions of the benchmark report, and satisfactory agreement was obtained. The feasibility of using the method to solve the shielding problem of a large complex nuclear facility was preliminarily demonstrated. (authors)

  19. A simple analytical method for heterogeneity corrections in low dose rate prostate brachytherapy

    Science.gov (United States)

    Hueso-González, Fernando; Vijande, Javier; Ballester, Facundo; Perez-Calatayud, Jose; Siebert, Frank-André

    2015-07-01

    In low energy brachytherapy, the presence of tissue heterogeneities contributes significantly to the discrepancies observed between the treatment plan and the delivered dose. In this work, we present a simplified analytical dose calculation algorithm for heterogeneous tissue. We compare it with Monte Carlo computations and assess its suitability for integration in clinical treatment planning systems. The algorithm, named RayStretch, is based on the classic equivalent path length method and TG-43 reference data. Analytical and Monte Carlo dose calculations using Penelope2008 are compared for a benchmark case: a prostate patient with calcifications. The results show a remarkable agreement between simulation and algorithm, the latter having, in addition, a high calculation speed. The proposed analytical model is compatible with clinical real-time treatment planning systems based on TG-43 consensus datasets for improving dose calculation and treatment quality in heterogeneous tissue. Moreover, the algorithm is applicable to any type of heterogeneity.
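
    The equivalent path length idea behind RayStretch can be sketched as follows: the geometric source-to-point distance is replaced by a density-scaled distance before the water dose rate is looked up. The crude 1/r^2 dose-rate stand-in and all numbers are illustrative; the authors' algorithm uses full TG-43 reference data.

    ```python
    def equivalent_path_length(segments):
        """segments: list of (length_cm, relative_density) along the ray."""
        return sum(length * rho for length, rho in segments)

    def point_dose_rate(r_cm, Sk=1.0, Lambda=0.965):
        """Crude point-source stand-in for a TG-43 lookup: ~ Sk * Lambda / r**2."""
        return Sk * Lambda / r_cm**2

    # ray crossing 0.5 cm water, 0.3 cm calcification (rel. density ~1.5), 0.7 cm water
    r_eff = equivalent_path_length([(0.5, 1.0), (0.3, 1.5), (0.7, 1.0)])
    print(f"effective radius {r_eff:.2f} cm ->"
          f" dose rate {point_dose_rate(r_eff):.3f} (relative units)")
    ```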

  20. The MIRD method of estimating absorbed dose

    Energy Technology Data Exchange (ETDEWEB)

    Weber, D.A.

    1991-01-01

    The estimate of absorbed radiation dose from internal emitters provides the information required to assess the radiation risk associated with the administration of radiopharmaceuticals for medical applications. The MIRD (Medical Internal Radiation Dose) system of dose calculation provides a systematic approach to combining the biologic distribution and clearance data of radiopharmaceuticals with the physical properties of radionuclides to obtain dose estimates. This tutorial reviews the MIRD schema, derives the equations used to calculate absorbed dose, and shows how the schema can be applied to estimate dose from radiopharmaceuticals used in nuclear medicine.
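
    The computational core of the MIRD schema is compact: the absorbed dose to a target region is the sum, over source regions, of the cumulated activity in each source region multiplied by the corresponding S value. A minimal sketch with hypothetical numbers (not tabulated MIRD data):

```python
import math

# MIRD schema: D(target) = sum_h A_tilde(h) * S(target <- h), where A_tilde
# is the cumulated activity (Bq*s) in source region h and S is the absorbed
# dose per unit cumulated activity (Gy per Bq*s). All values are hypothetical.
A0 = 3.7e8               # administered activity, Bq
half_life_s = 6 * 3600   # assumed 6 h effective half-life in the source organ
f_uptake = 0.3           # fraction of activity taken up by the source organ

# Cumulated activity for mono-exponential clearance: A_tilde = f * A0 / lambda_eff
lam_eff = math.log(2) / half_life_s
A_tilde = f_uptake * A0 / lam_eff    # Bq*s

S = 2.0e-15              # hypothetical S value, Gy/(Bq*s), target <- source
dose_Gy = A_tilde * S
print(f"cumulated activity = {A_tilde:.3e} Bq*s, absorbed dose = {dose_Gy:.3e} Gy")
```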

  1. Methodical aspects of benchmarking using in Consumer Cooperatives trade enterprises activity

    OpenAIRE

    Yu.V. Dvirko

    2013-01-01

    The aim of the article. The aim of this article is to substantiate the main types of benchmarking in the activity of Consumer Cooperatives trade enterprises; to highlight the main advantages and drawbacks of using benchmarking; and to present the authors' view on the expediency of applying the highlighted forms of benchmarking organization in the activity of Consumer Cooperatives trade enterprises in Ukraine. The results of the analysis. Under modern conditions of economic relations development and business globalizatio...

  2. A simple method for solar energetic particle event dose forecasting

    International Nuclear Information System (INIS)

    Bayesian, non-linear regression models or artificial neural networks are used to make predictions of dose and dose rate time profiles using calculated dose and/or dose rates soon after event onset. Both methods match a new event to similar historical events before making predictions for the new event. The currently developed Bayesian method categorizes a new event based on calculated dose rates up to 5 h (categorization window) after event onset. Categories are determined using ranges of dose rates from previously observed SEP events. These categories provide a range of predicted asymptotic dose for the new event. The model then goes on to make predictions of dose and dose rate time profiles out to 120 h beyond event onset. We know of no physical significance to our 5 h categorization window. In this paper, we focus on the efficacy of a simple method for SEP event asymptotic dose forecasting. Instead of making temporal predictions of dose and dose rate, we investigate making predictions of ranges of asymptotic dose using only dose rates at times prior to 5 h after event onset. A range of doses may provide sufficient information to make operational decisions such as taking emergency shelter or commencing/canceling extra-vehicular operations. Specifically, predicted ranges of doses that are found to be insignificant for the effect of interest would be ignored or put on a watch list, while predicted ranges of greater significance would be used in the operational decision-making process.
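
    The categorization idea described above reduces to matching a new event's early dose rate against historical events and reporting the range of their asymptotic doses. A rough sketch of that logic; the event table, the log-scale matching, and the neighbor count are hypothetical choices, not the authors' model:

```python
import numpy as np

# Hypothetical historical SEP events: (dose rate 5 h after onset in cGy/h,
# final asymptotic dose in cGy).
history = np.array([
    (0.02, 0.5), (0.05, 1.2), (0.3, 8.0), (0.4, 11.0), (2.0, 45.0), (3.5, 80.0),
])

def predict_dose_range(rate_5h, n_neighbors=2):
    """Match a new event to the historical events closest in (log) 5 h dose
    rate and return the min/max asymptotic dose among those matches."""
    order = np.argsort(np.abs(np.log(history[:, 0]) - np.log(rate_5h)))
    doses = history[order[:n_neighbors], 1]
    return doses.min(), doses.max()

lo, hi = predict_dose_range(0.35)
print(f"predicted asymptotic dose range: {lo}-{hi} cGy")
```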

  3. Analysis of Sneak-7A critical benchmark using 3-D deterministic transport and sensitivity analysis methods

    International Nuclear Information System (INIS)

    Some high-quality reactor physics benchmark experiments are being re-evaluated with today's state-of-the-art methods, in particular using detailed 3-dimensional models. One experiment analysed in the framework of the International Reactor Physics Benchmark Experiments (IRPhE) project is SNEAK-7A, a Pu-fuelled fast critical assembly built in the Karlsruhe Fast Critical Facility for the purpose of testing cross section data and calculational methods. As detailed information on the SNEAK-7A benchmark experiment becomes available, the purpose of this paper is to model the experiment as closely as possible to the configuration as it existed in the critical facility. The experimental keff was determined to be 1.0010, which is 29.6 cents supercritical. The realistic modelling of the SNEAK-7A assembly was performed using the DANTSYS code capability for X-Y-Z geometry. The calculated core eigenvalue from THREEDANT is 1.00975. With corrections applied for core plate cell heterogeneity and mesh sizes, the best-estimate core criticality with JEF-2.2-based cross-sections turns out to be 1.01137. While the plate heterogeneity effect from flux redistribution was at first estimated to be as large as 387 pcm from plate cell calculations, it proves to be 142 pcm when the core-wide heterogeneity effects are accounted for. To investigate the over-prediction of the core eigenvalue, spectral indices were examined; these suggest that the 238U capture cross-sections are underestimated, a finding confirmed by comparison of the central material worth of 238U with the measured value. Using the sensitivity of the core eigenvalue to the cross sections, and assuming a 5% increase in the 238U capture cross-section as implied by the spectral indices, the newly estimated core eigenvalue is 1.00175, very close to the measured value. Once the details in the old critical experiments are

  4. Analysis of Sneak-7A critical benchmark using 3-D deterministic transport and sensitivity analysis methods

    Energy Technology Data Exchange (ETDEWEB)

    Kim, S.J. [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Kodeli, I.; Sartori, E. [OECD NEA DataBank, 92 - Issy les Moulineaux (France)

    2003-07-01

    Some high-quality reactor physics benchmark experiments are being re-evaluated with today's state-of-the-art methods, in particular using detailed 3-dimensional models. One experiment analysed in the framework of the International Reactor Physics Benchmark Experiments (IRPhE) project is SNEAK-7A, a Pu-fuelled fast critical assembly built in the Karlsruhe Fast Critical Facility for the purpose of testing cross section data and calculational methods. As detailed information on the SNEAK-7A benchmark experiment becomes available, the purpose of this paper is to model the experiment as closely as possible to the configuration as it existed in the critical facility. The experimental keff was determined to be 1.0010, which is 29.6 cents supercritical. The realistic modelling of the SNEAK-7A assembly was performed using the DANTSYS code capability for X-Y-Z geometry. The calculated core eigenvalue from THREEDANT is 1.00975. With corrections applied for core plate cell heterogeneity and mesh sizes, the best-estimate core criticality with JEF-2.2-based cross-sections turns out to be 1.01137. While the plate heterogeneity effect from flux redistribution was at first estimated to be as large as 387 pcm from plate cell calculations, it proves to be 142 pcm when the core-wide heterogeneity effects are accounted for. To investigate the over-prediction of the core eigenvalue, spectral indices were examined; these suggest that the {sup 238}U capture cross-sections are underestimated, a finding confirmed by comparison of the central material worth of {sup 238}U with the measured value. Using the sensitivity of the core eigenvalue to the cross sections, and assuming a 5% increase in the {sup 238}U capture cross-section as implied by the spectral indices, the newly estimated core eigenvalue is 1.00175, very close to the measured value. Once the details in the old critical
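
    The cross-section adjustment quoted in both records follows the standard first-order sensitivity relation; as a hedged reconstruction of the arithmetic (the integral sensitivity coefficient below is inferred from the quoted numbers, not stated in the records):

    $$\frac{\Delta k}{k_0} \approx S_\sigma\,\frac{\Delta\sigma}{\sigma}, \qquad S_\sigma \approx \frac{1.00175/1.01137 - 1}{0.05} \approx -0.19,$$

    i.e. a 5% increase in the 238U capture cross-section lowers the computed eigenvalue by roughly 1%, bringing it close to the measured 1.0010.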

  5. Molecular Line Emission from Multifluid Shock Waves. I. Numerical Methods and Benchmark Tests

    CERN Document Server

    Ciolek, Glenn E

    2013-01-01

    We describe a numerical scheme for studying time-dependent, multifluid, magnetohydrodynamic shock waves in weakly ionized interstellar clouds and cores. Shocks are modeled as propagating perpendicular to the magnetic field and consist of a neutral molecular fluid plus a fluid of ions and electrons. The scheme is based on operator splitting, wherein time integration of the governing equations is split into separate parts. In one part, independent homogeneous Riemann problems for the two fluids are solved using Godunov's method. In the other, equations containing the source terms for transfer of mass, momentum, and energy between the fluids are integrated using standard numerical techniques. We show that, for the frequent case where the thermal pressures of the ions and electrons are << magnetic pressure, the Riemann problems for the neutral and ion-electron fluids have a similar mathematical structure which facilitates numerical coding. Implementation of the scheme is discussed and several benchmark tests ...

  6. Methods of dose planning for WWER-1000 power units

    International Nuclear Information System (INIS)

    Methods of minimizing dose loads for Zaporozhe NPP personnel were studied. They aim to decrease the dose limits for reactor personnel to 20 mSv/year on the basis of organizational and technical improvements and the ALARA principle.

  7. MOLECULAR LINE EMISSION FROM MULTIFLUID SHOCK WAVES. I. NUMERICAL METHODS AND BENCHMARK TESTS

    International Nuclear Information System (INIS)

    We describe a numerical scheme for studying time-dependent, multifluid, magnetohydrodynamic shock waves in weakly ionized interstellar clouds and cores. Shocks are modeled as propagating perpendicular to the magnetic field and consist of a neutral molecular fluid plus a fluid of ions and electrons. The scheme is based on operator splitting, wherein time integration of the governing equations is split into separate parts. In one part, independent homogeneous Riemann problems for the two fluids are solved using Godunov's method. In the other, equations containing the source terms for transfer of mass, momentum, and energy between the fluids are integrated using standard numerical techniques. We show that, for the frequent case where the thermal pressures of the ions and electrons are << magnetic pressure, the Riemann problems for the neutral and ion-electron fluids have a similar mathematical structure which facilitates numerical coding. Implementation of the scheme is discussed and several benchmark tests confirming its accuracy are presented, including (1) MHD wave packets ranging over orders of magnitude in length- and timescales, (2) early evolution of multifluid shocks caused by two colliding clouds, and (3) a multifluid shock with mass transfer between the fluids by cosmic-ray ionization and ion-electron recombination, demonstrating the effect of ion mass loading on magnetic precursors of MHD shocks. An exact solution to an MHD Riemann problem forming the basis for an approximate numerical solver used in the homogeneous part of our scheme is presented, along with derivations of the analytic benchmark solutions and tests showing the convergence of the numerical algorithm.
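
    The operator-splitting structure shared by these records (a homogeneous hyperbolic update for each fluid via Godunov's method, followed by integration of the inter-fluid source terms) can be illustrated on a toy scalar problem. In the sketch below, the Burgers flux and the exactly integrable relaxation coupling are stand-ins, not the authors' multifluid MHD scheme:

```python
import numpy as np

def godunov_step(u, dt, dx, flux):
    """First-order Godunov update for u_t + f(u)_x = 0 with an upwind
    numerical flux (valid here because the wave speed f'(u) is positive)."""
    f = flux(u)
    return u - dt / dx * (f - np.roll(f, 1))   # periodic boundaries

def source_step(u_n, u_i, dt, k):
    """Integrate the coupling u_n' = k*(u_i - u_n), u_i' = k*(u_n - u_i)
    exactly over dt: the two fluids relax toward their common mean."""
    mean, diff = 0.5 * (u_n + u_i), 0.5 * (u_n - u_i)
    diff *= np.exp(-2.0 * k * dt)
    return mean + diff, mean - diff

x = np.linspace(0.0, 1.0, 200)
dx = x[1] - x[0]
u_n = 1.0 + 0.5 * (x > 0.5)      # "neutral fluid" state with a jump
u_i = np.ones_like(x)            # "ion-electron fluid" state
flux = lambda u: 0.5 * u**2      # Burgers flux as a stand-in Riemann problem
dt = 0.4 * dx / u_n.max()        # CFL-limited time step

for _ in range(100):             # split step: transport both fluids, then couple
    u_n = godunov_step(u_n, dt, dx, flux)
    u_i = godunov_step(u_i, dt, dx, flux)
    u_n, u_i = source_step(u_n, u_i, dt, k=5.0)
```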

  8. MOLECULAR LINE EMISSION FROM MULTIFLUID SHOCK WAVES. I. NUMERICAL METHODS AND BENCHMARK TESTS

    Energy Technology Data Exchange (ETDEWEB)

    Ciolek, Glenn E.; Roberge, Wayne G., E-mail: cioleg@rpi.edu, E-mail: roberw@rpi.edu [New York Center for Astrobiology (United States); Department of Physics, Applied Physics, and Astronomy, Rensselaer Polytechnic Institute, 110 8th Street, Troy, NY 12180 (United States)

    2013-05-01

    We describe a numerical scheme for studying time-dependent, multifluid, magnetohydrodynamic shock waves in weakly ionized interstellar clouds and cores. Shocks are modeled as propagating perpendicular to the magnetic field and consist of a neutral molecular fluid plus a fluid of ions and electrons. The scheme is based on operator splitting, wherein time integration of the governing equations is split into separate parts. In one part, independent homogeneous Riemann problems for the two fluids are solved using Godunov's method. In the other, equations containing the source terms for transfer of mass, momentum, and energy between the fluids are integrated using standard numerical techniques. We show that, for the frequent case where the thermal pressures of the ions and electrons are << magnetic pressure, the Riemann problems for the neutral and ion-electron fluids have a similar mathematical structure which facilitates numerical coding. Implementation of the scheme is discussed and several benchmark tests confirming its accuracy are presented, including (1) MHD wave packets ranging over orders of magnitude in length- and timescales, (2) early evolution of multifluid shocks caused by two colliding clouds, and (3) a multifluid shock with mass transfer between the fluids by cosmic-ray ionization and ion-electron recombination, demonstrating the effect of ion mass loading on magnetic precursors of MHD shocks. An exact solution to an MHD Riemann problem forming the basis for an approximate numerical solver used in the homogeneous part of our scheme is presented, along with derivations of the analytic benchmark solutions and tests showing the convergence of the numerical algorithm.

  9. Molecular Line Emission from Multifluid Shock Waves. I. Numerical Methods and Benchmark Tests

    Science.gov (United States)

    Ciolek, Glenn E.; Roberge, Wayne G.

    2013-05-01

    We describe a numerical scheme for studying time-dependent, multifluid, magnetohydrodynamic shock waves in weakly ionized interstellar clouds and cores. Shocks are modeled as propagating perpendicular to the magnetic field and consist of a neutral molecular fluid plus a fluid of ions and electrons. The scheme is based on operator splitting, wherein time integration of the governing equations is split into separate parts. In one part, independent homogeneous Riemann problems for the two fluids are solved using Godunov's method. In the other, equations containing the source terms for transfer of mass, momentum, and energy between the fluids are integrated using standard numerical techniques. We show that, for the frequent case where the thermal pressures of the ions and electrons are << magnetic pressure, the Riemann problems for the neutral and ion-electron fluids have a similar mathematical structure which facilitates numerical coding. Implementation of the scheme is discussed and several benchmark tests confirming its accuracy are presented, including (1) MHD wave packets ranging over orders of magnitude in length- and timescales, (2) early evolution of multifluid shocks caused by two colliding clouds, and (3) a multifluid shock with mass transfer between the fluids by cosmic-ray ionization and ion-electron recombination, demonstrating the effect of ion mass loading on magnetic precursors of MHD shocks. An exact solution to an MHD Riemann problem forming the basis for an approximate numerical solver used in the homogeneous part of our scheme is presented, along with derivations of the analytic benchmark solutions and tests showing the convergence of the numerical algorithm.

  10. Pan-specific MHC class I predictors: A benchmark of HLA class I pan-specific prediction methods

    DEFF Research Database (Denmark)

    Zhang, Hao; Lundegaard, Claus; Nielsen, Morten

    2009-01-01

    ... emerging pathogens. Methods have recently been published that are able to predict peptide binding to any human MHC class I molecule. In contrast to conventional allele-specific methods, these methods do allow for extrapolation to un-characterized MHC molecules. These pan-specific HLA predictors have not ... MHCpan methods. Conclusions: The benchmark demonstrated that pan-specific methods do provide accurate predictions also for previously uncharacterized MHC molecules. The NetMHCpan method trained to predict actual binding affinities was consistently top ranking both on quantitative (affinity) and binary (ligand) data. However, the KISS method trained to predict binary data was one of the best performing when benchmarked on binary data. Finally, a consensus method integrating predictions from the two best-performing methods was shown to improve the prediction accuracy.

  11. Benchmarking of a novel contactless characterisation method for micro thermoelectric modules (μTEMs)

    International Nuclear Information System (INIS)

    Significant challenges exist in the thermal control of Photonics Integrated Circuits (PICs) for use in optical communications. Increasing component density coupled with greater functionality is leading to higher device-level heat fluxes, stretching the capabilities of conventional cooling methods using thermoelectric modules (TEMs). A tailored thermal control solution incorporating micro thermoelectric modules (μTEMs) to individually address hotspots within PICs could provide an energy efficient alternative to existing control methods. Performance characterisation is required to establish the suitability of commercially-available μTEMs for the operating conditions in current and next generation PICs. The objective of this paper is to outline a novel method for the characterisation of thermoelectric modules (TEMs), which utilises infra-red (IR) heat transfer and temperature measurement to obviate the need for mechanical stress on the upper surface of low compression tolerance (∼0.5N) μTEMs. The method is benchmarked using a commercially-available macro scale TEM, comparing experimental data to the manufacturer's performance data sheet.

  12. Benchmarking of the 3-D CAD-based Discrete Ordinates code “ATTILA” for dose rate calculations against experiments and Monte Carlo calculations

    International Nuclear Information System (INIS)

    Shutdown dose rate (SDDR) analysis inside and around the diagnostics ports of ITER is performed at PPPL/UCLA using the 3-D, FEM, Discrete Ordinates code, ATTILA, along with its updated FORNAX transmutation/decay gamma library. Other ITER partners assess SDDR using codes based on the Monte Carlo (MC) approach (e.g. the MCNP code) for transport calculation and the radioactivity inventory code FISPACT or other equivalent decay data libraries for dose rate assessment. To reveal the range of discrepancies in the results obtained by various analysts, an extensive experimental and calculation benchmarking effort has been undertaken to validate the capability of ATTILA for dose rate assessment. On the experimental validation front, the comparison was performed using the measured data from two SDDR experiments performed at the FNG facility, Italy. Comparison was made to the experimental data and to MC results obtained by other analysts. On the calculation validation front, ATTILA's predictions were compared to other results at key locations inside a calculation benchmark whose configuration duplicates an upper diagnostics port plug (UPP) in ITER. Both serial and parallel versions of ATTILA-7.1.0 are used in the PPPL/UCLA analysis performed with FENDL-2.1/FORNAX databases. In the first FNG experiment, it was shown that ATTILA's dose rates are largely overestimated (by ∼30–60%) with the ANSI/ANS-6.1.1 flux-to-dose factors, whereas the ICRP-74 factors give better agreement (10–20%) with the experimental data and with the MC results at all cooling times. In the second experiment, there is an underestimation in the SDDR calculated by both MCNP and ATTILA based on ANSI/ANS-6.1.1 for cooling times up to ∼4 days after irradiation. Thereafter, an overestimation is observed (∼5–10% with MCNP and ∼10–15% with ATTILA). As for the calculation benchmark, the agreement is much better based on ICRP-74 1996 data. The divergence among all dose rate results at ∼11 days cooling time is no

  13. A comprehensive benchmark of kernel methods to extract protein-protein interactions from literature.

    Directory of Open Access Journals (Sweden)

    Domonkos Tikk

    The most important way of conveying new findings in biomedical research is scientific publication. Extraction of protein-protein interactions (PPIs) reported in scientific publications is one of the core topics of text mining in the life sciences. Recently, a new class of such methods has been proposed - convolution kernels that identify PPIs using deep parses of sentences. However, comparing published results of different PPI extraction methods is impossible due to the use of different evaluation corpora, different evaluation metrics, different tuning procedures, etc. In this paper, we study whether the reported performance metrics are robust across different corpora and learning settings and whether the use of deep parsing actually leads to an increase in extraction quality. Our ultimate goal is to identify the one method that performs best in real-life scenarios, where information extraction is performed on unseen text and not on specifically prepared evaluation data. We performed a comprehensive benchmarking of nine different methods for PPI extraction that use convolution kernels on rich linguistic information. Methods were evaluated on five different public corpora using cross-validation, cross-learning, and cross-corpus evaluation. Our study confirms that kernels using dependency trees generally outperform kernels based on syntax trees. However, our study also shows that only the best kernel methods can compete with a simple rule-based approach when the evaluation prevents information leakage between training and test corpora. Our results further reveal that the F-score of many approaches drops significantly if no corpus-specific parameter optimization is applied and that methods reaching a good AUC score often perform much worse in terms of F-score. We conclude that for most kernels no sensible estimation of PPI extraction performance on new text is possible, given the current heterogeneity in evaluation data. Nevertheless, our study

  14. A Benchmark of Lidar-Based Single Tree Detection Methods Using Heterogeneous Forest Data from the Alpine Space

    Directory of Open Access Journals (Sweden)

    Lothar Eysn

    2015-05-01

    In this study, eight airborne laser scanning (ALS)-based single tree detection methods are benchmarked and investigated. The methods were applied to a unique dataset originating from different regions of the Alpine Space covering different study areas, forest types, and structures. This is the first benchmark ever performed for different forests within the Alps. The evaluation of the detection results was carried out in a reproducible way by automatically matching them to precise in situ forest inventory data using a restricted nearest neighbor detection approach. Quantitative statistical parameters such as percentages of correctly matched trees and omission and commission errors are presented. The proposed automated matching procedure presented herein shows an overall accuracy of 97%. Method-based analysis, investigations per forest type, and an overall benchmark performance are presented. The best matching rate was obtained for single-layered coniferous forests. Dominated trees were challenging for all methods. The overall performance shows a matching rate of 47%, which is comparable to results of other benchmarks performed in the past. The study provides new insight regarding the potential and limits of tree detection with ALS and underlines some key aspects regarding the choice of method when performing single tree detection for the various forest types encountered in alpine regions.
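
    The restricted nearest-neighbor matching used for the evaluation can be reproduced in rough outline: pair each detected tree with the closest unmatched inventory tree, reject pairs beyond a distance limit, and derive the match statistics from the result. Coordinates and the 2 m threshold below are hypothetical:

```python
import numpy as np

def match_trees(detected, reference, max_dist=2.0):
    """Greedy restricted nearest-neighbor matching: closest pairs first,
    each tree used at most once, pairs farther apart than max_dist rejected."""
    pairs = sorted(
        (np.hypot(*(d - r)), i, j)
        for i, d in enumerate(detected) for j, r in enumerate(reference)
    )
    used_d, used_r, matches = set(), set(), []
    for dist, i, j in pairs:
        if dist > max_dist:
            break
        if i not in used_d and j not in used_r:
            used_d.add(i); used_r.add(j); matches.append((i, j))
    return matches

detected = np.array([[0.3, 0.1], [5.2, 4.9], [9.0, 9.0]])     # detections (m)
reference = np.array([[0.0, 0.0], [5.0, 5.0], [20.0, 20.0]])  # inventory (m)
m = match_trees(detected, reference)
extraction_rate = len(m) / len(reference)                # matched / inventory
commission = (len(detected) - len(m)) / len(detected)    # unmatched detections
print(m, extraction_rate, commission)
```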

  15. An Unbiased Method To Build Benchmarking Sets for Ligand-Based Virtual Screening and its Application To GPCRs

    OpenAIRE

    Xia, Jie; Jin, Hongwei; Liu, Zhenming; Zhang, Liangren; Wang, Xiang Simon

    2014-01-01

    Benchmarking data sets have become common in recent years for the purpose of virtual screening, though the main focus had been placed on the structure-based virtual screening (SBVS) approaches. Due to the lack of crystal structures, there is great need for unbiased benchmarking sets to evaluate various ligand-based virtual screening (LBVS) methods for important drug targets such as G protein-coupled receptors (GPCRs). To date these ready-to-apply data sets for LBVS are fairly limited, and the...

  16. An embedded semi-analytical benchmark via iterative interpolation for neutron transport methods verification

    International Nuclear Information System (INIS)

    A new multidimensional semi-analytical benchmark capability is developed. The key feature in the solution is the point kernel formulation. The 3D nature of the source is inherited in the flux, making this a true multidimensional test. In addition, an efficient numerical scheme, called iterative interpolation, is used to evaluate the required point kernel solution and maintain benchmark accuracy. The EVENT finite element transport algorithm is compared to the point source solution as the first step of embedding the benchmark directly with the EVENT code. Additional code comparisons will be presented. (authors)
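
    For reference, the point kernel formulation at the heart of such benchmarks writes the uncollided scalar flux from a distributed source q as (generic form; the paper's semi-analytical variant may differ in detail)

    $$\phi(\mathbf{r}) = \int_V \frac{q(\mathbf{r}')\, e^{-\tau(\mathbf{r}',\mathbf{r})}}{4\pi\,|\mathbf{r}-\mathbf{r}'|^{2}}\, dV', \qquad \tau(\mathbf{r}',\mathbf{r}) = \int_{\mathbf{r}'}^{\mathbf{r}} \Sigma_t(s)\, ds,$$

    where τ is the optical thickness along the line of sight; the iterative interpolation scheme mentioned above serves to evaluate this integral to benchmark accuracy.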

  17. Simulation Methods for High-Cycle Fatigue-Driven Delamination using Cohesive Zone Models - Fundamental Behavior and Benchmark Studies

    DEFF Research Database (Denmark)

    Bak, Brian Lau Verndal; Lindgaard, Esben; Turon, A.;

    2015-01-01

    A novel computational method for simulating fatigue-driven delamination cracks in composite laminated structures under cyclic loading based on a cohesive zone model [2] and new benchmark studies with four other comparable methods [3-6] are presented. The benchmark studies describe and compare the traction-separation response in the cohesive zone and the transition phase from quasistatic to fatigue loading for each method. Furthermore, the accuracy of the predicted crack growth rate is studied and compared for each method. It is shown that the method described in [2] is significantly more accurate than the other methods [3-6]. Finally, studies are presented of the dependency and sensitivity to the change in different quasi-static material parameters and model-specific fitting parameters. It is shown that all the methods except [2] rely on different parameters which are not possible to determine...

  18. Control volume method for hydromagnetic dynamos in rotating spherical shells: Testing the code against the numerical dynamo benchmark

    Czech Academy of Sciences Publication Activity Database

    Šimkanin, Ján; Hejda, Pavel

    2009-01-01

    Roč. 53, č. 1 (2009), s. 99-110. ISSN 0039-3169 R&D Projects: GA AV ČR IAA300120704 Institutional research plan: CEZ:AV0Z30120515 Keywords : hydromagnetic dynamos * control volume method * numerical dynamo benchmark * efficiency of parallelization Subject RIV: DE - Earth Magnetism, Geodesy, Geography Impact factor: 1.000, year: 2009

  19. A TWO-DIMENSIONAL METHOD OF MANUFACTURED SOLUTIONS BENCHMARK SUITE BASED ON VARIATIONS OF LARSEN'S BENCHMARK WITH ESCALATING ORDER OF SMOOTHNESS OF THE EXACT SOLUTION

    Energy Technology Data Exchange (ETDEWEB)

    Sebastian Schunert; Yousry Y. Azmy

    2011-05-01

    The quantification of the discretization error associated with the spatial discretization of the Discrete Ordinates (DO) equations in multidimensional Cartesian geometries is the central problem in error estimation of spatial discretization schemes for transport theory as well as computer code verification. Traditionally, fine mesh solutions are employed as reference, because analytical solutions only exist in the absence of scattering. This approach, however, is inadequate when the discretization error associated with the reference solution is not small compared to the discretization error associated with the mesh under scrutiny. Typically this situation occurs if the mesh of interest is only a couple of refinement levels away from the reference solution or if the order of accuracy of the numerical method (and hence the reference as well) is lower than expected. In this work we present a Method of Manufactured Solutions (MMS) benchmark suite with variable order of smoothness of the underlying exact solution for two-dimensional Cartesian geometries which provides analytical solutions averaged over arbitrary orthogonal meshes for scattering and non-scattering media. It should be emphasized that the developed MMS benchmark suite first eliminates the aforementioned limitation of fine mesh reference solutions since it secures knowledge of the underlying true solution, and second that it allows for an arbitrary order of smoothness of the underlying exact solution. The latter is of importance because even for smooth parameters and boundary conditions the DO equations can feature exact solutions with limited smoothness. Moreover, the degree of smoothness is crucial for both the order of accuracy and the magnitude of the discretization error for any spatial discretization scheme.

  20. A two-dimensional method of manufactured solutions benchmark suite based on variations of Larsen's benchmark with escalating order of smoothness of the exact solution

    International Nuclear Information System (INIS)

    The quantification of the discretization error associated with the spatial discretization of the Discrete Ordinates (DO) equations in multidimensional Cartesian geometries is the central problem in error estimation of spatial discretization schemes for transport theory as well as computer code verification. Traditionally, fine mesh solutions are employed as reference, because analytical solutions only exist in the absence of scattering. This approach, however, is inadequate when the discretization error associated with the reference solution is not small compared to the discretization error associated with the mesh under scrutiny. Typically this situation occurs if the mesh of interest is only a couple of refinement levels away from the reference solution or if the order of accuracy of the numerical method (and hence the reference as well) is lower than expected. In this work we present a Method of Manufactured Solutions (MMS) benchmark suite with variable order of smoothness of the underlying exact solution for two-dimensional Cartesian geometries which provides analytical solutions averaged over arbitrary orthogonal meshes for scattering and non-scattering media. It should be emphasized that the developed MMS benchmark suite first eliminates the aforementioned limitation of fine mesh reference solutions since it secures knowledge of the underlying true solution, and second that it allows for an arbitrary order of smoothness of the underlying exact solution. The latter is of importance because even for smooth parameters and boundary conditions the DO equations can feature exact solutions with limited smoothness. Moreover, the degree of smoothness is crucial for both the order of accuracy and the magnitude of the discretization error for any spatial discretization scheme. (author)
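
    The mechanics of a manufactured solution are straightforward to demonstrate: postulate an exact angular flux, apply the discrete ordinates operator to it symbolically, and use the resulting expression as the fixed source, so that the postulated flux becomes the known exact answer. A one-direction 2D toy version with sympy (the suite described above manufactures mesh-averaged solutions with controllable smoothness, which this sketch does not attempt):

```python
import sympy as sp

x, y = sp.symbols("x y", positive=True)
mu, eta = sp.Rational(1, 2), sp.Rational(1, 3)       # one discrete direction
sigma_t, sigma_s = sp.Integer(1), sp.Rational(1, 2)  # total / scattering XS

# Manufactured angular flux; its smoothness controls the observable order
# of accuracy of a spatial discretization scheme tested against it.
psi = sp.sin(sp.pi * x) * sp.sin(sp.pi * y)

# Streaming + collision minus an (illustrative, one-angle) scattering source:
q = mu * sp.diff(psi, x) + eta * sp.diff(psi, y) + sigma_t * psi - sigma_s * psi

print(sp.simplify(q))  # feed q to the transport code; psi is the exact solution
```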

  1. The KMAT: Benchmarking Knowledge Management.

    Science.gov (United States)

    de Jager, Martha

    Provides an overview of knowledge management and benchmarking, including the benefits and methods of benchmarking (e.g., competitive, cooperative, collaborative, and internal benchmarking). Arthur Andersen's KMAT (Knowledge Management Assessment Tool) is described. The KMAT is a collaborative benchmarking tool, designed to help organizations make…

  2. Shutdown Dose Rate Analysis Using the Multi-Step CADIS Method

    International Nuclear Information System (INIS)

    The Multi-Step Consistent Adjoint Driven Importance Sampling (MS-CADIS) hybrid Monte Carlo (MC)/deterministic radiation transport method was proposed to speed up the shutdown dose rate (SDDR) neutron MC calculation using an importance function that represents the neutron importance to the final SDDR. This work applied the MS-CADIS method to the ITER SDDR benchmark problem. The MS-CADIS method was also used to calculate the SDDR uncertainty resulting from uncertainties in the MC neutron calculation and to determine the degree of undersampling in SDDR calculations because of the limited ability of the MC method to tally detailed spatial and energy distributions. The analysis that used the ITER benchmark problem compared the efficiency of the MS-CADIS method to the traditional approach of using global MC variance reduction techniques for speeding up SDDR neutron MC calculation. Compared to the standard Forward-Weighted-CADIS (FW-CADIS) method, the MS-CADIS method increased the efficiency of the SDDR neutron MC calculation by 69%. The MS-CADIS method also increased the fraction of nonzero scoring mesh tally elements in the space-energy regions of high importance to the final SDDR
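
    For context, the CADIS family that MS-CADIS extends derives its source biasing and weight windows from an adjoint (importance) flux φ†; in summary form the standard relations are (general CADIS, not the MS-CADIS specifics)

    $$\hat{q}(\mathbf{r},E) = \frac{\phi^{\dagger}(\mathbf{r},E)\, q(\mathbf{r},E)}{R}, \qquad R = \iint \phi^{\dagger} q \; dE\, dV, \qquad \bar{w}(\mathbf{r},E) = \frac{R}{\phi^{\dagger}(\mathbf{r},E)}.$$

    The novelty of MS-CADIS lies in computing φ† so that it measures the neutron importance to the final shutdown dose rate rather than to a prompt response.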

  3. Survey of methods used to asses human reliability in the human factors reliability benchmark exercise

    International Nuclear Information System (INIS)

    The Joint Research Centre of the European Commission has organised a Human Factors Reliability Benchmark Exercise (HF-RBE) with the aim to assess the state-of-the-art in human reliability modelling and assessment. Fifteen teams from eleven countries, representing industry, utilities, licensing organisations and research institutes, participate in the HF-RBE, which is organised around two study cases: (1) analysis of routine functional test and maintenance procedures, with the aim to assess the probability of test-induced failures, the probability of failures to remain unrevealed, and the potential to initiate transients because of errors performed in the test; and (2) analysis of human actions during an operational transient, with the aim to assess the probability that the operators will correctly diagnose the malfunctions and take proper corrective action. The paper briefly reports how the HF-RBE was structured and gives an overview of the methods that have been used for predicting human reliability in both study cases. The experience in applying these methods is discussed and the results obtained are compared. (author)

  4. Study on shielding design methods for fusion reactors using benchmark experiments

    International Nuclear Information System (INIS)

    In this study, a series of engineering benchmark experiments has been performed on the critical issues of shielding design for DT fusion reactors. Based on the experiments, the calculational accuracy of the shielding design methods used in the ITER conceptual design, the discrete ordinates code DOT3.5 and the Monte Carlo code MCNP-3, has been estimated, and difficulties with the calculational methods have been revealed. Furthermore, the feasibility for shielding designs has been examined with respect to the discrete ordinates code system BERMUDA, which is being developed to attain high calculational accuracy. For neutron streaming in an off-set narrow-gap experimental assembly made of stainless steel, the DOT3.5 and MCNP-3 codes reproduced the experiments within the accuracy presumed in the ITER conceptual design. The DOT3.5 and MCNP-3 codes are suitable for secondary γ-ray nuclear heating in a type 316L stainless steel assembly and for neutron streaming in a multi-layered slit experimental assembly, respectively. Moreover, the BERMUDA-2DN code is an effective tool for neutron deep penetration in a type 316L stainless steel assembly and for the neutron behavior in a large cavity experimental assembly. (author)

  5. Study on shielding design methods for fusion reactors using benchmark experiments

    Energy Technology Data Exchange (ETDEWEB)

    Nakashima, Hiroshi (Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment)

    1992-02-01

    In this study, a series of engineering benchmark experiments has been performed on the critical issues of shielding design for DT fusion reactors. Based on the experiments, the calculational accuracy of the shielding design methods used in the ITER conceptual design, the discrete ordinates code DOT3.5 and the Monte Carlo code MCNP-3, has been estimated, and difficulties with the calculational methods have been revealed. Furthermore, the feasibility for shielding designs has been examined with respect to the discrete ordinates code system BERMUDA, which is being developed to attain high calculational accuracy. For neutron streaming in an off-set narrow-gap experimental assembly made of stainless steel, the DOT3.5 and MCNP-3 codes reproduced the experiments within the accuracy presumed in the ITER conceptual design. The DOT3.5 and MCNP-3 codes are suitable for secondary {gamma} ray nuclear heating in a type 316L stainless steel assembly and for neutron streaming in a multi-layered slit experimental assembly, respectively. Moreover, the BERMUDA-2DN code is an effective tool for neutron deep penetration in a type 316L stainless steel assembly and for the neutron behavior in a large cavity experimental assembly. (author).

  6. Application of a heterogeneous coarse mesh transport method to a MOX benchmark problem

    International Nuclear Information System (INIS)

    Recently, a coarse mesh transport method was extended to 2-D geometry by coupling Monte Carlo response function calculations to deterministic sweeps for converging the partial currents on the coarse mesh boundaries. More extensive testing of the new method has been performed with the previously published continuous energy benchmark problem, as well as the multigroup C5G7 MOX problem. The effect of the partial current representation in space, for the MOX problem, and in space and energy, for the smaller problem, on the accuracy of the results is the focus of this paper. For the MOX problem, accurate results were obtained with the assumption that the partial currents are piecewise-constant on four spatial segments per coarse mesh interface. Specifically, the errors in the system multiplication factor and the average absolute pin power were 0.12% and 0.68%, respectively. The root mean square and the mean relative pin power errors were 1.15% and 0.56%, respectively. (authors)

  7. Re-analysis of Alaskan benchmark glacier mass-balance data using the index method

    Science.gov (United States)

    Van Beusekom, Ashley E.; O'Neel, Shad R.; March, Rod S.; Sass, Louis C.; Cox, Leif H.

    2010-01-01

    At Gulkana and Wolverine Glaciers, designated the Alaskan benchmark glaciers, we re-analyzed and re-computed the mass balance time series from 1966 to 2009 to accomplish our goal of making more robust time series. Each glacier's data record was analyzed with the same methods. For surface processes, we estimated missing information with an improved degree-day model. Degree-day models predict ablation as the product of an empirical degree-day factor and the sum of positive daily mean temperatures. We modernized the traditional degree-day model and derived new degree-day factors in an effort to match the balance time series more closely. We estimated missing yearly-site data with a new balance gradient method. These efforts showed that an additional step needed to be taken at Wolverine Glacier to adjust for non-representative index sites. As with the previously calculated mass balances, the re-analyzed balances showed a continuing trend of mass loss. We noted that the time series, and thus our estimate of the cumulative mass loss over the period of record, was very sensitive to the data input, and suggest the need to add data-collection sites and modernize our weather stations.
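
    The degree-day model referred to above is simple enough to state exactly in code: ablation is an empirical degree-day factor multiplied by the positive degree-day sum. The factor and temperatures below are illustrative placeholders, not the re-derived Gulkana or Wolverine values:

```python
import numpy as np

def degree_day_ablation(daily_mean_temp_c, ddf_mm_per_degday=5.0):
    """Classic degree-day model: ablation proportional to the positive
    degree-day sum PDD = sum(max(T_daily, 0)) via an empirical factor."""
    pdd = np.sum(np.maximum(daily_mean_temp_c, 0.0))
    return ddf_mm_per_degday * pdd   # mm water equivalent

week = np.array([3.1, 4.0, 5.2, 2.8, 0.5, -1.0, 6.3])  # hypothetical week, deg C
print(degree_day_ablation(week))  # 109.5 mm w.e. with DDF = 5 mm/(deg C day)
```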

  8. Benchmarking as a method of assessment of region’s intellectual potential

    Directory of Open Access Journals (Sweden)

    P.G. Pererva

    2015-12-01

    innovative development of regions. It is proposed to assess the intellectual potential of a region using benchmarking technology. Evaluation of the intellectual potential of regions and of its impact on development remains meaningful and necessary for building schemes, algorithms, models and methods of analysis, forecasting and design that are more adequate to the real practice of economic systems development. Among the recognized global market leaders, the largest international companies constantly and consistently run an active innovation process in which an evolutionary upgrade policy (as operational policy) proceeds in parallel with the strategic development of radical innovations, with a significant period from idea to realization. This second direction uses benchmarking as its research methodology, with useful results. Conclusions and directions of further research. A scientific review of the nature and methods of the use of intellectual capital at this stage of economic development of Ukraine faces its own challenges, among which are such areas of study as the structure of this capital, capacity assessment at the level of enterprises, regions and the country as a whole, identification of the key factors influencing intellectual capital, and evaluation of the efficiency of investments in its support. Further studies relate to such unsolved issues as the relationship of intellectual potential with the mechanisms of managing innovative development, the efficiency of investments in intellectual potential, and the need to ensure intellectual development and the assessment of the intellectual potential of subjects of economic activity.

  9. Derivation of the critical effect size/benchmark response for the dose-response analysis of the uptake of radioactive iodine in the human thyroid.

    Science.gov (United States)

    Weterings, Peter J J M; Loftus, Christine; Lewandowski, Thomas A

    2016-08-22

    Potential adverse effects of chemical substances on thyroid function are usually examined by measuring serum levels of thyroid-related hormones. Instead, recent risk assessments for thyroid-active chemicals have focussed on iodine uptake inhibition, an upstream event that by itself is not necessarily adverse. Establishing the extent of uptake inhibition that can be considered de minimis, the chosen benchmark response (BMR), is therefore critical. The BMR values selected by two international advisory bodies were 5% and 50%, a difference that had correspondingly large impacts on the estimated risks and health-based guidance values that were established. Potential treatment-related inhibition of thyroidal iodine uptake is usually determined by comparing thyroidal uptake of radioactive iodine (RAIU) during treatment with a single pre-treatment RAIU value. In the present study it is demonstrated that the physiological intra-individual variation in iodine uptake is much larger than 5%. Consequently, in-treatment RAIU values, expressed as a percentage of the pre-treatment value, have an inherent variation, that needs to be considered when conducting dose-response analyses. Based on statistical and biological considerations, a BMR of 20% is proposed for benchmark dose analysis of human thyroidal iodine uptake data, to take the inherent variation in relative RAIU data into account. Implications for the tolerated daily intakes for perchlorate and chlorate, recently established by the European Food Safety Authority (EFSA), are discussed. PMID:27268963
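
    To make the role of the BMR concrete: given a fitted dose-response f(d) for relative RAIU, the benchmark dose is the dose at which uptake falls by the BMR relative to background, i.e. f(BMD) = (1 − BMR)·f(0). A sketch with a hypothetical exponential model (not the curve fitted by EFSA or the advisory bodies cited above):

```python
import math

# Hypothetical dose-response for relative RAIU (fraction of pre-treatment
# uptake): f(d) = exp(-beta * d), so f(BMD) = (1 - BMR) * f(0) gives
# BMD = -ln(1 - BMR) / beta.
beta = 0.004   # hypothetical slope, per (ug/kg bw per day)

def bmd(bmr, beta=beta):
    return -math.log(1.0 - bmr) / beta

# A 20% BMR (as argued above) versus a 5% BMR moves the point of departure
# by a factor of about 4.35 under this model.
print(bmd(0.20), bmd(0.05), bmd(0.20) / bmd(0.05))
```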

  10. Benchmarking local public libraries using non-parametric frontier methods: A case study of Flanders

    OpenAIRE

    Stroobants, Jesse; Bouckaert, Geert

    2014-01-01

    Being faced with significant budget cuts and continual pressure to do more with less, issues of efficiency and effectiveness became a priority for local governments in most countries. In this context, benchmarking is widely acknowledged as a powerful tool for local performance management and for improving the efficiency and effectiveness of local service delivery. Performance benchmarking exercises are regularly carried out using ratio analysis, by comparing single indicators. Since this appr...

  11. Anomaly detection in OECD Benchmark data using co-variance methods

    International Nuclear Information System (INIS)

    OECD Benchmark data distributed for the SMORN VI Specialists Meeting on Reactor Noise were investigated for anomaly detection in artificially generated reactor noise benchmark analysis. It was observed that statistical features extracted from the covariance matrix of frequency components are very sensitive in terms of the anomaly detection level. It is possible to create well-defined alarm levels. (R.P.) 5 refs.; 23 figs.; 1 tab
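
    A generic version of such a covariance-based detector: estimate the mean and covariance of spectral feature vectors over a normal-state reference period, set the alarm level from the reference distance distribution, and flag new observations by Mahalanobis distance. This is a plain illustration of the principle, not the SMORN analysis; all data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)

# Reference (normal-state) feature vectors, e.g. magnitudes of selected
# frequency components of the reactor noise, one vector per time block.
X_ref = rng.normal(size=(500, 4))
mean = X_ref.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(X_ref, rowvar=False))

def mahalanobis_sq(x):
    d = x - mean
    return float(d @ cov_inv @ d)

# Well-defined alarm level: e.g. the 99th percentile of reference distances.
alarm = np.percentile([mahalanobis_sq(x) for x in X_ref], 99)

x_new = rng.normal(size=4) + np.array([0.0, 3.0, 0.0, 0.0])  # shifted component
print(mahalanobis_sq(x_new) > alarm)   # True flags an anomaly
```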

  12. Development of Benchmarks for Operating Costs and Resources Consumption to be Used in Healthcare Building Sustainability Assessment Methods

    Directory of Open Access Journals (Sweden)

    Maria de Fátima Castro

    2015-09-01

    Since the last decade of the twentieth century, the healthcare industry has been paying attention to the environmental impact of its buildings, and therefore new regulations, policy goals, and Healthcare Building Sustainability Assessment (HBSA) methods are being developed and implemented. At present, healthcare is one of the most regulated industries and it is also one of the largest consumers of energy per net floor area. To assess the sustainability of healthcare buildings it is necessary to establish a set of benchmarks related to their life-cycle performance. They are both essential to rate the sustainability of a project and to support designers and other stakeholders in the process of designing and operating a sustainable building, by allowing the comparison to be made between a project and the conventional and best market practices. This research is focused on the methodology to set the benchmarks for resources consumption, waste production, operation costs and potential environmental impacts related to the operational phase of healthcare buildings. It aims at contributing to the reduction of the subjectivity found in the definition of the benchmarks used in Building Sustainability Assessment (BSA) methods, and it is applied in the Portuguese context. These benchmarks will be used in the development of a Portuguese HBSA method.

  13. Deterministic absorbed dose estimation in computed tomography using a discrete ordinates method

    Energy Technology Data Exchange (ETDEWEB)

    Norris, Edward T.; Liu, Xin, E-mail: xinliu@mst.edu [Nuclear Engineering, Missouri University of Science and Technology, Rolla, Missouri 65409 (United States); Hsieh, Jiang [GE Healthcare, Waukesha, Wisconsin 53188 (United States)

    2015-07-15

    Purpose: Organ dose estimation for a patient undergoing computed tomography (CT) scanning is very important. Although Monte Carlo methods are considered gold-standard in patient dose estimation, the computation time required is formidable for routine clinical calculations. Here, the authors investigate a deterministic method for estimating the absorbed dose more efficiently. Methods: Compared with current Monte Carlo methods, a more efficient approach to estimating the absorbed dose is to solve the linear Boltzmann equation numerically. In this study, an axial CT scan was modeled with a software package, Denovo, which solved the linear Boltzmann equation using the discrete ordinates method. The CT scanning configuration included 16 x-ray source positions, beam collimators, flat filters, and bowtie filters. The phantom was the standard 32 cm CT dose index (CTDI) phantom. Four different Denovo simulations were performed with different simulation parameters, including the number of quadrature sets and the order of Legendre polynomial expansions. A Monte Carlo simulation was also performed for benchmarking the Denovo simulations. A quantitative comparison was made of the simulation results obtained by the Denovo and the Monte Carlo methods. Results: The difference in the simulation results of the discrete ordinates method and those of the Monte Carlo methods was found to be small, with a root-mean-square difference of around 2.4%. It was found that the discrete ordinates method, with a higher order of Legendre polynomial expansions, underestimated the absorbed dose near the center of the phantom (i.e., low dose region). The simulation with quadrature set 8 and first-order Legendre polynomial expansion proved to be the most efficient computation method in the authors’ study. The single-thread computation time of this deterministic simulation was 21 min on a personal computer

  14. Deterministic absorbed dose estimation in computed tomography using a discrete ordinates method

    International Nuclear Information System (INIS)

    Purpose: Organ dose estimation for a patient undergoing computed tomography (CT) scanning is very important. Although Monte Carlo methods are considered gold-standard in patient dose estimation, the computation time required is formidable for routine clinical calculations. Here, the authors investigate a deterministic method for estimating the absorbed dose more efficiently. Methods: Compared with current Monte Carlo methods, a more efficient approach to estimating the absorbed dose is to solve the linear Boltzmann equation numerically. In this study, an axial CT scan was modeled with a software package, Denovo, which solved the linear Boltzmann equation using the discrete ordinates method. The CT scanning configuration included 16 x-ray source positions, beam collimators, flat filters, and bowtie filters. The phantom was the standard 32 cm CT dose index (CTDI) phantom. Four different Denovo simulations were performed with different simulation parameters, including the number of quadrature sets and the order of Legendre polynomial expansions. A Monte Carlo simulation was also performed for benchmarking the Denovo simulations. A quantitative comparison was made of the simulation results obtained by the Denovo and the Monte Carlo methods. Results: The difference in the simulation results of the discrete ordinates method and those of the Monte Carlo methods was found to be small, with a root-mean-square difference of around 2.4%. It was found that the discrete ordinates method, with a higher order of Legendre polynomial expansions, underestimated the absorbed dose near the center of the phantom (i.e., low dose region). The simulation with quadrature set 8 and first-order Legendre polynomial expansion proved to be the most efficient computation method in the authors’ study. The single-thread computation time of this deterministic simulation was 21 min on a personal computer

  15. Measurement of neutron flux spectra in a tungsten benchmark by neutron foil activation method

    International Nuclear Information System (INIS)

    The nuclear designs of fusion devices such as ITER (International Thermonuclear Experimental Reactor), an experimental fusion reactor based on the ''tokamak'' concept, rely on the results of neutron physics calculations. These depend on knowledge of the neutron and photon flux spectra, which is particularly important because it makes it possible to anticipate the responses of the whole structure to phenomena such as nuclear heating, tritium breeding, atomic displacements, radiation shielding, power generation and material activation. The flux spectra can be calculated with transport codes, but validating measurements are also required. An important constituent of structural materials and divertor areas of fusion reactors is tungsten. This thesis deals with the measurement of the neutron fluence and neutron energy spectrum in a tungsten assembly by means of the multiple foil neutron activation technique. In order to check and qualify the experimental tools and the codes to be used in the tungsten benchmark experiment, test measurements in the D-T and D-D neutron fields of the neutron generator at Technische Universitaet Dresden were performed. The characteristics of the D-D and D-T reactions, used to produce monoenergetic neutrons, together with the selection of activation reactions suitable for fusion applications and details of the activation measurements are presented. Corrections related to the neutron irradiation process and to the sample counting process are also discussed. The neutron fluence and its energy distribution in a tungsten benchmark, irradiated at the Frascati neutron generator with 14 MeV neutrons produced by the T(d,n)4He reaction, are then derived from measurements of the neutron-induced γ-ray activity in the foils using the STAYNL unfolding code, based on the linear least-squares-errors method, together with the IRDF-90.2 (International Reactor Dosimetry File) cross section library. The differences between the neutron flux

  16. Combining and benchmarking methods of foetal ECG extraction without maternal or scalp electrode data

    International Nuclear Information System (INIS)

    Despite significant advances in adult clinical electrocardiography (ECG) signal processing techniques and the power of digital processors, the analysis of non-invasive foetal ECG (NI-FECG) is still in its infancy. The Physionet/Computing in Cardiology Challenge 2013 addresses some of these limitations by making a set of FECG data publicly available to the scientific community for evaluation of signal processing techniques. The abdominal ECG signals were first preprocessed with a band-pass filter in order to remove higher frequencies and baseline wander. A notch filter to remove power interferences at 50 Hz or 60 Hz was applied if required. The signals were then normalized before applying various source separation techniques to cancel the maternal ECG. These techniques included: template subtraction, principal/independent component analysis, extended Kalman filter and a combination of a subset of these methods (FUSE method). Foetal QRS detection was performed on all residuals using a Pan and Tompkins QRS detector and the residual channel with the smoothest foetal heart rate time series was selected. The FUSE algorithm performed better than all the individual methods on the training data set. On the validation and test sets, the best Challenge scores obtained were E1 = 179.44, E2 = 20.79, E3 = 153.07, E4 = 29.62 and E5 = 4.67 for events 1–5 respectively using the FUSE method. These were the best Challenge scores for E1 and E2 and third and second best Challenge scores for E3, E4 and E5 out of the 53 international teams that entered the Challenge. The results demonstrated that existing standard approaches for foetal heart rate estimation can be improved by fusing estimators together. We provide open source code to enable benchmarking for each of the standard approaches described. (paper)
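
    Of the cancellation techniques listed, template subtraction is the simplest to sketch: average windows around the detected maternal R-peaks to form a maternal beat template, then subtract the template at every beat so the foetal ECG survives in the residual. The window length and synthetic signal below are illustrative only:

```python
import numpy as np

def template_subtraction(abdominal, m_peaks, half_window=100):
    """Cancel the maternal ECG: build a mean beat template from windows
    around each maternal R-peak, then subtract it at every peak location."""
    w = half_window
    beats = np.array([abdominal[p - w:p + w] for p in m_peaks
                      if p - w >= 0 and p + w <= len(abdominal)])
    template = beats.mean(axis=0)
    residual = abdominal.astype(float).copy()
    for p in m_peaks:
        if p - w >= 0 and p + w <= len(abdominal):
            residual[p - w:p + w] -= template
    return residual   # foetal ECG (plus noise) remains in the residual

# Toy check with a synthetic maternal-only signal: the residual should be ~0.
fs = 500
t = np.arange(0, 10, 1 / fs)
m_peaks = np.arange(250, len(t) - 250, 400)        # maternal beat locations
signal = np.zeros_like(t)
for p in m_peaks:
    signal[p - 25:p + 25] += np.hanning(50)        # stylized maternal QRS
print(np.abs(template_subtraction(signal, m_peaks)).max())
```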

  17. Benchmarking DFT and semiempirical methods on structures and lattice energies for ten ice polymorphs

    Science.gov (United States)

    Brandenburg, Jan Gerit; Maas, Tilo; Grimme, Stefan

    2015-03-01

    Water in different phases under various external conditions is very important in bio-chemical systems and for material science at surfaces. Density functional theory methods and approximations thereof have to be tested system specifically to benchmark their accuracy regarding computed structures and interaction energies. In this study, we present and test a set of ten ice polymorphs in comparison to experimental data with mass densities ranging from 0.9 to 1.5 g/cm3 and including explicit corrections for zero-point vibrational and thermal effects. London dispersion inclusive density functionals at the generalized gradient approximation (GGA), meta-GGA, and hybrid level as well as alternative low-cost molecular orbital methods are considered. The widely used functional of Perdew, Burke and Ernzerhof (PBE) systematically overbinds and overall provides inconsistent results. All other tested methods yield reasonable to very good accuracy. BLYP-D3atm gives excellent results with mean absolute errors for the lattice energy below 1 kcal/mol (7% relative deviation). The corresponding optimized structures are very accurate with mean absolute relative deviations (MARDs) from the reference unit cell volume below 1%. The impact of Axilrod-Teller-Muto (atm) type three-body dispersion and of non-local Fock exchange is small but on average their inclusion improves the results. While the density functional tight-binding model DFTB3-D3 performs well for low density phases, it does not yield good high density structures. As low-cost alternative for structure related problems, we recommend the recently introduced minimal basis Hartree-Fock method HF-3c with a MARD of about 3%.

  18. Combining and benchmarking methods of foetal ECG extraction without maternal or scalp electrode data.

    Science.gov (United States)

    Behar, Joachim; Oster, Julien; Clifford, Gari D

    2014-08-01

    Despite significant advances in adult clinical electrocardiography (ECG) signal processing techniques and the power of digital processors, the analysis of non-invasive foetal ECG (NI-FECG) is still in its infancy. The Physionet/Computing in Cardiology Challenge 2013 addresses some of these limitations by making a set of FECG data publicly available to the scientific community for evaluation of signal processing techniques.The abdominal ECG signals were first preprocessed with a band-pass filter in order to remove higher frequencies and baseline wander. A notch filter to remove power interferences at 50 Hz or 60 Hz was applied if required. The signals were then normalized before applying various source separation techniques to cancel the maternal ECG. These techniques included: template subtraction, principal/independent component analysis, extended Kalman filter and a combination of a subset of these methods (FUSE method). Foetal QRS detection was performed on all residuals using a Pan and Tompkins QRS detector and the residual channel with the smoothest foetal heart rate time series was selected.The FUSE algorithm performed better than all the individual methods on the training data set. On the validation and test sets, the best Challenge scores obtained were E1 = 179.44, E2 = 20.79, E3 = 153.07, E4 = 29.62 and E5 = 4.67 for events 1-5 respectively using the FUSE method. These were the best Challenge scores for E1 and E2 and third and second best Challenge scores for E3, E4 and E5 out of the 53 international teams that entered the Challenge. The results demonstrated that existing standard approaches for foetal heart rate estimation can be improved by fusing estimators together. We provide open source code to enable benchmarking for each of the standard approaches described. PMID:25069410

  19. Monitoring methods for skin dose in interventional radiology

    Directory of Open Access Journals (Sweden)

    Abdulhamid Chaikh

    2015-03-01

    Interventional radiology makes increasing use of X-rays for diagnostic and therapeutic procedures. The dose received by the patient sometimes exceeds the threshold value for deterministic effects, and this requires monitoring of the dose delivered to patients. The delivered dose can be assessed by either direct or indirect methods. Direct methods use dosimeters placed on the skin during the procedure, whereas indirect methods are based on measured quantities derived from the equipment itself. Each method has its own limitations; the main concern, however, is the ability to measure the dose accurately given the complexity of the patient's anatomical structures and the variable course of each procedure. This review article summarizes the principle and the main advantages and disadvantages of each method. A comparison of the performance of each method, in terms of its ability to monitor the patient's skin dose in interventional fluoroscopy and radiography, is provided.

  20. Post-BEMUSE Reflood Model Input Uncertainty Methods (PREMIUM) Benchmark Phase II: Identification of Influential Parameters

    International Nuclear Information System (INIS)

    The objective of the Post-BEMUSE Reflood Model Input Uncertainty Methods (PREMIUM) benchmark is to make progress on the issue of quantifying the uncertainty of the physical models in system thermal-hydraulic codes by considering a concrete case: the physical models involved in the prediction of core reflooding. The PREMIUM benchmark consists of five phases. This report presents the results of Phase II, dedicated to the identification of the uncertain code parameters associated with the physical models used in the simulation of reflooding conditions. This identification is made on the basis of Test 216 of the FEBA/SEFLEX programme, according to the following steps: - identification of influential phenomena; - identification of the associated physical models and parameters, depending on the code used; - quantification of the variation range of the identified input parameters through a series of sensitivity calculations. A procedure for the identification of potentially influential code input parameters has been set up in the Specifications of Phase II of the PREMIUM benchmark. A set of quantitative criteria has also been proposed for the identification of influential input parameters (IPs) and their respective variation ranges. Thirteen participating organisations, using 8 different codes (7 system thermal-hydraulic codes and 1 sub-channel module of a system thermal-hydraulic code), submitted Phase II results. The base-case calculations show a spread in predicted cladding temperatures and quench front propagation that has been characterized. All the participants except one predict too fast a quench front progression. Besides, the cladding temperature time trends obtained by almost all the participants show oscillatory behaviour which may have numerical origins. The criteria adopted for the identification of influential input parameters differ between the participants: some organisations used the set of criteria proposed in the Specifications 'as is', some modified the quantitative thresholds

  1. Neutron Cross Section Processing Methods for Improved Integral Benchmarking of Unresolved Resonance Region Evaluations

    Science.gov (United States)

    Walsh, Jonathan A.; Forget, Benoit; Smith, Kord S.; Brown, Forrest B.

    2016-03-01

    In this work we describe the development and application of computational methods for processing neutron cross section data in the unresolved resonance region (URR). These methods are integrated with a continuous-energy Monte Carlo neutron transport code, thereby enabling their use in high-fidelity analyses. Enhanced understanding of the effects of URR evaluation representations on calculated results is then obtained through utilization of the methods in Monte Carlo integral benchmark simulations of fast spectrum critical assemblies. First, we present a so-called on-the-fly (OTF) method for calculating and Doppler broadening URR cross sections. This method proceeds directly from ENDF-6 average unresolved resonance parameters and, thus, eliminates any need for a probability table generation pre-processing step in which tables are constructed at several energies for all desired temperatures. Significant memory reduction may be realized with the OTF method relative to a probability table treatment if many temperatures are needed. Next, we examine the effects of using a multi-level resonance formalism for resonance reconstruction in the URR. A comparison of results obtained by using the same stochastically-generated realization of resonance parameters in both the single-level Breit-Wigner (SLBW) and multi-level Breit-Wigner (MLBW) formalisms allows for the quantification of level-level interference effects on integrated tallies such as keff and energy group reaction rates. Though, as is well-known, cross section values at any given incident energy may differ significantly between single-level and multi-level formulations, the observed effects on integral results are minimal in this investigation. Finally, we demonstrate the calculation of true expected values, and the statistical spread of those values, through independent Monte Carlo simulations, each using an independent realization of URR cross section structure throughout. It is observed that both probability table
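
    For illustration only, the flavour of reconstructing cross sections from one stochastic realization of URR resonance parameters can be sketched as below; the Lorentzian (SLBW-like) profile ignores the energy dependence of the widths, penetrabilities and interference, so this is a conceptual sketch rather than the paper's ENDF-6 processing:

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_realization(e_lo, e_hi, d_mean, gn_mean, gg):
    """One realization of URR resonance energies and neutron widths
    (crude Wigner-like spacings; exponential widths as a stand-in for
    a low-degrees-of-freedom chi-squared distribution)."""
    energies, e = [], e_lo
    while e < e_hi:
        e += d_mean * np.sqrt(rng.exponential())
        energies.append(e)
    n = len(energies)
    return np.array(energies), rng.exponential(gn_mean, n), np.full(n, gg)

def capture_xs(E, e_r, g_n, g_g, sigma0=1.0e3):
    """Schematic capture cross section: a sum of Lorentzian resonances."""
    sigma = np.zeros_like(E)
    for er, gn, gg in zip(e_r, g_n, g_g):
        sigma += sigma0 * gn * gg / ((E - er) ** 2 + ((gn + gg) / 2.0) ** 2)
    return sigma

E = np.linspace(10.0e3, 11.0e3, 5000)                  # eV
e_r, g_n, g_g = sample_realization(10.0e3, 11.0e3, 20.0, 0.5, 0.05)
print(capture_xs(E, e_r, g_n, g_g).mean())             # one realization's average
```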

  2. Neutron Cross Section Processing Methods for Improved Integral Benchmarking of Unresolved Resonance Region Evaluations

    Directory of Open Access Journals (Sweden)

    Walsh Jonathan A.

    2016-01-01

    In this work we describe the development and application of computational methods for processing neutron cross section data in the unresolved resonance region (URR). These methods are integrated with a continuous-energy Monte Carlo neutron transport code, thereby enabling their use in high-fidelity analyses. Enhanced understanding of the effects of URR evaluation representations on calculated results is then obtained through utilization of the methods in Monte Carlo integral benchmark simulations of fast spectrum critical assemblies. First, we present a so-called on-the-fly (OTF) method for calculating and Doppler broadening URR cross sections. This method proceeds directly from ENDF-6 average unresolved resonance parameters and, thus, eliminates any need for a probability table generation pre-processing step in which tables are constructed at several energies for all desired temperatures. Significant memory reduction may be realized with the OTF method relative to a probability table treatment if many temperatures are needed. Next, we examine the effects of using a multi-level resonance formalism for resonance reconstruction in the URR. A comparison of results obtained by using the same stochastically-generated realization of resonance parameters in both the single-level Breit-Wigner (SLBW) and multi-level Breit-Wigner (MLBW) formalisms allows for the quantification of level-level interference effects on integrated tallies such as keff and energy group reaction rates. Though, as is well-known, cross section values at any given incident energy may differ significantly between single-level and multi-level formulations, the observed effects on integral results are minimal in this investigation. Finally, we demonstrate the calculation of true expected values, and the statistical spread of those values, through independent Monte Carlo simulations, each using an independent realization of URR cross section structure throughout. It is observed that both

  3. Classification of criticality calculations with correlation coefficient method and its application to OECD/NEA burnup credit benchmarks phase III-A and II-A

    International Nuclear Information System (INIS)

    A method for classifying benchmark results of criticality calculations according to similarity is proposed in this paper. After formulation of the method utilizing correlation coefficients, it was applied to the burnup credit criticality benchmarks Phase III-A and II-A, which were conducted by the Expert Group on Burnup Credit Criticality Safety under the auspices of the Nuclear Energy Agency of the Organisation for Economic Co-operation and Development (OECD/NEA). The Phase III-A benchmark was a series of criticality calculations for irradiated Boiling Water Reactor (BWR) fuel assemblies, whereas the Phase II-A benchmark was a suite of criticality calculations for irradiated Pressurized Water Reactor (PWR) fuel pins. These benchmark problems and their results are summarized. The correlation coefficients were calculated, and sets of benchmark calculation results were classified according to the criterion that the values of the correlation coefficients were no less than 0.15 for the Phase III-A and 0.10 for the Phase II-A benchmarks. When a pair of benchmark calculation results belonged to the same group, one calculation result was found to be predictable from the other. An example is shown for each of the benchmarks. While the evaluated nuclear data seem to be the main factor behind the classification, further investigation is required to find other factors. (author)
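
    A minimal sketch of the grouping idea, assuming each benchmark case is represented by a vector of calculated results (for example, keff from each participating code/library combination); the thresholds mirror the 0.15 and 0.10 criteria quoted above:

```python
import numpy as np

def correlation_groups(results, threshold):
    """Group benchmark cases whose result vectors are correlated at or
    above `threshold`. `results` is an (n_cases, n_codes) array, e.g.
    keff computed for each case by each participating code."""
    r = np.corrcoef(results)            # pairwise Pearson correlations
    n = r.shape[0]
    groups, assigned = [], set()
    for i in range(n):
        if i in assigned:
            continue
        group = {i} | {j for j in range(n) if j != i and r[i, j] >= threshold}
        assigned |= group
        groups.append(sorted(group))
    return groups

# Hypothetical results: 6 benchmark cases, 8 codes
results = np.random.default_rng(1).normal(1.0, 0.005, size=(6, 8))
print(correlation_groups(results, threshold=0.15))
```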

  4. Benchmarking HRD.

    Science.gov (United States)

    Ford, Donald J.

    1993-01-01

    Discusses benchmarking, the continuous process of measuring one's products, services, and practices against those recognized as leaders in that field to identify areas for improvement. Examines ways in which benchmarking can benefit human resources functions. (JOW)

  5. APPLICATION OF PARAMETRIC AND NON-PARAMETRIC BENCHMARKING METHODS IN COST EFFICIENCY ANALYSIS OF THE ELECTRICITY DISTRIBUTION SECTOR

    Directory of Open Access Journals (Sweden)

    Andrea Furková

    2007-06-01

    This paper explores the application of parametric and non-parametric benchmarking methods in measuring the cost efficiency of Slovak and Czech electricity distribution companies. We compare the relative cost efficiency of Slovak and Czech distribution companies using two benchmarking methods: the non-parametric Data Envelopment Analysis (DEA) and, as the parametric approach, Stochastic Frontier Analysis (SFA). The first part of the analysis was based on DEA models. Traditional cross-sectional CCR and BCC models were modified for cost efficiency estimation. In the further analysis we focus on two versions of the stochastic frontier cost function using panel data: an MLE model and a GLS model. These models have been applied to an unbalanced panel of 11 regional electricity distribution utilities (3 in Slovakia and 8 in the Czech Republic) over the period from 2000 to 2004. The differences in estimated scores, parameters and rankings of the utilities were analyzed. We observed significant differences between the parametric methods and the DEA approach.
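
    For reference, the non-parametric side of such an analysis reduces to one small linear program per utility. A sketch of the standard input-oriented CCR envelopment model with scipy (data are hypothetical, and the authors' cost-efficiency variants differ in detail):

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR efficiency score of unit o.
    X: (m inputs, n units), Y: (s outputs, n units)."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                 # minimize theta
    A_in = np.hstack([-X[:, [o]], X])           # X @ lam <= theta * x_o
    A_out = np.hstack([np.zeros((s, 1)), -Y])   # Y @ lam >= y_o
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[:, o]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.x[0]

# Hypothetical data: 2 inputs (opex, network length), 1 output (energy delivered)
X = np.array([[3.0, 2.5, 4.1, 5.0],
              [1.2, 0.9, 1.6, 2.2]])
Y = np.array([[10.0, 9.5, 12.0, 13.0]])
print([round(ccr_efficiency(X, Y, o), 3) for o in range(4)])
```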

  6. Multidimensional benchmarking

    OpenAIRE

    Campbell, Akiko

    2016-01-01

    Benchmarking is a process of comparison between performance characteristics of separate, often competing organizations intended to enable each participant to improve its own performance in the marketplace (Kay, 2007). Benchmarking sets organizations’ performance standards based on what “others” are achieving. Most widely adopted approaches are quantitative and reveal numerical performance gaps where organizations lag behind benchmarks; however, quantitative benchmarking on its own rarely yi...

  7. Hanford dose overview program: standardized methods and data for Hanford environmental-dose calculations

    International Nuclear Information System (INIS)

    The Hanford Dose Overview Program is a Hanford site-wide service established to provide a method of assuring the consistency of Hanford-related environmental dose assessments. This document serves as a guide to the Hanford contractors for obtaining or performing Hanford-related environmental dose calculations. The program serves as a focal point for Hanford environmental dose calculation activities and provides a number of services for Hanford contractors involved in calculation of environmental doses. Site specific input data and assumptions have been compiled and are maintained for use by the contractors in calculating Hanford environmental doses. The data and assumptions, to the extent they apply, should be used in Hanford calculations. These data are not all inclusive and will be modified should additional or more appropriate information become available

  8. A highly heterogeneous 3D PWR core benchmark: deterministic and Monte Carlo method comparison

    International Nuclear Information System (INIS)

    Physical analyses of LWR potential performance with regard to fuel utilization require an important part of the work to be dedicated to the validation of the deterministic models used for these analyses. Advances in both codes and computer technology give the opportunity to perform the validation of these models on complex 3D core configurations close to the physical situations encountered (both steady-state and transient configurations). In this paper, we use the Monte Carlo transport code TRIPOLI-4 to describe a whole 3D large-scale and highly heterogeneous LWR core. The aim of this study is to validate the deterministic code CRONOS2 against the Monte Carlo code TRIPOLI-4 in a relevant PWR core configuration. To this end, a 3D pin-by-pin model with a large number of volumes (4.3 million) and media (around 23,000) is established to precisely characterize the core at equilibrium cycle, namely using refined burn-up and moderator density maps. The configuration selected for this analysis is a very heterogeneous PWR high-conversion core with fissile (MOX fuel) and fertile zones (depleted uranium). Furthermore, a tight-pitch lattice is selected (to increase conversion of 238U into 239Pu), which leads to a harder neutron spectrum compared to a standard PWR assembly. This benchmark shows two main points. First, independent replicas are an appropriate method to achieve a fair variance estimation when the dominance ratio is near 1. Second, the diffusion operator with two energy groups gives satisfactory results compared to TRIPOLI-4, even with a highly heterogeneous neutron flux map and a harder spectrum

  9. Dosimetric validation of Acuros XB with Monte Carlo methods for photon dose calculations

    International Nuclear Information System (INIS)

    Purpose: The dosimetric accuracy of the recently released Acuros XB advanced dose calculation algorithm (Varian Medical Systems, Palo Alto, CA) is investigated for single radiation fields incident on homogeneous and heterogeneous geometries, and a comparison is made to the analytical anisotropic algorithm (AAA). Methods: Ion chamber measurements for the 6 and 18 MV beams within a range of field sizes (from 4.0 × 4.0 to 30.0 × 30.0 cm2) are used to validate Acuros XB dose calculations within a unit density phantom. The dosimetric accuracy of Acuros XB in the presence of lung, low-density lung, air, and bone is determined using BEAMnrc/DOSXYZnrc calculations as a benchmark. Calculations using the AAA are included for reference to a current superposition/convolution standard. Results: Basic open field tests in a homogeneous phantom reveal an Acuros XB agreement with measurement to within ±1.9% in the inner field region for all field sizes and energies. Calculations on a heterogeneous interface phantom were found to agree with Monte Carlo calculations to within ±2.0% (σMC = 0.8%) in lung (ρ = 0.24 g cm-3) and within ±2.9% (σMC = 0.8%) in low-density lung (ρ = 0.1 g cm-3). In comparison, differences of up to 10.2% and 17.5% in lung and low-density lung were observed in the equivalent AAA calculations. Acuros XB dose calculations performed on a phantom containing an air cavity (ρ = 0.001 g cm-3) were found to be within the range of ±1.5% to ±4.5% of the BEAMnrc/DOSXYZnrc calculated benchmark (σMC = 0.8%) in the tissue above and below the air cavity. A comparison of Acuros XB dose calculations performed on a lung CT dataset with a BEAMnrc/DOSXYZnrc benchmark shows agreement within ±2%/2 mm and indicates that the remaining differences are primarily a result of differences in physical material assignments within a CT dataset. Conclusions: By considering the fundamental particle interactions in matter based on theoretical interaction cross sections, the Acuros XB algorithm is

  10. Benchmark of Machine Learning Methods for Classification of a SENTINEL-2 Image

    Science.gov (United States)

    Pirotti, F.; Sunar, F.; Piragnolo, M.

    2016-06-01

    Thanks mainly to ESA and USGS, a large bulk of free images of the Earth is readily available nowadays. One of the main goals of remote sensing is to label images according to a set of semantic categories, i.e. image classification. This is a very challenging issue, since the land cover of a specific class may present large spatial and spectral variability and objects may appear at different scales and orientations. In this study, we report the results of benchmarking 9 machine learning algorithms, tested for accuracy and speed in the training and classification of land-cover classes in a Sentinel-2 dataset. The following machine learning methods (MLM) have been tested: linear discriminant analysis, k-nearest neighbour, random forests, support vector machines, multi-layer perceptron, multi-layer perceptron ensemble, ctree, boosting, and logarithmic regression. The validation is carried out using a control dataset consisting of an independent classification into 11 land-cover classes of an area of about 60 km2, obtained by manual visual interpretation of high-resolution images (20 cm ground sampling distance) by experts. In this study five out of the eleven classes are used, since the others have too few samples (pixels) for the testing and validating subsets. The classes used are the following: (i) urban, (ii) sowable areas, (iii) water, (iv) tree plantations, (v) grasslands. Validation is carried out using three different approaches: (i) using pixels from the training dataset (train), (ii) using pixels from the training dataset and applying cross-validation with the k-fold method (kfold) and (iii) using all pixels from the control dataset. Five accuracy indices are calculated for the comparison between the values predicted with each model and control values over three sets of data: the training dataset (train), the whole control dataset (full) and with k-fold cross-validation (kfold) with ten folds. Results from validation of predictions of the whole dataset (full) show the random
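
    This kind of benchmark loop is straightforward to reproduce with scikit-learn; a minimal sketch with synthetic features standing in for the Sentinel-2 bands and only a few of the nine tested methods:

```python
import time
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

# Synthetic stand-in: 13 spectral features, 5 land-cover classes
X, y = make_classification(n_samples=2000, n_features=13, n_informative=8,
                           n_classes=5, random_state=0)

models = {
    "LDA": LinearDiscriminantAnalysis(),
    "kNN": KNeighborsClassifier(n_neighbors=5),
    "RF":  RandomForestClassifier(n_estimators=200, random_state=0),
    "SVM": SVC(kernel="rbf"),
}

for name, model in models.items():
    t0 = time.time()
    acc = cross_val_score(model, X, y, cv=10).mean()   # 10-fold, as in the study
    print(f"{name:4s} accuracy={acc:.3f} time={time.time() - t0:.1f}s")
```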

  11. Hanford Dose Overview Program: standardized methods and data for Hanford environmental dose calculations. Rev. 1

    International Nuclear Information System (INIS)

    This document serves as a guide to Hanford contractors for obtaining or performing Hanford-related environmental dose calculations. Because environmental dose estimation techniques are state-of-the-art and are continually evolving, the data and standard methods presented herein will require periodic revision. This document is scheduled to be updated annually, but actual changes to the program will be made more frequently if required. For this reason, PNL's Occupational and Environmental Protection Department should be contacted before any Hanford-related environmental dose calculation is performed. This revision of the Hanford Dose Overview Program Report primarily reflects changes made to the data and models used in calculating atmospheric dispersion of airborne effluents at Hanford. The modified data and models are described in detail. In addition, discussions of dose calculation methods and the review of calculation results have been expanded to provide more explicit guidance to the Hanford contractors. 19 references, 30 tables

  12. Benchmarking passive seismic methods of estimating the depth of velocity interfaces down to ~300 m

    Science.gov (United States)

    Czarnota, Karol; Gorbatov, Alexei

    2016-04-01

    In shallow passive seismology it is generally accepted that the spatial autocorrelation (SPAC) method is more robust than the horizontal-over-vertical spectral ratio (HVSR) method at resolving the depth to surface-wave velocity (Vs) interfaces. Here we present results of a field test of these two methods over ten drill sites in western Victoria, Australia. The target interface is the base of Cenozoic unconsolidated to semi-consolidated clastic and/or carbonate sediments of the Murray Basin, which overlie Paleozoic crystalline rocks. Depths of this interface intersected in drill holes are between ~27 m and ~300 m. Seismometers were deployed in a three-arm spiral array, with a radius of 250 m, consisting of 13 Trillium Compact 120 s broadband instruments. Data were acquired at each site for 7-21 hours. The Vs architecture beneath each site was determined through nonlinear inversion of HVSR and SPAC data using the neighbourhood algorithm, implemented in the geopsy modelling package (Wathelet, 2005, GRL v35). The HVSR technique yielded depth estimates of the target interface (Vs > 1000 m/s) generally within ±20% error. Successful estimates were even obtained at a site with an inverted velocity profile, where Quaternary basalts overlie Neogene sediments which in turn overlie the target basement. Half of the SPAC estimates showed significantly higher errors than were obtained using HVSR. Joint inversion provided the most reliable estimates but was unstable at three sites. We attribute the surprising success of HVSR over SPAC to a low content of transient signals within the seismic record caused by low levels of anthropogenic noise at the benchmark sites. At a few sites SPAC waveform curves showed clear overtones suggesting that more reliable SPAC estimates may be obtained utilizing a multi-modal inversion. Nevertheless, our study indicates that reliable basin thickness estimates in the Australian conditions tested can be obtained utilizing HVSR data from a single
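
    The HVSR computation itself is simple. A minimal sketch (the window length and the quadratic-mean combination of horizontals are illustrative choices, not necessarily those of the study):

```python
import numpy as np

def hvsr(north, east, vertical, fs, nwin=4096):
    """Horizontal-over-vertical spectral ratio, averaged over
    non-overlapping windows of nwin samples."""
    nseg = len(vertical) // nwin
    freqs = np.fft.rfftfreq(nwin, d=1.0 / fs)
    ratios = []
    for k in range(nseg):
        sl = slice(k * nwin, (k + 1) * nwin)
        n_amp = np.abs(np.fft.rfft(north[sl]))
        e_amp = np.abs(np.fft.rfft(east[sl]))
        v_amp = np.abs(np.fft.rfft(vertical[sl]))
        h_amp = np.sqrt(0.5 * (n_amp**2 + e_amp**2))  # quadratic mean of horizontals
        ratios.append(h_amp / (v_amp + 1e-20))
    return freqs, np.mean(ratios, axis=0)

# The frequency f0 of the HVSR peak constrains the depth h to a strong
# velocity interface via h ~ Vs / (4 * f0) for a simple one-layer model.
```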

  13. Benchmark Dose Analysis from Multiple Datasets: The Cumulative Risk Assessment for the N-Methyl Carbamate Pesticides

    Science.gov (United States)

    The US EPA’s N-Methyl Carbamate (NMC) Cumulative Risk assessment was based on the effect on acetylcholine esterase (AChE) activity of exposure to 10 NMC pesticides through dietary, drinking water, and residential exposures, assuming the effects of joint exposure to NMCs is dose-...

  14. Benchmarked Empirical Bayes Methods in Multiplicative Area-level Models with Risk Evaluation

    OpenAIRE

    Ghosh, Malay; Kubokawa, Tatsuya; Kawakubo, Yuki

    2014-01-01

    The paper develops empirical Bayes and benchmarked empirical Bayes estimators of positive small area means under multiplicative models. A simple example will be estimation of per capita income for small areas. It is now well-understood that small area estimation needs explicit, or at least implicit use of models. One potential difficulty with model-based estimators is that the overall estimator for a larger geographical area based on (weighted) sum of the model-based estimators is not necessa...

  15. Financial benchmarking

    OpenAIRE

    Boldyreva, Anna

    2014-01-01

    This bachelor's thesis is focused on the financial benchmarking of TULIPA PRAHA s.r.o. The aim of this work is to evaluate the financial situation of the company, identify its strengths and weaknesses, and find out how efficient the company's performance is in comparison with top companies in the same field, using the INFA benchmarking diagnostic system of financial indicators. The theoretical part includes the characteristics of financial analysis, on which financial benchmarking is based...

  16. A performance geodynamo benchmark

    Science.gov (United States)

    Matsui, H.; Heien, E. M.

    2014-12-01

    In the last ten years, a number of numerical dynamo models have successfully represented basic characteristics of the geomagnetic field. However, to approach the parameter regime of the Earth's outer core, we need a massively parallel computational environment for extremely large spatial resolutions. Local methods are expected to be more suitable for massively parallel computation because they need less data communication than spherical harmonics expansion, but only a few groups have reported dynamo benchmark results using local methods (Harder and Hansen, 2005; Matsui and Okuda, 2005; Chan et al., 2007) because of the difficulty of treating magnetic boundary conditions with local methods. On the other hand, some numerical dynamo models using spherical harmonics expansion have performed successfully with thousands of processes. We perform benchmark tests to assess various numerical methods for the next generation of geodynamo simulations. The purpose of the present benchmark test is to assess numerical geodynamo models on a massively parallel computational platform. To compare among as many numerical methods as possible, we consider the model with the insulating magnetic boundary by Christensen et al. (2001) and with the pseudo-vacuum magnetic boundary, because pseudo-vacuum boundaries are easier to implement with local methods than magnetically insulating boundaries. We consider two kinds of benchmarks, a so-called accuracy benchmark and a performance benchmark; in the present study, we report the results of the performance benchmark. We run the participating dynamo models under the same computational environment (XSEDE TACC Stampede) and investigate computational performance. To simplify the problem, we choose the same model and parameter regime as the accuracy benchmark test, but perform the simulations with spatial resolutions as fine as possible to investigate computational capability.

  17. Methods for monitoring patient dose in dental radiology

    International Nuclear Information System (INIS)

    Different types of X-ray equipment are used in dental radiology, such as intra-oral, panoramic, cephalometric, cone-beam computed tomography (CBCT) and multi-slice computed tomography (MSCT) units. Digital receptors have replaced film and screen-film systems, and other technical developments have been made. The radiation doses arising from different types of examination are sparsely documented and often expressed in different radiation quantities. In order to allow the comparison of radiation doses using conventional techniques, i.e. intra-oral, panoramic and cephalometric units, with those obtained using CBCT or MSCT techniques, the same dose quantities and units must be used. Dose determination should be straightforward and reproducible, and data should be stored for each image and clinical examination. It is shown here that air kerma-area product (PKA) values can be used to monitor the radiation doses used in all types of dental examinations, including CBCT and MSCT. However, for the CBCT and MSCT techniques, the methods for the estimation of dose must be more thoroughly investigated. The values recorded can be used to determine diagnostic standard doses and to set diagnostic reference levels for each type of clinical examination and equipment used. It should also be possible to use these values for the estimation and documentation of organ or effective doses. (authors)

  18. A method of estimating fetal dose during brain radiation therapy

    International Nuclear Information System (INIS)

    Purpose: To develop a simple method of estimating fetal dose during brain radiation therapy. Methods and Materials: An anthropomorphic phantom was modified to simulate pregnancy at 12 and 24 weeks of gestation. Fetal dose measurements were carried out using thermoluminescent dosimeters. Brain radiation therapy was performed with two lateral opposed fields using 6 MV photons. Three sheets of lead, 5.1 cm thick, were positioned over the phantom's abdomen to reduce fetal exposure. Linear and nonlinear regression analysis was used to investigate the dependence of the radiation dose to an unshielded and/or shielded fetus upon field size and distance from the field isocenter. Results: Formulas describing the exponential decrease of the radiation dose to an unshielded and/or shielded fetus with distance from the field isocenter are presented. All fitted parameters of these formulas can be easily derived using a set of graphs showing their correlation with field size. Conclusion: This study describes a method of estimating fetal dose during brain radiotherapy, accounting for the effects of gestational age, field size and distance from the field isocenter. Accurate knowledge of the absorbed dose to the fetus before the treatment course allows for the selection of the proper irradiation technique, in order to achieve the maximum patient benefit with the least risk to the fetus
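
    A hedged sketch of the kind of fit described, assuming the peripheral dose falls off exponentially with distance d from the field isocenter, D(d) = D0 exp(-mu d); all readings below are hypothetical:

```python
import numpy as np
from scipy.optimize import curve_fit

def dose_model(d, D0, mu):
    """Exponential fall-off of peripheral dose with distance from isocenter."""
    return D0 * np.exp(-mu * d)

# Hypothetical TLD readings: distance from isocenter (cm) vs dose (% of prescribed)
d = np.array([20.0, 30.0, 40.0, 50.0, 60.0])
dose = np.array([0.80, 0.34, 0.15, 0.061, 0.027])

(D0, mu), _ = curve_fit(dose_model, d, dose, p0=(2.0, 0.05))
print(f"D0 = {D0:.2f} %, mu = {mu:.3f} 1/cm")
print(f"Predicted fetal dose at 45 cm: {dose_model(45.0, D0, mu):.3f} %")
```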

  19. Benchmark selection

    DEFF Research Database (Denmark)

    Hougaard, Jens Leth; Tvede, Mich

    2002-01-01

    Within a production theoretic framework, this paper considers an axiomatic approach to benchmark selection. It is shown that two simple and weak axioms, efficiency and comprehensive monotonicity, characterize a natural family of benchmarks which typically becomes unique. Further axioms are added in order to obtain a unique selection...

  20. Interactive benchmarking

    DEFF Research Database (Denmark)

    Lawson, Lartey; Nielsen, Kurt

    2005-01-01

    We discuss individual learning by interactive benchmarking using stochastic frontier models. The interactions allow the user to tailor the performance evaluation to preferences and explore alternative improvement strategies by selecting and searching the different frontiers using directional... suggested benchmarking tool. The study investigates how different characteristics of dairy farms influence technical efficiency.

  1. Precious benchmarking

    International Nuclear Information System (INIS)

    Recently, a new word has been added to our vocabulary - benchmarking. Because of benchmarking, our colleagues travel to power plants all around the world, and guests from European power plants visit us. We asked Marek Niznansky from the Nuclear Safety Department at the Jaslovske Bohunice NPP to explain this term to us. (author)

  2. Shielding benchmark problems, (2)

    International Nuclear Information System (INIS)

    Shielding benchmark problems prepared by the Working Group for the Assessment of Shielding Experiments of the Research Committee on Shielding Design of the Atomic Energy Society of Japan were compiled by the Shielding Laboratory at the Japan Atomic Energy Research Institute. Fourteen new shielding benchmark problems are presented, in addition to the twenty-one problems already proposed, for evaluating the calculational algorithms and accuracy of computer codes based on the discrete ordinates method and the Monte Carlo method, and for evaluating the nuclear data used in the codes. The present benchmark problems are principally for investigating the backscattering and streaming of neutrons and gamma rays in two- and three-dimensional configurations. (author)

  3. Toxicological Benchmarks for Wildlife

    Energy Technology Data Exchange (ETDEWEB)

    Sample, B.E. Opresko, D.M. Suter, G.W.

    1993-01-01

    -tailed hawk, osprey) (scientific names for both the mammalian and avian species are presented in Appendix B). [In this document, NOAEL refers to both dose (mg contaminant per kg animal body weight per day) and concentration (mg contaminant per kg of food or L of drinking water)]. The 20 wildlife species were chosen because they are widely distributed and provide a representative range of body sizes and diets. The chemicals are some of those that occur at U.S. Department of Energy (DOE) waste sites. The NOAEL-based benchmarks presented in this report represent values believed to be nonhazardous for the listed wildlife species; LOAEL-based benchmarks represent threshold levels at which adverse effects are likely to become evident. These benchmarks consider contaminant exposure through oral ingestion of contaminated media only. Exposure through inhalation and/or direct dermal contact is not considered in this report.

  4. Calculation method for gamma-dose rates from spherical puffs

    International Nuclear Information System (INIS)

    Lagrangian puff models are widely used for calculating the dispersion of atmospheric releases. Basic outputs from such models are concentrations of material in the air and on the ground. The simplest method for calculating the gamma dose from the concentration of airborne activity is based on the semi-infinite cloud model. This method, however, is only applicable for points far away from the release point. The exact calculation of the cloud dose using the volume integral requires significant computer time. The volume integral for the gamma dose can be approximated by using the semi-infinite cloud model combined with correction factors. This type of calculation procedure is very fast, but the accuracy is usually poor due to the fact that the same correction factors are used for all isotopes. The authors describe a more elaborate correction method. This method uses precalculated values of the gamma-dose rate as a function of the puff dispersion parameter (δp) and the distance from the puff centre for four energy groups. The released energy for each radionuclide in each energy group has been calculated and tabulated. Based on these tables and a suitable interpolation procedure, the calculation of gamma doses takes very little time and is almost independent of the number of radionuclides. (au) (7 tabs., 7 ills., 12 refs.)

  5. A unique manual method for emergency offsite dose calculations

    International Nuclear Information System (INIS)

    This paper describes a manual method developed for performing emergency offsite dose calculations for PP&L's Susquehanna Steam Electric Station. The method is based on a three-part carbonless form. The front page guides the user through selection of the appropriate accident case and inclusion of meteorological and effluent data. By circling the applicable accident descriptors, the user circles the dose factors on pages 2 and 3, which are then simply multiplied to yield the whole-body and thyroid dose rates at the plant boundary and at two, five, and ten miles. The process used to generate the worksheet is discussed, including the method used to incorporate the observed terrain effects on airflow patterns caused by the Susquehanna River Valley topography

  6. A NRC-BNL benchmark evaluation of seismic analysis methods for non-classically damped coupled systems

    International Nuclear Information System (INIS)

    Under the auspices of the U.S. Nuclear Regulatory Commission (NRC), Brookhaven National Laboratory (BNL) developed a comprehensive program to evaluate state-of-the-art methods and computer programs for the seismic analysis of typical coupled nuclear power plant (NPP) systems with non-classical damping. In this program, four benchmark models of coupled building-piping/equipment systems with different damping characteristics were developed and analyzed by BNL for a suite of earthquakes. The BNL analysis was carried out by the Wilson-θ time domain integration method with the system damping matrix computed using a synthesis formulation as presented in a companion paper [Nucl. Eng. Des. (2002)]. These benchmark problems were subsequently distributed to and analyzed by program participants applying their independently developed methods and computer programs. This paper is intended to offer a glimpse at the program, and to provide a summary of the major findings and principal conclusions with some representative results. The participants' analysis results established using complex modal time history methods showed good agreement with the BNL solutions, while the analyses produced with either complex-mode response spectrum methods or the classical normal-mode response spectrum method, in general, produced more conservative results when averaged over a suite of earthquakes. However, when coupling due to damping is significant, complex-mode response spectrum methods performed better than the classical normal-mode response spectrum method. Furthermore, as part of the program objectives, a parametric assessment is also presented in this paper, aimed at evaluating the applicability of various analysis methods to problems with different dynamic characteristics unique to coupled NPP systems. It is believed that the findings and insights learned from this program will be useful in developing new acceptance criteria and providing guidance for future regulatory activities involving license

  7. Verification and validation benchmarks.

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, William Louis; Trucano, Timothy Guy

    2007-02-01

    Verification and validation (V&V) are the primary means to assess the accuracy and reliability of computational simulations. V&V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V&V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the level of

  8. The lead cooled fast reactor benchmark Brest-300: analysis with sensitivity method

    International Nuclear Information System (INIS)

    Lead-cooled fast neutron reactors are among the most interesting candidates for the development of atomic energy. BREST-300 is a 300 MWe lead-cooled fast reactor developed by NIKIET (Russia) with a deterministic safety approach which aims to exclude reactivity margins greater than the delayed neutron fraction. The development of innovative reactors (lead coolant, nitride fuel...) and fuel cycles with new constraints such as cycle closure or actinide burning requires new technologies and new nuclear data. In this connection, the tools and neutron data used for the calculational analysis of reactor characteristics require thorough validation. NIKIET developed a reactor benchmark fitting design-type calculational tools (including neutron data). In the frame of technical exchanges between NIKIET and EDF (France), the results of this benchmark calculation concerning the principal parameters of fuel evolution and safety have been inter-compared, in order to estimate the uncertainties and validate the codes for calculations of this new kind of reactor. Different codes and cross-section data have been used, and sensitivity studies have been performed to understand and quantify the sources of uncertainty. The comparison of results shows that the difference in keff value between the ERANOS code with the ERALIB1 library and the reference is of the same order of magnitude as the delayed neutron fraction. On the other hand, the discrepancy is more than twice as large if the JEF2.2 library is used with ERANOS. Analysis of the discrepancies in calculation results reveals that the main effect comes from differences in nuclear data, namely the U238 and Pu239 fission and capture cross sections and the lead inelastic cross sections

  9. Comparison of the dose evaluation methods for criticality accident

    International Nuclear Information System (INIS)

    The improvement of dose evaluation methods for criticality accidents is important for rationalizing the design of nuclear fuel cycle facilities. The source spectra of neutrons and gamma rays from a criticality accident depend on the conditions of the source: its materials, moderation, density and so on. A comparison of dose evaluation methods for a criticality accident is made. Several methods, which are combinations of criticality calculations and shielding calculations, are proposed. Prompt neutron and gamma-ray doses from nuclear criticality of some uranium systems have been evaluated in the Nuclear Criticality Slide Rule. The uranium metal source (unmoderated system) and the uranyl nitrate solution source (moderated system) in the rule are evaluated by several calculation methods, i.e., combinations of code and cross-section library, as follows: (a) SAS1X (ENDF/B-IV), (b) MCNP4C (ENDF/B-VI)-ANISN (DLC23E or JSD120), (c) MCNP4C-MCNP4C (ENDF/B-VI). Each consists of a criticality calculation followed by a shielding calculation. These calculation methods are compared with respect to the tissue absorbed dose and the spectra at 2 m from the source. (author)

  10. Dose calculation of 6 MV Truebeam using Monte Carlo method

    International Nuclear Information System (INIS)

    The purpose of this work is to simulate the dosimetric characteristics of a 6 MV Varian Truebeam linac using the Monte Carlo method, and to investigate the availability of the phase space file and the accuracy of the simulation. With the phase space file at the linac window supplied by Varian as the source, the patient-dependent part was simulated. Dose distributions in a water phantom with a 10 cm × 10 cm field were calculated and compared with measured data for validation. An evident time reduction was obtained, from the 4-5 h that a whole simulation cost on the same computer to around 48 minutes. Good agreement between simulations and measurements in water was observed. Dose differences are less than 3% for depth doses in the build-up region and for dose profiles inside 80% of the field size, and agreement in the penumbra is also good. This demonstrates that simulation using the existing phase space file as the EGSnrc source is efficient. The dose differences between calculated and measured data meet the requirements for dose calculation. (authors)
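
    Checking such a "differences less than 3%" criterion on depth-dose data takes only a few lines; the values below are hypothetical:

```python
import numpy as np

# Hypothetical percent-depth-dose values at matching depths (cm)
depths = np.array([1.5, 5.0, 10.0, 15.0, 20.0])
measured = np.array([100.0, 86.2, 67.1, 52.4, 41.0])
simulated = np.array([99.1, 86.9, 66.5, 51.7, 40.4])

local_diff = 100.0 * (simulated - measured) / measured
print("local differences (%):", np.round(local_diff, 2))
print("all within 3%:", bool(np.all(np.abs(local_diff) < 3.0)))
```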

  11. Effect of radon measurement methods on dose estimation

    International Nuclear Information System (INIS)

    Different radon measurement methods were applied in the old and new buildings of the Turkish bath of Eger (Hungary), in order to elaborate a radon measurement protocol. Besides, measurements were also made concerning the radon and thoron short-lived decay products, the gamma dose from external sources and water radon. The most accurate results for dose estimation were provided by the application of personal radon meters. Estimated annual effective doses from radon and its short-lived decay products in the old and new buildings, using measured equilibrium factors of 0.2 and 0.1, were 0.83 and 0.17 mSv, respectively. The effective dose from thoron short-lived decay products was only 5% of these values. The respective external gamma radiation effective doses were 0.19 and 0.12 mSv y-1. The effective dose from the consumption of tap water containing radon was 0.05 mSv y-1, while in the case of spring water it was 0.14 mSv y-1. (authors)

  12. Effect of radon measurement methods on dose estimation.

    Science.gov (United States)

    Kávási, Norbert; Kobayashi, Yosuke; Kovács, Tibor; Somlai, János; Jobbágy, Viktor; Nagy, Katalin; Deák, Eszter; Berhés, István; Bender, Tamás; Ishikawa, Tetsuo; Tokonami, Shinji; Vaupotic, Janja; Yoshinaga, Shinji; Yonehara, Hidenori

    2011-05-01

    Different radon measurement methods were applied in the old and new buildings of the Turkish bath of Eger, Hungary, in order to elaborate a radon measurement protocol. Besides, measurements were also made concerning the radon and thoron short-lived decay products, gamma dose from external sources and water radon. The most accurate results for dose estimation were provided by the application of personal radon meters. Estimated annual effective doses from radon and its short-lived decay products in the old and new buildings, using 0.2 and 0.1 measured equilibrium factors, were 0.83 and 0.17 mSv, respectively. The effective dose from thoron short-lived decay products was only 5 % of these values. The respective external gamma radiation effective doses were 0.19 and 0.12 mSv y(-1). Effective dose from the consumption of tap water containing radon was 0.05 mSv y(-1), while in the case of spring water, it was 0.14 mSv y(-1). PMID:21450699
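
    For orientation, annual effective dose from radon decay products is conventionally estimated as concentration × equilibrium factor × occupancy × dose conversion coefficient. The sketch below uses the UNSCEAR coefficient of about 9 nSv per Bq h m-3 of equilibrium-equivalent concentration; the occupancy and concentration inputs are hypothetical, chosen only to roughly reproduce the 0.83 mSv figure quoted above:

```python
def radon_annual_dose_msv(c_rn_bq_m3, eq_factor, hours_per_year,
                          dcf_nsv_per_bqhm3=9.0):
    """Annual effective dose (mSv) from radon decay products:
    concentration x equilibrium factor x occupancy x dose coefficient."""
    eec = c_rn_bq_m3 * eq_factor                 # equilibrium-equivalent conc.
    return eec * hours_per_year * dcf_nsv_per_bqhm3 * 1e-6  # nSv -> mSv

# Hypothetical example: 230 Bq/m3, F = 0.2, 2000 working hours per year
print(f"{radon_annual_dose_msv(230.0, 0.2, 2000.0):.2f} mSv")  # ~0.83 mSv
```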

  13. Comparison of organ dosimetry methods and effective dose calculation methods for paediatric CT

    International Nuclear Information System (INIS)

    Computed tomography (CT) is the single biggest ionising radiation risk from anthropogenic exposure. Reducing unnecessary carcinogenic risks from this source requires the determination of organ and tissue absorbed doses to estimate detrimental stochastic effects. In addition, effective dose can be used to assess comparative risk between exposure situations and facilitate dose reduction through optimisation. Children are at the highest risk from radiation induced carcinogenesis and therefore dosimetry for paediatric CT recipients is essential in addressing the ionising radiation health risks of CT scanning. However, there is no well-defined method in the clinical environment for routinely and reliably performing paediatric CT organ dosimetry and there are numerous methods utilised for estimating paediatric CT effective dose. Therefore, in this study, eleven computational methods for organ dosimetry and/or effective dose calculation were investigated and compared with absorbed doses measured using thermoluminescent dosemeters placed in a physical anthropomorphic phantom representing a 10 year old child. Three common clinical paediatric CT protocols including brain, chest and abdomen/pelvis examinations were evaluated. Overall, computed absorbed doses to organs and tissues fully and directly irradiated demonstrated better agreement (within approximately 50 %) with the measured absorbed doses than absorbed doses to distributed organs or to those located on the periphery of the scan volume, which showed up to a 15-fold dose variation. The disparities predominantly arose from differences in the phantoms used. While the ability to estimate CT dose is essential for risk assessment and radiation protection, identifying a simple, practical dosimetry method remains challenging.
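
    The effective dose referred to throughout is the tissue-weighted sum of organ equivalent doses, E = sum over tissues T of w_T * H_T. A minimal sketch with a subset of the ICRP 103 tissue weighting factors (the organ doses are hypothetical):

```python
# Subset of ICRP 103 tissue weighting factors
W_T = {"lung": 0.12, "stomach": 0.12, "colon": 0.12, "red_bone_marrow": 0.12,
       "breast": 0.12, "gonads": 0.08, "bladder": 0.04, "liver": 0.04,
       "thyroid": 0.04, "oesophagus": 0.04}

def effective_dose(organ_doses_msv):
    """E = sum_T w_T * H_T over the organs supplied (a partial sum
    if not all ICRP tissues are given)."""
    return sum(W_T[t] * h for t, h in organ_doses_msv.items())

# Hypothetical organ equivalent doses (mSv) from a paediatric chest CT
print(f"{effective_dose({'lung': 5.1, 'breast': 4.8, 'thyroid': 2.0}):.2f} mSv")
```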

  14. Fully Automated Treatment Planning for Head and Neck Radiotherapy using a Voxel-Based Dose Prediction and Dose Mimicking Method

    CERN Document Server

    McIntosh, Chris; McNiven, Andrea; Jaffray, David A; Purdie, Thomas G

    2016-01-01

    Recent works in automated radiotherapy treatment planning have used machine learning based on historical treatment plans to infer the spatial dose distribution for a novel patient directly from the planning image. We present an atlas-based approach which learns a dose prediction model for each patient (atlas) in a training database, and then learns to match novel patients to the most relevant atlases. The method creates a spatial dose objective, which specifies the desired dose-per-voxel, and therefore replaces any requirement for specifying dose-volume objectives for conveying the goals of treatment planning. A probabilistic dose distribution is inferred from the most relevant atlases, and is scalarized using a conditional random field to determine the most likely spatial distribution of dose to yield a specific dose prior (histogram) for relevant regions of interest. Voxel-based dose mimicking then converts the predicted dose distribution to a deliverable treatment plan dose distribution. In this study, we ...

  15. Calculation method for gamma dose rates from Gaussian puffs

    International Nuclear Information System (INIS)

    Lagrangian puff models are widely used for calculating the dispersion of releases to the atmosphere. Basic outputs from such models are concentrations of material in the air and on the ground. The simplest method for calculating the gamma dose from the concentration of airborne activity is based on the semi-infinite cloud model. This method, however, is only applicable for puffs with large dispersion parameters, i.e. for receptors far away from the release point. The exact calculation of the cloud dose using the volume integral requires large amounts of computer time, usually exceeding what is available for real-time calculations. The volume integral for gamma doses can be approximated by using the semi-infinite cloud model combined with correction factors. This type of calculation procedure is very fast, but the accuracy is usually poor because only a few of the relevant parameters are considered. A multi-parameter method for the calculation of gamma doses is described here. This method uses precalculated values of the gamma dose rates as a function of Eγ, σy, the asymmetry factor σy/σz, the height of the puff centre H, and the distance from the puff centre Rxy. To accelerate the calculations, the released energy for each significant radionuclide in each energy group has been calculated and tabulated. Based on the precalculated values and a suitable interpolation procedure, the calculation of gamma doses requires only a short computing time and is almost independent of the number of radionuclides considered. (au) 2 tabs., 15 ills., 12 refs
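
    A sketch of the table-lookup idea, reduced for brevity to two of the five tabulated parameters: precomputed dose-rate factors on a (σy, Rxy) grid for each energy group are interpolated and weighted by the photon energy released in that group. Grid, values and units below are placeholders:

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Placeholder precalculated tables: dose-rate factor per unit released
# photon energy, on a (sigma_y, R_xy) grid, one table per energy group
sigma_y = np.array([10.0, 30.0, 100.0, 300.0, 1000.0])   # m
r_xy = np.array([0.0, 100.0, 300.0, 1000.0, 3000.0])     # m
tables = [RegularGridInterpolator((sigma_y, r_xy),
                                  np.random.default_rng(g).random((5, 5)))
          for g in range(4)]                              # 4 energy groups

def puff_gamma_dose_rate(sig_y, dist, energy_per_group):
    """Dose rate = sum over energy groups of the interpolated factor times
    the photon energy released in that group (per-nuclide sums are assumed
    to be pre-tabulated, as in the abstract)."""
    return sum(tab([[sig_y, dist]]).item() * e
               for tab, e in zip(tables, energy_per_group))

print(puff_gamma_dose_rate(50.0, 200.0, [0.2, 0.5, 0.8, 0.3]))
```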

  16. WLUP benchmarks

    International Nuclear Information System (INIS)

    The IAEA WIMS Library Update Project (WLUP) is in its final stage. The final library will be released in 2002. It is the result of research and development by more than ten investigators over 10 years. The organization of benchmarks for testing and choosing the best set of data has been coordinated by the author of this paper. The organization, naming conventions, contents and documentation of the WLUP benchmarks are presented, together with an updated list of the main parameters for all cases. First, the benchmark objectives and types are given. Then, comparisons of results from different WIMSD libraries are included. Finally, the program QVALUE for the analysis and plotting of results is described, and some examples are given. The set of benchmarks implemented in this work is a fundamental tool for testing new multigroup libraries. (author)

  17. A simplified method to estimate gamma dose from atmospheric releases

    International Nuclear Information System (INIS)

    Computation of the gamma dose due to atmospheric releases is a tedious and time-consuming process needing a large and fast computer. A simple approximate procedure is developed which circumvents the need for a large body of precalculated data. An error analysis of the method is also presented. (author)

  18. A systematic benchmark method for analysis and comparison of IMRT treatment planning algorithms

    International Nuclear Information System (INIS)

    Tools and procedures for evaluating and comparing different intensity-modulated radiation therapy (IMRT) systems are presented. IMRT is increasingly in demand, and numerous systems are available commercially. These programs introduce significantly different software to dosimetrists and physicists than conventional planning systems, and the options often seem overwhelmingly complex to the new user. By creating geometric target volumes and critical normal tissues, the characteristics of the algorithms may be investigated and the influence of the different parameters explored. Overall optimization strategies of the algorithm may be characterized by treating a square target volume (TV) with 2 perpendicular beams, with and without heterogeneities. A half-donut (hemi-annulus) TV with a 'donut hole' (central cylinder) critical normal tissue (CNT) on a CT of a simulated quality assurance phantom is suggested as a good geometry for exploring the IMRT algorithm parameters. Using this geometry, an order of varying the parameters is suggested. The first step is to determine the effects of the number of stratifications of optimized intensity fluence on the resulting dose distribution, and to select a fixed number of stratifications for further studies. To characterize the dose distributions, a dose-homogeneity index (DHI) is defined as the ratio of the dose received by 90% of the volume to the minimum dose received by the 'hottest' 10% of the volume. The next step is to explore the effects of priority and penalty on both the TV and the CNT. Then, choosing and fixing these parameters, the effects of varying the number of beams can be examined. As well as evaluating the dose distributions (and DHI), the number of subfields and the number of monitor units required for different numbers of stratifications and beams can be evaluated.
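
    The DHI defined above is easy to compute from a voxel dose array: the dose received by 90% of the volume is the 10th percentile of voxel doses, and the minimum dose received by the hottest 10% is the 90th percentile. A minimal sketch with hypothetical doses:

```python
import numpy as np

def dose_homogeneity_index(doses):
    """DHI = (dose received by 90% of the volume) /
             (minimum dose received by the hottest 10% of the volume)."""
    d90 = np.percentile(doses, 10)   # 90% of voxels receive at least this
    d10 = np.percentile(doses, 90)   # the hottest 10% receive at least this
    return d90 / d10

# Hypothetical target-volume voxel doses (Gy); 1.0 = perfectly homogeneous
doses = np.random.default_rng(0).normal(60.0, 2.0, size=100_000)
print(f"DHI = {dose_homogeneity_index(doses):.3f}")
```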

  19. Computerized simulation methods for dose reduction, in radiodiagnosis

    International Nuclear Information System (INIS)

    The present work presents computational methods that allow the simulation of any situation encountered in diagnostic radiology. Parameters of radiographic techniques that yield a previously chosen standard radiographic image are studied, so that the radiation dose absorbed by the patient can be compared. Initially the method was tested on a simple system composed of 5.0 cm of water and 1.0 mm of aluminium and, after experimentally verifying its validity, it was applied to breast and arm-fracture radiographs. It was observed that the choice of the filter material is not an important factor, because aluminium, iron, copper, gadolinium and other filters showed analogous behaviour. A method of comparing materials based on spectral matching is shown. Both the results given by this simulation method and the experimental measurements indicate an equivalence of brass and copper, both more efficient than aluminium in terms of exposure time, but not of dose. (author)

  20. Dosing method of physical activity in aerobics classes for students

    OpenAIRE

    Beliak Yu. I.; Zinchenko N.M.

    2014-01-01

    Purpose: to substantiate a method for dosing physical activity in aerobics classes for students. The method is based on evaluating the metabolic cost of the exercises used. Material: the experiment involved assessing the heart rate response of students to classical and step aerobics routines (n = 47, age 20-23 years). The routines used various factors to regulate intensity: performing combinations of basic steps, involving arm movements, and holding dumbbells in the hands...

  1. Benchmark Calculations of Three-Body Intermolecular Interactions and the Performance of Low-Cost Electronic Structure Methods.

    Science.gov (United States)

    Řezáč, Jan; Huang, Yuanhang; Hobza, Pavel; Beran, Gregory J O

    2015-07-14

    Many-body noncovalent interactions are increasingly important in large and/or condensed-phase systems, but the current understanding of how well various models predict these interactions is limited. Here, benchmark complete-basis set coupled cluster singles, doubles, and perturbative triples (CCSD(T)) calculations have been performed to generate a new test set for three-body intermolecular interactions. This "3B-69" benchmark set includes three-body interaction energies for 69 total trimer structures, consisting of three structures from each of 23 different molecular crystals. By including structures that exhibit a variety of intermolecular interactions and packing arrangements, this set provides a stringent test for the ability of electronic structure methods to describe the correct physics involved in the interactions. Both MP2.5 (the average of second- and third-order Møller-Plesset perturbation theory) and spin-component-scaled CCSD for noncovalent interactions (SCS-MI-CCSD) perform well. MP2 handles the polarization aspects reasonably well, but it omits three-body dispersion. In contrast, many widely used density functionals corrected with three-body D3 dispersion correction perform comparatively poorly. The primary difficulty stems from the treatment of exchange and polarization in the functionals rather than from the dispersion correction, though the three-body dispersion may also be moderately underestimated by the D3 correction. PMID:26575743
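
    For clarity, the three-body interaction energy of a trimer ABC reported in such test sets is the supermolecular trimer energy with all one- and two-body contributions removed (a standard definition, stated here for orientation):

```latex
\Delta E^{(3)}_{ABC}
  = E_{ABC} - \sum_{X<Y}\bigl(E_{XY} - E_X - E_Y\bigr) - \sum_{X} E_X
  = E_{ABC} - E_{AB} - E_{AC} - E_{BC} + E_A + E_B + E_C
```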

  2. Benchmarking and regulation

    OpenAIRE

    Agrell, Per Joakim; Bogetoft, Peter

    2013-01-01

    Benchmarking methods, and in particular Data Envelopment Analysis (DEA), have become well-established and informative tools for economic regulation. DEA is now routinely used by European regulators to set reasonable revenue caps for energy transmission and distribution system operators. The application of benchmarking in regulation, however, requires specific steps in terms of data validation, model specification and outlier detection that are not systematically documented in open publication...

  3. Methods of determining the effective dose in dental radiology.

    Science.gov (United States)

    Thilander-Klang, Anne; Helmrot, Ebba

    2010-01-01

    A wide variety of X-ray equipment is used today in dental radiology, including intra-oral, orthopantomographic, cephalometric, cone-beam computed tomography (CBCT) and computed tomography (CT). This raises the question of how the radiation risks resulting from different kinds of examinations should be compared. The risk to the patient is usually expressed in terms of effective dose. However, it is difficult to determine its reliability, and it is difficult to make comparisons, especially when different modalities are used. The classification of the new CBCT units is also problematic as they are sometimes classified as CT units. This will lead to problems in choosing the best dosimetric method, especially when the examination geometry more closely resembles an ordinary orthopantomographic examination, as the axis of rotation is not at the centre of the patient, and small radiation field sizes are used. The purpose of this study was to present different methods for the estimation of the effective dose from the equipment currently used in dental radiology, and to discuss their limitations. The methods are compared based on commonly used measurable and computable dose quantities, and their reliability in the estimation of the effective dose. PMID:20211918

  4. Methods of determining the effective dose in dental radiology

    International Nuclear Information System (INIS)

    A wide variety of X-ray equipment is used today in dental radiology, including intra-oral, orthopantomographic, cephalometric, cone-beam computed tomography (CBCT) and computed tomography (CT). This raises the question of how the radiation risks resulting from different kinds of examinations should be compared. The risk to the patient is usually expressed in terms of effective dose. However, it is difficult to determine its reliability, and it is difficult to make comparisons, especially when different modalities are used. The classification of the new CBCT units is also problematic as they are sometimes classified as CT units. This will lead to problems in choosing the best dosimetric method, especially when the examination geometry more closely resembles an ordinary orthopantomographic examination, as the axis of rotation is not at the centre of the patient, and small radiation field sizes are used. The purpose of this study was to present different methods for the estimation of the effective dose from the equipment currently used in dental radiology, and to discuss their limitations. The methods are compared based on commonly used measurable and computable dose quantities, and their reliability in the estimation of the effective dose. (authors)

  5. Benchmarking of software and methods for use in transient multidimensional fuel performance with spatial reactor kinetics

    Energy Technology Data Exchange (ETDEWEB)

    Banfield, J. E. [Dept. of Nuclear Engineering, Univ. of Tennessee, Knoxville, TN 37996-2300 (United States); Clarno, K. T.; Hamilton, S. P. [Oak Ridge National Laboratory, Oak Ridge, TN 37831 (United States); Maldonado, G. I. [Dept. of Nuclear Engineering, Univ. of Tennessee, Knoxville, TN 37996-2300 (United States); Philip, B.; Baird, M. L. [Oak Ridge National Laboratory, Oak Ridge, TN 37831 (United States)

    2012-07-01

    The key physics involved in accurate prediction of reactor-fuel-element behavior includes neutron transport and thermal hydraulics. The thermal hydraulic feedback mechanism is primarily provided through cross sections to the neutron transport that are temperature and density dependent. Historically, this coupling was primarily seen only in reactor simulators, which are well suited to model the reactor core, giving only a coarse treatment to individual fuel pins as well as simple models for thermal distribution calculations. This poor resolution on the primary coupling mechanisms can lead to conservatisms that should be removed to improve fuel design and performance. This work seeks to address the resolution of space-time-dependent neutron kinetics with thermal feedback within the fuel pin scale in the multi-physics framework. The specific application of this new capability is transient performance analysis of space-time-dependent temperature distribution of fuel elements. The coupling between the neutron transport and the thermal feedback is extremely important in this highly coupled problem, primarily applicable to reactivity-initiated accidents (RIAs) and loss-of-coolant accidents (LOCAs). The capability developed will include the coupling of the time-dependent neutron transport with the time-dependent thermal diffusion capability. An improvement in resolution and coupling is proposed by developing neutron transport models that are internally coupled with high fidelity within fuel pin thermal calculations in a multi-physics framework. Good agreement is shown with benchmarks and problems from the literature of RIAs and LOCAs for the tools used. (authors)

  6. Benchmarking of Software and Methods for Use in Transient Multidimensional Fuel Performance with Spatial Reactor Kinetics

    International Nuclear Information System (INIS)

    The key physics involved in accurate prediction of reactor-fuel-element behavior includes neutron transport and thermal hydraulics. The thermal hydraulic feedback mechanism is primarily provided through cross sections to the neutron transport that are temperature and density dependent. Historically, this coupling was primarily seen only in reactor simulators, which are well suited to model the reactor core, giving only a coarse treatment to individual fuel pins as well as simple models for thermal distribution calculations. This poor resolution on the primary coupling mechanisms can lead to conservatisms that should be removed to improve fuel design and performance. This work seeks to address the resolution of space-time-dependent neutron kinetics with thermal feedback within the fuel pin scale in the multiphysics framework. The specific application of this new capability is transient performance analysis of space-time-dependent temperature distribution of fuel elements. The coupling between the neutron transport and the thermal feedback is extremely important in this highly coupled problem, primarily applicable to reactivity-initiated accidents (RIAs) and loss-of-coolant accidents (LOCAs). The capability developed will include the coupling of the time-dependent neutron transport with the time-dependent thermal diffusion capability. An improvement in resolution and coupling is proposed by developing neutron transport models that are internally coupled with high fidelity within fuel pin thermal calculations in a multiphysics framework. Good agreement is shown with benchmarks and problems from the literature of RIAs and LOCAs for the tools used.

  7. Benchmarking of software and methods for use in transient multidimensional fuel performance with spatial reactor kinetics

    International Nuclear Information System (INIS)

    The key physics involved in accurate prediction of reactor-fuel-element behavior includes neutron transport and thermal hydraulics. The thermal hydraulic feedback mechanism is primarily provided through cross sections to the neutron transport that are temperature and density dependent. Historically, this coupling was primarily seen only in reactor simulators, which are well suited to model the reactor core, giving only a coarse treatment to individual fuel pins as well as simple models for thermal distribution calculations. This poor resolution on the primary coupling mechanisms can lead to conservatisms that should be removed to improve fuel design and performance. This work seeks to address the resolution of space-time-dependent neutron kinetics with thermal feedback within the fuel pin scale in the multi-physics framework. The specific application of this new capability is transient performance analysis of space-time-dependent temperature distribution of fuel elements. The coupling between the neutron transport and the thermal feedback is extremely important in this highly coupled problem, primarily applicable to reactivity-initiated accidents (RIAs) and loss-of-coolant accidents (LOCAs). The capability developed will include the coupling of the time-dependent neutron transport with the time-dependent thermal diffusion capability. An improvement in resolution and coupling is proposed by developing neutron transport models that are internally coupled with high fidelity within fuel pin thermal calculations in a multi-physics framework. Good agreement is shown with benchmarks and problems from the literature of RIAs and LOCAs for the tools used. (authors)

  8. Comparison of dose calculation methods for brachytherapy of intraocular tumors

    Energy Technology Data Exchange (ETDEWEB)

    Rivard, Mark J.; Chiu-Tsao, Sou-Tung; Finger, Paul T.; Meigooni, Ali S.; Melhus, Christopher S.; Mourtada, Firas; Napolitano, Mary E.; Rogers, D. W. O.; Thomson, Rowan M.; Nath, Ravinder [Department of Radiation Oncology, Tufts University School of Medicine, Boston, Massachusetts 02111 (United States); Quality MediPhys LLC, Denville, New Jersey 07834 (United States); New York Eye Cancer Center, New York, New York 10065 (United States); Department of Radiation Oncology, Comprehensive Cancer Center of Nevada, Las Vegas, Nevada 89169 (United States); Department of Radiation Oncology, Tufts University School of Medicine, Boston, Massachusetts 02111 (United States); Department of Radiation Physics, University of Texas, M.D. Anderson Cancer Center, Houston, Texas 77030 (United States) and Department of Experimental Diagnostic Imaging, University of Texas, M.D. Anderson Cancer Center, Houston, Texas 77030 (United States); Physics, Elekta Inc., Norcross, Georgia 30092 (United States); Department of Physics, Carleton University, Ottawa, Ontario K1S 5B6 (Canada); Department of Therapeutic Radiology, Yale University School of Medicine, New Haven, Connecticut 06520 (United States)

    2011-01-15

    Purpose: To investigate dosimetric differences among several clinical treatment planning systems (TPS) and Monte Carlo (MC) codes for brachytherapy of intraocular tumors using ¹²⁵I or ¹⁰³Pd plaques, and to evaluate the impact on the prescription dose of the adoption of MC codes and certain versions of a TPS (Plaque Simulator with optional modules). Methods: Three clinical brachytherapy TPS capable of intraocular brachytherapy treatment planning and two MC codes were compared. The TPS investigated were Pinnacle v8.0dp1, BrachyVision v8.1, and Plaque Simulator v5.3.9, all of which use the AAPM TG-43 formalism in water. The Plaque Simulator software can also handle some correction factors from MC simulations. The MC codes used are MCNP5 v1.40 and BrachyDose/EGSnrc. Using these TPS and MC codes, three types of calculations were performed: homogeneous medium with point sources (for the TPS only, using the 1D TG-43 dose calculation formalism); homogeneous medium with line sources (TPS with 2D TG-43 dose calculation formalism and MC codes); and plaque heterogeneity-corrected line sources (Plaque Simulator with modified 2D TG-43 dose calculation formalism and MC codes). Comparisons were made of doses calculated at points-of-interest on the plaque central-axis and at off-axis points of clinical interest within a standardized model of the right eye. Results: For the homogeneous water medium case, agreement was within ≈2% for the point- and line-source models when comparing between TPS and between TPS and MC codes, respectively. For the heterogeneous medium case, dose differences (as calculated using the MC codes and Plaque Simulator) differ by up to 37% on the central-axis in comparison to the homogeneous water calculations. A prescription dose of 85 Gy at 5 mm depth based on calculations in a homogeneous medium delivers 76 Gy and 67 Gy for specific ¹²⁵I and ¹⁰³Pd sources, respectively, when accounting for COMS-plaque heterogeneities. For off
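    For orientation, the 1D (point-source) TG-43 formalism named in this record evaluates the water dose rate as Ddot(r) = S_K · Λ · (r0/r)² · g(r) · φan(r) with r0 = 1 cm. The sketch below implements that formula with an invented radial dose function table and a constant anisotropy factor; none of the numbers are consensus data for a real ¹²⁵I or ¹⁰³Pd seed.

```python
import numpy as np

def tg43_1d_dose_rate(sk, lam, r, g_tab_r, g_tab, phi_an):
    """Point-source (1D) TG-43 dose rate in water:
        Ddot(r) = S_K * Lambda * (r0 / r)**2 * g(r) * phi_an,  r0 = 1 cm.
    sk: air-kerma strength (U); lam: dose-rate constant (cGy h^-1 U^-1);
    g(r) is interpolated from a table; phi_an is treated as a constant
    1D anisotropy factor for simplicity."""
    r0 = 1.0  # cm
    g = np.interp(r, g_tab_r, g_tab)
    return sk * lam * (r0 / r) ** 2 * g * phi_an

# Illustrative table only -- not measured data for any real seed.
g_r = np.array([0.5, 1.0, 2.0, 3.0, 5.0])       # radius, cm
g_v = np.array([1.04, 1.00, 0.87, 0.75, 0.50])  # radial dose function g(r)
print(tg43_1d_dose_rate(sk=3.0, lam=0.97, r=2.0,
                        g_tab_r=g_r, g_tab=g_v, phi_an=0.95))
```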

  9. Shutdown dose rate assessment with the Advanced D1S method: Development, applications and validation

    Energy Technology Data Exchange (ETDEWEB)

    Villari, R., E-mail: rosaria.villari@enea.it [Associazione EURATOM-ENEA sulla Fusione, Via Enrico Fermi 45, 00044 Frascati, Rome (Italy); Fischer, U. [Karlsruhe Institute of Technology KIT, Institute for Neutron Physics and Reactor Technology, Hermann-von-Helmholtz-Platz 1, 76344 Eggenstein-Leopoldshafen (Germany); Moro, F. [Associazione EURATOM-ENEA sulla Fusione, Via Enrico Fermi 45, 00044 Frascati, Rome (Italy); Pereslavtsev, P. [Karlsruhe Institute of Technology KIT, Institute for Neutron Physics and Reactor Technology, Hermann-von-Helmholtz-Platz 1, 76344 Eggenstein-Leopoldshafen (Germany); Petrizzi, L. [European Commission, DG Research and Innovation K5, CDMA 00/030, B-1049 Brussels (Belgium); Podda, S. [Associazione EURATOM-ENEA sulla Fusione, Via Enrico Fermi 45, 00044 Frascati, Rome (Italy); Serikov, A. [Karlsruhe Institute of Technology KIT, Institute for Neutron Physics and Reactor Technology, Hermann-von-Helmholtz-Platz 1, 76344 Eggenstein-Leopoldshafen (Germany)

    2014-10-15

    Highlights: • Development of Advanced-D1S for shutdown dose rate calculations. • Recent applications of the tool to tokamaks. • Summary of the results of benchmarking with measurements and R2S calculations. • Limitations and further development. Abstract: The present paper addresses the recent developments and applications of Advanced-D1S to the calculations of shutdown dose rate in tokamak devices. Results of benchmarking with measurements and Rigorous 2-Step (R2S) calculations are summarized and discussed as well as limitations and further developments. The outcomes confirm the essential role of the Advanced-D1S methodology and the evidence for its complementary use with the R2Smesh approach for the reliable assessment of shutdown dose rates and related statistical uncertainties in present and future fusion devices.

  10. Shutdown dose rate assessment with the Advanced D1S method: Development, applications and validation

    International Nuclear Information System (INIS)

    Highlights: •Development of Advanced-D1S for shutdown dose rate calculations. •Recent applications of the tool to tokamaks. •Summary of the results of benchmarking with measurements and R2S calculations. •Limitations and further development. -- Abstract: The present paper addresses the recent developments and applications of Advanced-D1S to the calculations of shutdown dose rate in tokamak devices. Results of benchmarking with measurements and Rigorous 2-Step (R2S) calculations are summarized and discussed as well as limitations and further developments. The outcomes confirm the essential role of the Advanced-D1S methodology and the evidence for its complementary use with the R2Smesh approach for the reliable assessment of shutdown dose rates and related statistical uncertainties in present and future fusion devices

  11. Application of the hybrid approach to the benchmark dose of urinary cadmium as the reference level for renal effects in cadmium polluted and non-polluted areas in Japan

    Energy Technology Data Exchange (ETDEWEB)

    Suwazono, Yasushi, E-mail: suwa@faculty.chiba-u.jp [Department of Occupational and Environmental Medicine, Graduate School of Medicine, Chiba University, 1-8-1 Inohana, Chuoku, Chiba 260-8670 (Japan); Nogawa, Kazuhiro; Uetani, Mirei [Department of Occupational and Environmental Medicine, Graduate School of Medicine, Chiba University, 1-8-1 Inohana, Chuoku, Chiba 260-8670 (Japan); Nakada, Satoru [Safety and Health Organization, Chiba University, 1-33 Yayoicho, Inageku, Chiba 263-8522 (Japan); Kido, Teruhiko [Department of Community Health Nursing, Kanazawa University School of Health Sciences, 5-11-80 Kodatsuno, Kanazawa, Ishikawa 920-0942 (Japan); Nakagawa, Hideaki [Department of Epidemiology and Public Health, Kanazawa Medical University, 1-1 Daigaku, Uchnada, Ishikawa 920-0293 (Japan)

    2011-02-15

    Objectives: The aim of this study was to evaluate the reference level of urinary cadmium (Cd) that caused renal effects. An updated hybrid approach was used to estimate the benchmark doses (BMDs) and their 95% lower confidence limits (BMDL) in subjects with a wide range of exposure to Cd. Methods: The total number of subjects was 1509 (650 men and 859 women) in non-polluted areas and 3103 (1397 men and 1706 women) in the environmentally exposed Kakehashi river basin. We measured urinary cadmium (U-Cd) as a marker of long-term exposure, and β2-microglobulin (β2-MG) as a marker of renal effects. The BMD and BMDL that corresponded to an additional risk (BMR) of 5% were calculated with background risk at zero exposure set at 5%. Results: The U-Cd BMDL for β2-MG was 3.5 μg/g creatinine in men and 3.7 μg/g creatinine in women. Conclusions: The BMDL values for a wide range of U-Cd were generally within the range of values measured in non-polluted areas in Japan. This indicated that the hybrid approach is a robust method for different ranges of cadmium exposure. The present results may contribute further to recent discussions on health risk assessment of Cd exposure.

  12. Application of the hybrid approach to the benchmark dose of urinary cadmium as the reference level for renal effects in cadmium polluted and non-polluted areas in Japan

    International Nuclear Information System (INIS)

    Objectives: The aim of this study was to evaluate the reference level of urinary cadmium (Cd) that caused renal effects. An updated hybrid approach was used to estimate the benchmark doses (BMDs) and their 95% lower confidence limits (BMDL) in subjects with a wide range of exposure to Cd. Methods: The total number of subjects was 1509 (650 men and 859 women) in non-polluted areas and 3103 (1397 men and 1706 women) in the environmentally exposed Kakehashi river basin. We measured urinary cadmium (U-Cd) as a marker of long-term exposure, and β2-microglobulin (β2-MG) as a marker of renal effects. The BMD and BMDL that corresponded to an additional risk (BMR) of 5% were calculated with background risk at zero exposure set at 5%. Results: The U-Cd BMDL for β2-MG was 3.5 μg/g creatinine in men and 3.7 μg/g creatinine in women. Conclusions: The BMDL values for a wide range of U-Cd were generally within the range of values measured in non-polluted areas in Japan. This indicated that the hybrid approach is a robust method for different ranges of cadmium exposure. The present results may contribute further to recent discussions on health risk assessment of Cd exposure.
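    To make the hybrid approach in these two records concrete: a continuous biomarker model is converted into a risk by fixing a cutoff such that the background risk at zero exposure equals 5%, and the BMD is the dose at which the risk exceeds background by the BMR. Below is a minimal sketch assuming a linear model for log β2-MG versus U-Cd; every parameter value is hypothetical, and the BMDL step (a 95% lower confidence limit, e.g. from profile likelihood or bootstrap) is omitted.

```python
from scipy.stats import norm
from scipy.optimize import brentq

# Hypothetical model: log(beta2-MG) = b0 + b1 * ucd + N(0, sigma^2).
b0, b1, sigma = 1.0, 0.25, 0.8
p0  = 0.05   # background risk fixed at zero exposure
bmr = 0.05   # benchmark response, on the additional-risk scale

# Cutoff c chosen so that P(log response > c | dose = 0) equals p0.
c = b0 + norm.ppf(1 - p0) * sigma

def risk(d):
    """Probability of an 'abnormal' biomarker value at dose d."""
    return 1.0 - norm.cdf((c - (b0 + b1 * d)) / sigma)

# BMD: dose at which risk exceeds background by the BMR.
bmd = brentq(lambda d: risk(d) - (p0 + bmr), 0.0, 50.0)
print(f"BMD = {bmd:.2f} ug/g creatinine (illustrative)")
```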

  13. TORT solutions to the NEA suite of benchmarks for 3D transport methods and codes over a range in parameter space

    International Nuclear Information System (INIS)

    We present the TORT solutions to the 3D transport codes' suite of benchmarks exercise. An overview of benchmark configurations is provided, followed by a description of the TORT computational model we developed to solve the cases comprising the benchmark suite. In the numerical experiments reported in this paper, we chose to refine the spatial and angular discretizations simultaneously, from the coarsest model (40 x 40 x 40, 200 angles) to the finest model (160 x 160 x 160, 800 angles). The MCNP reference solution is used for evaluating the effect of model-refinement on the accuracy of the TORT solutions. The presented results show that the majority of benchmark quantities are computed with good accuracy by TORT, and that the accuracy improves with model refinement. However, this deliberately severe test has exposed some deficiencies in both deterministic and stochastic solution approaches. Specifically, TORT fails to converge the inner iterations in some benchmark configurations while MCNP produces zero tallies, or drastically poor statistics for some benchmark quantities. We conjecture that TORT's failure to converge is driven by ray effects in configurations with low scattering ratio and/or highly skewed computational cells, i.e. aspect ratio far from unity. The failure of MCNP occurs in quantities tallied over a very small area or volume in physical space, or quantities tallied many (∼25) mean free paths away from the source. Hence automated, robust, and reliable variance reduction techniques are essential for obtaining high quality reference values of the benchmark quantities. Preliminary results of the benchmark exercise indicate that the occasionally poor performance of TORT is shared with other deterministic codes. Armed with this information, method developers can now direct their attention to regions in parameter space where such failures occur and design alternative solution approaches for such instances

  14. Quality assurance methods in the German dose rate measurement network

    International Nuclear Information System (INIS)

    The result of gamma dose rate measurements in the context of the surveillance of environmental radioactivity depends strongly on the physical properties of the counting probes, but also on meteorological effects and on the characteristics of the site in the vicinity of the dose rate probes. In the German gamma dose rate measurement network (ODL monitoring network), substantial quality assurance efforts have been undertaken to ensure that the measured data are representative. These include, in particular, measures to determine the specific physical properties of the deployed monitors (background count rate, dependence on cosmic radiation), an individual on-site test of the detector efficiency as part of a so-called 'method of repetitive tests', methods for the on-line correlation of precipitation and dose rate readings, and a description of the monitor environment with regard to its background response as well as its validity in the case of a contamination situation. The respective investigations have been performed in recent years and, based on the substantial amount of data, the usability of the measurement data for decision support systems has been optimised. (orig.)

  15. OECD EGBUC Benchmark VIII. Comparison of calculation codes and methods for the analysis of small-sample reactivity experiments

    International Nuclear Information System (INIS)

    Small-sample reactivity experiments are relevant to provide accurate information on the integral cross sections of materials. One of the specificities of these experiments is that the measured reactivity worth generally ranges between 1 and 10 pcm, which precludes the use of Monte Carlo for the analysis. As a consequence, several papers have been devoted to deterministic calculation routes, implying spatial and/or energy discretization which could introduce calculation bias. Within the Expert Group on Burn-Up Credit of the OECD/NEA, a benchmark was proposed to compare different calculation codes and methods for the analysis of these experiments. In four sub-phases with geometries ranging from a single cell to a full 3D core model, participants were asked to evaluate the reactivity worth due to the addition of small quantities of separated fission products and actinides into a UO2 fuel. Fourteen institutes using six different codes have participated in the benchmark. For reactivity worths of more than a few tens of pcm, the Monte Carlo approach based on the eigenvalue difference method appears clearly as the reference method. However, in the case of reactivity worths as low as 1 pcm, it is concluded that the deterministic approach based on the exact perturbation formalism is more accurate and should be preferred. Promising results have also been reported using the newly available exact perturbation capability developed in the Monte Carlo code TRIPOLI4, based on the calculation of a continuous-energy adjoint flux in the reference situation, convolved with the forward flux of the perturbed situation. (author)

  16. Comparison between calculation methods of dose rates in gynecologic brachytherapy

    International Nuclear Information System (INIS)

    In radiation treatments of gynecologic tumors it is necessary to evaluate the quality of the results obtained by different methods of calculating the dose rates at the points of clinical interest (A, rectal, vesical). The present work compares the results obtained by two methods: the three-dimensional Manual Calibration Method (MCM) (Vianello E. et al. 1998), using orthogonal radiographs for each patient under treatment, and the Theraplan/TP-11 planning system (Theratronics International Limited 1990), the latter verified experimentally (Vianello et al. 1996). The results show that the MCM can be used in physical-clinical practice with percentage differences comparable to those of the computerized programs. (Author)

  17. Method of simulation of low dose rate for total dose effect in 0.18 μm CMOS technology

    Energy Technology Data Exchange (ETDEWEB)

    He Baoping; Yao Zhibin; Guo Hongxia; Luo Yinhong; Zhang Fengqi; Wang Yuanming; Zhang Keying, E-mail: baopinghe@126.co [Northwest Institute of Nuclear Technology, Xi'an 710613 (China)

    2009-07-15

    Three methods for simulating low dose rate irradiation are presented and experimentally verified by using 0.18 μm CMOS transistors. The results show that the best approach is to use a series of high dose rate irradiations, with 100 °C annealing steps in between irradiation steps, to simulate a continuous low dose rate irradiation. This approach can reduce the low dose rate testing time by as much as a factor of 45 with respect to the actual 0.5 rad(Si)/s dose rate irradiation. The procedure also provides detailed information on the behavior of the test devices in a low dose rate environment.

  18. Method of simulation of low dose rate for total dose effect in 0.18 μm CMOS technology

    International Nuclear Information System (INIS)

    Three methods for simulating low dose rate irradiation are presented and experimentally verified by using 0.18 μm CMOS transistors. The results show that the best approach is to use a series of high dose rate irradiations, with 100 °C annealing steps in between irradiation steps, to simulate a continuous low dose rate irradiation. This approach can reduce the low dose rate testing time by as much as a factor of 45 with respect to the actual 0.5 rad(Si)/s dose rate irradiation. The procedure also provides detailed information on the behavior of the test devices in a low dose rate environment.
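    As a rough worked example of what the factor of 45 means in practice (the total dose here is assumed for illustration, not taken from the record): accumulating 100 krad(Si) at the actual rate of 0.5 rad(Si)/s requires 100 000 / 0.5 = 200 000 s, about 55.6 hours of continuous irradiation, whereas an accelerated sequence that is 45 times faster would take roughly 200 000 / 45 ≈ 4 400 s, about 1.2 hours of combined high-dose-rate steps and 100 °C anneals.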

  19. Benchmark Calculations on the Atomization Enthalpy,Geometry and Vibrational Frequencies of UF6 with Relativistic DFT Methods

    Institute of Scientific and Technical Information of China (English)

    XIAO Hai; LI Jun

    2008-01-01

    Benchmark calculations on the molar atomization enthalpy, geometry, and vibrational frequencies of uranium hexafluoride (UF6) have been performed by using relativistic density functional theory (DFT) with various levels of relativistic effects, different types of basis sets, and exchange-correlation functionals. Scalar relativistic effects are shown to be critical for the structural properties. The spin-orbit coupling effects are important for the calculated energies, but are much less important for other calculated ground-state properties of closed-shell UF6. We conclude through systematic investigations that ZORA- and RECP-based relativistic DFT methods are both appropriate for incorporating relativistic effects. Comparisons of different types of basis sets (Slater, Gaussian, and plane-wave types) and various levels of theoretical approximation of the exchange-correlation functionals were also made.

  20. Iterative methods for dose reduction and image enhancement in tomography

    Science.gov (United States)

    Miao, Jianwei; Fahimian, Benjamin Pooya

    2012-09-18

    A system and method is disclosed for creating a three-dimensional cross-sectional image of an object by the reconstruction of its projections that have been iteratively refined through modification in object space and Fourier space. The invention provides systems and methods for use with any tomographic imaging system that reconstructs an object from its projections. In one embodiment, the invention presents a method to eliminate interpolations present in conventional tomography. The method has been experimentally shown to provide higher resolution and improved image quality parameters over existing approaches. A primary benefit of the method is radiation dose reduction, since the invention can produce an image of a desired quality with fewer projections than conventional methods require.

  1. A novel method of estimating effective dose from the point dose method: a case study—parathyroid CT scans

    International Nuclear Information System (INIS)

    The purpose of this study was to validate a novel approach of applying a partial volume correction factor (PVCF) using a limited number of MOSFET detectors in the effective dose (E) calculation. The results of the proposed PVCF method were compared to the results from both the point dose (PD) method and a commercial CT dose estimation software (CT-Expo). To measure organ doses, an adult female anthropomorphic phantom was loaded with 20 MOSFET detectors and was scanned using the non-contrast and 2 phase contrast-enhanced parathyroid imaging protocols on a 64-slice multi-detector computed tomography scanner. E was computed by three methods: the PD method, the PVCF method, and the CT-Expo method. The E (in mSv) for the PD method, the PVCF method, and CT-Expo method was 2.6 ± 0.2, 1.3 ± 0.1, and 1.1 for the non-contrast scan, 21.9 ± 0.4, 13.9 ± 0.2, and 14.6 for the 1st phase of the contrast-enhanced scan, and 15.5 ± 0.3, 9.8 ± 0.1, and 10.4 for the 2nd phase of the contrast-enhanced scan, respectively. The E with the PD method differed from the PVCF method by 66.7% for the non-contrast scan, by 44.9% and by 45.5% respectively for the 1st and 2nd phases of the contrast-enhanced scan. The E with PVCF was comparable to the results from the CT-Expo method with percent differences of 15.8%, 5.0%, and 6.3% for the non-contrast scan and the 1st and 2nd phases of the contrast-enhanced scan, respectively. To conclude, the PVCF method estimated E within 16% difference as compared to 50–70% in the PD method. In addition, the results demonstrate that E can be estimated accurately from a limited number of detectors. (paper)
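    The arithmetic behind this comparison can be sketched directly: the effective dose is E = Σ_T w_T H_T, the PD method takes each detector reading as the whole-organ dose, and the PVCF method scales that reading by the fraction of the organ lying inside the scanned volume. In the sketch below the weights are ICRP 103 tissue weighting factors for the organs shown, while the organ doses and volume fractions are invented for illustration (a full calculation would sum over all ICRP tissues).

```python
# ICRP 103 tissue weighting factors for the organs shown (subset only).
W_T = {"thyroid": 0.04, "oesophagus": 0.04, "lung": 0.12, "red_marrow": 0.12}
# Hypothetical detector readings (mGy) and fraction of each organ in the scan.
point_dose = {"thyroid": 45.0, "oesophagus": 30.0, "lung": 20.0, "red_marrow": 8.0}
frac_in_scan = {"thyroid": 1.0, "oesophagus": 0.6, "lung": 0.3, "red_marrow": 0.1}

E_pd = sum(W_T[t] * point_dose[t] for t in W_T)                      # PD method
E_pvcf = sum(W_T[t] * point_dose[t] * frac_in_scan[t] for t in W_T)  # PVCF method
print(f"E (PD)   = {E_pd:.2f} mSv")
print(f"E (PVCF) = {E_pvcf:.2f} mSv")
```

    As in the record, the uncorrected PD sum overstates the contribution of organs that are only partially irradiated.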

  2. Reliable B cell epitope predictions: impacts of method development and improved benchmarking

    DEFF Research Database (Denmark)

    Kringelum, Jens Vindahl; Lundegaard, Claus; Lund, Ole;

    2012-01-01

    biomedical applications such as rational vaccine design, development of disease diagnostics and immunotherapeutics. However, experimental mapping of epitopes is resource intensive, making in silico methods an appealing complementary approach. To date, the reported performance of methods for in silico mapping...

  3. Implementation and benchmark of a long-range corrected functional in the density functional based tight-binding method.

    Science.gov (United States)

    Lutsker, V; Aradi, B; Niehaus, T A

    2015-11-14

    Bridging the gap between first principles methods and empirical schemes, the density functional based tight-binding method (DFTB) has become a versatile tool in predictive atomistic simulations over the past years. One of the major restrictions of this method is the limitation to local or gradient corrected exchange-correlation functionals. This excludes the important class of hybrid or long-range corrected functionals, which are advantageous in thermochemistry, as well as in the computation of vibrational, photoelectron, and optical spectra. The present work provides a detailed account of the implementation of DFTB for a long-range corrected functional in generalized Kohn-Sham theory. We apply the method to a set of organic molecules and compare ionization potentials and electron affinities with the original DFTB method and higher level theory. The new scheme cures the significant overpolarization in electric fields found for local DFTB, which parallels the functional dependence in first principles density functional theory (DFT). At the same time, the computational savings with respect to full DFT calculations are not compromised as evidenced by numerical benchmark data. PMID:26567646

  4. Implementation and benchmark of a long-range corrected functional in the density functional based tight-binding method

    International Nuclear Information System (INIS)

    Bridging the gap between first principles methods and empirical schemes, the density functional based tight-binding method (DFTB) has become a versatile tool in predictive atomistic simulations over the past years. One of the major restrictions of this method is the limitation to local or gradient corrected exchange-correlation functionals. This excludes the important class of hybrid or long-range corrected functionals, which are advantageous in thermochemistry, as well as in the computation of vibrational, photoelectron, and optical spectra. The present work provides a detailed account of the implementation of DFTB for a long-range corrected functional in generalized Kohn-Sham theory. We apply the method to a set of organic molecules and compare ionization potentials and electron affinities with the original DFTB method and higher level theory. The new scheme cures the significant overpolarization in electric fields found for local DFTB, which parallels the functional dependence in first principles density functional theory (DFT). At the same time, the computational savings with respect to full DFT calculations are not compromised as evidenced by numerical benchmark data

  5. Benchmark exercise

    International Nuclear Information System (INIS)

    The motivation to conduct this benchmark exercise, a summary of the results, and a discussion of and conclusions from the intercomparison are given in Section 5.2. This section contains further details of the results of the calculations and intercomparisons, illustrated by tables and figures, but avoiding repetition of Section 5.2 as far as possible. (author)

  6. Benchmarking of methods for identification of antimicrobial resistance genes in bacterial whole genome data

    DEFF Research Database (Denmark)

    Clausen, Philip T. L. C.; Zankari, Ea; Aarestrup, Frank Møller;

    2016-01-01

    two different methods in current use for identification of antibiotic resistance genes in bacterial WGS data. A novel method, KmerResistance, which examines the co-occurrence of k-mers between the WGS data and a database of resistance genes, was developed. The performance of this method was compared... with two previously described methods, ResFinder and SRST2, which use an assembly/BLAST method and BWA, respectively, using two datasets with a total of 339 isolates, covering five species, originating from the Oxford University Hospitals NHS Trust and Danish pig farms. The predicted resistance was... compared with the observed phenotypes for all isolates. To further challenge the sensitivity of the in silico methods, the datasets were also down-sampled to 1% of the reads and reanalysed. The best results were obtained by identification of resistance genes by mapping directly against the raw reads. This...

  7. Analysis of Cumulative Dose to Implanted Pacemaker According to Various IMRT Delivery Methods: Optimal Dose Delivery Versus Dose Reduction Strategy

    International Nuclear Information System (INIS)

    Cancer patients with an implanted cardiac pacemaker occasionally require radiotherapy. The pacemaker may be damaged or malfunction during radiotherapy due to ionizing radiation or electromagnetic interference. Ideally, radiotherapy should be planned to keep the dose to the pacemaker low enough to avoid malfunction, but current radiation treatment planning (RTP) systems do not accurately calculate the dose deposited near field borders or beyond the irradiated fields. For beam delivery techniques using multiple intensity-modulated fields, the dosimetric effect of scattered radiation in high-energy photon beams needs to be analyzed in detail on the basis of measurement data. The aim of this study is to evaluate the discrepancies between pacemaker doses calculated by an RTP system and measured doses. We also designed a dose reduction strategy to keep the dose below the limit of 2 Gy for radiotherapy patients with an implanted cardiac pacemaker. The total accumulated dose of 145 cGy, based on in-vivo dosimetry, satisfied the recommendation criteria for preventing pacemaker malfunction in the SS technique. Moreover, a 2 mm lead shielder reduced the scattered doses by up to 60% and 40% in the patient and the phantom, respectively. The SS technique with lead shielding could reduce the accumulated scattered doses to less than 100 cGy. Calculated and measured doses were not greatly affected by the beam delivery technique. In-vivo and phantom-measured doses at the pacemaker position showed critical discrepancies, up to a factor of 4, compared with the doses planned in the RTP system. The current SS technique delivered scattered doses below the recommendation criteria, and the use of a 2 mm lead shielder further reduced the scattered doses by 60%. The tertiary lead shielder can be useful to prevent malfunction or electrical damage of implanted pacemakers during radiotherapy. More accurate estimation of the scattered doses to the patient or to medical devices in RTP is required to design a proper dose reduction strategy.

  8. Fast neutron flux calculation benchmark analysis of PWR pressure vessel based on 3D MC-SN coupled method

    International Nuclear Information System (INIS)

    The Monte Carlo (MC)-discrete ordinates (SN) coupled method is an efficient approach to shielding calculations for nuclear devices with complex geometries and deep penetration. The 3D MC-SN coupled method has been used for a PWR shielding calculation for the first time. According to the characteristics of the NUREG/CR-6115 PWR model, the thermal shield is specified as the common surface linking the Monte Carlo complex geometrical model and the deep-penetration SN model. A 3D Monte Carlo code is employed to accurately simulate the structure from the core to the thermal shield. The neutron tracks crossing the thermal shield inner surface are recorded by the MC code. The SN boundary source is generated by the interface program and used by the 3D SN code to treat the calculation from the thermal shield to the pressure vessel. The calculation results include the circumferential distributions of fast neutron flux at the pressure vessel inner wall, pressure vessel T/4, and lower weld locations. The results are compared with the MCNP and DORT solutions of the benchmark report, and satisfactory agreement is obtained. The validity of the method and the correctness of the programs are proved. (authors)

  9. Benchmarking the invariant embedding method against analytical solutions in model transport problems

    OpenAIRE

    Wahlberg Malin; Pázsit Imre

    2006-01-01

    The purpose of this paper is to demonstrate the use of the invariant embedding method in a few model transport problems for which it is also possible to obtain an analytical solution. The use of the method is demonstrated in three different areas. The first is the calculation of the energy spectrum of sputtered particles from a scattering medium without absorption, where the multiplication (particle cascade) is generated by recoil production. Both constant and energy dependent cross-sections ...

  10. Component-wise partitioned explicit finite element method: Benchmark tests for linear wave propagation in solids

    Czech Academy of Sciences Publication Activity Database

    Kolman, Radek; Cho, S.S.; Park, K.C.

    Athens: National Technical University of Athens, 2015 - (Papadrakakis, M.; Papadopoulos, V.). C 620 ISBN 978-960-99994-7-2. [International Conference on Computational Methods in Structural Dynamics and Earthquake Engineering /5./. 25.05.2015-27.05.2015, Crete] R&D Projects: GA ČR(CZ) GAP101/12/2315; GA TA ČR(CZ) TH01010772 Institutional support: RVO:61388998 Keywords: wave propagation * spurious oscillations * finite element method Subject RIV: BI - Acoustics

  11. Computer–based method of bite mark analysis: A benchmark in forensic dentistry?

    Science.gov (United States)

    Pallam, Nandita Kottieth; Boaz, Karen; Natrajan, Srikant; Raj, Minu; Manaktala, Nidhi; Lewis, Amitha J.

    2016-01-01

    Aim: The study aimed to determine the technique with maximum accuracy in the production of bite mark overlays. Materials and Methods: Thirty subjects (10 males and 20 females; all aged 20–30 years) with a complete set of natural upper and lower anterior teeth were selected for this study after obtaining approval from the Institutional Ethical Committee. Upper and lower alginate impressions were taken and die stone models were obtained from each impression; overlays were produced from the biting surfaces of six upper and six lower anterior teeth by hand tracing from study casts, hand tracing from wax impressions of the bite surface, the radiopaque wax impression method, and the xerographic method. These were compared with the original overlay produced digitally. Results: The xerographic method was the most accurate of the four techniques, with the highest reproducibility for bite mark analysis. The wax impression methods were better for producing overlays of teeth positioned away from the occlusal plane. Conclusions: Various techniques are used in bite mark analysis and the choice of technique depends largely on personal preference. No single technique has been shown to be better than the others, and very little research has been carried out to compare different methods. This study evaluated the accuracy of direct comparisons between suspects' models and bite marks against indirect comparisons in the form of conventional traced overlays of suspects, and found the xerographic technique to be the best. PMID:27051221

  12. Model Averaging Software for Dichotomous Dose Response Risk Estimation

    Directory of Open Access Journals (Sweden)

    Matthew W. Wheeler

    2008-02-01

    Model averaging has been shown to be a useful method for incorporating model uncertainty in quantitative risk estimation. In certain circumstances this technique is computationally complex, requiring sophisticated software to carry out the computation. We introduce software that implements model averaging for risk assessment based upon dichotomous dose-response data. This software, which we call Model Averaging for Dichotomous Response Benchmark Dose (MADr-BMD), fits the quantal response models that are also used in the US Environmental Protection Agency benchmark dose software suite, and generates a model-averaged dose-response model from which benchmark dose and benchmark dose lower bound estimates are derived. The software fulfills a need for risk assessors, allowing them to go beyond a single model in risk assessments based on quantal data by focusing on a set of models that describes the experimental data.
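    A minimal sketch of the model-averaging idea for quantal data follows: fit two dichotomous dose-response models by maximum likelihood, form Akaike weights, and combine the per-model BMDs. This is a deliberate simplification (MADr-BMD averages the fitted dose-response curves themselves and also produces BMDLs); the data, starting values, and the two-model set are invented.

```python
import numpy as np
from scipy.optimize import minimize, brentq
from scipy.stats import norm

# Hypothetical quantal data: dose, number affected, group size.
dose  = np.array([0.0, 10.0, 50.0, 150.0])
n_aff = np.array([2, 5, 14, 37])
n_tot = np.array([50, 50, 50, 50])

def nll(params, link):
    """Binomial negative log-likelihood for p = link(a + b*dose)."""
    a, b = params
    p = np.clip(link(a + b * dose), 1e-9, 1 - 1e-9)
    return -np.sum(n_aff * np.log(p) + (n_tot - n_aff) * np.log(1 - p))

links = {"logistic": lambda z: 1.0 / (1.0 + np.exp(-z)), "probit": norm.cdf}
fits = {}
for name, link in links.items():
    res = minimize(nll, x0=[-2.0, 0.01], args=(link,), method="Nelder-Mead")
    fits[name] = (res.x, 2 * 2 + 2 * res.fun)  # AIC = 2k + 2*NLL, k = 2

aics = np.array([aic for _, aic in fits.values()])
w = np.exp(-(aics - aics.min()) / 2.0)
w /= w.sum()  # Akaike weights

def bmd(params, link, bmr=0.10):
    """Dose giving 10% extra risk over background for one fitted model."""
    a, b = params
    p0 = link(a)
    return brentq(lambda d: link(a + b * d) - (p0 + bmr * (1 - p0)), 0.0, 1e4)

bmds = np.array([bmd(p, links[m]) for m, (p, _) in fits.items()])
print(dict(zip(fits, w.round(3))), "model-averaged BMD =", round(float(w @ bmds), 2))
```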

  13. Dosing method of physical activity in aerobics classes for students

    Directory of Open Access Journals (Sweden)

    Beliak Yu.I.

    2014-10-01

    Purpose: to substantiate a method of dosing physical activity in aerobics classes for students. The basis of the method is the evaluation of the metabolic cost of the exercises used in them. Material: the experiment involved assessing the pulse response of students to loads in classical and step aerobics complexes (n = 47, age 20-23 years). The complexes used various factors to regulate intensity: performing combinations of basic steps, involving arm movements, holding 1 kg dumbbells, increasing the tempo of the musical accompaniment, and varying the height of the step platform. Results: on the basis of the relationship between heart rate and oxygen consumption, the energy cost of each means of controlling load intensity was determined. This indicator was used to justify the intensity, duration and frequency of aerobics classes corresponding to the students' level of physical condition and motor activity deficit. Conclusions: the computational component of this dosing method makes it convenient for use in automated computer programs. It can also easily be modified to dose loads in other types of recreational fitness.

  14. Benchmark verification of a method for calculating leakage from partial-length shield assembly modified cores

    International Nuclear Information System (INIS)

    Over the past several years, plant-life extension programs have been implemented at many U.S. plants. One method of pressure vessel (PV) fluence rate reduction being used in several of the older reactors involves partial replacement of the oxide fuel with metallic rods in those peripheral assemblies located at critical azimuths. This substitution extends axially over a region that depends on the individual plant design, but covers the most critical PV weld and plate locations, which may be subject to pressurized thermal shock. In order to analyze the resulting PV dosimetry using these partial-length shield assemblies (PLSA), a relatively simple but accurate method needs to be formulated and qualified that treats the axially asymmetric core leakage. Accordingly, an experiment was devised and performed at the VENUS critical facility in Mol, Belgium. The success of the proposed method bodes well for the accuracy of future analyses of on-line plants using PLSAs

  15. CCSDTQ interaction energies as a benchmark for CCSDT-level methods

    Czech Academy of Sciences Publication Activity Database

    Hobza, Pavel; Řezáč, Jan; Šimová, Lucia

    New Orleans: American Chemical Society, 2013. 31PHYS. ISSN 0065-7727. [National Spring Meeting of the American Chemical Society /245./. 07.04.2013-11.04.2013, New Orleans] Institutional support: RVO:61388963 Keywords: CCSDTQ * interaction energies * CCSDT-level methods Subject RIV: CF - Physical; Theoretical Chemistry

  16. PSA methods for technical specifications: insight gained from the reliability benchmark exercises and from the development of computerised support systems

    International Nuclear Information System (INIS)

    This paper describes the philosophy, the objectives and the lessons learned from the Reliability Benchmark Exercises (RBE), organized by the Joint Research Centre (JRC) Ispra of the Commission of the European Communities and carried out over several years within a worldwide community of users and developers of Probabilistic Safety Assessment (PSA) methods and applications. The causes of uncertainties, and the importance of the modelling uncertainties revealed by the exercises, led to a variety of observations on the use of reliability methods for the definition of technical specifications, including the limiting conditions for operation, the requirements for surveillance testing, the safety system set point limits and the administrative controls. In particular, it is argued that the use of PSA techniques as a source of information for safe operation of the plant requires validated system models, which might be better achieved by means of computerised analysis tools. These are helpful both in the design phase and during operations, when the operator or the surveyor has to define, case by case, the boundary conditions for the case at hand. In this sense, the study and development of computerised analysis tools is being pursued within the JRC Ispra with the objective of further improving and exploiting the application of appropriate reliability analyses of plants. The results obtained so far are presented, and finally the perspectives of this work are discussed in terms of advantages, needs and characteristics of the information system for the optimization of plant management and control

  17. A mathematical approach to optimal selection of dose values in the additive dose method of EPR dosimetry

    International Nuclear Information System (INIS)

    Additive dose methods commonly used in electron paramagnetic resonance (EPR) dosimetry are time consuming and labor intensive. We have developed a mathematical approach for determining the optimal spacing of applied doses and the number of spectra which should be taken at each dose level. Expected uncertainties in the data points are assumed to be normally distributed with a fixed standard deviation, and linearity of the dose response is also assumed. The optimum spacing and number of points necessary for minimal error can be estimated, as can the likely error in the resulting estimate. When low doses are being estimated for tooth enamel samples, the optimal spacing is shown to be a concentration of points near the zero dose value with fewer spectra taken at a single high dose value within the range of known linearity. Optimization of the analytical process results in increased accuracy and sample throughput

  18. A mathematical approach to optimal selection of dose values in the additive dose method of EPR dosimetry

    Energy Technology Data Exchange (ETDEWEB)

    Hayes, R.B.; Haskell, E.H.; Kenner, G.H. [Utah Univ., Salt Lake City, UT (United States)

    1996-01-01

    Additive dose methods commonly used in electron paramagnetic resonance (EPR) dosimetry are time consuming and labor intensive. We have developed a mathematical approach for determining the optimal spacing of applied doses and the number of spectra which should be taken at each dose level. Expected uncertainties in the data points are assumed to be normally distributed with a fixed standard deviation, and linearity of the dose response is also assumed. The optimum spacing and number of points necessary for minimal error can be estimated, as can the likely error in the resulting estimate. When low doses are being estimated for tooth enamel samples, the optimal spacing is shown to be a concentration of points near the zero dose value with fewer spectra taken at a single high dose value within the range of known linearity. Optimization of the analytical process results in increased accuracy and sample throughput.
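    The design question in these two records can be explored numerically. In the additive dose method the EPR signal is modeled as linear in total dose, y = b(D + x), so the unknown initial dose D is recovered as the magnitude of the fitted x-intercept, a/b. The Monte Carlo sketch below compares an evenly spaced design against the clustered design recommended above under a fixed budget of spectra; all numbers are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
true_dose, slope, sigma = 0.5, 1.0, 0.05  # Gy, signal/Gy, signal noise (assumed)
n_spectra, d_max = 12, 10.0               # measurement budget and max added dose

designs = {
    "evenly spaced": np.linspace(0.0, d_max, n_spectra),
    "clustered":     np.r_[np.zeros(9), np.full(3, d_max)],  # 9 at zero, 3 high
}

for name, x in designs.items():
    est = []
    for _ in range(20_000):
        y = slope * (true_dose + x) + rng.normal(0.0, sigma, x.size)
        b, a = np.polyfit(x, y, 1)  # fitted slope and intercept
        est.append(a / b)           # |x-intercept| = estimated dose
    est = np.asarray(est)
    print(f"{name:13s}  mean = {est.mean():.3f} Gy   sd = {est.std():.4f} Gy")
```

    With these settings the clustered design shows the smaller spread, consistent with the papers' conclusion that points concentrated near zero dose plus a single high-dose group minimize the error.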

  19. Complex absorbing potentials within EOM-CC family of methods: Theory, implementation, and benchmarks

    International Nuclear Information System (INIS)

    A production-level implementation of equation-of-motion coupled-cluster singles and doubles (EOM-CCSD) for electron attachment and excitation energies augmented by a complex absorbing potential (CAP) is presented. The new method enables the treatment of metastable states within the EOM-CC formalism in a similar manner as bound states. The numeric performance of the method and the sensitivity of resonance positions and lifetimes to the CAP parameters and the choice of one-electron basis set are investigated. A protocol for studying molecular shape resonances based on the use of standard basis sets and a universal criterion for choosing the CAP parameters are presented. Our results for a variety of π* shape resonances of small to medium-size molecules demonstrate that CAP-augmented EOM-CCSD is competitive relative to other theoretical approaches for the treatment of resonances and is often able to reproduce experimental results

  20. Computer–based method of bite mark analysis: A benchmark in forensic dentistry?

    OpenAIRE

    Nandita Kottieth Pallam; Karen Boaz; Srikant Natrajan; Minu Raj; Nidhi Manaktala; Lewis, Amitha J

    2016-01-01

    Aim: The study aimed to determine the technique with maximum accuracy in production of bite mark overlay. Materials and Methods: Thirty subjects (10 males and 20 females; all aged 20–30 years) with complete set of natural upper and lower anterior teeth were selected for this study after obtaining approval from the Institutional Ethical Committee. The upper and lower alginate impressions were taken and die stone models were obtained from each impression; overlays were produced from the biting ...

  1. Component-wise partitioned finite element method in linear wave propagation problems: benchmark tests

    Czech Academy of Sciences Publication Activity Database

    Kolman, Radek; Cho, S.S.; Červ, Jan; Park, K.C.

    Praha : Institute of Thermomechanics AS CR, 2014 - (Zolotarev, I.; Pešek, L.), s. 31-36 ISBN 978-80-87012-54-3. [DYMAMESI 2014. Praha (CZ), 25.11.2014-26.11.2014] R&D Projects: GA ČR(CZ) GAP101/11/0288 Institutional support: RVO:61388998 Keywords : stress wave propagation * finite element method * explicit time integrator * spurious oscillations * stress discontinuities Subject RIV: JR - Other Machinery

  2. Benchmarking the invariant embedding method against analytical solutions in model transport problems

    International Nuclear Information System (INIS)

    The purpose of this paper is to demonstrate the use of the invariant embedding method in a series of model transport problems, for which it is also possible to obtain an analytical solution. Due to the non-linear character of the embedding equations, their solution can only be obtained numerically. However, this can be done via a robust and effective iteration scheme. In return, the domain of applicability is far wider than the model problems investigated in this paper. The use of the invariant embedding method is demonstrated in three different areas. The first is the calculation of the energy spectrum of reflected (sputtered) particles from a multiplying medium, where the multiplication arises from recoil production. Both constant and energy dependent cross sections with a power law dependence were used in the calculations. The second application concerns the calculation of the path length distribution of reflected particles from a medium without multiplication. This is a relatively novel and unexpected application, since the embedding equations do not resolve the depth variable. The third application concerns the demonstration that solutions in an infinite medium and a half-space are interrelated through embedding-like integral equations, by the solution of which the reflected flux from a half-space can be reconstructed from solutions in an infinite medium or vice versa. In all cases the invariant embedding method proved to be robust, fast and monotonically converging to the exact solutions. (authors)

  3. Benchmarking the invariant embedding method against analytical solutions in model transport problems

    Directory of Open Access Journals (Sweden)

    Wahlberg Malin

    2006-01-01

    The purpose of this paper is to demonstrate the use of the invariant embedding method in a few model transport problems for which it is also possible to obtain an analytical solution. The use of the method is demonstrated in three different areas. The first is the calculation of the energy spectrum of sputtered particles from a scattering medium without absorption, where the multiplication (particle cascade) is generated by recoil production. Both constant and energy-dependent cross-sections with a power-law dependence were treated. The second application concerns the calculation of the path length distribution of reflected particles from a medium without multiplication. This is a relatively novel application, since the embedding equations do not resolve the depth variable. The third application concerns the demonstration that solutions in an infinite medium and in a half-space are interrelated through embedding-like integral equations, by the solution of which the flux reflected from a half-space can be reconstructed from solutions in an infinite medium or vice versa. In all cases, the invariant embedding method proved to be robust, fast, and monotonically converging to the exact solutions.

  4. International comparison of criticality accident evaluation methods. Evaluation plan of super-critical benchmark based on TRACY experiment

    International Nuclear Information System (INIS)

    In order to evaluate criticality accident analysis codes, a criticality accident benchmark problem was constructed based on the TRACY experiment. It is being evaluated by the contributors to the expert group on criticality excursion analysis, a group of the criticality safety WP of the OECD/NEA/NSC. This paper reports the details of TRACY Benchmarks I and II, and preliminary results of their analysis using the AGNES code. (author)

  5. What is the best practice for benchmark regulation of electricity distribution? Comparison of DEA, SFA and StoNED methods

    International Nuclear Information System (INIS)

    Electricity distribution is a natural local monopoly. In many countries, the regulators of this sector apply frontier methods such as data envelopment analysis (DEA) or stochastic frontier analysis (SFA) to estimate the efficient cost of operation. In Finland, a new StoNED method was adopted in 2012. This paper compares DEA, SFA and StoNED in the context of regulating electricity distribution. Using data from Finland, we compare the impacts of methodological choices on cost efficiency estimates and acceptable cost. While the efficiency estimates are highly correlated, the cost targets reveal major differences. In addition, we examine the performance of the methods by Monte Carlo simulations, calibrating the data generation process (DGP) to closely match the empirical data and the model specification of the regulator. We find that the StoNED estimator yields a root mean squared error (RMSE) of 4% with a sample size of 100, and precision improves as the sample size increases. The DEA estimator yields an RMSE of approximately 10%, but its performance deteriorates as the sample size increases. The SFA estimator has an RMSE of 144%; this poor performance is due to an incorrect functional form and multicollinearity. - Highlights: • We compare DEA, SFA and StoNED methods in the context of regulation of electricity distribution. • Both empirical comparisons and Monte Carlo simulations are presented. • The choice of benchmarking method has a significant economic impact on the regulatory outcomes. • StoNED yields the most precise results in the Monte Carlo simulations. • Five lessons concerning heterogeneity, noise, frontier, simulations, and implementation
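
    As an illustration of the frontier idea, the following minimal Python sketch (an assumed setup, not the regulator's implementation) solves the input-oriented, constant-returns-to-scale DEA model as a linear programme for each distributor; the data are synthetic.

        import numpy as np
        from scipy.optimize import linprog

        rng = np.random.default_rng(1)
        n, n_in, n_out = 20, 2, 1                    # 20 distributors, 2 inputs, 1 output
        X = rng.uniform(1.0, 5.0, size=(n_in, n))    # inputs, e.g. opex and network length
        Y = rng.uniform(1.0, 5.0, size=(n_out, n))   # outputs, e.g. energy delivered

        def ccr_efficiency(o):
            # decision vector v = [theta, lam_1, ..., lam_n]; minimise theta
            c = np.zeros(1 + n)
            c[0] = 1.0
            A_in = np.hstack([-X[:, [o]], X])              # X lam <= theta * x_o
            A_out = np.hstack([np.zeros((n_out, 1)), -Y])  # Y lam >= y_o
            res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                          b_ub=np.concatenate([np.zeros(n_in), -Y[:, o]]),
                          bounds=[(0, None)] * (1 + n))
            return res.fun

        scores = [ccr_efficiency(o) for o in range(n)]
        print(np.round(scores, 3))                   # a score of 1.0 marks frontier units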

  6. Results of a survey on accident and safety analysis codes, benchmarks, verification and validation methods

    International Nuclear Information System (INIS)

    This report is a compilation of the information submitted by AECL, CIAE, JAERI, ORNL and Siemens in response to a need identified at the 'Workshop on R and D Needs' at the IGORR-3 meeting. The survey compiled information on the national standards applied to the Safety Quality Assurance (SQA) programs undertaken by the participants. Information was assembled for the computer codes and nuclear data libraries used in accident and safety analyses for research reactors and the methods used to verify and validate the codes and libraries. Although the survey was not comprehensive, it provides a basis for exchanging information of common interest to the research reactor community

  7. Accurate Ionization Potentials and Electron Affinities of Acceptor Molecules III: A Benchmark of GW Methods.

    Science.gov (United States)

    Knight, Joseph W; Wang, Xiaopeng; Gallandi, Lukas; Dolgounitcheva, Olga; Ren, Xinguo; Ortiz, J Vincent; Rinke, Patrick; Körzdörfer, Thomas; Marom, Noa

    2016-02-01

    The performance of different GW methods is assessed for a set of 24 organic acceptors. Errors are evaluated with respect to coupled cluster singles, doubles, and perturbative triples [CCSD(T)] reference data for the vertical ionization potentials (IPs) and electron affinities (EAs), extrapolated to the complete basis set limit. Additional comparisons are made to experimental data, where available. We consider fully self-consistent GW (scGW), partial self-consistency in the Green's function (scGW0), non-self-consistent G0W0 based on several mean-field starting points, and a "beyond GW" second-order screened exchange (SOSEX) correction to G0W0. We also describe the implementation of the self-consistent Coulomb hole with screened exchange method (COHSEX), which serves as one of the mean-field starting points. The best performers overall are G0W0+SOSEX and G0W0 based on an IP-tuned long-range corrected hybrid functional with the former being more accurate for EAs and the latter for IPs. Both provide a balanced treatment of localized vs delocalized states and valence spectra in good agreement with photoemission spectroscopy (PES) experiments. PMID:26731609
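
    For reference, the complete-basis-set extrapolation mentioned above is commonly done with a two-point inverse-cube formula; the short Python sketch below shows this standard form (the cardinal numbers and energies are placeholders, and the paper's exact extrapolation scheme may differ).

        # two-point X^-3 extrapolation: E(X) = E_CBS + A / X**3
        def cbs_two_point(e_x, x, e_y, y):
            """Extrapolate correlation energies at cardinal numbers x < y."""
            return (y**3 * e_y - x**3 * e_x) / (y**3 - x**3)

        # e.g. triple- and quadruple-zeta correlation energies (hartree, made up)
        print(cbs_two_point(-0.512345, 3, -0.530123, 4))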

  8. Statistical methods used for code-to-code comparisons in the OECD/NRC PWR MSLB benchmark

    International Nuclear Information System (INIS)

    The ongoing pressurized water reactor (PWR) main steam line break (MSLB) benchmark problem, sponsored by the Organisation for Economic Co-operation and Development (OECD), the United States Nuclear Regulatory Commission (US NRC), and the Pennsylvania State University (PSU), consists of three exercises, whose combined purpose is to verify the capability of system codes to analyze complex transients with coupled core/plant interactions; to fully test the 3D neutronics/thermal-hydraulic coupling; and to evaluate discrepancies between the predictions of coupled codes in best-estimate transient simulations. Exercise two is intended to test the core response to imposed system thermal-hydraulic conditions. For this exercise, the participants are provided with transient boundary conditions and two cross-section libraries. Results are submitted for six steady-state cases and two transient scenarios. The boundary conditions, the details for each case, and the output requested are described in the final specifications for the benchmark problem. To fully analyze the data for comparison in the final report, a suite of statistical methods has been developed to serve as a reference in the absence of experimental data. A corrected arithmetic mean and standard deviation are calculated for all data types: single-value parameters, 1D axial distributions, 2D radial distributions, and time histories. Each participant's deviation from the mean and a corresponding figure of merit are reported for the purposes of comparison and discussion. Selected mean values and standard deviations are presented in this paper for several parameters at specific points of interest: for initial steady-state 2, at hot full power, radial and axial power distributions are presented, along with the effective multiplication factor, power peaking factors, and axial offset. For the snapshot taken at the time of highest return-to-power in transient Scenario 2, parameters presented include axial and radial power
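
    A minimal Python sketch of this kind of code-to-code statistic is given below (the benchmark's exact 'corrected' mean is not reproduced here; a plain arithmetic mean over participants is used, with an assumed RMS-type figure of merit).

        import numpy as np

        # rows = participants, columns = axial nodes of a 1D power distribution
        predictions = np.array([[1.02, 1.10, 0.95, 0.88],
                                [1.00, 1.08, 0.97, 0.90],
                                [1.05, 1.12, 0.93, 0.86]])

        mean = predictions.mean(axis=0)                 # reference in lieu of experimental data
        std = predictions.std(axis=0, ddof=1)
        deviation = predictions - mean                  # per participant, per node
        fom = np.sqrt(((deviation / mean) ** 2).mean(axis=1))  # RMS relative deviation
        for p, f in enumerate(fom):
            print(f"participant {p}: figure of merit = {f:.4f}")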

  9. Benchmark experiments for nuclear data

    International Nuclear Information System (INIS)

    Benchmark experiments offer the most direct method for validation of nuclear data. Benchmark experiments for several areas of application of nuclear data were specified by CSEWG. These experiments are surveyed and tests of recent versions of ENDF/B are presented. (U.S.)

  10. Finite Element Method Modeling of Sensible Heat Thermal Energy Storage with Innovative Concretes and Comparative Analysis with Literature Benchmarks

    Directory of Open Access Journals (Sweden)

    Claudio Ferone

    2014-08-01

    Full Text Available Efficient systems for high performance buildings are required to improve the integration of renewable energy sources and to reduce primary energy consumption from fossil fuels. This paper is focused on sensible heat thermal energy storage (SHTES) systems using solid media and numerical simulation of their transient behavior using the finite element method (FEM). Unlike other papers in the literature, the numerical model and simulation approach have simultaneously taken into consideration various aspects: thermal properties at high temperature, the actual geometry of the repeated storage element and the actual storage cycle adopted. High-performance thermal storage materials from the literature have been tested and used here as reference benchmarks. The other materials tested are lightweight concretes with recycled aggregates and a geopolymer concrete. Their thermal properties have been measured and used as inputs in the numerical model to preliminarily evaluate their application in thermal storage. The analysis carried out can also be used to optimize the storage system in terms of the thermal properties required of the storage material. The results showed a significant influence of the thermal properties on the performance of the storage elements. Simulation results have provided information for further scale-up from a single differential storage element to the entire module as a function of material thermal properties.

  11. A track length estimator method for dose calculations in low-energy X-ray irradiations. Implementation, properties and performance

    International Nuclear Information System (INIS)

    The track length estimator (TLE) method, an 'on-the-fly' fluence tally in Monte Carlo (MC) simulations, recently implemented in GATE 6.2, is known as a powerful tool to accelerate dose calculations in the domain of low-energy X-ray irradiations using the kerma approximation. Overall efficiency gains of the TLE with respect to analogous MC were reported in the literature for regions of interest in various applications (photon beam radiation therapy, X-ray imaging). The behaviour of the TLE method in terms of statistical properties, dose deposition patterns, and computational efficiency compared to analogous MC simulations was investigated. The statistical properties of the dose deposition were first assessed. Derivations of the variance reduction factor of TLE versus analogous MC were carried out, starting from the expression of the dose estimate variance in the TLE and analogous MC schemes. Two test cases were chosen to benchmark the TLE performance in comparison with analogous MC: (i) a small animal irradiation under stereotactic synchrotron radiation therapy conditions and (ii) the irradiation of a human pelvis during a cone beam computed tomography acquisition. Dose distribution patterns and efficiency gain maps were analysed. The efficiency gain exhibits strong variations within a given irradiation case, depending on the geometrical (voxel size, ballistics) and physical (material and beam properties) parameters on the voxel scale. Typical values lie between 10 and 10^3, with lower levels in dense regions (bone) outside the irradiated channels (scattered dose only), and higher levels in soft tissues directly exposed to the beams.
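
    The scoring step behind a kerma-approximation track length estimator can be sketched as follows (a hedged Python illustration with hypothetical inputs, not the GATE implementation): every photon track segment contributes the product of its track-length fluence estimate, the photon energy, and the mass energy-absorption coefficient.

        import numpy as np

        MEV_TO_J = 1.602176634e-13

        def tle_score(dose, segments, mu_en_over_rho, voxel_volume):
            """segments: iterable of (voxel, E_MeV, length_cm, weight);
            mu_en_over_rho(voxel, E): mass energy-absorption coeff., cm^2/g;
            voxel_volume: cm^3. Returns dose in Gy under the kerma approximation."""
            for voxel, energy, length, weight in segments:
                # track-length fluence estimate L/V times E * mu_en/rho gives J/g
                kerma = (weight * energy * MEV_TO_J
                         * mu_en_over_rho(voxel, energy) * length
                         / voxel_volume[voxel])
                dose[voxel] += 1.0e3 * kerma        # J/g -> Gy (J/kg)
            return dose

        # tiny demo with hypothetical segments and a flat placeholder coefficient
        dose = np.zeros(4)
        segments = [(2, 0.080, 0.05, 1.0), (2, 0.050, 0.03, 1.0)]
        print(tle_score(dose, segments, lambda v, e: 0.025, np.full(4, 1.0e-3)))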

  12. A track length estimator method for dose calculations in low-energy X-ray irradiations. Implementation, properties and performance

    Energy Technology Data Exchange (ETDEWEB)

    Baldacci, F.; Delaire, F.; Letang, J.M.; Sarrut, D.; Smekens, F.; Freud, N. [Lyon-1 Univ. - CREATIS, CNRS UMR5220, Inserm U1044, INSA-Lyon, Centre Leon Berard (France); Mittone, A.; Coan, P. [LMU Munich (Germany). Dept. of Physics; LMU Munich (Germany). Faculty of Medicine; Bravin, A.; Ferrero, C. [European Synchrotron Radiation Facility, Grenoble (France); Gasilov, S. [LMU Munich (Germany). Dept. of Physics

    2015-05-01

    The track length estimator (TLE) method, an 'on-the-fly' fluence tally in Monte Carlo (MC) simulations, recently implemented in GATE 6.2, is known as a powerful tool to accelerate dose calculations in the domain of low-energy X-ray irradiations using the kerma approximation. Overall efficiency gains of the TLE with respect to analogous MC were reported in the literature for regions of interest in various applications (photon beam radiation therapy, X-ray imaging). The behaviour of the TLE method in terms of statistical properties, dose deposition patterns, and computational efficiency compared to analogous MC simulations was investigated. The statistical properties of the dose deposition were first assessed. Derivations of the variance reduction factor of TLE versus analogous MC were carried out, starting from the expression of the dose estimate variance in the TLE and analogous MC schemes. Two test cases were chosen to benchmark the TLE performance in comparison with analogous MC: (i) a small animal irradiation under stereotactic synchrotron radiation therapy conditions and (ii) the irradiation of a human pelvis during a cone beam computed tomography acquisition. Dose distribution patterns and efficiency gain maps were analysed. The efficiency gain exhibits strong variations within a given irradiation case, depending on the geometrical (voxel size, ballistics) and physical (material and beam properties) parameters on the voxel scale. Typical values lie between 10 and 10^3, with lower levels in dense regions (bone) outside the irradiated channels (scattered dose only), and higher levels in soft tissues directly exposed to the beams.

  13. A track length estimator method for dose calculations in low-energy X-ray irradiations: implementation, properties and performance.

    Science.gov (United States)

    Baldacci, F; Mittone, A; Bravin, A; Coan, P; Delaire, F; Ferrero, C; Gasilov, S; Létang, J M; Sarrut, D; Smekens, F; Freud, N

    2015-03-01

    The track length estimator (TLE) method, an "on-the-fly" fluence tally in Monte Carlo (MC) simulations, recently implemented in GATE 6.2, is known as a powerful tool to accelerate dose calculations in the domain of low-energy X-ray irradiations using the kerma approximation. Overall efficiency gains of the TLE with respect to analogous MC were reported in the literature for regions of interest in various applications (photon beam radiation therapy, X-ray imaging). The behaviour of the TLE method in terms of statistical properties, dose deposition patterns, and computational efficiency compared to analogous MC simulations was investigated. The statistical properties of the dose deposition were first assessed. Derivations of the variance reduction factor of TLE versus analogous MC were carried out, starting from the expression of the dose estimate variance in the TLE and analogous MC schemes. Two test cases were chosen to benchmark the TLE performance in comparison with analogous MC: (i) a small animal irradiation under stereotactic synchrotron radiation therapy conditions and (ii) the irradiation of a human pelvis during a cone beam computed tomography acquisition. Dose distribution patterns and efficiency gain maps were analysed. The efficiency gain exhibits strong variations within a given irradiation case, depending on the geometrical (voxel size, ballistics) and physical (material and beam properties) parameters on the voxel scale. Typical values lie between 10 and 10^3, with lower levels in dense regions (bone) outside the irradiated channels (scattered dose only), and higher levels in soft tissues directly exposed to the beams. PMID:24973309

  14. Benchmark problem proposal

    International Nuclear Information System (INIS)

    The meeting of the Radiation Energy Spectra Unfolding Workshop organized by the Radiation Shielding Information Center is discussed. The plans of the unfolding code benchmarking effort to establish standardization methods for both the few-channel neutron and the many-channel gamma-ray and neutron spectroscopy problems are presented

  15. Variable selection in near-infrared spectroscopy: Benchmarking of feature selection methods on biodiesel data

    International Nuclear Information System (INIS)

    During the past several years, near-infrared (near-IR/NIR) spectroscopy has increasingly been adopted as an analytical tool in various fields, from the petroleum to the biomedical sector. The NIR spectrum (above 4000 cm^-1) of a sample is typically measured by modern instruments at a few hundred wavelengths. Recently, considerable effort has been directed towards developing procedures to identify variables (wavelengths) that contribute useful information. Variable selection (VS) or feature selection, also called frequency selection or wavelength selection, is a critical step in data analysis for vibrational spectroscopy (infrared, Raman, or NIRS). In this paper, we compare the performance of 16 different feature selection methods for the prediction of properties of biodiesel fuel, including density, viscosity, methanol content, and water concentration. The feature selection algorithms tested include stepwise multiple linear regression (MLR-step), interval partial least squares regression (iPLS), backward iPLS (BiPLS), forward iPLS (FiPLS), moving window partial least squares regression (MWPLS), (modified) changeable size moving window partial least squares (CSMWPLS/MCSMWPLSR), searching combination moving window partial least squares (SCMWPLS), successive projections algorithm (SPA), uninformative variable elimination (UVE, including UVE-SPA), simulated annealing (SA), back-propagation artificial neural networks (BP-ANN), Kohonen artificial neural network (K-ANN), and genetic algorithms (GAs, including GA-iPLS). Two linear techniques for calibration model building, namely multiple linear regression (MLR) and partial least squares regression/projection to latent structures (PLS/PLSR), are used for the evaluation of biofuel properties. A comparison with a non-linear calibration model, artificial neural networks (ANN-MLP), is also provided. Discussion of gasoline, ethanol-gasoline (bioethanol), and diesel fuel data is presented. The results of other spectroscopic techniques
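
    As a concrete illustration of one family of methods listed above, the Python sketch below runs forward stepwise selection around a linear model (the MLR-step idea) and then builds a PLS calibration on the selected wavelengths, using scikit-learn and synthetic spectra; it is an assumed setup, not the paper's code.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.feature_selection import SequentialFeatureSelector
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(0)
        X = rng.normal(size=(120, 200))        # 120 spectra x 200 wavelengths (synthetic)
        y = X[:, 40] - 0.5 * X[:, 120] + 0.1 * rng.normal(size=120)  # synthetic property

        # forward stepwise selection with a linear model (the MLR-step idea)
        selector = SequentialFeatureSelector(LinearRegression(),
                                             n_features_to_select=10,
                                             direction="forward", cv=5)
        selector.fit(X, y)
        selected = np.flatnonzero(selector.get_support())

        # PLS calibration model built on the selected wavelengths
        pls = PLSRegression(n_components=3).fit(X[:, selected], y)
        print("selected wavelength indices:", selected)
        print("calibration R^2:", round(pls.score(X[:, selected], y), 3))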

  16. Absorbed dose determination in photon fields using the tandem method

    International Nuclear Information System (INIS)

    The purpose of this work is to develop an alternative method to determine the absorbed dose and effective energy of photons with unknown spectral distributions. It includes a 'tandem' system that consists of two thermoluminescent dosemeters with different energy dependence. LiF:Mg,Ti and CaF2:Dy thermoluminescent dosemeters and a Harshaw 3500 reading system are employed. Dosemeters are characterized with 90Sr-90Y, calibrated at the energy of 60Co and irradiated with seven different qualities of X-ray beams, as suggested by ANSI No. 13 and ISO 4037. The responses of each type of dosemeter are fitted to a function that depends on the effective photon energy. The fit is carried out by means of the Rosenbrock minimization algorithm. The mathematical model used for this function includes five parameters and comprises a Gaussian plus a straight line. Results show that the analytical functions reproduce the experimental response data with a margin of error of less than 5%. The ratio of the responses of CaF2:Dy and LiF:Mg,Ti as a function of the radiation energy allows the effective photon energy and the absorbed dose to be established, with margins of error of less than 10% and 20%, respectively.
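
    A hedged Python sketch of the tandem principle follows, with made-up calibration data: each dosemeter's relative energy response is fitted with a five-parameter Gaussian-plus-line model, and the CaF2:Dy / LiF response ratio is inverted numerically to yield the effective energy.

        import numpy as np
        from scipy.optimize import brentq, curve_fit

        def response(E, a, mu, sigma, b, c):
            # five parameters: Gaussian plus a straight line
            return a * np.exp(-0.5 * ((E - mu) / sigma) ** 2) + b * E + c

        E_cal = np.array([20.0, 30.0, 50.0, 80.0, 120.0, 200.0, 1250.0])  # keV qualities
        r_lif = np.array([1.30, 1.40, 1.30, 1.20, 1.10, 1.05, 1.00])      # made-up responses
        r_caf = np.array([8.0, 11.0, 9.0, 5.0, 2.5, 1.5, 1.0])            # made-up responses

        p_lif, _ = curve_fit(response, E_cal, r_lif,
                             p0=(0.4, 40.0, 30.0, 0.0, 1.0), maxfev=20000)
        p_caf, _ = curve_fit(response, E_cal, r_caf,
                             p0=(10.0, 35.0, 30.0, 0.0, 1.0), maxfev=20000)

        measured_ratio = 4.2                       # CaF2:Dy reading / LiF reading
        f = lambda E: response(E, *p_caf) / response(E, *p_lif) - measured_ratio
        E_eff = brentq(f, 30.0, 1250.0)            # effective energy on the falling branch
        print(f"effective energy = {E_eff:.0f} keV")
        # the absorbed dose then follows from the reading divided by response(E_eff, *p_lif)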

  17. Selecting benchmarks for reactor calculations

    International Nuclear Information System (INIS)

    Criticality, reactor physics, fusion and shielding benchmarks are expected to play important roles in GEN IV design, safety analysis and in the validation of analytical tools used to design these reactors. For existing reactor technology, benchmarks are used to validate computer codes and test nuclear data libraries. However, the selection of these benchmarks is usually done by visual inspection, which depends on the expertise and experience of the user and thereby introduces a user bias into the process. In this paper we present a method for the selection of these benchmarks for reactor applications and for uncertainty reduction, based on the Total Monte Carlo (TMC) method. Similarities between an application case and one or several benchmarks are quantified using the correlation coefficient. Based on the method, we also propose two approaches for reducing nuclear data uncertainty using integral benchmark experiments as an additional constraint in the TMC method: a binary accept/reject method and a method of uncertainty reduction using weights. Finally, the methods were applied to a full Lead Fast Reactor core and a set of criticality benchmarks. (author)
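
    The similarity index can be sketched in a few lines of Python (synthetic numbers below; in practice each k-eff vector comes from transport runs with the same set of random nuclear-data files):

        import numpy as np

        rng = np.random.default_rng(42)
        shared = rng.normal(size=300)                    # common nuclear-data effect
        keff_application = 1.00 + 0.005 * shared + 0.001 * rng.normal(size=300)
        keff_benchmark   = 0.99 + 0.004 * shared + 0.002 * rng.normal(size=300)

        r = np.corrcoef(keff_application, keff_benchmark)[0, 1]
        print(f"correlation coefficient = {r:.2f}")      # near 1 marks a relevant benchmark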

  18. The Dutch Hospital Standardised Mortality Ratio (HSMR) method and cardiac surgery: benchmarking in a national cohort using hospital administration data versus a clinical database

    OpenAIRE

    Siregar, S.; Pouw, M E; Moons, K G M; Versteegh, M. I. M.; Bots, M. L.; van der Graaf, Y; Kalkman, C.J.; van Herwerden, L.A.; Groenwold, R. H. H.

    2013-01-01

    Objective To compare the accuracy of data from hospital administration databases and a national clinical cardiac surgery database and to compare the performance of the Dutch hospital standardised mortality ratio (HSMR) method and the logistic European System for Cardiac Operative Risk Evaluation, for the purpose of benchmarking of mortality across hospitals. Methods Information on all patients undergoing cardiac surgery between 1 January 2007 and 31 December 2010 in 10 centres was extracted f...

  19. Proton dose distribution measurements using a MOSFET detector with a simple dose-weighted correction method for LET effects.

    Science.gov (United States)

    Kohno, Ryosuke; Hotta, Kenji; Matsuura, Taeko; Matsubara, Kana; Nishioka, Shie; Nishio, Teiji; Kawashima, Mitsuhiko; Ogino, Takashi

    2011-01-01

    We experimentally evaluated the proton beam dose reproducibility, sensitivity, angular dependence and depth-dose relationships for a new Metal Oxide Semiconductor Field Effect Transistor (MOSFET) detector. The detector was fabricated with a thinner oxide layer and was operated at high-bias voltages. In order to accurately measure dose distributions, we developed a practical method for correcting the MOSFET response to proton beams. The detector was tested by examining lateral dose profiles formed by protons passing through an L-shaped bolus. The dose reproducibility, angular dependence and depth-dose response were evaluated using a 190 MeV proton beam. Depth-output curves produced using the MOSFET detectors were compared with results obtained using an ionization chamber (IC). Since accurate measurements of proton dose distribution require correction for LET effects, we developed a simple dose-weighted correction method. The correction factors were determined as a function of proton penetration depth, or residual range. The residual proton range at each measurement point was calculated using the pencil beam algorithm. Lateral measurements in a phantom were obtained for pristine and SOBP beams. The reproducibility of the MOSFET detector was within 2%, and the angular dependence was less than 9%. The detector exhibited a good response at the Bragg peak (0.74 relative to the IC detector). For dose distributions resulting from protons passing through an L-shaped bolus, the corrected MOSFET dose agreed well with the IC results. Absolute proton dosimetry can be performed using MOSFET detectors to a precision of about 3% (1 sigma). A thinner oxide layer improved the LET dependence in proton dosimetry. By employing correction methods for LET dependence, it is possible to measure absolute proton dose using MOSFET detectors. PMID:21587191

  20. The variance-covariance method: Microdosimetry in time-varying low dose-rate radiation fields

    OpenAIRE

    Breckow, Joachim; Wenning, A.; Roos, H; Kellerer, Albrecht M.

    1988-01-01

    The variance-covariance method is employed at low doses and in radiation fields of low dose rates from an 241Am (4 nGy/s) and a 90Sr (300 nGy/s) source. The preliminary applications and results illustrate some of the potential of the method, and show that the dose average of lineal energy or energy imparted can be determined over a wide range of doses and dose rates. The dose averages obtained with the variance-covariance method in time-varying fields, for which the conventional variance method...

  1. CCF benchmark test

    International Nuclear Information System (INIS)

    A benchmark test on common cause failures (CCF) was performed, giving interested institutions in Germany the opportunity to demonstrate and justify their interpretations of events and their methods and models for analysing CCF. The participants in this benchmark test belonged to expert and consultant organisations and to industrial institutions. The task for the benchmark test was to analyze two typical groups of motor-operated valves in German nuclear power plants. The benchmark test was carried out in two steps. In the first step the participants were to assess, in a qualitative way, some 200 event reports on isolation valves. They then were to establish, quantitatively, the reliability parameters for the CCF in the two groups of motor-operated valves using their own methods and their own calculation models. In a second step the reliability parameters were to be recalculated on the basis of a common reference set of well defined events, chosen from all given events, in order to analyze the influence of the calculation models on the reliability parameters. (orig.)

  2. Size-specific dose estimate (SSDE) provides a simple method to calculate organ dose for pediatric CT examinations

    International Nuclear Information System (INIS)

    Purpose: To investigate the correlation of size-specific dose estimate (SSDE) with absorbed organ dose, and to develop a simple methodology for estimating patient organ dose in a pediatric population (5–55 kg). Methods: Four physical anthropomorphic phantoms representing a range of pediatric body habitus were scanned with metal oxide semiconductor field effect transistor (MOSFET) dosimeters placed at 23 organ locations to determine absolute organ dose. Phantom absolute organ dose was divided by phantom SSDE to determine correlation between organ dose and SSDE. Organ dose correlation factors (CF_SSDE,organ) were then multiplied by patient-specific SSDE to estimate patient organ dose. The CF_SSDE,organ were used to retrospectively estimate individual organ doses from 352 chest and 241 abdominopelvic pediatric CT examinations, where mean patient weight was 22 kg ± 15 (range 5–55 kg), and mean patient age was 6 yrs ± 5 (range 4 months to 23 yrs). Patient organ dose estimates were compared to published pediatric Monte Carlo study results. Results: Phantom effective diameters were matched with patient population effective diameters to within 4 cm; thus, showing appropriate scalability of the phantoms across the entire pediatric population in this study. Individual CF_SSDE,organ were determined for a total of 23 organs in the chest and abdominopelvic region across nine weight subcategories. For organs fully covered by the scan volume, correlation in the chest (average 1.1; range 0.7–1.4) and abdominopelvic region (average 0.9; range 0.7–1.3) was near unity. For organ/tissue that extended beyond the scan volume (i.e., skin, bone marrow, and bone surface), correlation was determined to be poor (average 0.3; range: 0.1–0.4) for both the chest and abdominopelvic regions, respectively. A means to estimate patient organ dose was demonstrated. Calculated patient organ dose, using patient SSDE and CF_SSDE,organ, was compared to previously published pediatric patient doses that
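
    A minimal Python sketch of the resulting workflow is shown below; the exponential SSDE conversion fit for the 32 cm phantom is the published AAPM Report 204 form, while the CF_SSDE,organ values and scan parameters are placeholders, not the paper's tabulated ones.

        import numpy as np

        def ssde_32cm(ctdi_vol_mGy, eff_diameter_cm):
            # AAPM Report 204 conversion-factor fit for the 32 cm reference phantom
            return 3.704369 * np.exp(-0.03671937 * eff_diameter_cm) * ctdi_vol_mGy

        cf_organ = {"liver": 0.9, "lungs": 1.1}    # placeholder CF_SSDE,organ values

        ssde = ssde_32cm(ctdi_vol_mGy=5.0, eff_diameter_cm=18.0)
        for organ, cf in cf_organ.items():
            print(f"{organ}: estimated organ dose = {cf * ssde:.2f} mGy")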

  3. A phantom based method for deriving typical patient doses from measurements of dose-area product on populations of patients

    International Nuclear Information System (INIS)

    One of the chief sources of uncertainty in the comparison of patient dosimetry data is the influence of patient size on dose. Dose has been shown to relate closely to the equivalent diameter of the patient. This concept has been used to derive a prospective, phantom based method for determining size correction factors for measurements of dose-area product. The derivation of the size correction factor has been demonstrated mathematically, and the appropriate factor determined for a number of different X-ray sets. The use of phantom measurements enables the effect of patient size to be isolated from other factors influencing patient dose. The derived factors agree well with those determined retrospectively from patient dose survey data. Size correction factors have been applied to the results of a large scale patient dose survey, and this approach has been compared with the method of selecting patients according to their weight. For large samples of data, mean dose-area product values are independent of the analysis method used. The chief advantage of using size correction factors is that it allows all patient data to be included in a survey, whereas patient selection has been shown to exclude approximately half of all patients. (author)

  4. Intercomparison of the finite difference and nodal discrete ordinates and surface flux transport methods for a LWR pool-reactor benchmark problem in X-Y geometry

    International Nuclear Information System (INIS)

    The aim of the present work is to compare and discuss three of the most advanced two-dimensional transport methods (the finite difference and nodal discrete ordinates methods and the surface flux method), as incorporated into the transport codes TWODANT, TWOTRAN-NODAL, MULTIMEDIUM and SURCU. For the intercomparison, the eigenvalue and the neutron flux distribution are calculated with these codes for the LWR pool reactor benchmark problem. Additionally, the results are compared with some results obtained by the French collision probability transport codes MARSYAS and TRIDENT. Because the transport solution of this benchmark problem is close to its diffusion solution, some results obtained by the finite element diffusion code FINELM and the finite difference diffusion code DIFF-2D are included

  5. Benchmark calculations of power distribution within fuel assemblies. Phase 2: comparison of data reduction and power reconstruction methods in production codes

    International Nuclear Information System (INIS)

    Systems loaded with plutonium in the form of mixed-oxide (MOX) fuel show somewhat different neutronic characteristics compared with those using conventional uranium fuels. In order to maintain adequate safety standards, it is essential to accurately predict the characteristics of MOX-fuelled systems and to further validate both the nuclear data and the computation methods used. A computation benchmark on power distribution within fuel assemblies to compare different techniques used in production codes for fine flux prediction in systems partially loaded with MOX fuel was carried out at an international level. It addressed first the numerical schemes for pin power reconstruction, then investigated the global performance including cross-section data reduction methods. This report provides the detailed results of this second phase of the benchmark. The analysis of the results revealed that basic data still need to be improved, primarily for higher plutonium isotopes and minor actinides. (author)

  6. Benchmark ab Initio Conformational Energies for the Proteinogenic Amino Acids through Explicitly Correlated Methods. Assessment of Density Functional Methods.

    Science.gov (United States)

    Kesharwani, Manoj K; Karton, Amir; Martin, Jan M L

    2016-01-12

    The relative energies of the YMPJ conformer database of the 20 proteinogenic amino acids, with N- and C-termination, have been re-evaluated using explicitly correlated coupled cluster methods. Lower-cost ab initio methods such as MP2-F12 and CCSD-F12b actually are outperformed by double-hybrid DFT functionals; in particular, the DSD-PBEP86-NL double hybrid performs well enough to serve as a secondary standard. Among range-separated hybrids, ωB97X-V performs well, while B3LYP-D3BJ does surprisingly well among traditional DFT functionals. Treatment of dispersion is important for the DFT functionals; for the YMPJ set, D3BJ generally works as well as the NL nonlocal dispersion functional. Basis set sensitivity for DFT calculations on these conformers is weak enough that def2-TZVP is generally adequate. For conformer corrections to heats of formation, B3LYP-D3BJ and especially DSD-PBEP86-D3BJ or DSD-PBEP86-NL are adequate for all but the most exacting applications. The revised geometries and energetics for the YMPJ database have been made available as Supporting Information and should be useful in the parametrization and validation of molecular mechanics force fields and other low-cost methods. The very recent dRPA75 method yields good performance, without resorting to an empirical dispersion correction, but is still outperformed by DSD-PBEP86-D3BJ and particularly DSD-PBEP86-NL. Core-valence corrections are comparable in importance to improvements beyond CCSD(T*)/cc-pVDZ-F12 in the valence treatment. PMID:26653705

  7. Dose conversion factors for radiation doses at normal operation discharges. F. Methods report

    International Nuclear Information System (INIS)

    A study has been performed in order to develop and extend existing models for dose estimation for emissions of radioactive substances from nuclear facilities in Sweden. This report reviews the different exposure pathways that have been considered in the study. The radioecological data to be used in dose calculations are based on the actual situation at the nuclear sites. Dose factors for children have been split into different age groups. The exposure pathways, like the radioecological data, have been carefully re-examined, leading to some new pathways for cesium and strontium (e.g. doses from consumption of forest berries, mushrooms and game). Carbon-14 was given special treatment by using a model for the uptake of carbon by growing plants. For exposure from aquatic emissions, a simplification was made by focusing on the territory of each fish species, since consumption of fish is the most important pathway

  8. Benchmarking and Performance Management

    Directory of Open Access Journals (Sweden)

    Adrian TANTAU

    2010-12-01

    Full Text Available The relevance of the chosen topic is explained by the meaning of the firm efficiency concept: firm efficiency denotes revealed performance (how well the firm performs in its actual market environment) given the basic characteristics of the firm and its market that are expected to drive profitability (firm size, market power, etc.). This complex and relative performance may be due to product innovation, management quality, work organization, or other factors that are not directly observed by the researcher. The critical need for managers to continuously improve their company's efficiency and effectiveness, and to know the success factors and competitiveness determinants, determines what performance measures are most critical to their firm's overall success. Benchmarking, when done properly, can accurately identify both successful companies and the underlying reasons for their success. Innovation and benchmarking of firm-level performance are critical, interdependent activities. Firm-level variables used to infer performance are often interdependent for operational reasons, so managers need to take the dependencies among these variables into account when forecasting and benchmarking performance. This paper studies firm-level performance using financial ratios and other profitability measures, and it uses econometric models to describe and then propose a method to forecast and benchmark performance.

  9. Absorbed dose determination in photon fields using the tandem method

    CERN Document Server

    Marques-Pachas, J F

    1999-01-01

    The purpose of this work is to develop an alternative method to determine the absorbed dose and effective energy of photons with unknown spectral distributions. It includes a 'tandem' system that consists of two thermoluminescent dosemeters with different energy dependence. LiF:Mg,Ti and CaF2:Dy thermoluminescent dosemeters and a Harshaw 3500 reading system are employed. Dosemeters are characterized with 90Sr-90Y, calibrated at the energy of 60Co and irradiated with seven different qualities of x-ray beams, suggested by ANSI No. 13 and ISO 4037. The responses of each type of dosemeter are fitted to a function that depends on the effective photon energy. The fit is carried out by means of the Rosenbrock minimization algorithm. The mathematical model used for this function includes five parameters and comprises a Gaussian plus a straight line. Results show that the analytical functions reproduce the experimental response data, with a margin of error of less than ...

  10. Benchmark test cases for evaluation of computer-based methods for detection of setup errors: realistic digitally reconstructed electronic portal images with known setup errors

    International Nuclear Information System (INIS)

    Purpose/Objective: The potential for on-line error detection using electronic portal images (EPIs) has stimulated the investigation of computer-based methods for matching portal images with reference or 'gold standard' images. The lack of absolute truth for clinical images is a major obstacle to the evaluation of these methods. The purpose of this investigation was to create a set of realistic test EPIs with known setup errors for use as a benchmark for evaluation and intercomparison of computer-based methods, including automatic and user-guided techniques, for EPI analysis. Materials and Methods: Digitally reconstructed electronic portal images (DREPIs) were computed using the visible male CT data set from the National Library of Medicine (NLM). (DREPIs are computed using high energy attenuation coefficients to simulate megavoltage images.) The NLM CT data set comprises 512x512x1 mm contiguous slices from the tip of the head to below the knees. The subject was frozen and scanned very soon after non-traumatizing death, and thus the visualized anatomy closely resembles that of a living person, but without breathing and other motion artifacts. Also since dose was not a consideration the signal-to-noise ratio is higher compared with typical 1 mm slices obtained on a living person. Because of the quality of the CT data, the quality of the DREPIs had to be degraded, and modified in other ways, to create realistic test cases. Modifications included: 1) contrast histogram matching to actual EPIs, 2) addition of structured noise by blending an 'open field' EPI image with the DREPI, 3) addition of random unstructured noise, and 4) Gaussian blurring to simulate patient motion and head scatter effects. (It is important to note that there is no standard appearance or quality for EPIs. The appearance of EPIs is quite variable, especially across EPIDs from different manufacturers. Even for a given system, EPIs are quite sensitive to system calibration and acquisition parameters
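
    The degradation steps described above can be sketched in a few lines of Python (assumed parameters and synthetic images, not the authors' pipeline; the contrast histogram matching step is omitted for brevity):

        import numpy as np
        from scipy.ndimage import gaussian_filter

        rng = np.random.default_rng(0)
        drepi = rng.uniform(0.3, 1.0, size=(256, 256))       # stand-in for a computed DREPI
        open_field = rng.normal(1.0, 0.05, size=(256, 256))  # stand-in open-field EPI

        alpha = 0.8                                          # blending weight (assumed)
        img = alpha * drepi + (1.0 - alpha) * open_field     # add structured noise
        img += rng.normal(0.0, 0.01, size=img.shape)         # random unstructured noise
        img = gaussian_filter(img, sigma=1.5)                # motion / head-scatter blur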

  11. BN-600 hybrid core benchmark analyses. Results from a coordinated research project on updated codes and methods to reduce the calculational uncertainties of the LMFR reactivity effects

    International Nuclear Information System (INIS)

    To those Member States who have or have had significant fast reactor development programmes, it is of the utmost importance to have validated up-to-date codes and methods for fast reactor core physics analysis in support of R and D activities in the area of actinide utilization and incineration. They have recently focused on fast reactor systems for minor actinide transmutation and on cores optimized for consuming rather than breeding plutonium; the physics of the breeder reactor cycle having already been widely investigated. Plutonium burning systems may have an important role in managing plutonium stocks until the time when major programmes of self-sufficient fast breeder reactors are established. For assessing the safety of these systems it is important to determine the prediction accuracy of transient simulations and their associated reactivity coefficients. In response to Member States' expressed interest, the IAEA sponsored a Coordinated Research Project (CRP) on Updated Codes and Methods to Reduce the Calculational Uncertainties of the LMFR Reactivity Effects. This CRP was started in November 1999 and at the first meeting the members of the CRP endorsed a benchmark on the BN-600 hybrid core for consideration in its first studies. Benchmark analyses of the BN-600 hybrid core were performed during the first three phases of the CRP, investigating different nuclear data and levels of approximation in the calculation of safety-related reactivity effects and their influence on uncertainties in transient analysis predictions. In an additional phase of the benchmark studies, experimental data were used for the validation and verification of nuclear data libraries and methods in support of the previous three phases. This report presents the results of the benchmark analyses of the hybrid UOX/MOX fuelled BN-600 reactor core. The aim of this report is to contribute to the reduction in uncertainties associated with reactivity coefficients and their influence on LMFR

  12. Ambient dose assessment around TRACY using deterministic methods

    International Nuclear Information System (INIS)

    Ambient dose was measured in the Transient Experiment Critical Facility (TRACY) supercritical experiments. In the analyses, the DORT, ANISN and MCNP codes were used. Ambient dose equivalents calculated with DORT and ANISN were compared to results calculated with MCNP and were found to be larger than the MCNP values by 7-50%. We attribute this difference to differences in the calculated source distribution inside the fuel solution and to the reflection effect of the walls; examination of this point is necessary in a following study. (author)

  13. The PRISM Benchmark Suite

    OpenAIRE

    Kwiatkowska, Marta; Norman, Gethin; Parker, David

    2012-01-01

    We present the PRISM benchmark suite: a collection of probabilistic models and property specifications, designed to facilitate testing, benchmarking and comparisons of probabilistic verification tools and implementations.

  14. Quantitative benchmark - Production companies

    DEFF Research Database (Denmark)

    Sørensen, Ole H.; Andersen, Vibeke

    Report presenting the results of the quantitative benchmark of the production companies in the VIPS project.

  15. Benchmarking in Student Affairs.

    Science.gov (United States)

    Mosier, Robert E.; Schwarzmueller, Gary J.

    2002-01-01

    Discusses the use of benchmarking in student affairs, focusing on issues related to student housing. Provides examples of how benchmarking has influenced administrative practice at many institutions. (EV)

  16. A BENCHMARK PROGRAM FOR EVALUATION OF METHODS FOR COMPUTING SEISMIC RESPONSE OF COUPLED BUILDING-PIPING/EQUIPMENT WITH NON-CLASSICAL DAMPING

    International Nuclear Information System (INIS)

    Under the auspices of the US Nuclear Regulatory Commission (NRC), Brookhaven National Laboratory (BNL) developed a comprehensive program to evaluate state-of-the-art methods and computer programs for seismic analysis of typical coupled nuclear power plant (NPP) systems with nonclassical damping. In this program, four benchmark models of coupled building-piping/equipment systems with different damping characteristics were analyzed for a suite of earthquakes by program participants applying their uniquely developed methods and computer programs. This paper presents the results of their analyses, and their comparison to the benchmark solutions generated by BNL using time domain direct integration methods. The participants' analysis results, established using complex modal time history methods, showed good agreement with the BNL solutions, while the analyses produced with either complex-mode response spectrum methods or the classical normal-mode response spectrum method generally produced more conservative results when averaged over a suite of earthquakes. However, when coupling due to damping is significant, complex-mode response spectrum methods performed better than the classical normal-mode response spectrum method. Furthermore, as part of the program objectives, a parametric assessment is also presented in this paper, aimed at evaluating the applicability of various analysis methods to problems with different dynamic characteristics unique to coupled NPP systems. It is believed that the findings and insights learned from this program will be useful in developing new acceptance criteria and providing guidance for future regulatory activities involving licensing applications of these alternate methods to coupled systems

  17. Estimation of benchmark dose as the threshold levels of urinary cadmium, based on excretion of total protein, β 2-microglobulin, and N-acetyl-β-D-glucosaminidase in cadmium nonpolluted regions in Japan

    International Nuclear Information System (INIS)

    Previously, we investigated the association between urinary cadmium (Cd) concentration and indicators of renal dysfunction, including total protein, β2-microglobulin (β2-MG), and N-acetyl-β-D-glucosaminidase (NAG). In 2778 inhabitants ≥50 years of age (1114 men, 1664 women) in three different Cd nonpolluted areas in Japan, we showed that a dose-response relationship existed between renal effects and Cd exposure in the general environment without any known Cd pollution. However, we could not estimate the threshold levels of urinary Cd at that time. In the present study, we estimated the threshold levels of urinary Cd as the benchmark dose lower 95% confidence limit (BMDL) using the benchmark dose (BMD) approach. Urinary Cd excretion was divided into 10 categories, and an abnormality rate was calculated for each. Cut-off values for urinary substances were defined as the 84% and 95% upper limit values of the target population who had never smoked. We then calculated the BMD and BMDL using a log-logistic model. The values of BMD and BMDL for all urinary substances could be calculated. The BMDL for the 84% cut-off value of β2-MG, setting an abnormal value at 5%, was 2.4 μg/g creatinine (cr) in men and 3.3 μg/g cr in women. In conclusion, the present study demonstrated that the threshold level of urinary Cd could be estimated in people living in the general environment without any known Cd pollution in Japan, and the value was inferred to be almost the same as that in Belgium, Sweden, and China
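
    To make the BMD/BMDL calculation concrete, the following Python sketch fits a log-logistic model to hypothetical quantal data by maximum likelihood and estimates the BMD for a 5% benchmark response, with a parametric bootstrap standing in for the usual profile-likelihood BMDL; all numbers are illustrative, not the study's data.

        import numpy as np
        from scipy.optimize import minimize

        # hypothetical grouped data: urinary Cd category midpoints (ug/g cr),
        # number examined, and number above the cut-off value
        dose = np.array([0.5, 1.0, 2.0, 3.0, 5.0, 8.0])
        n = np.array([300, 420, 510, 460, 380, 210])
        k = np.array([12, 20, 35, 42, 55, 48])

        def loglogistic(d, g, a, b):
            # P(response) = g + (1 - g) / (1 + exp(-(a + b*ln d)))
            return g + (1.0 - g) / (1.0 + np.exp(-(a + b * np.log(d))))

        def fit_model(k_obs):
            def nll(theta):  # binomial negative log-likelihood
                p = np.clip(loglogistic(dose, *theta), 1e-9, 1.0 - 1e-9)
                return -np.sum(k_obs * np.log(p) + (n - k_obs) * np.log(1.0 - p))
            return minimize(nll, x0=(0.05, -3.0, 1.0),
                            bounds=[(1e-6, 0.5), (-20.0, 20.0), (1e-3, 10.0)]).x

        def bmd_from(params, bmr=0.05):
            g, a, b = params
            # extra risk bmr over background: a + b*ln(BMD) = logit(bmr)
            return np.exp((np.log(bmr / (1.0 - bmr)) - a) / b)

        params = fit_model(k)
        bmd = bmd_from(params)

        # BMDL taken as the 5th percentile of a parametric bootstrap
        rng = np.random.default_rng(0)
        p_hat = loglogistic(dose, *params)
        boot = [bmd_from(fit_model(rng.binomial(n, p_hat))) for _ in range(400)]
        bmdl = np.percentile(boot, 5)
        print(f"BMD = {bmd:.2f}, BMDL = {bmdl:.2f} ug Cd/g creatinine")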

  18. Practical methods of dose reduction to the bladder wall

    International Nuclear Information System (INIS)

    The radiation dose to the bladder wall following the administration of radionuclides to patients can be reduced by a factor between 25 percent and 75 percent when the effective half-life for the radioactivity entering the urine is two hours or less. A significant but smaller reduction in dose to the gonads may also be achieved in situations where the major fraction of the administered activity is rapidly excreted in the urine. This reduction in dose is achieved by ensuring that the patient has between 50 and 150 ml of urine in his bladder when the radioactivity is injected, and is encouraged to void between one and two hours after the activity has been administered. The interrelationship of voiding schedule, effective half-life, initial urine volume, and demand urination has been analyzed in these studies. In addition, the significance of the rate of urine production and volume of urine in the bladder on the radiation dose to the bladder is demonstrated

  19. Method for the determination of neutron dose values

    International Nuclear Information System (INIS)

    The albedo dosimeter is used for measuring the dose equivalent of fast neutrons without space-dependent correction factors and independently of the neutron energy and the direction of beam incidence. It consists of LiF or TLD 600 and 700 detectors arranged in a cup-shaped shell of boron-containing material and separated by plates of the same material. One of the detectors is only for measuring the thermal and intermediate neutrons emitted by the body, while another detector measures the thermal and intermediate neutrons retained in the body. An additional third detector enables the measurement of a dose value for all neutrons except the thermal neutrons. The dose values are correlated in a calculation and corrected by means of correction factors from calibration diagrams, and the exact dose values are determined from this. (DG)

  20. CALIBRATION METHODS OF A CONSTITUTIVE MODEL FOR PARTIALLY SATURATED SOILS: A BENCHMARKING EXERCISE WITHIN THE MUSE NETWORK

    OpenAIRE

    D'Onza, Francesca

    2008-01-01

    The paper presents a benchmarking exercise comparing different procedures, adopted by seven different teams of constitutive modellers, for the determination of parameter values in the Barcelona Basic Model, which is an elasto-plastic model for unsaturated soils. Each team is asked to determine a set of parameter values based on the same laboratory test data. The different set of parameters are then employed to simulate soil behaviour along a variety of stress paths. The results are finally co...

  1. Digital Breast Tomosynthesis: Comparison of Different Methods to Calculate Patient Doses

    International Nuclear Information System (INIS)

    Different methods have been proposed in the literature to calculate the dose to the patient's breast in 3-D mammography. The methods described by Dance et al. and Sechopoulos et al. have been compared in this study using the two tomosynthesis systems available in the authors' hospitals (Siemens and Hologic). There is a small but significant difference of 23% for the first X-ray system and 13% for the second system between dose calculations performed with Dance's method and Sechopoulos' method. These differences are mainly due to the fact that the two sets of authors used different breast models for their Monte Carlo calculations. For each system, the calculated breast doses were compared with the dose values indicated on the system console. Good agreement was found when the method of Dance et al. was used for a breast glandularity based on the patient age. For the Siemens system, the calculated doses were 5% lower than the indicated dose and for the Hologic system, the calculated doses were 12% higher. Finally, the 3-D dose values were compared with the doses found in a large 2-D dosimetry study. The dose values for tomosynthesis on the Siemens system were almost double the doses in one-view 2-D digital mammography. For a typical breast of thickness 45 mm, the dose of one 2-D view was 0.83 mGy and for one 3-D view 1.79 mGy. (author)

  2. Application of cytogenetic methods for estimation of absorbed dose

    International Nuclear Information System (INIS)

    Accumulated data on the practical application of cytogenetic techniques to evaluate the absorbed dose for men involved in eliminating the effects of the Chernobyl NPP accident were analyzed. Those data were compared with the results of cytogenetic studies conducted in other Russian regions affected by radiation (the Muslyumovo settlement in the Chelyabinsk Region; settlements of the Altai Territory near the Semipalatinsk test range) and with the examination results of the population living near the Three Mile Island NPP (Pennsylvania, USA), where a nuclear accident took place in 1979. The cytogenetic studies were carried out using the standard analysis technique, which evaluates the frequency of unstable chromosome aberrations (UA), and using the FISH technique, designed to evaluate the frequency of stable chromosome aberrations. It was pointed out that the UA technique cannot be used efficiently for retrospective evaluation of absorbed doses without a clear understanding of how the nature and rate of aberration elimination correlate with cell lifetime, especially in the case of small doses of irradiation. Analysis of stable translocations using the FISH technique made it possible to evaluate the absorbed dose 8-9 years after the accident. The absorbed doses of the examined persons ranged from background levels up to 1 Gy

  3. Estimation of dose in irradiated chicken bone by ESR method

    International Nuclear Information System (INIS)

    The author studied the conditions needed to routinely estimate the radiation dose in chicken bone by repeated re-irradiation and measurement of ESR signals. Chicken meat containing bone was γ-irradiated at doses of up to 3 kGy, the commercially accepted dose. The results show that the key points in sample preparation and ESR measurement are as follows: both ends of the bone are cut off and the central part of the compact bone is used for the experiment. To obtain an accurate ESR spectrum, the marrow should be scraped out completely. Bone fragments of 1-2 mm particle size and ca. 100 mg mass are recommended to obtain a stable and maximum signal. In practice, the radiation dose is estimated by re-irradiating up to 5 kGy and extrapolating the signal intensity to zero using linear regression analysis. For example, in one experiment, the estimated doses of chicken bones initially irradiated at 3.0 kGy, 1.0 kGy, 0.50 kGy and 0.25 kGy were 3.4 kGy, 1.3 kGy, 0.81 kGy and 0.57 kGy. (author)
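
    The additive re-irradiation method described above amounts to a straight-line fit and extrapolation, as in this short illustrative Python sketch (numbers are made up):

        import numpy as np

        added_dose = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])   # kGy of re-irradiation
        signal = np.array([3.1, 6.4, 9.2, 12.5, 15.3, 18.6])    # ESR intensity, a.u.

        slope, intercept = np.polyfit(added_dose, signal, 1)
        estimated_dose = intercept / slope      # dose at which the line extrapolates to zero
        print(f"estimated initial dose = {estimated_dose:.2f} kGy")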

  4. California commercial building energy benchmarking

    Energy Technology Data Exchange (ETDEWEB)

    Kinney, Satkartar; Piette, Mary Ann

    2003-07-01

    Building energy benchmarking is the comparison of whole-building energy use relative to a set of similar buildings. It provides a useful starting point for individual energy audits and for targeting buildings for energy-saving measures in multiple-site audits. Benchmarking is of interest and practical use to a number of groups. Energy service companies and performance contractors communicate energy savings potential with 'typical' and 'best-practice' benchmarks, while control companies and utilities can provide direct tracking of energy use and combine data from multiple buildings. Benchmarking is also useful in the design stage of a new building or retrofit to determine if a design is relatively efficient. Energy managers and building owners have an ongoing interest in comparing energy performance to others. Large corporations, schools, and government agencies with numerous facilities also use benchmarking methods to compare their buildings to each other. The primary goal of Task 2.1.1 Web-based Benchmarking was the development of a web-based benchmarking tool, dubbed Cal-Arch, for benchmarking energy use in California commercial buildings. While there were several other benchmarking tools available to California consumers prior to the development of Cal-Arch, there were none that were based solely on California data. Most available benchmarking information, including the Energy Star performance rating, was developed using DOE's Commercial Building Energy Consumption Survey (CBECS), which does not provide state-level data. Each database and tool has advantages as well as limitations, such as the number of buildings and the coverage by type, climate regions and end uses. There is considerable commercial interest in benchmarking because it provides an inexpensive method of screening buildings for tune-ups and retrofits. However, private companies who collect and manage consumption data are concerned that the

  5. An energy transfer method for 4D Monte Carlo dose calculation

    OpenAIRE

    Siebers, Jeffrey V; Zhong, Hualiang

    2008-01-01

    This article presents a new method for four-dimensional Monte Carlo dose calculations which properly addresses dose mapping for deforming anatomy. The method, called the energy transfer method (ETM), separates the particle transport and particle scoring geometries: Particle transport takes place in the typical rectilinear coordinate system of the source image, while energy deposition scoring takes place in a desired reference image via use of deformable image registration. Dose is the energy ...

  6. Experience with a new simple method for the determination of doses in computed tomography

    International Nuclear Information System (INIS)

    A previously published method for estimating patient doses in computed tomography which utilizes the concept of a centimeter section dose (CSD) and integral scatter factors (ISF's) has been extended by obtaining the CSD and ISF data from a simple series of phantom measurements. These measurements and the various stages required to arrive at the relevant CSDs and ISF data are discussed. In addition, a series of dose measurements have been performed on patients for a range of examination protocols. These measured doses at various positions within and outside the scanned area are compared with predicted doses obtained using the CSD method

  7. Benchmarking a new closed-form thermal analysis technique against a traditional lumped parameter, finite-difference method

    Energy Technology Data Exchange (ETDEWEB)

    Huff, K. D.; Bauer, T. H. (Nuclear Engineering Division)

    2012-08-20

    A benchmarking effort was conducted to determine the accuracy of a new analytic generic-geology thermal repository model developed at LLNL relative to a more traditional numerical, lumped-parameter technique. The fast-running analytical thermal transport model assumes uniform thermal properties throughout a homogeneous storage medium. Arrays of time-dependent heat sources are included geometrically as arrays of line segments and points. The solver uses a source-based linear superposition of closed-form analytical functions from each contributing point or line to arrive at an estimate of the thermal evolution of a generic geologic repository. Temperature rise throughout the storage medium is computed as a linear superposition of temperature rises. It is modeled using the MathCAD mathematical engine and is parameterized to allow myriad gridded repository geometries and geologic characteristics [4]. It was anticipated that the temperature field calculated with the LLNL analytical model would provide an accurate 'birds-eye' view in regions that are many tunnel radii away from actual storage units, i.e., at distances where tunnels and individual storage units could realistically be approximated as physical lines or points. However, geometrically explicit storage units, waste packages, tunnel walls and close-in rock are not included in the MathCAD model. The present benchmarking effort therefore focuses on the ability of the analytical model to accurately represent the close-in temperature field. Specifically, close-in temperatures computed with the LLNL MathCAD model were benchmarked against temperatures computed using a geometrically explicit lumped-parameter repository thermal modeling technique developed over several years at ANL using the SINDAG thermal modeling code [5]. Application of this numerical modeling technique to underground storage of heat-generating nuclear waste streams within the proposed YMR site has been widely
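
    A minimal sketch of the superposition idea (not the LLNL MathCAD model itself; rock properties and source strengths are assumed): the temperature rise from each constant-rate point source in an infinite homogeneous medium has a closed form, and contributions from all sources simply add.

```python
import numpy as np
from scipy.special import erfc

k = 2.5       # thermal conductivity, W/(m K) -- assumed rock property
alpha = 1e-6  # thermal diffusivity, m^2/s    -- assumed rock property

def dT_point(q, r, t):
    """Temperature rise from a constant point source of strength q (W),
    switched on at t=0, in an infinite homogeneous medium."""
    return q / (4.0 * np.pi * k * r) * erfc(r / (2.0 * np.sqrt(alpha * t)))

# Linear superposition over an array of waste-package point sources.
sources = [(500.0, (x, 0.0)) for x in (0.0, 10.0, 20.0)]  # (W, position in m)
obs = np.array([5.0, 15.0])                               # observation point (m)
t = 10.0 * 365.25 * 24 * 3600                             # 10 years in seconds

rise = sum(dT_point(q, np.linalg.norm(obs - np.array(p)), t) for q, p in sources)
print(f"Temperature rise after 10 y: {rise:.2f} K")
```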

  8. A new finite cloud method for calculating external exposure dose in a nuclear emergency

    International Nuclear Information System (INIS)

    A new finite cloud method (the 5/μ method) for calculating external exposure dose in a nuclear emergency is presented in this paper. The method calculates external exposure dose over a specially constructed three-dimensional cylindrical space, whose base is centered at the receptor location and whose base radius and height are both five times the mean free path of a gamma photon. The space is then divided into many grid cells over which the integral for the external exposure dose (or dose rate) is evaluated. The air external exposure dose rate conversion factors and air-absorbed dose rate conversion factors calculated by the 5/μ method agree with the values presented in related references. Compared with the discrete point approximation (DPA) method [USNRC, The MESORAD Dose Assessment Model. NUREG/CR-4000 Vol. 1, 1986] and the Nomogram method [USNRC, Nomogram for Evaluation of Doses from Finite Noble Gas Clouds, NUREG-0851, 1983], the two traditional finite cloud methods for calculating external exposure dose, the 5/μ method has the distinct advantage of much faster calculation, which is very important in a nuclear emergency. Moreover, the 5/μ method can be applied together with three-dimensional atmospheric dispersion models
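
    A schematic sketch of the kind of grid-cell integration the 5/μ method describes, under strong simplifications (uniform cloud, single photon energy, no buildup factor; the real method presumably refines all three):

```python
import numpy as np

mu = 0.01         # linear attenuation coefficient in air (1/m), assumed energy
R = H = 5.0 / mu  # cylinder radius and height: five mean free paths

# Crude cylindrical grid; the receptor sits at the center of the cylinder base.
nr, nphi, nz = 40, 36, 40
r = (np.arange(nr) + 0.5) * R / nr
phi = (np.arange(nphi) + 0.5) * 2 * np.pi / nphi
z = (np.arange(nz) + 0.5) * H / nz
rr, pp, zz = np.meshgrid(r, phi, z, indexing="ij")
dV = rr * (R / nr) * (2 * np.pi / nphi) * (H / nz)  # grid-cell volumes (m^3)

chi = 1.0                    # assumed uniform activity concentration (Bq/m^3)
d = np.sqrt(rr**2 + zz**2)   # distance from each cell to the receptor

# Point-kernel sum: chi * exp(-mu d) / (4 pi d^2) per cell, times cell volume.
flux = np.sum(chi * np.exp(-mu * d) / (4 * np.pi * d**2) * dV)
print(f"Uncollided photon fluence rate at receptor: {flux:.3e} 1/(m^2 s)")
```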

  9. A new finite cloud method for calculating external exposure dose in a nuclear emergency

    Energy Technology Data Exchange (ETDEWEB)

    Wang, X.Y.; Ling, Y.S. E-mail: lingyongsheng00@mails.tsinghua.edu.cn; Shi, Z.Q

    2004-06-01

    A new finite cloud method (the 5/μ method) for calculating external exposure dose in a nuclear emergency is presented in this paper. The method calculates external exposure dose over a specially constructed three-dimensional cylindrical space, whose base is centered at the receptor location and whose base radius and height are both five times the mean free path of a gamma photon. The space is then divided into many grid cells over which the integral for the external exposure dose (or dose rate) is evaluated. The air external exposure dose rate conversion factors and air-absorbed dose rate conversion factors calculated by the 5/μ method agree with the values presented in related references. Compared with the discrete point approximation (DPA) method [USNRC, The MESORAD Dose Assessment Model. NUREG/CR-4000 Vol. 1, 1986] and the Nomogram method [USNRC, Nomogram for Evaluation of Doses from Finite Noble Gas Clouds, NUREG-0851, 1983], the two traditional finite cloud methods for calculating external exposure dose, the 5/μ method has the distinct advantage of much faster calculation, which is very important in a nuclear emergency. Moreover, the 5/μ method can be applied together with three-dimensional atmospheric dispersion models.

  10. A method of transferring G.T.S. benchmark value to survey area using electronic total station

    Digital Repository Service at National Institute of Oceanography (India)

    Ganesan, P.

    be used for obtaining geographical co-ordinates and elevation of every point sighted with respect to G.T.S. benchmark “A”. Electronic Total Station uses both an invisible Pulse Laser Diode for distance measurement and a visible Red Laser Beam as a... laser pointer to identify the measurement point at the center of the cross hair lines of the telescope. The instrument has a keyboard containing 24 keys, which makes it easier and quicker to key in codes and other alphanumeric characters. The output...

  11. Dose determination in irradiated chicken meat by ESR method

    International Nuclear Information System (INIS)

    In this work, the properties of the radicals produced in chicken bones have been investigated by the ESR technique to determine the dose applied to chicken meat during food irradiation. To this end, drumsticks from 6-8 week old chickens purchased from a local market were irradiated at dose levels of 0, 2, 4, 6, 8 and 10 kGy. The ESR spectra of powder samples prepared from the bones of the drumsticks were then investigated. Unirradiated chicken bones were observed to show a weak single-line ESR signal. CO2- ionic radicals of axial symmetry with g=1.9973 and g=2.0025 were observed to be produced in irradiated samples, giving rise to a three-peak ESR spectrum. In addition, the signal intensities of the samples were found to depend linearly on the irradiation dose in the dose range of 0-10 kGy. Powder samples prepared from chicken leg bones cleaned of meat and marrow and irradiated at dose levels of 1, 2, 3, 4, 5, 6, 8, 10, 12, 14, 16, 18, 20 and 22 kGy were used to obtain the dose-response curve. This curve was found to have a biphasic character: the dose yield was higher in the 12-18 kGy range, and a decrease appears in the curve above 18 kGy. The radicals produced in the bones were found to be the same whether the irradiation was performed before or after stripping the meat and removing the marrow from the bone. The ESR spectra of both irradiated and non-irradiated samples were investigated in the temperature range of 100 K-450 K and changes in the ESR spectrum of the CO2- radical were studied. For non-irradiated samples (controls), the signal intensities were found to decrease when the temperature was increased. The same investigation was carried out for irradiated samples, and it was concluded that the signal intensities of the peaks of the radical spectrum increase in the temperature range of 100 K-330 K and then decrease above 330 K. The change in the

  12. Introduction of a new method of reporting UV dose

    International Nuclear Information System (INIS)

    Solar ultraviolet radiation (UVR) causes health effects that can be both negative (acute: sunburn; chronic: skin cancers and cataracts) and positive (an important contributor to the production of Vitamin D by the human body). The World Health Organisation recommends the UV Index as a tool to raise public awareness about exposure to solar UVR and the need to adopt protective measures. The UV Index shows instantaneous levels of UVR and can be reported as a forecast, or as measured, in real time and historically. The Cancer Council has supported this message with the UV Alert, a collaboration with the Bureau of Meteorology, which issues UV Index forecasts, and the Australian Radiation Protection and Nuclear Safety Agency (ARPANSA), which provides real-time actual UV Index data for major Australian population centres. One problem with reporting the UV Index has been trying to balance the messages of sun safety with the need for Australians to get some UV exposure to promote Vitamin D production. For people trying to achieve this balance without getting burnt, dose information would be helpful. A weakness of the UV Index is that it does not indicate dose, which is related to both the UVR level and the duration of exposure. It is the accumulated dose that is relevant for health effects. To enable decisions on when and for how long people can be outside without protection, ARPANSA is developing a webpage based on hourly actual and forecast models of Standard Erythemal Dose (SED). Two SEDs of UV exposure are sufficient to cause a fair-skinned person to burn. For example, at eight SED units per hour, safe skin exposure is limited to less than 15 minutes; any longer and the usual sun safety measures apply. SEDs enable people to determine how much exposure they could receive, or have actually received. One issue is developing a webpage that can be easily interpreted by the public. It is easy to provide too much information and make a display overly complex; therefore, suitable displays
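
    The abstract's worked example is simple enough to encode directly: with a burn threshold of 2 SED for fair skin, the safe unprotected time is the threshold divided by the SED rate.

```python
BURN_THRESHOLD_SED = 2.0  # SEDs sufficient to burn fair skin (from the abstract)

def safe_minutes(sed_per_hour: float) -> float:
    """Unprotected exposure time before reaching the burn threshold."""
    return 60.0 * BURN_THRESHOLD_SED / sed_per_hour

print(safe_minutes(8.0))  # 15.0 minutes at eight SED units per hour
```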

  13. SU-E-T-280: Reconstructed Rectal Wall Dose Map-Based Verification of Rectal Dose Sparing Effect According to Rectum Definition Methods and Dose Perturbation by Air Cavity in Endo-Rectal Balloon

    International Nuclear Information System (INIS)

    Purpose: The dosimetric effects of, and discrepancies between, rectum definition methods, together with the dose perturbation caused by the air cavity in an endo-rectal balloon (ERB), were verified using rectal-wall (Rwall) dose maps, considering systematic errors in dose optimization and dose calculation accuracy in intensity-modulated radiation treatment (IMRT) for prostate cancer patients. Methods: For patients with an inflated ERB of average diameter 4.5 cm and air volume 100 cc, Rwall doses were predicted by the pencil-beam convolution (PBC), anisotropic analytic algorithm (AAA), and AcurosXB (AXB) with material assignment function. The errors in dose optimization and calculation introduced by separating the air cavity from the whole rectum (Rwhole) were verified against measured rectal doses. The Rwall doses affected by the dose perturbation of the air cavity were evaluated using a featured rectal phantom allowing the insertion of rolled-up Gafchromic films and glass rod detectors placed along the rectum perimeter. Inner and outer Rwall doses were verified with reconstructed predicted rectal wall dose maps. Dose errors and their extent at different dose levels were evaluated with estimated rectal toxicity. Results: While AXB showed no significant difference in target dose coverage, Rwall doses were underestimated by up to 20% when dose optimization was performed for the Rwhole rather than the Rwall, at all dose levels except the maximum dose. When dose optimization was applied to the Rwall, the Rwall doses showed errors of less than 3% between dose calculation algorithms, except for an overestimation of the maximum rectal dose of up to 5% by PBC. Dose optimization for the Rwhole caused dose differences in the Rwall, especially at intermediate doses. Conclusion: Dose optimization for the Rwall can be suggested for more accurate prediction of rectal wall dose and of the dose perturbation effect of the air cavity in IMRT for prostate cancer. This research was supported by the Leading Foreign Research Institute Recruitment Program through the National Research Foundation of Korea

  14. Benchmark study on fine-mode aerosol in a big urban area and relevant doses deposited in the human respiratory tract.

    Science.gov (United States)

    Avino, Pasquale; Protano, Carmela; Vitali, Matteo; Manigrasso, Maurizio

    2016-09-01

    It is well known that the health effects of PM increase as particle size decreases; in particular, great concern has arisen over the role of UltraFine Particles (UFPs). Starting from the knowledge that the main fraction of atmospheric aerosol in Rome is characterized by significant levels of PM2.5 (almost 75% of the PM10 fraction is PM2.5), the paper focuses on submicron particles in this large urban area. The daytime/nighttime, weekday/weekend and cold/hot seasonal trends of submicron particles are investigated and discussed along with NOx and total PAH drifts, demonstrating the primary origin of UFPs from combustion processes. Furthermore, moving from these data, the total dose of submicron particles deposited in the respiratory system (i.e., head, tracheobronchial and alveolar regions in different lung lobes) has been estimated. Dosimetry estimates were performed with the Multiple-Path Particle Dosimetry model (MPPD v.2.1). The paper discusses the aerosol doses deposited in the respiratory system of individuals exposed in proximity to traffic. During traffic peak hours, about 6.6 × 10^10 particles are deposited into the respiratory system. This dose is almost entirely made up of UFPs. According to the greater dose estimated, the right lung lobes are expected to be more susceptible to respiratory pathologies than the left lobes. PMID:27325547

  15. METHODS AND HARDWARE OF DOSE OUTPUT VERIFICATION FOR DYNAMIC RADIOTHERAPY

    OpenAIRE

    Y. V. Tsitovich; A. I. Hmyrak; A. I. Tarutin; M. G. Kiselev

    2013-01-01

    The design of a special verification phantom for checking dynamic radiotherapy is described. This phantom permits cross-calibration of the dose distribution before each day's patient irradiation on a linac with RapidArc. The cross-calibration factor is defined by approximating a large number of correction factors measured in the phantom at different gantry rotation angles and calculating their mean. The long-term stability of all correction factors was evaluated during checking of se...

  16. Recommended environmental dose calculation methods and Hanford-specific parameters

    International Nuclear Information System (INIS)

    This document was developed to support the Hanford Environmental Dose Overview Panel (HEDOP). The Panel is responsible for reviewing all assessments of potential doses received by humans and other biota resulting from the actual or possible environmental releases of radioactive and other hazardous materials from facilities and/or operations belonging to the US Department of Energy on the Hanford Site in south-central Washington. This document serves as a guide to be used for developing estimates of potential radiation doses, or other measures of risk or health impacts, to people and other biota in the environs on and around the Hanford Site. It provides information to develop technically sound estimates of exposure (i.e., potential or actual) to humans or other biotic receptors that could result from the environmental transport of potentially harmful materials that have been, or could be, released from Hanford operations or facilities. Parameter values and information that are specific to the Hanford environs as well as other supporting material are included in this document

  17. Recommended environmental dose calculation methods and Hanford-specific parameters

    Energy Technology Data Exchange (ETDEWEB)

    Schreckhise, R.G.; Rhoads, K.; Napier, B.A.; Ramsdell, J.V. (Pacific Northwest Lab., Richland, WA (United States)); Davis, J.S. (Westinghouse Hanford Co., Richland, WA (United States))

    1993-03-01

    This document was developed to support the Hanford Environmental Dose Overview Panel (HEDOP). The Panel is responsible for reviewing all assessments of potential doses received by humans and other biota resulting from the actual or possible environmental releases of radioactive and other hazardous materials from facilities and/or operations belonging to the US Department of Energy on the Hanford Site in south-central Washington. This document serves as a guide to be used for developing estimates of potential radiation doses, or other measures of risk or health impacts, to people and other biota in the environs on and around the Hanford Site. It provides information to develop technically sound estimates of exposure (i.e., potential or actual) to humans or other biotic receptors that could result from the environmental transport of potentially harmful materials that have been, or could be, released from Hanford operations or facilities. Parameter values and information that are specific to the Hanford environs as well as other supporting material are included in this document.

  18. The effects of anatomic resolution, respiratory variations and dose calculation methods on lung dosimetry

    Science.gov (United States)

    Babcock, Kerry Kent Ronald

    2009-04-01

    The goal of this thesis was to explore the effects of dose resolution, respiratory variation and dose calculation method on dose accuracy. To achieve this, two models of lung were created. The first model, called TISSUE, approximated the connective alveolar tissues of the lung. The second model, called BRANCH, approximated the lung's bronchial, arterial and venous branching networks. Both models were varied to represent the full-inhalation, full-exhalation and mid-breath phases of the respiration cycle. To explore the effects of dose resolution and respiratory variation on dose accuracy, each model was converted into a CT dataset and imported into a Monte Carlo simulation. The resulting dose distributions were compared and contrasted against dose distributions from Monte Carlo simulations that included the explicit model geometries. It was concluded that, regardless of respiratory phase, the exclusion of the connective tissue structures in the CT representation did not significantly affect the accuracy of dose calculations. However, the exclusion of the BRANCH structures resulted in dose underestimations as high as 14% local to the branching structures. As lung density decreased, the overall dose accuracy marginally decreased. To explore the effects of dose calculation method on dose accuracy, CT representations of the lung models were imported into the Pinnacle 3 treatment planning system. Dose distributions were calculated using the collapsed cone convolution (CCC) method and compared to those derived using the Monte Carlo method. For both lung models, it was concluded that the accuracy of the collapsed cone algorithm decreased with decreasing density. At full-inhalation lung density, the collapsed cone algorithm underestimated dose by as much as 15%. Also, the accuracy of the CCC method decreased with decreasing field size. Further work is needed to determine the source of the discrepancy.

  19. Computational Chemistry Comparison and Benchmark Database

    Science.gov (United States)

    SRD 101 NIST Computational Chemistry Comparison and Benchmark Database (Web, free access)   The NIST Computational Chemistry Comparison and Benchmark Database is a collection of experimental and ab initio thermochemical properties for a selected set of molecules. The goals are to provide a benchmark set of molecules for the evaluation of ab initio computational methods and allow the comparison between different ab initio computational methods for the prediction of thermochemical properties.

  20. Different intensity extension methods and their impact on entrance dose in breast radiotherapy: A study

    Directory of Open Access Journals (Sweden)

    Sankar A

    2009-01-01

    In breast radiotherapy, skin flashing of treatment fields is important to account for intrafraction movements and setup errors. This study compares two different intensity extension methods, namely the Virtual Bolus method and the skin flash tool method, for providing skin flashing in intensity-modulated treatment fields. The impact of these two intensity extension methods on skin dose was studied by measuring the entrance dose of the treatment fields using semiconductor diode detectors. We found no significant difference in entrance dose between the two intensity extension methods. However, in the skin flash tool method, selection of appropriate parameters is important to obtain optimum fluence extension.

  1. Two new computational methods for universal DNA barcoding: a benchmark using barcode sequences of bacteria, archaea, animals, fungi, and land plants.

    Science.gov (United States)

    Tanabe, Akifumi S; Toju, Hirokazu

    2013-01-01

    Taxonomic identification of biological specimens based on DNA sequence information (a.k.a. DNA barcoding) is becoming increasingly common in biodiversity science. Although several methods have been proposed, many of them are not universally applicable due to the need for prerequisite phylogenetic/machine-learning analyses, the need for huge computational resources, or the lack of a firm theoretical background. Here, we propose two new computational methods of DNA barcoding and show a benchmark for bacterial/archeal 16S, animal COX1, fungal internal transcribed spacer, and three plant chloroplast (rbcL, matK, and trnH-psbA) barcode loci that can be used to compare the performance of existing and new methods. The benchmark was performed under two alternative situations: query sequences were available in the corresponding reference sequence databases in one, but were not available in the other. In the former situation, the commonly used "1-nearest-neighbor" (1-NN) method, which assigns the taxonomic information of the most similar sequences in a reference database (i.e., BLAST-top-hit reference sequence) to a query, displays the highest rate and highest precision of successful taxonomic identification. However, in the latter situation, the 1-NN method produced extremely high rates of misidentification for all the barcode loci examined. In contrast, one of our new methods, the query-centric auto-k-nearest-neighbor (QCauto) method, consistently produced low rates of misidentification for all the loci examined in both situations. These results indicate that the 1-NN method is most suitable if the reference sequences of all potentially observable species are available in databases; otherwise, the QCauto method returns the most reliable identification results. The benchmark results also indicated that the taxon coverage of reference sequences is far from complete for genus or species level identification in all the barcode loci examined. Therefore, we need to accelerate
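
    A toy sketch of the 1-NN assignment rule the benchmark evaluates (using naive per-position similarity as a stand-in for a real BLAST top-hit search; the sequences and taxa are invented):

```python
# Toy 1-nearest-neighbor taxonomic assignment (stand-in for a BLAST top hit).
reference = {
    "ACGTACGTAC": "Escherichia coli",
    "ACGTTCGTAC": "Salmonella enterica",
    "TTGTACGAAC": "Bacillus subtilis",
}

def similarity(a: str, b: str) -> float:
    """Fraction of matching positions (sequences assumed aligned, equal length)."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def one_nn(query: str) -> str:
    """Return the reference sequence most similar to the query."""
    return max(reference, key=lambda ref: similarity(query, ref))

best = one_nn("ACGTACGAAC")
print(reference[best])  # taxon of the nearest neighbor
```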

  2. Two new computational methods for universal DNA barcoding: a benchmark using barcode sequences of bacteria, archaea, animals, fungi, and land plants.

    Directory of Open Access Journals (Sweden)

    Akifumi S Tanabe

    Taxonomic identification of biological specimens based on DNA sequence information (a.k.a. DNA barcoding) is becoming increasingly common in biodiversity science. Although several methods have been proposed, many of them are not universally applicable due to the need for prerequisite phylogenetic/machine-learning analyses, the need for huge computational resources, or the lack of a firm theoretical background. Here, we propose two new computational methods of DNA barcoding and show a benchmark for bacterial/archeal 16S, animal COX1, fungal internal transcribed spacer, and three plant chloroplast (rbcL, matK, and trnH-psbA) barcode loci that can be used to compare the performance of existing and new methods. The benchmark was performed under two alternative situations: query sequences were available in the corresponding reference sequence databases in one, but were not available in the other. In the former situation, the commonly used "1-nearest-neighbor" (1-NN) method, which assigns the taxonomic information of the most similar sequences in a reference database (i.e., BLAST-top-hit reference sequence) to a query, displays the highest rate and highest precision of successful taxonomic identification. However, in the latter situation, the 1-NN method produced extremely high rates of misidentification for all the barcode loci examined. In contrast, one of our new methods, the query-centric auto-k-nearest-neighbor (QCauto) method, consistently produced low rates of misidentification for all the loci examined in both situations. These results indicate that the 1-NN method is most suitable if the reference sequences of all potentially observable species are available in databases; otherwise, the QCauto method returns the most reliable identification results. The benchmark results also indicated that the taxon coverage of reference sequences is far from complete for genus or species level identification in all the barcode loci examined. Therefore, we need

  3. Two New Computational Methods for Universal DNA Barcoding: A Benchmark Using Barcode Sequences of Bacteria, Archaea, Animals, Fungi, and Land Plants

    Science.gov (United States)

    Tanabe, Akifumi S.; Toju, Hirokazu

    2013-01-01

    Taxonomic identification of biological specimens based on DNA sequence information (a.k.a. DNA barcoding) is becoming increasingly common in biodiversity science. Although several methods have been proposed, many of them are not universally applicable due to the need for prerequisite phylogenetic/machine-learning analyses, the need for huge computational resources, or the lack of a firm theoretical background. Here, we propose two new computational methods of DNA barcoding and show a benchmark for bacterial/archeal 16S, animal COX1, fungal internal transcribed spacer, and three plant chloroplast (rbcL, matK, and trnH-psbA) barcode loci that can be used to compare the performance of existing and new methods. The benchmark was performed under two alternative situations: query sequences were available in the corresponding reference sequence databases in one, but were not available in the other. In the former situation, the commonly used “1-nearest-neighbor” (1-NN) method, which assigns the taxonomic information of the most similar sequences in a reference database (i.e., BLAST-top-hit reference sequence) to a query, displays the highest rate and highest precision of successful taxonomic identification. However, in the latter situation, the 1-NN method produced extremely high rates of misidentification for all the barcode loci examined. In contrast, one of our new methods, the query-centric auto-k-nearest-neighbor (QCauto) method, consistently produced low rates of misidentification for all the loci examined in both situations. These results indicate that the 1-NN method is most suitable if the reference sequences of all potentially observable species are available in databases; otherwise, the QCauto method returns the most reliable identification results. The benchmark results also indicated that the taxon coverage of reference sequences is far from complete for genus or species level identification in all the barcode loci examined. Therefore, we need to

  4. Solution of a stylized European Pressurized Reactor (EPR) benchmark problem using the coarse mesh radiation transport method (COMET)

    International Nuclear Information System (INIS)

    In this paper, as additional verification of its accuracy and efficiency, the coarse mesh radiation transport code COMET is compared to Monte Carlo solutions in a stylized benchmark problem based on the European Pressurized Reactor (EPR). The core specifications were taken directly from the Final Safety Analysis Report (FSAR) submitted to the Nuclear Regulatory Commission (NRC) and the reactor was modeled in a stylized manner while maintaining full heterogeneity at the pin and assembly level. Detailed results including assembly eigenvalues, core eigenvalues, and pin fission densities using a 2-group cross section library are presented. COMET results are in excellent agreement with MCNP with eigenvalue relative differences on the order of 10 pcm and average pin fission density relative differences on the order of 1-5%. Some maximum errors were on the order of 10% due to poor statistics on the periphery of the core in the reference results. (author)

  5. Patient and staff dose optimisation in nuclear medicine diagnosis methods

    International Nuclear Information System (INIS)

    , control of detector uniformity. The test for the rotating gamma camera additionally demands controlling the precision of rotation and the image system resolution. The radioisotope and chemical purity of the radiopharmaceuticals are controlled, too. The efficiency of 99mTc elution from the 99Mo generator is tested and the content of the 99Mo radioisotope in the eluate is measured. Radioisotope diagnosis of brain, heart, thyroid, stomach, liver, kidney and bones as well as lymphoscintigraphy are performed. The procedure used for patient and staff dose optimisation consists of: (1) a control dose measurement performed with a dosemeter on a tissue-like phantom containing the selected radiopharmaceutical at the same activity as will be administered to the patient, (2) calculation of the patient dose rate, (3) calculation of the staff dose based on the results of personnel dosemeters (films or TLDs), (4) preparation of the Quality Assurance instructions for the staff responsible for patient safety. Independently of the patient and staff dose optimisation, the Quality Control of gamma camera equipment, e.g. the SPECT X-Ring Nucline (MEDISO), is checked for uniformity of the image from a radiopharmaceutical sample and for the center of rotation according to the producer's manual. In addition, special lectures and courses for staff are organized several times per year to ensure Continuous Professional Development (CPD) in the field of Quality Assurance and Quality Control.

  6. Modeling of tube current modulation methods in computed tomography dose calculations for adult and pregnant patients

    International Nuclear Information System (INIS)

    The comparatively high dose and increasing frequency of computed tomography (CT) examinations have spurred the development of techniques for reducing radiation dose to imaging patients. Among these is the application of tube current modulation (TCM), which can be applied longitudinally along the body, rotationally around the body, or both. Existing computational models for calculating dose from CT examinations do not include TCM techniques. Dose calculations using Monte Carlo methods have previously been prepared for constant-current rotational exposures at various positions along the body and for the principal exposure projections for several sets of computational phantoms, including adult male and female and pregnant patients. Dose calculations from CT scans with TCM are prepared by appropriately weighting the existing dose data. Longitudinal TCM doses can be obtained by weighting the dose at each z-axis scan position by the relative tube current at that position. Rotational TCM doses are weighted using the relative organ doses from the principal projections as a function of the current at the rotational angle. Significant dose reductions of 15% to 25% to fetal tissues are found from simulations of longitudinal TCM schemes for pregnant patients of different gestational ages. Weighting factors for each organ in rotational TCM schemes applied to adult male and female patients have also been found. As the application of TCM techniques becomes more prevalent, the need for including TCM in CT dose estimates will necessarily increase. (author)
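
    A minimal sketch of the longitudinal weighting scheme described above (the organ dose data and tube current profile are invented, not the authors' phantom data): the constant-current dose contribution at each z position is scaled by the relative tube current there.

```python
import numpy as np

# Hypothetical per-slice organ dose (mGy) from constant-current reference data.
organ_dose_per_slice = np.array([0.8, 1.5, 2.3, 2.0, 1.1])

# Hypothetical longitudinal TCM profile: tube current (mA) at each slice position.
tube_current = np.array([180.0, 220.0, 260.0, 240.0, 190.0])
reference_current = 250.0  # current at which the constant-current data were scored

# Weight each slice's contribution by its relative tube current and sum.
organ_dose = np.sum(organ_dose_per_slice * tube_current / reference_current)
print(f"Organ dose with longitudinal TCM: {organ_dose:.2f} mGy")
```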

  7. Design study on dose evaluation method for employees at severe accident

    International Nuclear Information System (INIS)

    When a severe accident in a nuclear power plant is assumed, rescue activities in the plant, accident management, repair of failed components and dose evaluation for employees all require a radiation dose rate distribution or map of the plant and estimated dose values for this work. However, it can be difficult to obtain these accurately as the accident progresses, because radiation monitors are not always installed in the areas where accident management is planned or where repair work on safety-related equipment is foreseen. In this work, we analyzed the diffusion of radioactive materials in the case of a severe accident in a pressurized water reactor plant, investigated a method to obtain the radiation dose rate in the plant from estimated radioactive sources, built a prototype analysis system by modeling a specific part of the components and buildings in the plant for this design study on a dose evaluation method for employees in a severe accident, and then evaluated its availability. As a result, we obtained the following: (1) A new dose evaluation method was established to predict the radiation dose rate at any point in the plant during a severe accident scenario. (2) Evaluation of the total dose, including the route and time of movement for accident management and repair work, is useful for estimating the radiation dose limits for these actions by employees. (3) The radiation dose rate map is effective for identifying high radiation areas and for choosing a route with a lower radiation dose rate. (author)

  8. Code intercomparison and benchmark for muon fluence and absorbed dose induced by an 18-GeV electron beam after massive iron shielding

    CERN Document Server

    Fasso, Alberto; Ferrari, Anna; Mokhov, Nikolai V; Mueller, Stefan E; Nelson, Walter Ralph; Roesler, Stefan; Sanami, Toshiya; Striganov, Sergei I; Versaci, Roberto

    2015-01-01

    In 1974, Nelson, Kase, and Svenson published an experimental investigation on muon shielding using the SLAC high energy LINAC. They measured muon fluence and absorbed dose induced by an 18 GeV electron beam hitting a copper/water beam dump and attenuated in thick steel shielding. In their paper, they compared the results with the theoretical models available at the time. In order to compare their experimental results with present model calculations, we use the modern transport Monte Carlo codes MARS15, FLUKA2011 and GEANT4 to model the experimental setup and run simulations. The results will then be compared between the codes, and with the SLAC data.

  9. Detection system built from commercial integrated circuits for real-time measurement of radiation dose and quality using the variance method

    International Nuclear Information System (INIS)

    A small, specialised amplifier using commercial integrated circuits (ICs) was developed to measure radiation dose and quality in real time using a microdosimetric ion chamber and the variance method. The charges from a microdosimetric ion chamber, operated in current mode, were repeatedly collected for a fixed period of time over 20 cycles of 100 integrations, and processed by this specialised amplifier to produce signal pulse heights between 0 and 10 V. These signals were recorded by a multi-channel analyser coupled to a computer. FORTRAN programs were written to calculate the dose and dose variance. The dose variance produced in the ion chamber is a microdosimetric measure of radiation quality. Benchmark measurements of different brands of ICs were conducted. The results demonstrate that this specialised amplifier is capable of distinguishing differences in radiation quality in various high-dose-rate radiation fields, including X rays, gamma rays and mixed neutron-gamma radiation from the research reactor at Texas A and M Univ. (authors)
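
    A schematic sketch of the variance method the abstract relies on (the charge samples are simulated and the calibration constant is a placeholder, not values from the paper): the mean of repeated charge collections tracks the dose, while the relative variance of the collections carries the microdosimetric quality information.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated charge integrations (C) over a fixed period: 20 cycles x 100 samples.
samples = rng.normal(loc=5.0e-12, scale=0.4e-12, size=(20, 100))

mean_q = samples.mean()      # proportional to dose per integration
var_q = samples.var(ddof=1)  # event-size fluctuations
rel_var = var_q / mean_q**2  # relative variance, a radiation-quality index

# With a calibration factor (placeholder), mean charge maps to dose.
gy_per_coulomb = 3.0e7
print(f"Mean dose per integration: {mean_q * gy_per_coulomb:.3e} Gy")
print(f"Relative variance (quality index): {rel_var:.3e}")
```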

  10. Benchmarking of hospital information systems - a comparative analysis of benchmarking clusters in German-speaking countries

    Directory of Open Access Journals (Sweden)

    Jahn, Franziska

    2015-08-01

    Benchmarking is a method of strategic information management used by many hospitals today. During the last years, several benchmarking clusters have been established within the German-speaking countries. They support hospitals in comparing and positioning their information system's and information management's costs, performance and efficiency against other hospitals. In order to differentiate between these benchmarking clusters and to provide decision support in selecting an appropriate benchmarking cluster, a classification scheme is developed. The classification scheme covers both the general conditions and the examined contents of the benchmarking clusters. It is applied to seven benchmarking clusters which have been active in the German-speaking countries within the last years. Currently, performance benchmarking is the most frequent benchmarking type, whereas the observed benchmarking clusters differ in the number of benchmarking partners and their forms of cooperation. The benchmarking clusters also deal with different benchmarking subjects. Assessing the costs and quality of application systems, physical data processing systems, organizational structures of information management, and IT service processes are the most frequent benchmarking subjects. There is still potential for further activities within the benchmarking clusters to measure strategic and tactical information management, IT governance and the quality of data and data-processing processes. Based on the classification scheme and the comparison of the benchmarking clusters, we derive general recommendations for benchmarking of hospital information systems.

  11. Aquatic Life Benchmarks

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Aquatic Life Benchmarks is an EPA-developed set of criteria for freshwater species. These benchmarks are based on toxicity values reviewed by EPA and used in...

  12. Benchmark test cases for evaluation of computer-based methods for detection of setup errors: realistic digitally reconstructed electronic portal images with known setup errors

    International Nuclear Information System (INIS)

    Purpose: The purpose of this investigation was to develop methods and software for computing realistic digitally reconstructed electronic portal images with known setup errors for use as benchmark test cases for evaluation and intercomparison of computer-based methods for image matching and detecting setup errors in electronic portal images. Methods and Materials: An existing software tool for computing digitally reconstructed radiographs was modified to compute simulated megavoltage images. An interface was added to allow the user to specify which setup parameter(s) will contain computer-induced random and systematic errors in a reference beam created during virtual simulation. Other software features include options for adding random and structured noise, Gaussian blurring to simulate geometric unsharpness, histogram matching with a 'typical' electronic portal image, specifying individual preferences for the appearance of the 'gold standard' image, and specifying the number of images generated. The visible male computed tomography data set from the National Library of Medicine was used as the planning image. Results: Digitally reconstructed electronic portal images with known setup errors have been generated and used to evaluate our methods for automatic image matching and error detection. Any number of different sets of test cases can be generated to investigate setup errors involving selected setup parameters and anatomic volumes. This approach has proved to be invaluable for determination of error detection sensitivity under ideal (rigid body) conditions and for guiding further development of image matching and error detection methods. Example images have been successfully exported for similar use at other sites. Conclusions: Because absolute truth is known, digitally reconstructed electronic portal images with known setup errors are well suited for evaluation of computer-aided image matching and error detection methods. High-quality planning images, such as

  13. A Privacy-Preserving Benchmarking Platform

    OpenAIRE

    Kerschbaum, Florian

    2010-01-01

    A privacy-preserving benchmarking platform is practically feasible, i.e. its performance is tolerable to the user on current hardware while fulfilling functional and security requirements. This dissertation designs, architects, and evaluates an implementation of such a platform. It contributes a novel (secure computation) benchmarking protocol, a novel method for computing peer groups, and a realistic evaluation of the first ever privacy-preserving benchmarking platform.

  14. Application of optical methods for dose evaluation in normoxic polyacrylamide gels irradiated at two different geometries

    International Nuclear Information System (INIS)

    Normoxic gels are frequently used in clinical practice for dose assessment or 3-D dose imaging in radiotherapy due to their relatively simple manufacturing process under normal atmospheric conditions, their spatial stability and the well expressed modification of their physical properties related to radiation-induced polymerization of the gels. In this work we have investigated the radiation-induced modification of the optical properties of home-prepared normoxic polyacrylamide gels (nPAG) in relation to the polymerization processes that occur in irradiated gels. Two irradiation geometries were used for irradiation of the gel samples: the broad beam irradiation geometry of the teletherapy unit ROKUS-M with a 60Co source, and a point source irradiation geometry using the 192Ir source of the high dose rate afterloading brachytherapy unit MicroSelectron v2, which was inserted into the gel via a 6 Fr (2 mm thick) catheter. Verification of optical methods (UV–VIS spectrometry, spectrophotometry, Raman spectroscopy) for dose assessment in irradiated gels has been performed. Aspects of their application for dose evaluation in gels irradiated using different geometries are discussed. A simple pixel-dose based photometry method has also been proposed and evaluated as a potential method for dose evaluation in catheter-based interstitial high dose rate brachytherapy. - Highlights: • Radiation-induced volume-based polymerization propagation in nPAG gels is different for broad beam and point source irradiation geometry. • Dose assessment in gels irradiated in broad beam geometry and point source geometry using different optical methods is method sensitive. • Simple pixel-dose based photoimaging method for dose verification in catheter-based interstitial brachytherapy is of advantage

  15. Application of optical methods for dose evaluation in normoxic polyacrylamide gels irradiated at two different geometries

    Energy Technology Data Exchange (ETDEWEB)

    Adliene, D., E-mail: diana.adliene@ktu.lt; Jakstas, K.; Vaiciunaite, N.

    2014-03-21

    Normoxic gels are frequently used in clinical practice for dose assessment or 3-D dose imaging in radiotherapy due to their relatively simple manufacturing process under normal atmospheric conditions, their spatial stability and the well expressed modification of their physical properties related to radiation-induced polymerization of the gels. In this work we have investigated the radiation-induced modification of the optical properties of home-prepared normoxic polyacrylamide gels (nPAG) in relation to the polymerization processes that occur in irradiated gels. Two irradiation geometries were used for irradiation of the gel samples: the broad beam irradiation geometry of the teletherapy unit ROKUS-M with a 60Co source, and a point source irradiation geometry using the 192Ir source of the high dose rate afterloading brachytherapy unit MicroSelectron v2, which was inserted into the gel via a 6 Fr (2 mm thick) catheter. Verification of optical methods (UV–VIS spectrometry, spectrophotometry, Raman spectroscopy) for dose assessment in irradiated gels has been performed. Aspects of their application for dose evaluation in gels irradiated using different geometries are discussed. A simple pixel-dose based photometry method has also been proposed and evaluated as a potential method for dose evaluation in catheter-based interstitial high dose rate brachytherapy. - Highlights: • Radiation-induced volume-based polymerization propagation in nPAG gels is different for broad beam and point source irradiation geometry. • Dose assessment in gels irradiated in broad beam geometry and point source geometry using different optical methods is method sensitive. • Simple pixel-dose based photoimaging method for dose verification in catheter-based interstitial brachytherapy is of advantage.

  16. A method of estimating conceptus doses resulting from multidetector CT examinations during all stages of gestation

    International Nuclear Information System (INIS)

    Purpose: Current methods for the estimation of conceptus dose from multidetector CT (MDCT) examinations performed on the mother provide dose data for typical protocols with a fixed scan length. However, modified low-dose imaging protocols are frequently used during pregnancy. The purpose of the current study was to develop a method for the estimation of conceptus dose from any MDCT examination of the trunk performed during all stages of gestation. Methods: The Monte Carlo N-Particle (MCNP) radiation transport code was employed in this study to model the Siemens Sensation 16 and Sensation 64 MDCT scanners. Four mathematical phantoms were used, simulating women at 0, 3, 6, and 9 months of gestation. The contribution to the conceptus dose from single simulated scans was obtained at various positions across the phantoms. To investigate the effect of maternal body size and conceptus depth on conceptus dose, phantoms of different sizes were produced by adding layers of adipose tissue around the trunk of the mathematical phantoms. To verify MCNP results, conceptus dose measurements were carried out by means of three physical anthropomorphic phantoms, simulating pregnancy at 0, 3, and 6 months of gestation and thermoluminescence dosimetry (TLD) crystals. Results: The results consist of Monte Carlo-generated normalized conceptus dose coefficients for single scans across the four mathematical phantoms. These coefficients were defined as the conceptus dose contribution from a single scan divided by the CTDI free-in-air measured with identical scanning parameters. Data have been produced to take into account the effect of maternal body size and conceptus position variations on conceptus dose. Conceptus doses measured with TLD crystals showed a difference of up to 19% compared to those estimated by mathematical simulations. Conclusions: Estimation of conceptus doses from MDCT examinations of the trunk performed on pregnant patients during all stages of gestation can be made
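
    A minimal sketch of how such normalized coefficients would be applied (the coefficient values are invented for illustration): the conceptus dose is the sum, over couch positions in the scan, of the per-scan coefficient times the CTDI free-in-air measured with the same scanning parameters.

```python
import numpy as np

# Hypothetical normalized conceptus dose coefficients (conceptus dose per unit
# CTDI free-in-air) for single scans at successive couch positions.
ncdc = np.array([0.02, 0.08, 0.25, 0.40, 0.22, 0.06])

ctdi_air = 10.0  # CTDI free-in-air (mGy) for the chosen scan parameters

# Conceptus dose for an examination covering all these positions.
conceptus_dose = ctdi_air * ncdc.sum()
print(f"Estimated conceptus dose: {conceptus_dose:.1f} mGy")
```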

  17. Comparison of methods for calculating rectal dose after 125I prostate brachytherapy implants

    International Nuclear Information System (INIS)

    Purpose: To compare several different methods of calculating the rectal dose and examine how accurately they represent rectal dose surface area measurements and, also, their practicality for routine use. Methods and Materials: This study comprised 55 patients, randomly selected from 295 prostate brachytherapy patients implanted at the Vancouver Cancer Center between 1998 and 2000. All implants used a nonuniform loading of 0.33 mCi (NIST-99) 125I seeds and a prescribed dose of 144 Gy. Pelvic CT scans were obtained for each patient ∼30 days after implantation. For the purposes of calculating the rectal dose, several structures were contoured on the CT images: (1) a 1-mm-thick anterior rectal wall, (2) the anterior half rectum, and (3) the whole rectum. Point doses were also obtained along the anterior rectal surface. The thin wall contour provided a surrogate for a dose-surface histogram (DSH) and was our reference standard rectal dose measurement. Alternate rectal dose measurements (volume, surface area, and length of rectum receiving a dose of interest [DOI] of ≥144 Gy and 216 Gy, as well as point dose measures) were calculated using several methods (VariSeed software) and compared with the surrogate DSH measure (SADOI). Results: The best correlation with SA144Gy was the dose volumes (whole or anterior half rectum) (R = 0.949). The length of rectum receiving ≥144 Gy also correlated well with SA144Gy (R ≥0.898). Point dose measures, such as the average and maximal anterior dose, correlated poorly with SA144Gy (R ≤0.649). The 216-Gy measurements supported these results. In addition, dose-volume measurements were the most practical (∼6 min/patient), with our surrogate DSH the least practical (∼20 min/patient). Conclusion: Dose-volume measurements for the whole or anterior half rectum, because they were the most practical measures and best represented the DSH measurements, should be considered a standard method of reporting the rectal dose when

  18. Interpolation method for calculation of computed tomography dose from angular varying tube current

    International Nuclear Information System (INIS)

    The scope and magnitude of radiation dose from computed tomography (CT) examinations have led to increased scrutiny and a focus on accurate dose tracking. The use of tube current modulation (TCM) complicates dose tracking by generating unique scans that are specific to the patient. Three methods of estimating the radiation dose from a CT examination that uses TCM are compared: using the average current for the entire scan, using the average current for each slice in the scan, and using an estimation of the angular variation of the dose contribution. To determine the impact of TCM on the radiation dose received, a set of angular weighting functions for each tissue of the body is derived by fitting a function to the relative dose contributions tabulated for the four principal exposure projections. This weighting function is applied to the angular tube current function to determine the organ dose contributions from a single rotation. Since the angular tube current function is not typically known, a method for estimating that function is also presented. The organ doses calculated using these three methods are compared to simulations that explicitly include the estimated TCM function. (authors)
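
    A sketch of the angular weighting idea (the fitted weighting function and current profile are invented): relative dose contributions at the four principal projections are interpolated into a smooth weighting function w(θ), which is then averaged against the angular tube current profile over one rotation.

```python
import numpy as np

# Relative organ dose contributions at the principal projections
# (AP, left lateral, PA, right lateral); hypothetical values for one organ.
angles = np.radians([0.0, 90.0, 180.0, 270.0])
rel_dose = np.array([1.00, 0.55, 0.30, 0.60])

# Fit a simple two-harmonic Fourier weighting function w(theta) through them.
A = np.column_stack(
    [np.ones_like(angles), np.cos(angles), np.sin(angles), np.cos(2 * angles)]
)
coef, *_ = np.linalg.lstsq(A, rel_dose, rcond=None)

def w(theta):
    return coef @ np.array([1.0, np.cos(theta), np.sin(theta), np.cos(2 * theta)])

# Hypothetical angular tube current over one rotation (lower laterally).
theta = np.linspace(0.0, 2 * np.pi, 360, endpoint=False)
current = 200.0 + 60.0 * np.cos(2 * theta)

# Organ dose weight for one rotation: current-weighted mean of w(theta).
weight = np.mean([w(t) * i for t, i in zip(theta, current)]) / np.mean(current)
print(f"Rotational TCM weighting factor: {weight:.3f}")
```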

  19. Benchmark calculations for EGS5

    International Nuclear Information System (INIS)

    In the past few years, EGS4 has undergone an extensive upgrade to EGS5, particularly in the areas of low-energy electron physics, low-energy photon physics, PEGS cross section generation, and the conversion of the coding from Mortran to Fortran. Benchmark calculations have been made to assure the accuracy, reliability and high quality of the EGS5 code system. This study reports three benchmark examples that show the successful upgrade from EGS4 to EGS5, based on the excellent agreement among EGS4, EGS5 and measurements. The first benchmark example is the 1969 Crannell experiment measuring the three-dimensional distribution of energy deposition for 1-GeV electron showers in water and aluminum tanks. The second example is the 1995 measurement of Compton-scattered spectra for 20-40 keV linearly polarized photons by Namito et al. at KEK, which was a main part of the low-energy photon expansion work for both EGS4 and EGS5. The third example is the 1986 heterogeneity benchmark experiment by Shortt et al., who used a monoenergetic 20-MeV electron beam hitting the front face of a water tank containing both air and aluminum cylinders and measured the spatial depth-dose distribution using a small solid-state detector. (author)

  20. Closed-Loop Neuromorphic Benchmarks

    Science.gov (United States)

    Stewart, Terrence C.; DeWolf, Travis; Kleinhans, Ashley; Eliasmith, Chris

    2015-01-01

    Evaluating the effectiveness and performance of neuromorphic hardware is difficult. It is even more difficult when the task of interest is a closed-loop task; that is, a task where the output from the neuromorphic hardware affects some environment, which then in turn affects the hardware's future input. However, closed-loop situations are one of the primary potential uses of neuromorphic hardware. To address this, we present a methodology for generating closed-loop benchmarks that makes use of a hybrid of real physical embodiment and a type of “minimal” simulation. Minimal simulation has been shown to lead to robust real-world performance, while still maintaining the practical advantages of simulation, such as making it easy for the same benchmark to be used by many researchers. This method is flexible enough to allow researchers to explicitly modify the benchmarks to identify specific task domains where particular hardware excels. To demonstrate the method, we present a set of novel benchmarks that focus on motor control for an arbitrary system with unknown external forces. Using these benchmarks, we show that an error-driven learning rule can consistently improve motor control performance across a randomly generated family of closed-loop simulations, even when there are up to 15 interacting joints to be controlled. PMID:26696820

  1. Accuracy of effective dose estimation in personal dosimetry: a comparison between single-badge and double-badge methods and the MOSFET method.

    Science.gov (United States)

    Januzis, Natalie; Belley, Matthew D; Nguyen, Giao; Toncheva, Greta; Lowry, Carolyn; Miller, Michael J; Smith, Tony P; Yoshizumi, Terry T

    2014-05-01

    The purpose of this study was three-fold: (1) to measure the transmission properties of various lead shielding materials, (2) to benchmark the accuracy of commercial film badge readings, and (3) to compare the accuracy of effective dose (ED) conversion factors (CF) of the U.S. Nuclear Regulatory Commission methods to the MOSFET method. The transmission properties of lead aprons and the accuracy of film badges were studied using an ion chamber and monitor. ED was determined using an adult male anthropomorphic phantom that was loaded with 20 diagnostic MOSFET detectors and scanned with a whole body CT protocol at 80, 100, and 120 kVp. One commercial film badge was placed at the collar and one at the waist. Individual organ doses and waist badge readings were corrected for lead apron attenuation. ED was computed using ICRP 103 tissue weighting factors, and ED CFs were calculated by taking the ratio of ED and badge reading. The measured single badge CFs were 0.01 (±14.9%), 0.02 (±9.49%), and 0.04 (±15.7%) for 80, 100, and 120 kVp, respectively. Current regulatory ED CF for the single badge method is 0.3; for the double-badge system, they are 0.04 (collar) and 1.5 (under lead apron at the waist). The double-badge system provides a better coefficient for the collar at 0.04; however, exposure readings under the apron are usually negligible to zero. Based on these findings, the authors recommend the use of ED CF of 0.01 for the single badge system from 80 kVp (effective energy 50.4 keV) data. PMID:24670903
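
    A minimal sketch of the conversion-factor arithmetic (the organ doses and badge reading are invented, and the ICRP 103 weights are abbreviated to a few organs for brevity): ED is the tissue-weighted sum of organ doses, and the CF is ED divided by the badge reading.

```python
# Hypothetical organ doses (mSv) from phantom MOSFET measurements.
organ_dose = {"lung": 0.12, "stomach": 0.10, "colon": 0.09, "gonads": 0.05}

# Subset of ICRP 103 tissue weighting factors (the full set sums to 1.0).
w_T = {"lung": 0.12, "stomach": 0.12, "colon": 0.12, "gonads": 0.08}

effective_dose = sum(w_T[o] * d for o, d in organ_dose.items())
badge_reading = 2.0  # mSv, hypothetical collar badge value

cf = effective_dose / badge_reading
print(f"ED = {effective_dose:.4f} mSv, conversion factor = {cf:.3f}")
```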

  2. Analytical method for internal dose determination caused by chronically radionuclides inhalation to respiration system

    International Nuclear Information System (INIS)

    An analytical method has been developed for determining the internal dose to the respiratory system caused by chronic inhalation of radionuclides at a constant inhaled concentration. The dose calculation is solved analytically using a model of radionuclide distribution and accumulation in the respiratory system. A computer program was then written to calculate the internal dose in the respiratory system easily and quickly. The program is written in Borland C++ 4.5. The internal dose at time t after inhalation depends on the radionuclide, its half-life, its AMAD, its inhalation class, the radiation type, the energy absorbed by the respiratory organ, the organ mass, the inhaled radionuclide concentration and the inhalation period

  3. Monte-Carlo Method Python Library for dose distribution Calculation in Brachytherapy

    International Nuclear Information System (INIS)

    Cs-137 brachytherapy treatment has been performed in Madagascar since 2005. The treatment time for a prescribed dose is calculated manually. A Monte-Carlo method Python library written at the Madagascar INSTN is used experimentally to calculate the dose distribution in the tumour and around it. A first validation of the code was done by comparing the library's curves with the Nucletron company's curves. To reduce the duration of the calculation, a grid of PCs was set up with a listener patch running on each PC. The library will be used to model the dose distribution in the patient's CT scan images for individual and more accurate treatment time calculation for a prescribed dose.
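
    A minimal sketch of the kind of calculation such a library performs (a bare point-source kernel with an assumed source strength and effective attenuation coefficient for water; not the INSTN code itself): the dose rate falls off with inverse-square distance and tissue attenuation, and the treatment time is the prescribed dose divided by the dose rate at the prescription point.

```python
import numpy as np

# Hypothetical Cs-137 point-source parameters.
dose_rate_1cm = 5.0e-3  # Gy/h at 1 cm, assumed source strength
mu_eff = 0.09           # effective attenuation coefficient in water (1/cm), assumed

def dose_rate(r_cm: float) -> float:
    """Dose rate (Gy/h) at distance r from a point source: inverse-square
    falloff with simple exponential tissue attenuation."""
    return dose_rate_1cm * (1.0 / r_cm) ** 2 * np.exp(-mu_eff * (r_cm - 1.0))

prescribed_dose = 10.0  # Gy at the prescription point
r_prescription = 2.0    # cm

treatment_time = prescribed_dose / dose_rate(r_prescription)
print(f"Treatment time: {treatment_time:.1f} h")
```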

  4. Benchmarking semantic web technology

    CERN Document Server

    García-Castro, R

    2009-01-01

    This book addresses the problem of benchmarking Semantic Web Technologies; first, from a methodological point of view, proposing a general methodology to follow in benchmarking activities over Semantic Web Technologies and, second, from a practical point of view, presenting two international benchmarking activities that involved benchmarking the interoperability of Semantic Web technologies using RDF(S) as the interchange language in one activity and OWL in the other. The book presents in detail how the different resources needed for these interoperability benchmarking activities were defined:

  5. Maximum skin dose assessment in interventional cardiology: large area detectors and calculation methods

    International Nuclear Information System (INIS)

    Advances in imaging technology have facilitated the development of increasingly complex radiological procedures for interventional radiology. Such interventional procedures can involve significant patient exposure, although they often represent alternatives to more hazardous surgery or are the sole method of treatment. Interventional radiology is already an established part of mainstream medicine and is likely to expand further with the continuing development and adoption of new procedures. Among all medical exposures, interventional radiology heads the list of radiological practices in terms of effective dose per examination, with a mean value of 20 mSv. Interventional radiology currently contributes 4% of the annual collective dose while accounting for only 0.3% of the total annual examination frequency, and given the prospects of this method a large increase in this contribution can be expected. In IR procedures the potential for deterministic effects on the skin is a risk to be taken into account together with the stochastic long-term risk. Indeed, the International Commission on Radiological Protection (ICRP), in its publication No 85, affirms that the patient dose of priority concern is the absorbed dose in the area of skin that receives the maximum dose during an interventional procedure. For these reasons, in IR it is important to give practitioners information on the dose received by the patient's skin during the procedure. In this paper, the maximum local skin dose (MSD) denotes the absorbed dose in the area of skin receiving the maximum dose during an interventional procedure.

  6. Beta-ray dose assessment from skin contamination using a point kernel method

    International Nuclear Information System (INIS)

    In this study, a point kernel method to calculate the beta-ray dose rate from skin contamination was introduced. The beta-ray dose rates were computed by numerical integration of the radial dose distribution around an isotropic point source of monoenergetic electrons, called a point kernel. An in-house code based on MATLAB version 7.0.4 was developed to perform the numerical integration. The code generated dose distributions for beta-ray emitters from the interpolated point kernel, and beta-ray dose rates from skin contamination were calculated by numerical integration. The generated dose distributions for selected beta-ray emitters agreed with those calculated by Cross et al within 20%, except at longer distances, where the differences reach more than 100%. For a point source, the calculated beta-ray doses agreed well with those derived from Monte Carlo simulation. For a disk source, the differences were up to 17% in the deep region; the point kernel method underestimated the beta-ray doses compared with the Monte Carlo simulation. The code will be improved to deal with three-dimensional sources, shielding by cover material, air gaps, and the photon contribution to skin dose. For the user's convenience, the code will also be equipped with a graphical user interface. (author)
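
    The essence of the method, numerically integrating a radial point kernel over a source region, can be sketched in a few lines of Python. The kernel below is a hypothetical exponential placeholder rather than a scaled point kernel from the literature, and the geometry is a uniform disk evaluated on its axis.

        import numpy as np

        # Sketch: beta dose rate at depth z on the axis of a uniformly contaminated
        # disk, via numerical integration of a radial point kernel k(r).
        # The kernel form and all values are hypothetical placeholders.

        def kernel(r_cm):
            return np.exp(-r_cm / 0.05) / np.maximum(r_cm, 1e-4) ** 2  # arbitrary units

        def disk_dose_rate(z_cm, disk_radius_cm, activity_per_cm2, n=2000):
            rho = np.linspace(0.0, disk_radius_cm, n)     # radial source coordinate
            r = np.hypot(rho, z_cm)                       # source-to-point distance
            integrand = kernel(r) * 2.0 * np.pi * rho     # ring source elements
            # Trapezoidal integration over rho.
            return activity_per_cm2 * np.sum((integrand[:-1] + integrand[1:])
                                             * np.diff(rho)) / 2.0

        print(disk_dose_rate(z_cm=0.007, disk_radius_cm=1.0, activity_per_cm2=1.0))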

  7. Design study on dose evaluation method for employees at severe accident

    International Nuclear Information System (INIS)

    If a severe accident occurs in a pressurized water reactor plant, it is necessary to estimate the doses of operators engaged in emergency actions such as accident management and repair of failed parts. However, it can be difficult to measure the radiation dose rate during the progress of an accident, because radiation monitors are not always installed in the areas where the emergency activities are required. In this study, we analyzed the transport of radioactive materials in case of a severe accident, investigated a method to obtain the radiation dose rate in the plant from the estimated radioactive sources, built a prototype analysis system from this design study, and then evaluated its availability. As a result, we obtained the following: (1) A new dose evaluation method was established to predict the radiation dose rate at any point in the plant during a severe accident scenario. (2) The evaluation of total dose, including the access route and the time required for emergency activities, is useful for checking these employee actions against radiation dose limits. (3) The radiation dose rate map is effective for identifying high radiation areas and for choosing a route with a lower radiation dose rate. (author)
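
    Point (2) amounts to summing dose rate times residence time along the planned access route; a minimal Python sketch under that reading is given below, with the dose rates standing in for values read off the computed in-plant dose rate map.

        # Sketch: total dose for an emergency action as the sum over route segments
        # of (local dose rate) x (time spent there). The rates would come from the
        # computed dose rate map; all values below are hypothetical.
        route = [
            ("corridor A", 0.2, 5.0),    # (location, mSv/h, minutes spent)
            ("valve room", 8.0, 12.0),
            ("corridor B", 0.5, 4.0),
        ]
        total_msv = sum(rate * minutes / 60.0 for _, rate, minutes in route)
        print(f"estimated task dose: {total_msv:.2f} mSv")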

  8. Method for recovery of thyroidal radiation dose due to 131I incorporation

    International Nuclear Information System (INIS)

    A method for the retrospective recovery of the radiation dose to the thyroid due to 131I incorporation is developed for humans of different age groups. The method is based on the analysis of the density of 137Cs fallout, the dose from the mixture of deposited gamma sources, and the measured radiation dose in the thyroid. The technique was developed using the available data on the Chernobyl accident. Correlations were found between the examined parameters over a wide range of deposited radionuclide concentrations. The resulting estimated dose distribution in the thyroid virtually does not differ from that measured in the known settlements. Thyroid radiation doses for similar 137Cs fallout densities and the same gamma-radiation doses vary by scores; the share of subjects with the maximal radiation doses makes up 0.01-0.005%. The highest correlation between the thyroid radiation dose and the external gamma-radiation dose was found within 8 months after the accident

  9. Research Reactor Benchmarks

    International Nuclear Information System (INIS)

    A criticality benchmark experiment performed at the Jozef Stefan Institute TRIGA Mark II research reactor is described. This experiment and its evaluation are given as examples of benchmark experiments at research reactors. For this reason the differences and possible problems compared to other benchmark experiments are particularly emphasized. General guidelines for performing criticality benchmarks in research reactors are given. The criticality benchmark experiment was performed in a normal operating reactor core using commercially available fresh 20% enriched fuel elements containing 12 wt% uranium in uranium-zirconium hydride fuel material. Experimental conditions to minimize experimental errors and to enhance computer modeling accuracy are described. Uncertainties in multiplication factor due to fuel composition and geometry data are analyzed by sensitivity analysis. The simplifications in the benchmark model compared to the actual geometry are evaluated. Sample benchmark calculations with the MCNP and KENO Monte Carlo codes are given

  10. RISKIND verification and benchmark comparisons

    International Nuclear Information System (INIS)

    This report presents verification calculations and benchmark comparisons for RISKIND, a computer code designed to estimate potential radiological consequences and health risks to individuals and the population from exposures associated with the transportation of spent nuclear fuel and other radioactive materials. Spreadsheet calculations were performed to verify the proper operation of the major options and calculational steps in RISKIND. The program is unique in that it combines a variety of well-established models into a comprehensive treatment for assessing risks from the transportation of radioactive materials. Benchmark comparisons with other validated codes that incorporate similar models were also performed. For instance, the external gamma and neutron dose rate curves for a shipping package estimated by RISKIND were compared with those estimated by using the RADTRAN 4 code and NUREG-0170 methodology. Atmospheric dispersion of released material and the resulting dose estimates were compared with those from the GENII and CAP88-PC codes. Verification results have shown the program to be performing its intended function correctly. The benchmark results indicate that the predictions made by RISKIND are within acceptable limits when compared with predictions from similar existing models

  11. Liver tumour segmentation using contrast-enhanced multi-detector CT data: performance benchmarking of three semiautomated methods

    International Nuclear Information System (INIS)

    Automatic tumour segmentation and volumetry is useful in cancer staging and treatment outcome assessment. This paper presents a performance benchmarking study on liver tumour segmentation for three semiautomatic algorithms: 2D region growing with knowledge-based constraints (A1), 2D voxel classification with propagational learning (A2) and Bayesian rule-based 3D region growing (A3). CT data from 30 patients were studied, and 47 liver tumours were isolated and manually segmented by experts to obtain the reference standard. Four datasets with ten tumours were used for algorithm training and the remaining 37 tumours for testing. Three evaluation metrics, relative absolute volume difference (RAVD), volumetric overlap error (VOE) and average symmetric surface distance (ASSD), were computed based on computerised and reference segmentations. A1, A2 and A3 obtained mean/median RAVD scores of 17.93/10.53%, 17.92/9.61% and 34.74/28.75%, mean/median VOEs of 30.47/26.79%, 25.70/22.64% and 39.95/38.54%, and mean/median ASSDs of 2.05/1.41 mm, 1.57/1.15 mm and 4.12/3.41 mm, respectively. For each metric, we obtained significantly lower values of A1 and A2 than A3 (P < 0.01), suggesting that A1 and A2 outperformed A3. Compared with the reference standard, the overall performance of A1 and A2 is promising. Further development and validation is necessary before reliable tumour segmentation and volumetry can be widely used clinically. (orig.)
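
    Two of the three metrics are straightforward to compute from binary masks, as the Python sketch below shows; ASSD additionally requires surface extraction and distance transforms (e.g. with scipy.ndimage) and is omitted. The masks here are synthetic toys, not data from the study.

        import numpy as np

        def ravd(seg, ref):
            # Relative absolute volume difference, in %.
            return 100.0 * abs(int(seg.sum()) - int(ref.sum())) / int(ref.sum())

        def voe(seg, ref):
            # Volumetric overlap error = 1 - Jaccard index, in %.
            inter = np.logical_and(seg, ref).sum()
            union = np.logical_or(seg, ref).sum()
            return 100.0 * (1.0 - inter / union)

        ref = np.zeros((50, 50, 50), dtype=bool); ref[10:30, 10:30, 10:30] = True
        seg = np.zeros_like(ref);                 seg[12:30, 10:30, 10:30] = True
        print(f"RAVD = {ravd(seg, ref):.1f}%, VOE = {voe(seg, ref):.1f}%")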

  12. Application of optical methods for dose evaluation in normoxic polyacrylamide gels irradiated at two different geometries

    Science.gov (United States)

    Adliene, D.; Jakstas, K.; Vaiciunaite, N.

    2014-03-01

    Normoxic gels are frequently used in clinical practice for dose assessment and 3-D dose imaging in radiotherapy, due to their relatively simple manufacturing process under normal atmospheric conditions, their spatial stability, and the well-expressed modification of their physical properties caused by radiation-induced polymerization. In this work we investigated the radiation-induced modification of the optical properties of in-house prepared normoxic polyacrylamide gels (nPAG) in relation to the polymerization processes that occur in irradiated gels. Two irradiation geometries were used for the gel samples: the broad beam geometry of the teletherapy unit ROKUS-M with a 60Co source, and the point source geometry of the 192Ir source of the high dose rate afterloading brachytherapy unit MicroSelectron v2, which was inserted into the gel via a 6 Fr (2 mm thick) catheter. Verification of optical methods (UV-VIS spectrometry, spectrophotometry, and Raman spectroscopy) for dose assessment in irradiated gels was performed. Aspects of their application for dose evaluation in gels irradiated using different geometries are discussed. A simple pixel-dose based photometry method has also been proposed and evaluated as a potential method for dose evaluation in catheter-based interstitial high dose rate brachytherapy.
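
    The pixel-dose photometry idea can be sketched simply: a calibration curve measured on gel samples of known dose maps each pixel's optical intensity to a dose by interpolation. The calibration points and the image in this Python sketch are hypothetical placeholders.

        import numpy as np

        # Calibration: normalized pixel intensity vs. delivered dose (hypothetical).
        cal_intensity = np.array([0.95, 0.80, 0.62, 0.45, 0.30])
        cal_dose_gy = np.array([0.0, 2.0, 5.0, 10.0, 20.0])

        def pixels_to_dose(image):
            # np.interp needs increasing x; intensity falls with dose, so reverse.
            return np.interp(image, cal_intensity[::-1], cal_dose_gy[::-1])

        image = np.array([[0.90, 0.70], [0.50, 0.33]])  # toy gel photograph
        print(pixels_to_dose(image))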

  13. The simple exposure dose calculation method in interventional radiology and one case of radiation injury (alopecia)

    International Nuclear Information System (INIS)

    Interventional radiology (IVR) is less invasive than surgery and has rapidly become widespread due to advances in instruments and X-ray apparatuses. However, the radiation exposure involved in long fluoroscopy times carries a risk of radiation injury. We estimated the exposure dose in a patient who underwent IVR therapy and developed a radiation injury (alopecia). The patient outcome and the method of estimating the exposure dose are reported. The exposure dose was roughly estimated from the real-time exposure dose during the examination, which is a useful indicator for the operator of the exposure dose during IVR. We radiological technologists must know and call attention to the exposure dose and the role of radiological technicians during IVR. (author)

  14. A performance benchmark test for geodynamo simulations

    Science.gov (United States)

    Matsui, H.; Heien, E. M.

    2013-12-01

    In the last ten years, a number of numerical dynamo models have successfully represented basic characteristics of the geomagnetic field. As new models and numerical methods continue to be developed, it is important to update and extend benchmarks for testing these models. The first dynamo benchmark of Christensen et al. (2001) was applied to models based on spherical harmonic expansion methods. However, only a few groups have reported results of the dynamo benchmark using local methods (Harder and Hansen, 2005; Matsui and Okuda, 2005; Chan et al., 2007) because of the difficulty of treating magnetic boundary conditions with local methods. On the other hand, spherical harmonic expansion methods perform poorly on massively parallel computers because global data communications are required for the spherical harmonic expansions to evaluate nonlinear terms. We perform benchmark tests to assess various numerical methods for the next generation of geodynamo simulations. The purpose of this benchmark test is to assess numerical geodynamo models on a massively parallel computational platform. To compare as many numerical methods as possible, we consider the model with the insulated magnetic boundary of Christensen et al. (2001) and with a pseudo vacuum magnetic boundary, because pseudo vacuum boundaries are easier to implement with local methods than insulated magnetic boundaries. In the present study, we consider two kinds of benchmarks, the so-called accuracy benchmark and performance benchmark. In the accuracy benchmark, we compare the dynamo models using the modest Ekman and Rayleigh numbers proposed by Christensen et al. (2001). We investigate the spatial resolution required for each dynamo code to obtain less than 1% difference from the suggested solution of the benchmark test using the two magnetic boundary conditions. In the performance benchmark, we investigate computational performance under the same computational environment. We perform these

  15. Imaging method for monitoring delivery of high dose rate brachytherapy

    Science.gov (United States)

    Weisenberger, Andrew G; Majewski, Stanislaw

    2012-10-23

    A method for in-situ monitoring of both the balloon/cavity and the radioactive source in brachytherapy treatment, using at least one pair of miniature gamma cameras to acquire separate images of: 1) the radioactive source as it is moved in the tumor volume during brachytherapy; and 2) a relatively low intensity radiation source produced either by an injected radiopharmaceutical rendering cancerous tissue visible or by a radioactive solution filling a balloon surgically implanted into the cavity formed by the surgical resection of a tumor.

  16. 3-D neutron transport benchmarks

    International Nuclear Information System (INIS)

    A set of 3-D neutron transport benchmark problems proposed by the Osaka University to NEACRP in 1988 has been calculated by many participants and the corresponding results are summarized in this report. The results of Keff, control rod worth and region-averaged fluxes for the four proposed core models, calculated by using various 3-D transport codes are compared and discussed. The calculational methods used were: Monte Carlo, Discrete Ordinates (Sn), Spherical Harmonics (Pn), Nodal Transport and others. The solutions of the four core models are quite useful as benchmarks for checking the validity of 3-D neutron transport codes

  17. Monte Carlo photon benchmark problems

    International Nuclear Information System (INIS)

    Photon benchmark calculations have been performed to validate the MCNP Monte Carlo computer code. These are compared to both the COG Monte Carlo computer code and either experimental or analytic results. The calculated solutions indicate that the Monte Carlo method, and MCNP and COG in particular, can accurately model a wide range of physical problems. 8 refs., 5 figs

  18. PRISMATIC CORE COUPLED TRANSIENT BENCHMARK

    Energy Technology Data Exchange (ETDEWEB)

    J. Ortensi; M.A. Pope; G. Strydom; R.S. Sen; M.D. DeHart; H.D. Gougar; C. Ellis; A. Baxter; V. Seker; T.J. Downar; K. Vierow; K. Ivanov

    2011-06-01

    The Prismatic Modular Reactor (PMR) is one of the High Temperature Reactor (HTR) design concepts that have existed for some time. Several prismatic units have operated in the world (DRAGON, Fort St. Vrain, Peach Bottom) and one unit is still in operation (HTTR). The deterministic neutronics and thermal-fluids transient analysis tools and methods currently available for the design and analysis of PMRs have lagged behind the state of the art compared to LWR reactor technologies. This has motivated the development of more accurate and efficient tools for the design and safety evaluations of the PMR. In addition to the work invested in new methods, it is essential to develop appropriate benchmarks to verify and validate the new methods in computer codes. The purpose of this benchmark is to establish a well-defined problem, based on a common given set of data, to compare methods and tools in core simulation and thermal hydraulics analysis with a specific focus on transient events. The benchmark-working group is currently seeking OECD/NEA sponsorship. This benchmark is being pursued and is heavily based on the success of the PBMR-400 exercise.

  19. A CT-based analytical dose calculation method for HDR 192Ir brachytherapy

    International Nuclear Information System (INIS)

    Purpose: This article presents an analytical dose calculation method for high-dose-rate 192Ir brachytherapy, taking into account the effects of inhomogeneities and reduced photon backscatter near the skin. The adequacy of the Task Group 43 (TG-43) two-dimensional formalism for treatment planning is also assessed. Methods: The proposed method uses material composition and density data derived from computed tomography images. The primary and scatter dose distributions for each dwell position are calculated first as if the patient is an infinite water phantom. This is done using either TG-43 or a database of Monte Carlo (MC) dose distributions. The latter can be used to account for the effects of shielding in water. Subsequently, corrections for photon attenuation, scatter, and spectral variations along medium- or low-Z inhomogeneities are made according to the radiological paths determined by ray tracing. The scatter dose is then scaled by a correction factor that depends on the distances between the point of interest, the body contour, and the source position. Dose calculations are done for phantoms with tissue and lead inserts, as well as patient plans for head-and-neck, esophagus, and MammoSite balloon breast brachytherapy treatments. Gamma indices are evaluated using a dose-difference criterion of 3% and a distance-to-agreement criterion of 2 mm. PTRANCT MC calculations are used as the reference dose distributions. Results: For the phantom with tissue and lead inserts, the percentages of the voxels of interest passing the gamma criteria (Pγ≥1) are 100% for the analytical calculation and 91% for TG-43. For the breast patient plan, TG-43 overestimates the target volume receiving the prescribed dose by 4% and the dose to the hottest 0.1 cm3 of the skin by 9%, whereas the analytical and MC results agree within 0.4%. Pγ≥1 are 100% and 48% for the analytical and TG-43 calculations, respectively. For the head-and-neck and esophagus patient plans, Pγ≥1 are ≥99
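
    The radiological-path correction at the heart of such a method can be illustrated with a short Python sketch: density is sampled in equal steps along the ray from a dwell position to the point of interest, and the primary dose is rescaled by the attenuation difference relative to water. The grid, attenuation coefficient, and geometry below are illustrative assumptions, not values from the article.

        import numpy as np

        def radiological_path(density, voxel_cm, src, pt, n_steps=200):
            # Equal-step sampling of density (g/cm^3) along the ray src -> pt.
            src, pt = np.asarray(src, float), np.asarray(pt, float)
            ts = (np.arange(n_steps) + 0.5) / n_steps
            samples = src + np.outer(ts, pt - src)          # points along the ray, cm
            idx = np.clip((samples / voxel_cm).astype(int),
                          0, np.array(density.shape) - 1)
            step = np.linalg.norm(pt - src) / n_steps
            return density[idx[:, 0], idx[:, 1], idx[:, 2]].sum() * step

        density = np.ones((64, 64, 64))        # water-equivalent phantom
        density[30:40, 30:40, 30:40] = 0.26    # lung-like low-density insert
        src, pt = (1.0, 1.0, 1.0), (10.0, 10.0, 10.0)
        d_rad = radiological_path(density, 0.2, src, pt)
        d_geom = np.linalg.norm(np.subtract(pt, src))
        mu_water = 0.11                        # cm^-1, illustrative value
        primary_scale = np.exp(-mu_water * (d_rad - d_geom))
        print(f"radiological path {d_rad:.1f} cm, primary correction {primary_scale:.3f}")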

  20. Monte Carlo simulation methods of determining red bone marrow dose from external radiation

    International Nuclear Information System (INIS)

    Objective: To provide evidence for a more reasonable method of determining the red bone marrow dose by analyzing and comparing existing simulation methods. Methods: Using the Monte Carlo simulation software MCNPX, the absorbed doses to the red bone marrow of the Rensselaer Polytechnic Institute (RPI) adult female voxel phantom were calculated with 4 different methods: direct energy deposition, dose response function (DRF), King-Spiers factor method, and mass-energy absorption coefficient (MEAC). The radiation sources were defined as infinite plate sources with energies ranging from 20 keV to 10 MeV, and 23 sources with different energies were simulated in total. The source was placed right next to the front of the RPI model to achieve a homogeneous anteroposterior radiation scenario. The results obtained with the different methods for the different simulated photon energies were compared. Results: When the photon energy was lower than 100 keV, the direct energy deposition method gave the highest result, while the MEAC and King-Spiers factor methods showed more reasonable results. When the photon energy was higher than 150 keV, the King-Spiers factor method gave larger results than the other methods because it takes into account the higher absorption ability of red bone marrow at higher photon energies. Conclusions: The King-Spiers factor method might be the most reasonable method to estimate the red bone marrow dose from external radiation. (authors)
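
    For the MEAC approach named above, the absorbed dose follows from the photon fluence spectrum in the marrow region as a sum over energy bins of fluence times energy times the mass-energy absorption coefficient. The Python sketch below uses illustrative placeholder values; real coefficients would come from standard tabulations such as NIST's.

        import numpy as np

        # D = sum_E fluence(E) * E * (mu_en/rho)(E), converted to gray.
        energies_mev = np.array([0.05, 0.1, 0.5, 1.0])
        fluence = np.array([2e9, 1e9, 5e8, 2e8])            # photons/cm^2 per bin (toy)
        mu_en_rho = np.array([0.041, 0.025, 0.032, 0.031])  # cm^2/g (illustrative)

        MEV_TO_J = 1.602e-13
        dose_gy = np.sum(fluence * energies_mev * mu_en_rho) * MEV_TO_J * 1e3  # g -> kg
        print(f"red bone marrow dose ~ {dose_gy:.3e} Gy")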

  1. A Novel Method for the Evaluation of Uncertainty in Dose-Volume Histogram Computation

    International Nuclear Information System (INIS)

    Purpose: Dose-volume histograms (DVHs) are a useful tool in state-of-the-art radiotherapy treatment planning, and it is essential to recognize their limitations. Even after a specific dose-calculation model is optimized, dose distributions computed by using treatment-planning systems are affected by several sources of uncertainty, such as algorithm limitations, measurement uncertainty in the data used to model the beam, and residual differences between measured and computed dose. This report presents a novel method to take them into account. Methods and Materials: To take into account the effect of the associated uncertainties, a probabilistic approach using a new kind of histogram, a dose-expected volume histogram, is introduced. The expected value of the volume in the region of interest receiving an absorbed dose equal to or greater than a certain value is found by using the probability distribution of the dose at each point. A rectangular probability distribution is assumed for this point dose, and a formulation that accounts for the uncertainties associated with the point dose is presented for practical computations. Results: This method is applied to a set of DVHs for different regions of interest, including 6 brain patients, 8 lung patients, 8 pelvis patients, and 6 prostate patients planned for intensity-modulated radiation therapy. Conclusions: Results show a greater effect on planning target volume coverage than on organs at risk. In cases of steep DVH gradients, such as planning target volumes, this new method shows the largest differences with the corresponding DVH; thus, the effect of the uncertainty is larger.
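
    A minimal Python sketch of the dose-expected volume histogram idea: each voxel's computed dose is treated as a rectangular distribution of half-width delta around its value, and the expected fractional volume at or above each threshold is accumulated. The doses and the half-width below are synthetic stand-ins.

        import numpy as np

        def expected_volume_fraction(point_doses, delta, d_threshold):
            # P(true dose >= D) for a rectangular distribution on [d-delta, d+delta].
            p = (point_doses + delta - d_threshold) / (2.0 * delta)
            return np.clip(p, 0.0, 1.0).mean()

        doses = np.random.default_rng(0).normal(60.0, 3.0, 10_000)  # toy voxel doses, Gy
        delta = 1.5                                                 # Gy, uncertainty half-width
        for d in (55.0, 57.0, 60.0):
            print(f"E[V(dose >= {d:.0f} Gy)] = {expected_volume_fraction(doses, delta, d):.3f}")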

  2. Benchmark analysis of MCNP ENDF/B-VI iron

    International Nuclear Information System (INIS)

    The MCNP ENDF/B-VI iron cross-section data was subjected to four benchmark studies as part of the Hiroshima/Nagasaki dose re-evaluation for the National Academy of Science and the Defense Nuclear Agency. The four benchmark studies were: (1) the iron sphere benchmarks from the Lawrence Livermore Pulsed Spheres; (2) the Oak Ridge National Laboratory Fusion Reactor Shielding Benchmark; (3) a 76-cm diameter iron sphere benchmark done at the University of Illinois; (4) the Oak Ridge National Laboratory Benchmark for Neutron Transport through Iron. MCNP4A was used to model each benchmark and computational results from the ENDF/B-VI iron evaluations were compared to ENDF/B-IV, ENDF/B-V, the MCNP Recommended Data Set (which includes Los Alamos National Laboratory Group T-2 evaluations), and experimental data. The results show that the ENDF/B-VI iron evaluations are as good as, or better than, previous data sets

  3. SU-E-J-96: Multi-Axis Dose Accumulation of Noninvasive Image-Guided Breast Brachytherapy Through Biomechanical Modeling of Tissue Deformation Using the Finite Element Method

    Energy Technology Data Exchange (ETDEWEB)

    Rivard, MJ [Tufts University School of Medicine, Boston, MA (United States); Ghadyani, HR [SUNY Farmingdale State College, Farmingdale, NY (United States); Bastien, AD; Lutz, NN [Univeristy Massachusetts Lowell, Lowell, MA (United States); Hepel, JT [Rhode Island Hospital, Providence, RI (United States)

    2015-06-15

    Purpose: Noninvasive image-guided breast brachytherapy delivers conformal HDR Ir-192 brachytherapy treatments with the breast compressed and treated in the cranial-caudal and medial-lateral directions. This technique subjects breast tissue to extreme deformations not observed for other disease sites. Because commercially available software for deformable image registration cannot accurately co-register image sets obtained in these two states, a finite element analysis based on a biomechanical model was developed to deform dose distributions for each compression circumstance for dose summation. Methods: The model assumed the breast was under planar stress with values of 30 kPa for Young’s modulus and 0.3 for Poisson’s ratio. Dose distributions from round and skin-dose optimized applicators in cranial-caudal and medial-lateral compressions were deformed using 0.1 cm planar resolution. Dose distributions, skin doses, and dose-volume histograms were generated. Results were examined as a function of breast thickness, applicator size, target size, and offset distance from the center. Results: Over the range of examined thicknesses, target size increased several millimeters as compression thickness decreased. This trend increased with increasing offset distances. Applicator size minimally affected target coverage until the applicator was smaller than the compressed target. In all cases with an applicator larger than or equal to the compressed target size, > 90% of the target was covered by > 90% of the prescription dose. In all cases, dose coverage became less uniform as offset distance increased and average dose increased. This effect was more pronounced for smaller target-applicator combinations. Conclusions: The model exhibited skin dose trends that matched MC-generated benchmarking results and clinical measurements within 2% over a similar range of breast thicknesses and target sizes. The model provided quantitative insight on dosimetric treatment variables over

  4. SU-E-J-96: Multi-Axis Dose Accumulation of Noninvasive Image-Guided Breast Brachytherapy Through Biomechanical Modeling of Tissue Deformation Using the Finite Element Method

    International Nuclear Information System (INIS)

    Purpose: Noninvasive image-guided breast brachytherapy delivers conformal HDR Ir-192 brachytherapy treatments with the breast compressed and treated in the cranial-caudal and medial-lateral directions. This technique subjects breast tissue to extreme deformations not observed for other disease sites. Because commercially available software for deformable image registration cannot accurately co-register image sets obtained in these two states, a finite element analysis based on a biomechanical model was developed to deform dose distributions for each compression circumstance for dose summation. Methods: The model assumed the breast was under planar stress with values of 30 kPa for Young’s modulus and 0.3 for Poisson’s ratio. Dose distributions from round and skin-dose optimized applicators in cranial-caudal and medial-lateral compressions were deformed using 0.1 cm planar resolution. Dose distributions, skin doses, and dose-volume histograms were generated. Results were examined as a function of breast thickness, applicator size, target size, and offset distance from the center. Results: Over the range of examined thicknesses, target size increased several millimeters as compression thickness decreased. This trend increased with increasing offset distances. Applicator size minimally affected target coverage until the applicator was smaller than the compressed target. In all cases with an applicator larger than or equal to the compressed target size, > 90% of the target was covered by > 90% of the prescription dose. In all cases, dose coverage became less uniform as offset distance increased and average dose increased. This effect was more pronounced for smaller target-applicator combinations. Conclusions: The model exhibited skin dose trends that matched MC-generated benchmarking results and clinical measurements within 2% over a similar range of breast thicknesses and target sizes. The model provided quantitative insight on dosimetric treatment variables over

  5. The Conic Benchmark Format

    DEFF Research Database (Denmark)

    Friberg, Henrik A.

    This document constitutes the technical reference manual of the Conic Benchmark Format with file extension: .cbf or .CBF. It unifies linear, second-order cone (also known as conic quadratic) and semidefinite optimization with mixed-integer variables. The format has been designed with benchmark libraries in mind, and therefore focuses on compact and easily parsable representations. The problem structure is separated from the problem data, and the format moreover facilitates benchmarking of hot-start capability through sequences of changes.

  6. Risk Management with Benchmarking

    OpenAIRE

    Suleyman Basak; Alex Shapiro; Lucie Teplá

    2005-01-01

    Portfolio theory must address the fact that, in reality, portfolio managers are evaluated relative to a benchmark, and therefore adopt risk management practices to account for the benchmark performance. We capture this risk management consideration by allowing a prespecified shortfall from a target benchmark-linked return, consistent with growing interest in such practice. In a dynamic setting, we demonstrate how a risk-averse portfolio manager optimally under- or overperforms a target benchm...

  7. A method of dose reconstruction for moving targets compatible with dynamic treatments

    Energy Technology Data Exchange (ETDEWEB)

    Rugaard Poulsen, Per; Lykkegaard Schmidt, Mai; Keall, Paul; Schjodt Worm, Esben; Fledelius, Walther; Hoffmann, Lone [Department of Oncology, Aarhus University Hospital, Norrebrogade 44, 8000 Aarhus C, Institute of Clinical Medicine, Aarhus University, Brendstrupgaardsvej 100, 8200 Aarhus N (Denmark); Department of Physics and Astronomy, Aarhus University, Ny Munkegade 120, 8000 Aarhus C (Denmark); Radiation Physics Laboratory, Sydney Medical School, University of Sydney, NSW 2006 (Australia); Department of Oncology, Aarhus University Hospital, Norrebrogade 44, 8000 Aarhus C, Department of Medical Physics, Aarhus University Hospital, Norrebrogade 44, 8000 Aarhus C (Denmark); Department of Oncology, Aarhus University Hospital, Norrebrogade 44, 8000 Aarhus C (Denmark); Department of Medical Physics, Aarhus University Hospital, Norrebrogade 44, 8000 Aarhus C (Denmark)

    2012-10-15

    Purpose: To develop a method that allows a commercial treatment planning system (TPS) to perform accurate dose reconstruction for rigidly moving targets and to validate the method in phantom measurements for a range of treatments including intensity modulated radiation therapy (IMRT), volumetric arc therapy (VMAT), and dynamic multileaf collimator (DMLC) tracking. Methods: An in-house computer program was developed to manipulate Dicom treatment plans exported from a TPS (Eclipse, Varian Medical Systems) such that target motion during treatment delivery was incorporated into the plans. For each treatment, a motion-including plan was generated by dividing the intratreatment target motion into 1 mm position bins and constructing sub-beams that represented the parts of the treatment that were delivered while the target was located within each position bin. For each sub-beam, the target shift was modeled by a corresponding isocenter shift. The motion-including Dicom plans were reimported into the TPS, where dose calculation resulted in motion-including target dose distributions. For experimental validation of the dose reconstruction, a thorax phantom with a movable lung-equivalent rod with a tumor insert of solid water was first CT scanned. The tumor insert was delineated as a gross tumor volume (GTV), and a planning target volume (PTV) was formed by adding margins. A conformal plan, two IMRT plans (step-and-shoot and sliding window), and a VMAT plan were generated, giving minimum target doses of 95% (GTV) and 67% (PTV) of the prescription dose (3 Gy). Two conformal fields with MLC leaves perpendicular and parallel to the tumor motion, respectively, were generated for DMLC tracking. All treatment plans were delivered to the thorax phantom without tumor motion and with a sinusoidal tumor motion. The two conformal fields were delivered with and without portal image guided DMLC tracking based on an embedded gold marker. The target dose distribution was measured with a
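
    The binning step of the method can be sketched as follows: a sampled motion trace is divided into 1 mm position bins, and the fraction of delivery time spent in each bin becomes the weight of the corresponding isocenter-shifted sub-beam. The sinusoidal trace in this Python sketch is a toy stand-in for a measured trajectory.

        import numpy as np

        t = np.linspace(0.0, 120.0, 12_000)              # s, uniformly sampled delivery
        trace_mm = 6.0 * np.sin(2.0 * np.pi * t / 4.0)   # toy target motion trace

        edges = np.arange(np.floor(trace_mm.min()),
                          np.ceil(trace_mm.max()) + 1.0, 1.0)   # 1 mm position bins
        counts, _ = np.histogram(trace_mm, bins=edges)
        weights = counts / counts.sum()                  # per-bin sub-beam weights
        for lo, w in zip(edges[:-1], weights):
            if w > 0.0:
                print(f"bin [{lo:+.0f}, {lo + 1:+.0f}) mm: time fraction {w:.3f}")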

  8. Critical dose threshold for TL dose response non-linearity: Dependence on the method of analysis: It’s not only the data

    International Nuclear Information System (INIS)

    It is demonstrated that the method of data analysis, i.e., the method of the phenomenological/theoretical interpretation of dose response data, can greatly influence the estimation of the onset of deviation from dose response linearity of the high temperature thermoluminescence in LiF:Mg,Ti (TLD-100).

  9. Radiation Detection Computational Benchmark Scenarios

    Energy Technology Data Exchange (ETDEWEB)

    Shaver, Mark W.; Casella, Andrew M.; Wittman, Richard S.; McDonald, Ben S.

    2013-09-24

    Modeling forms an important component of radiation detection development, allowing for testing of new detector designs, evaluation of existing equipment against a wide variety of potential threat sources, and assessing operation performance of radiation detection systems. This can, however, result in large and complex scenarios which are time consuming to model. A variety of approaches to radiation transport modeling exist with complementary strengths and weaknesses for different problems. This variety of approaches, and the development of promising new tools (such as ORNL’s ADVANTG) which combine benefits of multiple approaches, illustrates the need for a means of evaluating or comparing different techniques for radiation detection problems. This report presents a set of 9 benchmark problems for comparing different types of radiation transport calculations, identifying appropriate tools for classes of problems, and testing and guiding the development of new methods. The benchmarks were drawn primarily from existing or previous calculations with a preference for scenarios which include experimental data, or otherwise have results with a high level of confidence, are non-sensitive, and represent problem sets of interest to NA-22. From a technical perspective, the benchmarks were chosen to span a range of difficulty and to include gamma transport, neutron transport, or both and represent different important physical processes and a range of sensitivity to angular or energy fidelity. Following benchmark identification, existing information about geometry, measurements, and previous calculations were assembled. Monte Carlo results (MCNP decks) were reviewed or created and re-run in order to attain accurate computational times and to verify agreement with experimental data, when present. Benchmark information was then conveyed to ORNL in order to guide testing and development of hybrid calculations. The results of those ADVANTG calculations were then sent to PNNL for

  10. A novel method for the evaluation of uncertainty in dose volume histogram computation

    CERN Document Server

    Cutanda-Henriquez, Francisco

    2007-01-01

    Dose volume histograms are a useful tool in state-of-the-art radiotherapy planning, and it is essential to be aware of their limitations. Dose distributions computed by treatment planning systems are affected by several sources of uncertainty such as algorithm limitations, measurement uncertainty in the data used to model the beam and residual differences between measured and computed dose, once the model is optimized. In order to take into account the effect of uncertainty, a probabilistic approach is proposed and a new kind of histogram, a dose-expected volume histogram, is introduced. The expected value of the volume in the region of interest receiving an absorbed dose equal or greater than a certain value is found using the probability distribution of the dose at each point. A rectangular probability distribution is assumed for this point dose, and a relationship is given for practical computations. This method is applied to a set of dose volume histograms for different regions of interest for 6 brain pat...

  11. Aeroelastic Benchmark Experiments Project

    Data.gov (United States)

    National Aeronautics and Space Administration — M4 Engineering proposes to conduct canonical aeroelastic benchmark experiments. These experiments will augment existing sources for aeroelastic data in the...

  12. MCNP neutron benchmarks

    International Nuclear Information System (INIS)

    Over 50 neutron benchmark calculations have recently been completed as part of an ongoing program to validate the MCNP Monte Carlo radiation transport code. The new and significant aspects of this work are as follows: These calculations are the first attempt at a validation program for MCNP and the first official benchmarking of version 4 of the code. We believe the chosen set of benchmarks is a comprehensive set that may be useful for benchmarking other radiation transport codes and data libraries. These calculations provide insight into how well neutron transport calculations can be expected to model a wide variety of problems

  13. How Activists Use Benchmarks

    DEFF Research Database (Denmark)

    Seabrooke, Leonard; Wigan, Duncan

    2015-01-01

    Non-governmental organisations use benchmarks as a form of symbolic violence to place political pressure on firms, states, and international organisations. The development of benchmarks requires three elements: (1) salience, that the community of concern is aware of the issue and views it as important; (2) will, that activists and issue entrepreneurs will carry the message forward; and (3) expertise, that benchmarks created can be defended as accurate representations of what is happening on the issue of concern. We contrast two types of benchmarking cycles where salience, will, and expertise...

  14. Dose computation in conformal radiation therapy including geometric uncertainties: Methods and clinical implications

    Science.gov (United States)

    Rosu, Mihaela

    The aim of any radiotherapy is to tailor the tumoricidal radiation dose to the target volume and to deliver as little radiation dose as possible to all other normal tissues. However, the motion and deformation induced in human tissue by ventilatory motion is a major issue, as standard practice usually uses only one computed tomography (CT) scan (and hence one instance of the patient's anatomy) for treatment planning. The interfraction movement that occurs due to physiological processes over time scales shorter than the delivery of one treatment fraction leads to differences between the planned and delivered dose distributions. Due to the influence of these differences on tumors and normal tissues, the tumor control probabilities and normal tissue complication probabilities are likely to be impacted upon in the face of organ motion. In this thesis we apply several methods to compute dose distributions that include the effects of the treatment geometric uncertainties by using the time-varying anatomical information as an alternative to the conventional Planning Target Volume (PTV) approach. The proposed methods depend on the model used to describe the patient's anatomy. The dose and fluence convolution approaches for rigid organ motion are discussed first, with application to liver tumors and the rigid component of the lung tumor movements. For non-rigid behavior a dose reconstruction method that allows the accumulation of the dose to the deforming anatomy is introduced, and applied for lung tumor treatments. Furthermore, we apply the cumulative dose approach to investigate how much information regarding the deforming patient anatomy is needed at the time of treatment planning for tumors located in thorax. The results are evaluated from a clinical perspective. All dose calculations are performed using a Monte Carlo based algorithm to ensure more realistic and more accurate handling of tissue heterogeneities---of particular importance in lung cancer treatment planning.

  15. An Effective Approach for Benchmarking Implementation

    Directory of Open Access Journals (Sweden)

    B. M. Deros

    2011-01-01

    Problem statement: The purpose of this study is to present a benchmarking guideline, conceptual framework and computerized mini program to assist companies in achieving better performance in terms of quality, cost, delivery and supply chain, and eventually to increase their competitiveness in the market. The study begins with a literature review on benchmarking definitions, barriers and advantages of implementation, and benchmarking frameworks. Approach: Thirty respondents were involved in the case study. They comprised industrial practitioners who assessed the usability and practicability of the guideline, conceptual framework and computerized mini program. Results: A guideline and template were proposed to simplify the adoption of benchmarking techniques. A conceptual framework was proposed by integrating Deming’s PDCA and Six Sigma DMAIC theory. It provided a step-by-step method to simplify implementation and to optimize the benchmarking results. A computerized mini program was suggested to assist users in adopting the technique as part of an improvement project. In the assessment test, the respondents found that the implementation method provided an idea for a company to initiate benchmarking and guided them toward the desired goal as set in a benchmarking project. Conclusion: The results obtained and discussed in this study can be applied to implementing benchmarking in a more systematic way and to ensuring its success.

  16. Benchmarking of the Symbolic Machine Learning classifier with state of the art image classification methods - application to remote sensing imagery

    OpenAIRE

    PESARESI Martino; SYRRIS VASILEIOS; JULEA ANDREEA MARIA

    2015-01-01

    A new method for satellite data classification is presented. The method is based on symbolic machine learning (SML) techniques and is designed for working in complex and information-abundant environments, where it is important to assess relationships between different data layers in model-free and computationally effective modalities. In particular, the method is tailored for operating in earth observation data scenarios connoted by the following characteristics: i) they are made by a large nu...

  17. Implementation and benchmark of a long-range corrected functional in the density functional based tight-binding method

    OpenAIRE

    Lutsker, Vitalij; Aradi, Balint; Niehaus, Thomas A.

    2015-01-01

    Bridging the gap between first principles methods and empirical schemes, the density functional based tight-binding method (DFTB) has become a versatile tool in predictive atomistic simulations over the past years. One of the major restrictions of this method is the limitation to local or gradient corrected exchange-correlation functionals. This excludes the important class of hybrid or long-range corrected functionals, which are advantageous in thermochemistry, as well as in the computation ...

  18. ARN Training Course on Advance Methods for Internal Dose Assessment: Application of Ideas Guidelines

    International Nuclear Information System (INIS)

    Dose assessment in the case of internal exposure involves the estimation of committed effective dose based on the interpretation of bioassay measurements and on assumptions about the characteristics of the radioactive material, the time pattern, and the pathway of intake. The IDEAS Guidelines provide a method to harmonize dose evaluations using criteria and flow-chart procedures to be followed step by step. The EURADOS Working Group 7 'Internal Dosimetry', in collaboration with the IAEA and the Czech Technical University (CTU) in Prague, promoted the 'EURADOS/IAEA Regional Training Course on Advanced Methods for Internal Dose Assessment: Application of IDEAS Guidelines', which took place in Prague (Czech Republic) from 2-6 February 2009, to broaden and encourage the use of the IDEAS Guidelines. The ARN recognized the relevance of this training and asked for a place in this activity. After that, the first training course in Argentina took place from 24-28 August, to train local internal dosimetry experts. (authors)

  19. A Retrospective Dosimetry Method for Occupational Dose for Chinese Medical Diagnostic X-Ray Workers

    International Nuclear Information System (INIS)

    In order to provide reasonable and reliable dose information for the cohort study of Chinese medical diagnostic X-ray workers, a retrospective dosimetry method was established. Based on the principal characteristics of the occupational exposure of the workers, a mathematical model was developed, the relative coefficients of the model were determined, and the model was computerised. For dose estimation by this model, a sampling survey of the occupational history of the workers was conducted, and a data bank on occupational history was established. Using the data bank and the model, dose analysis was conducted. Some of the main results are reported here. (author)

  20. Intracavitary after loading techniques, advantages and disadvantages with high and low dose-rate methods

    International Nuclear Information System (INIS)

    Even though suggested as early as 1903, it was only when suitable sealed gamma sources became available that afterloading methods could be developed for interstitial as well as intracavitary work. The manual afterloading technique can be used only for low dose rate irradiation, while remote controlled afterloading techniques can be used for both low and high dose rate irradiation. The afterloading units used at the Karolinska Institute, Stockholm, are described, and experience of their use is narrated briefly. (M.G.B.)

  1. A method to acquire CT organ dose map using OSL dosimeters and ATOM anthropomorphic phantoms

    OpenAIRE

    Zhang, Da; Li, Xinhua; Gao, Yiming; Xu, X. George; Liu, Bob

    2013-01-01

    Purpose: To present the design and procedure of an experimental method for acquiring a densely sampled organ dose map for CT applications, based on optically stimulated luminescence (OSL) dosimeters (“nanoDots”) and standard ATOM anthropomorphic phantoms; and to provide the results of applying the method: a dose data set with good statistics for future comparison with Monte Carlo simulation results.

  2. Application of accelerated evaluation method of alteration temperature and constant dose rate irradiation on bipolar linear regulator LM317

    International Nuclear Information System (INIS)

    Using different irradiation methods, including high dose rate irradiation, low dose rate irradiation, alteration temperature and constant dose rate irradiation, and the US military standard constant high temperature and constant dose rate irradiation, the ionizing radiation responses of the bipolar linear regulator LM317 from three different companies were investigated under operating and zero biases. The results show that, compared with the constant high temperature and constant dose rate irradiation method, the alteration temperature and constant dose rate irradiation method can not only evaluate the dose rate effect of the three bipolar linear regulators rapidly and accurately, but can also simulate the damage of low dose rate irradiation well. The experimental results show that the alteration temperature and constant dose rate irradiation method can be successfully applied to bipolar linear regulators. (authors)

  3. Application of the dose rate spectroscopy to the dose-to-curie conversion method using a NaI(Tl) detector

    International Nuclear Information System (INIS)

    Dose rate spectroscopy is a very useful method for directly calculating individual dose rates from an energy spectrum converted to dose rate using the G-factor, which is related to the response function of the detector used. A dose-to-curie (DTC) conversion method for estimating radioactivity from the dose rate measured from radioactive materials can then be reduced to a simple equation using dose rate spectroscopy. For the validation of the modified DTC conversion method, experimental verifications using a 3″φx3″ NaI(Tl) detector were conducted both in the simple geometry of a point source located on the detector and in more complex geometries representing the assay of a simulated radioactive material. In addition, the linearity of the results from the modified DTC conversion method was estimated by increasing the distance between the source positions and the detector, to confirm the validity of the method over the energy, dose rate, and distance ranges of the gamma nuclides. - Highlights: • A modified DTC conversion method using dose rate spectroscopy was established. • In-situ calibration factors were calculated from MCNP simulation. • Radioactivities of the disk sources were accurately calculated using the modified DTC conversion method. • The modified DTC conversion method was applied to the assay of radioactive material
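
    The chain described above reduces to two multiplications, as in the Python sketch below: peak count rates are folded with energy-dependent G-factors to give a dose rate, and a geometry-specific calibration factor converts that dose rate to an activity. All numerical values are illustrative placeholders, not calibration data from the study.

        import numpy as np

        energies_kev = np.array([200.0, 662.0, 1332.0])
        counts_per_s = np.array([150.0, 480.0, 60.0])    # net count rates (toy spectrum)
        g_factor = np.array([2.0e-8, 9.0e-8, 2.1e-7])    # (uSv/h) per (count/s), illustrative

        dose_rate = np.sum(counts_per_s * g_factor)      # spectrum -> dose rate, uSv/h
        dcf = 3.2e-7    # (uSv/h) per Bq for this geometry (illustrative calibration)
        activity_bq = dose_rate / dcf
        print(f"dose rate = {dose_rate:.3e} uSv/h, activity ~ {activity_bq:.3e} Bq")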

  4. A simplified method to estimate effective dose against external photon irradiation

    International Nuclear Information System (INIS)

    A simplified method to evaluate approximate organ doses and effective doses for external photon irradiation is proposed. The method uses an empirical expression as a function of organ depth, defined as the distance between the center of an organ and the body surface facing a plane source of external photons. Age-dependent effective doses were calculated by using the expression with an age-specific effective depth, a weighted sum of organ depths. It was found that the effective depth at each age, normalized to the depth in adults, was proportional to the cube root of the body weight. Approximate effective doses for adults were compared with the effective doses calculated by a Monte Carlo method according to the new ICRP recommendations published in 1991. They agreed within 20%, except for lateral geometry. This expression as a function of effective depth is considered to provide useful information on dose variation with age and on applications to individual monitoring. (author)
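
    The reported cube-root scaling lends itself to a brief Python illustration: the age-specific effective depth is the adult value scaled by the cube root of the body-weight ratio. The adult effective depth chosen here is an illustrative assumption, not a value from the paper.

        ADULT_WEIGHT_KG = 73.0
        ADULT_EFFECTIVE_DEPTH_CM = 9.0   # illustrative adult effective depth

        def effective_depth(body_weight_kg):
            # Effective depth scales with the cube root of body weight.
            return ADULT_EFFECTIVE_DEPTH_CM * (body_weight_kg / ADULT_WEIGHT_KG) ** (1.0 / 3.0)

        for age, weight in [("newborn", 3.5), ("1 y", 10.0), ("10 y", 32.0), ("adult", 73.0)]:
            print(f"{age:>7}: effective depth ~ {effective_depth(weight):.1f} cm")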

  5. Optimization in radiotherapy treatment planning thanks to a fast dose calculation method

    International Nuclear Information System (INIS)

    This thesis deals with radiotherapy treatment planning, which needs a fast and reliable treatment planning system (TPS). The TPS is composed of a dose calculation algorithm and an optimization method. The objective is to design a plan that delivers the dose to the tumor while preserving the surrounding healthy and sensitive tissues. Treatment planning aims to determine the radiation parameters best suited to each patient's treatment. In this thesis, the parameters of treatment with IMRT (intensity modulated radiation therapy) are the beam angles and the beam intensities. The objective function is multi-criteria with linear constraints. The main objective of this thesis is to demonstrate the feasibility of a treatment planning optimization method based on a fast dose-calculation technique developed by Blanpain (2009). This technique computes the dose by segmenting the patient's phantom into homogeneous meshes. The dose computation is divided into two steps. The first step concerns the meshes: projections and weights are set according to physical and geometrical criteria. The second step concerns the voxels: the dose is computed by evaluating the functions previously associated with their mesh. A reformulation of this technique makes it possible to solve the optimization problem by a gradient descent algorithm. The main advantage of this method is that the beam angle parameters can be optimized continuously in 3 dimensions. The results obtained in this thesis offer many opportunities in the field of radiotherapy treatment planning optimization. (author)
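
    A gradient descent step of the kind the thesis relies on can be sketched in Python with a quadratic objective: minimize ||Dx - d||^2 over nonnegative beamlet intensities x, where D maps intensities to voxel doses. The random matrix below is a toy stand-in for the fast mesh-based dose engine, not the thesis' actual dose model.

        import numpy as np

        rng = np.random.default_rng(1)
        D = rng.random((200, 20))        # 200 voxels x 20 beamlets (toy dose model)
        d_target = np.full(200, 2.0)     # Gy, uniform toy prescription

        x = np.ones(20)                  # initial beamlet intensities
        step = 1.0 / np.linalg.norm(D, 2) ** 2
        for _ in range(500):
            grad = D.T @ (D @ x - d_target)        # gradient of the quadratic objective
            x = np.maximum(x - step * grad, 0.0)   # descent step + projection onto x >= 0

        print(f"residual dose error: {np.linalg.norm(D @ x - d_target):.3f}")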

  6. Radiation dose to children in diagnostic radiology. Measurements and methods for clinical optimisation studies

    Energy Technology Data Exchange (ETDEWEB)

    Almen, A.J.

    1995-09-01

    A method for estimating mean absorbed dose to different organs and tissues was developed for paediatric patients undergoing X-ray investigations. The absorbed dose distribution in water was measured for the specific X-ray beam used. Clinical images were studied to determine X-ray beam positions and field sizes. Size and position of organs in the patient were estimated using ORNL phantoms and complementary clinical information. Conversion factors between the mean absorbed dose to various organs and entrance surface dose for five different body sizes were calculated. Direct measurements on patients estimating entrance surface dose and energy imparted for common X-ray investigations were performed. The examination technique for a number of paediatric X-ray investigations used in 19 Swedish hospitals was studied. For a simulated pelvis investigation of a 1-year old child the entrance surface dose was measured and image quality was estimated using a contrast-detail phantom. Mean absorbed doses to organs and tissues in urography, lung, pelvis, thoracic spine, lumbar spine and scoliosis investigations was calculated. Calculations of effective dose were supplemented with risk calculations for special organs, e.g. the female breast. The work shows that the examination technique in paediatric radiology is not yet optimised, and that the non-optimised procedures contribute to a considerable variation in radiation dose. In order to optimise paediatric radiology there is a need for more standardised methods in patient dosimetry. It is especially important to relate measured quantities to the size of the patient, using e.g. the patient weight and length. 91 refs, 17 figs, 8 tabs.

  7. Radiation dose to children in diagnostic radiology. Measurements and methods for clinical optimisation studies

    International Nuclear Information System (INIS)

    A method for estimating mean absorbed dose to different organs and tissues was developed for paediatric patients undergoing X-ray investigations. The absorbed dose distribution in water was measured for the specific X-ray beam used. Clinical images were studied to determine X-ray beam positions and field sizes. Size and position of organs in the patient were estimated using ORNL phantoms and complementary clinical information. Conversion factors between the mean absorbed dose to various organs and entrance surface dose for five different body sizes were calculated. Direct measurements on patients estimating entrance surface dose and energy imparted for common X-ray investigations were performed. The examination technique for a number of paediatric X-ray investigations used in 19 Swedish hospitals was studied. For a simulated pelvis investigation of a 1-year old child the entrance surface dose was measured and image quality was estimated using a contrast-detail phantom. Mean absorbed doses to organs and tissues in urography, lung, pelvis, thoracic spine, lumbar spine and scoliosis investigations was calculated. Calculations of effective dose were supplemented with risk calculations for special organs, e.g. the female breast. The work shows that the examination technique in paediatric radiology is not yet optimised, and that the non-optimised procedures contribute to a considerable variation in radiation dose. In order to optimise paediatric radiology there is a need for more standardised methods in patient dosimetry. It is especially important to relate measured quantities to the size of the patient, using e.g. the patient weight and length. 91 refs, 17 figs, 8 tabs

  8. Method for pulse to pulse dose reproducibility applied to electron linear accelerators

    International Nuclear Information System (INIS)

    An original method for obtaining programmed single beam shots and pulse trains with programmed pulse number, pulse repetition frequency, pulse duration and pulse dose is presented. It is particularly useful for automatic control of the absorbed dose rate level and of the irradiation process, as well as in pulse radiolysis studies, single-pulse dose measurement, or research experiments where pulse-to-pulse dose reproducibility is required. The method is applied to the electron linear accelerators ALIN-10 (6.23 MeV, 82 W) and ALID-7 (5.5 MeV, 670 W), built at NILPRP. To implement the method, the accelerator triggering system (ATS) consists of two branches: the gun branch and the magnetron branch. The ATS, which synchronizes all the system units, delivers trigger pulses at a programmed repetition rate (up to 250 pulses/s) to the gun (80 kV, 10 A, 4 ms) and the magnetron (45 kV, 100 A, 4 ms). An accelerated electron beam exists only when the electron gun and magnetron pulses overlap. The method consists of controlling this overlap so as to deliver the beam in the desired sequence; the control is implemented by a discrete pulse-position modulation of the gun and/or magnetron pulses. The instabilities of the gun and magnetron transient regimes are avoided by operating the accelerator with no accelerated beam for a certain time. At the operator 'beam start' command, the ATS brings the electron gun and magnetron pulses into overlap and the linac beam is generated. The pulse-to-pulse absorbed dose variation is thus considerably reduced. A programmed absorbed dose, irradiation time, beam pulse number or other external events may interrupt the coincidence between the gun and magnetron pulses. Slow absorbed dose variation is compensated by control of the pulse duration and repetition frequency. Two methods are reported in the development of electron linear accelerators for obtaining pulse-to-pulse dose reproducibility: the method
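
    A minimal sketch of the overlap-gating logic (hypothetical timing values; the real ATS implements this in hardware):

      # Sketch of discrete pulse-position modulation: the gun fires on every
      # trigger, and the magnetron trigger is shifted away whenever no beam
      # pulse is wanted, so the gun and magnetron pulses no longer overlap.
      PULSE = 4.0      # pulse width (illustrative units)
      OFFSET = 10.0    # modulation shift large enough to destroy overlap

      def beam_pulses(n_triggers, deliver):
          delivered = []
          for i in range(n_triggers):
              gun_start = 0.0                                # relative to trigger i
              mag_start = 0.0 if deliver(i) else OFFSET      # shifted -> no beam
              overlap = min(gun_start, mag_start) + PULSE - max(gun_start, mag_start)
              delivered.append(overlap > 0.0)
          return delivered

      # Programmed train: bursts of 5 beam pulses separated by 5 empty triggers.
      train = beam_pulses(20, deliver=lambda i: (i // 5) % 2 == 0)
      print(sum(train), "beam pulses out of", len(train), "triggers")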

  9. Comparing the accuracy of high-dimensional neural network potentials and the systematic molecular fragmentation method: A benchmark study for all-trans alkanes.

    Science.gov (United States)

    Gastegger, Michael; Kauffmann, Clemens; Behler, Jörg; Marquetand, Philipp

    2016-05-21

    Many approaches, which have been developed to express the potential energy of large systems, exploit the locality of the atomic interactions. A prominent example is the class of fragmentation methods, in which quantum chemical calculations are carried out for overlapping small fragments of a given molecule and then combined in a second step to yield the system's total energy. Here we compare the accuracy of the systematic molecular fragmentation approach with the performance of high-dimensional neural network (HDNN) potentials introduced by Behler and Parrinello. HDNN potentials are similar in spirit to the fragmentation approach in that the total energy is constructed as a sum of environment-dependent atomic energies, which are derived indirectly from electronic structure calculations. As a benchmark set, we use all-trans alkanes containing up to eleven carbon atoms at the coupled cluster level of theory. These molecules have been chosen because they allow reliable reference energies to be extrapolated for very long chains, enabling an assessment of the energies obtained by both methods for alkanes including up to 10 000 carbon atoms. We find that both methods predict high-quality energies, with the HDNN potentials yielding smaller errors with respect to the coupled cluster reference. PMID:27208939
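
    The shared atomic-energy decomposition can be sketched in a few lines of Python; the descriptor, network weights and sizes below are illustrative placeholders, not a fitted potential:

      import numpy as np

      rng = np.random.default_rng(0)

      def descriptor(positions, i, cutoff=6.0):
          # Toy radial symmetry functions: Gaussians of the distances from
          # atom i to its neighbours within the cutoff.
          d = np.linalg.norm(positions - positions[i], axis=1)
          d = d[(d > 1e-9) & (d < cutoff)]
          centers = np.linspace(1.0, cutoff, 8)
          return np.exp(-(d[:, None] - centers) ** 2).sum(axis=0)

      W1, b1 = rng.normal(size=(8, 16)), np.zeros(16)   # untrained weights
      W2, b2 = rng.normal(size=(16, 1)), np.zeros(1)

      def atomic_energy(g):
          return (np.tanh(g @ W1 + b1) @ W2 + b2).item()

      def total_energy(positions):
          # Behler-Parrinello ansatz: total energy as a sum of atomic energies.
          return sum(atomic_energy(descriptor(positions, i))
                     for i in range(len(positions)))

      print(total_energy(rng.uniform(0.0, 5.0, size=(12, 3))))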

  11. Development of fluorescent, oscillometric and photometric methods to determine absorbed dose in irradiated fruits and nuts

    International Nuclear Information System (INIS)

    To ensure suitable quality control in food irradiation and for quarantine authorities, simple routine dosimetry methods are needed for absorbed dose control. Taking into account the requirements at quarantine locations, these methods should allow nondestructive analysis for repeated measurements. Different dosimetry systems with different analytical evaluation methods have been tested and/or developed for absorbed dose measurements in the dose range of 0.1-10 kGy. In order to use the well-accepted ethanol-monochlorobenzene dosimeter solution and the recently developed aqueous alanine solution in small-volume sealed vials, a new portable, digital, and programmable oscillometric reader was developed. To exploit the very sensitive fluorimetric evaluation method, liquid and solid inorganic and organic dosimetry systems were developed for dose control using a new routine, portable, and computer-controlled fluorimeter. Absorption or transmission photometric methods were also applied for dose measurements of solid- or liquid-phase dosimeter systems containing radiochromic dyes, which change colour upon irradiation. (author)

  12. Computational benchmark problem for deep penetration in iron

    International Nuclear Information System (INIS)

    A calculational benchmark problem which is simple to model and easy to interpret is described. The benchmark consists of monoenergetic 2-, 4-, or 40-MeV neutrons normally incident upon a 3-m-thick pure iron slab. Currents, fluxes, and radiation doses are tabulated throughout the slab

  13. Regression Benchmarking: An Approach to Quality Assurance in Performance

    OpenAIRE

    Bulej, Lubomír

    2005-01-01

    The paper presents a short summary of our work in the area of regression benchmarking and its application to software development. Specifically, we explain the concept of regression benchmarking, the requirements for employing regression testing in a software project, and methods used for analyzing the vast amounts of data resulting from repeated benchmarking. We present the application of regression benchmarking on a real software project and conclude with a glimpse at the challenges for the fu...

  14. Benchmarking for major producers of limestone in the Czech Republic

    OpenAIRE

    Vaněk, Michal; Mikoláš, Milan; Bora, Petr

    2013-01-01

    The validity of information available to managers influences the quality of the decision-making processes controlled by those managers. Benchmarking is a method which can yield quality information. The importance of benchmarking is strengthened by the fact that many authors consider benchmarking to be an integral part of strategic management. In commercial practice, benchmarking data and conclusions usually become commercial secrets for internal use only. The wider professional public lacks t...

  15. Benchmarking of corporate social responsibility: Methodological problems and robustness.

    OpenAIRE

    Graafland, J.J.; Eijffinger, S.C.W.; Smid, H.

    2004-01-01

    This paper investigates the possibilities and problems of benchmarking Corporate Social Responsibility (CSR). After a methodological analysis of the advantages and problems of benchmarking, we develop a benchmark method that includes economic, social and environmental aspects as well as national and international aspects of CSR. The overall benchmark is based on a weighted average of these aspects. The weights are based on the opinions of companies and NGOs. Using different me...

  16. SU-E-T-91: Correction Method to Determine Surface Dose for OSL Detectors

    International Nuclear Information System (INIS)

    Purpose: OSL detectors are commonly used in the clinic for verification of doses beyond dmax, owing to their numerous advantages, such as a linear response and negligible energy, angle and temperature dependence in the clinical range. However, because of their bulky shielding envelope, this type of detector fails to measure skin dose, which is an important indicator of the patient's ability to complete the treatment on time and of the possibility of acute side effects. This study aims to optimize the methodology for determining skin dose for conventional accelerators and a flattening-filter-free Tomotherapy unit. Methods: Measurements were made for two x-ray beams: 6 MV (Varian Clinac 2300, 10×10 cm2 open field, SSD = 100 cm) and 5.5 MV (Tomotherapy, 15×40 cm2 field, SAD = 85 cm). The detectors were placed at the surface of a solid water phantom and at the reference depth (dref = 1.7 cm for the Varian 2300, dref = 1.0 cm for Tomotherapy). The OSL measurements were related to measurements with externally exposed OSLs, and were further corrected to surface dose using an extrapolation method indexed to baseline Attix ion chamber measurements. A consistent use of the extrapolation method involved: 1) irradiation of three OSLs stacked on top of each other on the surface of the phantom; 2) measurement of the relative dose value for each layer; and 3) extrapolation of these values to zero thickness. Results: OSL measurements overestimated surface doses by a factor of 2.31 for the Varian 2300 and 2.65 for Tomotherapy. The relationships SD2300 = 0.68 × M2300 − 12.7 and SDTOMO = 0.73 × MTOMO − 13.1 were found to correct single-OSL measurements to surface doses in agreement with the Attix measurements to within 0.1% for both machines. Conclusion: This work provides simple empirical relationships for surface dose measurements using single OSL detectors
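
    The two numerical steps, zero-thickness extrapolation and the linear single-OSL correction, can be sketched as follows (the stacked readings are invented; the 0.68 and −12.7 coefficients are the reported Varian 2300 relationship):

      import numpy as np

      # (1) Extrapolate the three stacked-OSL readings to zero detector thickness.
      layer = np.array([1.0, 2.0, 3.0])            # stacking position of each OSL
      reading = np.array([230.0, 246.0, 262.0])    # hypothetical relative doses
      slope, intercept = np.polyfit(layer, reading, 1)
      print(f"zero-thickness (surface) estimate: {intercept:.1f}")

      # (2) Reported single-OSL correction for the 6 MV Varian 2300:
      #     SD = 0.68 * M - 12.7, with M the raw surface OSL reading.
      M = 230.0
      print(f"corrected surface dose: {0.68 * M - 12.7:.1f}")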

  17. A method for converting dose-to-medium to dose-to-tissue in Monte Carlo studies of gold nanoparticle-enhanced radiotherapy

    Science.gov (United States)

    Koger, B.; Kirkby, C.

    2016-03-01

    Gold nanoparticles (GNPs) have shown potential in recent years as a means of therapeutic dose enhancement in radiation therapy. However, a major challenge in moving towards clinical implementation is the exact characterisation of the dose enhancement they provide. Monte Carlo studies attempt to explore this property, but they often face computational limitations when examining macroscopic scenarios. In this study, a method of converting dose from macroscopic simulations, where the medium is defined as a mixture containing both gold and tissue components, to a mean dose-to-tissue on a microscopic scale was established. Monte Carlo simulations were run for both explicitly-modeled GNPs in tissue and a homogeneous mixture of tissue and gold. A dose ratio was obtained for the conversion of dose scored in a mixture medium to dose-to-tissue in each case. Dose ratios varied from 0.69 to 1.04 for photon sources and 0.97 to 1.03 for electron sources. The dose ratio is highly dependent on the source energy as well as GNP diameter and concentration, though this effect is less pronounced for electron sources. By appropriately weighting the monoenergetic dose ratios obtained, the dose ratio for any arbitrary spectrum can be determined. This allows complex scenarios to be modeled accurately without explicitly simulating each individual GNP.
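
    A sketch of the spectrum-weighting step; weighting by spectral fluence times the mixture dose is an assumption here, and all numbers are illustrative rather than the paper's data:

      import numpy as np

      energies   = np.array([0.05, 0.1, 0.5, 1.0, 6.0])      # MeV
      dose_ratio = np.array([0.69, 0.80, 0.98, 1.00, 1.04])  # tissue/mixture per energy
      fluence    = np.array([0.10, 0.30, 0.35, 0.20, 0.05])  # relative spectral fluence
      dose_mix   = np.array([1.0, 0.9, 0.8, 0.8, 1.1])       # mixture dose per unit fluence

      # Spectrum-averaged dose ratio as a weighted mean of monoenergetic ratios.
      w = fluence * dose_mix
      spectrum_ratio = np.sum(w * dose_ratio) / np.sum(w)
      print(f"spectrum-averaged dose ratio: {spectrum_ratio:.3f}")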

  18. Benchmarking of the vocational education programmes

    DEFF Research Database (Denmark)

    Bogetoft, Peter; Wittrup, Jesper

    In this working paper we discuss how the Danish vocational schools can be benchmarked, and we present the results of a number of calculation models. Benchmarking the vocational schools is conceptually complicated. The schools offer a wide range of different programmes. This makes it difficult to...

  19. Thermal Performance Benchmarking (Presentation)

    Energy Technology Data Exchange (ETDEWEB)

    Moreno, G.

    2014-11-01

    This project will benchmark the thermal characteristics of automotive power electronics and electric motor thermal management systems. Recent vehicle systems will be benchmarked to establish baseline metrics, evaluate advantages and disadvantages of different thermal management systems, and identify areas of improvement to advance the state-of-the-art.

  20. Internet based benchmarking

    DEFF Research Database (Denmark)

    Bogetoft, Peter; Nielsen, Kurt

    2005-01-01

    We discuss the design of interactive, internet-based benchmarking using parametric (statistical) as well as nonparametric (DEA) models. The user receives benchmarks and improvement potentials. The user is also given the possibility to search different efficiency frontiers and thereby to explore...

  1. The role of benchmarking for yardstick competition

    International Nuclear Information System (INIS)

    With the increasing interest in yardstick regulation, there is a need to understand the most appropriate method for realigning tariffs at the outset. Benchmarking is the tool used for such realignment and is therefore a necessary first-step in the implementation of yardstick competition. A number of concerns have been raised about the application of benchmarking, making some practitioners reluctant to move towards yardstick based regimes. We assess five of the key concerns often discussed and find that, in general, these are not as great as perceived. The assessment is based on economic principles and experiences with applying benchmarking to regulated sectors, e.g. in the electricity and water industries in the UK, The Netherlands, Austria and Germany in recent years. The aim is to demonstrate that clarity on the role of benchmarking reduces the concern about its application in different regulatory regimes. We find that benchmarking can be used in regulatory settlements, although the range of possible benchmarking approaches that are appropriate will be small for any individual regulatory question. Benchmarking is feasible as total cost measures and environmental factors are better defined in practice than is commonly appreciated and collusion is unlikely to occur in environments with more than 2 or 3 firms (where shareholders have a role in monitoring and rewarding performance). Furthermore, any concern about companies under-recovering costs is a matter to be determined through the regulatory settlement and does not affect the case for using benchmarking as part of that settlement. (author)

  2. Research of photon beam dose deposition kernel based on Monte Carlo method

    International Nuclear Information System (INIS)

    The Monte Carlo program BEAMnrc was used to simulate the 6 MV photon beam of a Siemens accelerator, and the BEAMdp program was used to analyse the energy spectrum distribution and mean energy from the phase-space data of different field sizes. Beam sources (an energy-spectrum source and a mono-energy source) were then built, and the DOSXYZnrc program was used to calculate the dose deposition kernels at dmax in a standard water phantom for the different beam sources, which were then compared. The results show that the dose difference using the energy-spectrum source is small (maximum percentage dose discrepancy 1.47%), but large using the mono-energy source (6.28%). The maximum dose difference between the kernels derived from the energy-spectrum source and the mono-energy source of the same field is larger than 9%, up to 13.2%. Thus, dose deposition depends on photon energy, and using only a mono-energy source can lead to larger errors because of the spectral distribution of the accelerator beam. A more accurate method is to use the deposition kernel of the energy-spectrum source. (authors)

  3. Benchmarking monthly homogenization algorithms

    Directory of Open Access Journals (Sweden)

    V. K. C. Venema

    2011-08-01

    The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random break-type inhomogeneities were added to the simulated datasets, modeled as a Poisson process with normally distributed breakpoint sizes. To approximate real-world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing-data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added.

    Participants provided 25 separate homogenized contributions as part of the blind study, as well as 22 additional solutions submitted after the details of the imposed inhomogeneities were revealed. These homogenized datasets were assessed by a number of performance metrics, including (i) the centered root mean square error relative to the true homogeneous values at various averaging scales, (ii) the error in linear trend estimates, and (iii) traditional contingency skill scores. The metrics were computed using both the individual station series and the network-average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve
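
    Two of these metrics are easy to state in code; the series below are synthetic stand-ins for a truth/contribution pair:

      import numpy as np

      rng = np.random.default_rng(1)
      t = np.arange(120)                               # months
      truth = 0.01 * t + rng.normal(0, 0.5, t.size)    # true homogeneous series
      homog = truth + rng.normal(0, 0.2, t.size)       # a contribution's output

      def centered_rmse(x, ref):
          # RMSE after removing each series' mean (metric (i) above).
          return np.sqrt(np.mean(((x - x.mean()) - (ref - ref.mean())) ** 2))

      def trend(x):
          return np.polyfit(t, x, 1)[0]                # slope per month

      print(f"centered RMSE: {centered_rmse(homog, truth):.3f}")
      print(f"trend error:   {trend(homog) - trend(truth):+.5f} per month")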

  4. SSI and structural benchmarks

    International Nuclear Information System (INIS)

    This paper presents the latest results of the ongoing program entitled Standard Problems for Structural Computer Codes, currently being worked on at BNL for the USNRC, Office of Nuclear Regulatory Research. During FY 1986, efforts were focussed on three tasks, namely, (1) an investigation of ground water effects on the response of Category I structures, (2) the Soil-Structure Interaction Workshop and (3) studies on structural benchmarks associated with Category I structures. The objective of the studies on ground water effects is to verify the applicability and the limitations of the SSI methods currently used by the industry in performing seismic evaluations of nuclear plants located at sites with high water tables. In a previous study by BNL (NUREG/CR-4588), it was concluded that pore water can significantly influence the soil-structure interaction process. This result, however, is based on the assumption of fully saturated soil profiles. Consequently, the work was extended to include cases with variable water table depths. In this paper, results related to cut-off depths beyond which the pore water effects can be ignored in seismic calculations are addressed. Comprehensive numerical data are given for soil configurations typical of those encountered at nuclear plant sites. These data were generated using a modified version of the SLAM code, which is capable of handling problems related to the dynamic response of saturated soils. Further, the paper presents some key aspects of the Soil-Structure Interaction Workshop (NUREG/CP-0054), which was held in Bethesda, MD on June 1, 1986. Finally, recent efforts related to the task on structural benchmarks are described

  5. Using the Monte Carlo method for assessing the tissue and organ doses of patients in dental radiography

    Science.gov (United States)

    Makarevich, K. O.; Minenko, V. F.; Verenich, K. A.; Kuten, S. A.

    2016-05-01

    This work is dedicated to modeling dental radiographic examinations to assess the absorbed doses of patients and effective doses. X-ray spectra are simulated with the TASMIP empirical model. Doses are assessed on the basis of the Monte Carlo method, using the MCNP code with the ICRP voxel phantoms. Results of the assessment of doses to individual organs and effective doses for different types of dental examinations and X-ray tube characteristics are presented.

  6. Radioactivity in food and the environment: calculations of UK radiation doses using integrated assessment methods

    International Nuclear Information System (INIS)

    A new method for estimating radiation doses to UK critical groups is proposed for discussion. Amongst others, the Food Standards Agency (FSA) and the Scottish Environment Protection Agency (SEPA) undertake surveillance of UK food and the environment as a check on the effect of discharges of radioactive wastes. Discharges in gaseous and liquid form are made under authorisation by the Environment Agency and SEPA under powers in the Radioactive Substances Act. Results of surveillance by the FSA and SEPA are published in the Radioactivity in Food and the Environment (RIFE) report series. In these reports, doses to critical groups are normally estimated separately for gaseous and liquid discharge pathways. Simple summation of these doses would tend to overestimate the doses actually received. Three different methods of combining the effects of both types of discharge in an integrated assessment are considered and ranked according to their ease of application, transparency, scientific rigour and presentational issues. A single integrated assessment method is then chosen for further study. Doses are calculated from surveillance data for the calendar year 2000 and compared with those from the existing RIFE method

  7. Fluoxetine Dose and Administration Method Differentially Affect Hippocampal Plasticity in Adult Female Rats

    Directory of Open Access Journals (Sweden)

    Jodi L. Pawluski

    2014-01-01

    Selective serotonin reuptake inhibitor (SSRI) medications are one of the most common treatments for mood disorders. In humans, these medications are taken orally, usually once per day. Unfortunately, administration of antidepressant medications in rodent models is often through injection, oral gavage, or minipump implant, all relatively stressful procedures. The aim of the present study was to investigate how administration of the commonly used SSRI, fluoxetine, via a wafer cookie compares to fluoxetine administration using an osmotic minipump, with regard to serum drug levels and hippocampal plasticity. For this experiment, adult female Sprague-Dawley rats were divided over the two administration methods, (1) cookie and (2) osmotic minipump, and three fluoxetine treatment doses: 0, 5, or 10 mg/kg/day. Results show that a fluoxetine dose of 5 mg/kg/day, but not 10 mg/kg/day, results in comparable serum levels of fluoxetine and its active metabolite norfluoxetine between the two administration methods. Furthermore, minipump administration of fluoxetine resulted in higher levels of cell proliferation in the granule cell layer (GCL) at the 5 mg dose compared to the 10 mg dose. Synaptophysin expression in the GCL, but not CA3, was significantly lower after fluoxetine treatment, regardless of administration method. These data suggest that the administration method and dose of fluoxetine can differentially affect hippocampal plasticity in the adult female rat.

  8. Target volume uncertainty and a method to visualize its effect on the target dose prescription

    International Nuclear Information System (INIS)

    Purpose: To consider the uncertainty in the construction of target boundaries for optimization, and to demonstrate how the principles of mathematical programming can be applied to determine and display the effect on the tumor dose of making small changes to the target boundary. Methods: The effect on the achievable target dose of making successive small shifts to the target boundary within its range of uncertainty was found by constructing a mixed-integer linear program that automated the placement of the beam angles using the initial target volume. Results: The method was demonstrated using contours taken from a nasopharynx case, with dose limits placed on surrounding structures. In the illustrated case, enlarging the target anteriorly to provide greater assurance of disease coverage did not force a sacrifice in the minimum or mean tumor doses. However, enlarging the margin posteriorly, near a critical structure, dramatically changed the minimum, mean, and maximum tumor doses. Conclusion: Tradeoffs between the position of the target boundary and the minimum target dose can be developed using mixed-integer programming, and the results projected as a guide to contouring and plan selection
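
    A toy linear-programming version of the idea, with random placeholder dose-influence matrices (the study's actual model is a mixed-integer program that also automates beam-angle placement, which this sketch omits):

      import numpy as np
      from scipy.optimize import linprog

      rng = np.random.default_rng(2)
      n_beams = 5
      A_tumor = rng.uniform(0.5, 1.0, (8, n_beams))    # dose to tumor voxels per unit weight
      A_oar   = rng.uniform(0.05, 0.4, (4, n_beams))   # dose to critical-structure voxels

      # Variables x = [w_1..w_n, t]; maximize the minimum tumor dose t,
      # i.e. minimize -t, subject to t <= (A_tumor w) and (A_oar w) <= 45 Gy.
      c = np.r_[np.zeros(n_beams), -1.0]
      A_ub = np.block([[-A_tumor, np.ones((8, 1))],    # t - A_tumor w <= 0
                       [A_oar,    np.zeros((4, 1))]])  # A_oar w <= 45
      b_ub = np.r_[np.zeros(8), np.full(4, 45.0)]
      res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n_beams + 1))
      print("achievable minimum tumor dose:", round(res.x[-1], 2), "Gy")

    Re-solving after shifting the target boundary (i.e. adding or removing rows of A_tumor) shows how the achievable minimum, mean, and maximum tumor doses trade off against boundary position, which is the effect the study visualizes.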

  9. Verification of lung dose in an anthropomorphic phantom calculated by the collapsed cone convolution method

    International Nuclear Information System (INIS)

    Verification of calculated lung dose in an anthropomorphic phantom was performed using two dosimetry media. Dosimetry is complicated by factors such as variations in density at slice interfaces and the appropriate position on the CT scanning slice to accommodate these factors. Dose in lung for 6 MV and 10 MV anterior-posterior fields was calculated with a collapsed cone convolution method using the ADAC Pinnacle 3D planning system. Variations of up to 5% between doses calculated at the centre and near the edge of the 2 cm phantom slice positioned at the beam central axis were seen, due to the composition of each phantom slice. Validation of dose was performed with LiF thermoluminescent dosimeters (TLDs) and X-Omat V radiographic film. Both dosimetry media produced dose results which agreed closely with calculated results nearest their physical positioning in the phantom. The collapsed cone convolution method accurately calculates dose within inhomogeneous lung regions at 6 MV and 10 MV x-ray energies. (author)

  10. Monte Carlo methods for direct calculation of 3D dose distributions for photon fields in radiotherapy

    International Nuclear Information System (INIS)

    Even with state-of-the-art treatment planning systems, the photon dose calculation can be erroneous under certain circumstances. In these cases Monte Carlo methods promise higher accuracy. We have used the photon transport code CHILD of the GSF-Forschungszentrum, which was developed to calculate dose in diagnostic radiation protection matters. The code was refined for application in radiotherapy with high-energy photon irradiation and should serve for dose verification in individual cases. The irradiation phantom can be entered as any desired 3D matrix or be generated automatically from an individual CT database. The particle transport takes into account pair production, the photoelectric effect, and the Compton effect, with certain approximations. Efficiency is increased by the method of 'fractional photons'. The generated secondary electrons are followed using the unscattered continuous-slowing-down approximation (CSDA). The developed Monte Carlo code, Monaco Matrix, was tested on simple homogeneous and heterogeneous phantoms through comparisons with simulations of the well-known but slower EGS4 code. The use of a point source with a direction-independent energy spectrum, as the simplest model of the radiation field from the accelerator head, is shown to be sufficient for simulation of actual accelerator depth dose curves. Good agreement (<2%) was found for depth dose curves in water and in bone. With complex test phantoms, comparisons with EGS4-calculated dose profiles revealed some drawbacks in the code. Thus, the implementation of electron multiple scattering should lead us to step-by-step improvement of the algorithm. (orig.)
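
    A toy illustration of slab photon transport, reduced to exponential free paths and an absorb-or-scatter choice; real codes such as CHILD sample pair production, photoelectric and Compton interactions from cross-section tables, and the parameters here are invented:

      import numpy as np

      rng = np.random.default_rng(3)
      mu, p_absorb, thickness = 0.07, 0.35, 30.0   # 1/cm, -, cm (placeholders)

      def transmit_one():
          x, u = 0.0, 1.0                    # depth and direction cosine
          while True:
              x += u * rng.exponential(1.0 / mu)       # sample next interaction
              if x >= thickness: return True           # transmitted
              if x < 0.0:        return False          # backscattered out
              if rng.random() < p_absorb: return False # absorbed
              u = rng.uniform(-1.0, 1.0)               # crude isotropic re-direction

      n = 20000
      print("transmitted fraction:", sum(transmit_one() for _ in range(n)) / n)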

  11. Induction method of pulse gamma-radiation exposure dose rate measurement

    Energy Technology Data Exchange (ETDEWEB)

    Buber, V.B.; Stepanov, V.M.

    1984-01-01

    An induction method for measuring the exposure dose rate of pulsed γ-radiation is presented. The measurements are made with charge detectors of the induction type. The γ-radiation dose rate is given as a function of the Compton current excited by the irradiation. The induction method permits measurement of pulsed γ-radiation exposure dose rates up to 10^7 A/kg, with a time resolution of 10^-8 s and an error not exceeding 20%. The method is only slightly affected by external conditions; it permits measurement of both local and integral radiation characteristics, and its application causes only insignificant distortion of the γ-flux being measured.

  12. Methods to verify absorbed dose of irradiated containers and evaluation of dosimeters

    International Nuclear Information System (INIS)

    Research on the dose distribution in irradiated food containers and an evaluation of several methods to verify absorbed dose were carried out. The minimum absorbed dose for five treated orange containers occurred at the top of the highest or the bottom of the lowest container. Dmax/Dmin in this study was 1.45 for irradiation in a commercial 60Co facility. The density of the orange containers was about 0.391 g/cm3. The evaluation of dosimeters showed that the PMMA-YL and clear PMMA dosimeters have a linear dose response, and that the word NOT in the STERIN-125 and STERIN-300 indicators was covered completely at doses of 125 and 300 Gy, respectively. (author)

  13. Calculational methods for estimating skin dose from electrons in Co-60 gamma-ray beams

    International Nuclear Information System (INIS)

    Several methods have been employed to calculate the relative contribution to skin dose due to scattered electrons in Co-60 γ-ray beams. Either the Klein-Nishina differential scattering probability is employed to determine the number and initial energy of electrons scattered into the direction of a detector, or a Gaussian approximation is used to specify the surface distribution of initial pencil electron beams created by parallel or diverging photon fields. Results of these calculations are compared with experimental data. In addition, that fraction of relative surface dose resulting from photon interactions in air alone is estimated and compared with data extrapolated from measurements at large source-surface distance (SSD). The contribution to surface dose from electrons generated in air is 50% or more of the total skin dose for SSDs greater than 80 cm
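
    The Klein-Nishina differential cross section itself is compact enough to state directly; only the example energy and angle below are arbitrary:

      import numpy as np

      R_E = 2.8179403262e-13          # classical electron radius, cm

      def klein_nishina(theta, k):
          """d(sigma)/d(Omega) in cm^2/sr for photon energy k in units of m_e c^2."""
          ratio = 1.0 / (1.0 + k * (1.0 - np.cos(theta)))   # E'/E
          return 0.5 * R_E**2 * ratio**2 * (ratio + 1.0 / ratio - np.sin(theta) ** 2)

      # Average Co-60 photon (~1.25 MeV, so k = 1.25/0.511) scattered at 30 degrees:
      print(klein_nishina(np.radians(30.0), 1.25 / 0.511))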

  14. Calibration and intercomparison methods of dose calibrators used in nuclear medicine facilities

    International Nuclear Information System (INIS)

    Dose calibrators are used in most nuclear medicine facilities to determine the amount of radioactivity administered to a patient in a particular investigation or therapeutic procedure. It is therefore of vital importance that the equipment used presents good performance and is regularly calibrated at an authorized laboratory. This is achieved if adequate quality assurance procedures are carried out. Some quality control tests should be performed daily, others biannually or yearly, testing, for example, accuracy and precision, reproducibility and response linearity. In this work a commercial dose calibrator was calibrated with solutions of radionuclides used in nuclear medicine. Simple instrument tests, such as response linearity and the variation of response with increasing source volume at constant activity concentration, were performed. This instrument can now be used as a working standard for the calibration of other dose calibrators. An intercomparison procedure was proposed as a method of quality control for dose calibrators used in nuclear medicine facilities. (author)

  15. Environmental dose rate assessment of ITER using the Monte Carlo method

    Directory of Open Access Journals (Sweden)

    Karimian Alireza

    2014-01-01

    Exposure to radiation is one of the main sources of risk to staff employed in reactor facilities. The staff of a tokamak is exposed to a wide range of neutrons and photons around the tokamak hall. The International Thermonuclear Experimental Reactor (ITER) is a nuclear fusion engineering project and the most advanced experimental tokamak in the world. From the radiobiological point of view, ITER dose rate assessment is particularly important. The aim of this study is the assessment of the amount of radiation in ITER during its normal operation, in a radial direction from the plasma chamber to the tokamak hall. To achieve this goal, the ITER system and its components were simulated by the Monte Carlo method using the MCNPX 2.6.0 code. Furthermore, the equivalent dose rates of some radiosensitive organs of the human body were calculated using the Medical Internal Radiation Dose (MIRD) phantom. Our study is based on deuterium-tritium plasma burning, with 14.1 MeV neutron production, and also photon radiation due to neutron activation. As our results show, the total equivalent dose rate outside the bioshield wall of the tokamak hall is about 1 mSv per year, which is less than the annual occupational dose rate limit during normal operation of ITER. The equivalent dose rates of radiosensitive organs show that the maximum dose rate belongs to the kidney. The data may help calculate how long staff can stay in such an environment before the equivalent dose rates reach the whole-body dose limits.

  16. Rapid radiological characterization method based on the use of dose coefficients

    International Nuclear Information System (INIS)

    Intervention actions in case of radiological emergencies and exploratory radiological surveys require rapid methods for evaluating the range and extent of contamination. When a simple and homogeneous radionuclide composition characterizes the radioactive contamination, surrogate measurements can be used to reduce the costs implied by laboratory analyses and to speed up decision support. A dose-rate-measurement-based methodology can be used in conjunction with adequate dose coefficients to assess radionuclide inventories and to calculate dose projections for various intervention scenarios. The paper presents the results obtained for dose coefficients in some particular exposure geometries and the methodology used for deriving dose rate guidelines from the activity concentration upper levels specified as contamination limits. All calculations were performed using the commercial software MicroShield from Grove Software Inc. A test case was selected to meet the conditions from EPA Federal Guidance Report No. 12 (FGR12) concerning the evaluation of dose coefficients for external exposure from contaminated soil, and the obtained results were compared with the values given in that document. The geometries considered as test cases are: contaminated ground surface (infinite extended homogeneous surface contamination) and soil contaminated to a depth of 15 cm. As shown by the results, the values agree within a 50% relative difference for most of the cases. The greatest discrepancies were observed for the depth-contamination simulation and for radionuclides with complicated gamma emission; this is due to the different approaches of MicroShield and FGR12. A case study is presented for validation of the methodology, where both dose rate measurements and laboratory analyses were performed on an extended quasi-homogeneous NORM contamination. The dose rate estimations obtained by applying the dose coefficients to the radionuclide concentrations
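
    The dose-coefficient bookkeeping can be sketched as follows; the coefficients, concentrations and limits are illustrative placeholders, not FGR12 or MicroShield values:

      # Predicted ambient dose rate from activity concentrations, and a dose-rate
      # guideline derived from the concentration limits via the same coefficients.
      dose_coeff = {"Cs-137": 1.6e-3, "Co-60": 8.7e-3}   # (uSv/h) per (Bq/g), assumed
      conc       = {"Cs-137": 120.0,  "Co-60": 15.0}     # measured Bq/g
      limit      = {"Cs-137": 100.0,  "Co-60": 10.0}     # contamination limits, Bq/g

      predicted = sum(dose_coeff[n] * conc[n] for n in conc)
      guideline = sum(dose_coeff[n] * limit[n] for n in limit)
      print(f"predicted dose rate: {predicted:.3f} uSv/h")
      print(f"dose-rate guideline: {guideline:.3f} uSv/h")
      print("exceeds guideline" if predicted > guideline else "below guideline")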

  17. In-situ gamma spectroscopy; An alternative method to evaluate external effective radiation dose

    International Nuclear Information System (INIS)

    Two types of approaches are possible to estimate radiation doses from environmental radiation: (1) measure radiation fields in the place of interest and presume that people are exposed to the same field; (2) actual measurement on the individual members of the population studied, by the use of thermoluminescent dosimeters (TLD). The latter approach, though difficult, is ideal. The objective of the present study was to investigate the possibility of using the first approach, with in-situ gamma spectrometry as an alternative method to evaluate the external effective dose. The results obtained in this way provide a means of evaluating both approaches. Six houses were selected for this study from an area where an average radiation dose of 5.0 μSv per hour was measured using a hand-held survey meter. At all study sites both TLD and in-situ measurements with a portable HPGe detector were carried out. The detector was calibrated for field measurements, and the activity concentrations of the radionuclides identified in the gamma spectra were calculated. The calculated detector efficiency values for field measurements at 1461, 1764, and 2615 keV were 2.40, 2.03 and 1.44, respectively. The external effective dose was calculated using the corresponding kerma rates for the analysed radionuclides. Evaluations of the effective dose by the two approaches are reasonably correlated (r^2 = 0.87) for dose measurements between 2.0 and 6.0 mSv. In-situ measurements gave higher values than the TL readings because in-situ data are more representative of the surroundings. This study suggests that in-situ gamma spectrometry permits rapid and efficient identification and quantification of gamma-emitting radionuclides on surface and subsurface soil and can be used as an alternative rapid method to determine population doses from environmental radiation, particularly in an event such as radioactive contamination. TL measurements provide only an integrated dose and would require an extended time period

  18. Finite Element Method Modeling of Sensible Heat Thermal Energy Storage with Innovative Concretes and Comparative Analysis with Literature Benchmarks

    OpenAIRE

    Claudio Ferone; Francesco Colangelo; Domenico Frattini; Giuseppina Roviello; Raffaele Cioffi; Rosa di Maggio

    2014-01-01

    Efficient systems for high performance buildings are required to improve the integration of renewable energy sources and to reduce primary energy consumption from fossil fuels. This paper is focused on sensible heat thermal energy storage (SHTES) systems using solid media and numerical simulation of their transient behavior using the finite element method (FEM). Unlike other papers in the literature, the numerical model and simulation approach has simultaneously taken into consideration vario...

  19. Benchmarking the Solution Accuracy of 3-Dimensional Transport Codes and Methods Over a Range in Parameter Space

    International Nuclear Information System (INIS)

    Under the auspices of the NEA Nuclear Science Committee (NSC), the Working Party on Scientific Issues of Reactor Systems (WPRS) has been established to study the reactor physics, fuel performance, radiation transport and shielding, and the uncertainties associated with modelling of these phenomena in present and future nuclear power systems. The WPRS has different expert groups to cover a wide range of scientific issues in these fields. The Expert Group on Radiation Transport and Shielding (EGRTS) was created in 2011 to perform specific tasks associated with radiation transport and shielding aspects of present and future nuclear systems and accelerator-based irradiation facilities. The EGRTS provides expert advice to the WPRS and the nuclear/accelerator communities on the development needs (data and methods, models and codes, validation experiments) for various nuclear and accelerator systems and scenarios, and also provides specific technical information regarding: 3D radiation transport codes and methods; pressure vessel surveillance; shielding and dosimetry aspects of accelerator, target and irradiation facilities; and neutron activation and shielding. This report aims to compare the results of advanced three-dimensional transport methods and codes to high-quality reference solutions

  20. Low dose dynamic CT myocardial perfusion imaging using a statistical iterative reconstruction method

    Energy Technology Data Exchange (ETDEWEB)

    Tao, Yinghua [Department of Medical Physics, University of Wisconsin-Madison, Madison, Wisconsin 53705 (United States); Chen, Guang-Hong [Department of Medical Physics and Department of Radiology, University of Wisconsin-Madison, Madison, Wisconsin 53705 (United States); Hacker, Timothy A.; Raval, Amish N. [Department of Medicine, University of Wisconsin-Madison, Madison, Wisconsin 53792 (United States); Van Lysel, Michael S.; Speidel, Michael A., E-mail: speidel@wisc.edu [Department of Medical Physics and Department of Medicine, University of Wisconsin-Madison, Madison, Wisconsin 53705 (United States)

    2014-07-15

    Purpose: Dynamic CT myocardial perfusion imaging has the potential to provide both functional and anatomical information regarding coronary artery stenosis. However, radiation dose can be potentially high due to repeated scanning of the same region. The purpose of this study is to investigate the use of statistical iterative reconstruction to improve parametric maps of myocardial perfusion derived from a low tube current dynamic CT acquisition. Methods: Four pigs underwent high (500 mA) and low (25 mA) dose dynamic CT myocardial perfusion scans with and without coronary occlusion. To delineate the affected myocardial territory, an N-13 ammonia PET perfusion scan was performed for each animal in each occlusion state. Filtered backprojection (FBP) reconstruction was first applied to all CT data sets. Then, a statistical iterative reconstruction (SIR) method was applied to data sets acquired at low dose. Image voxel noise was matched between the low dose SIR and high dose FBP reconstructions. CT perfusion maps were compared among the low dose FBP, low dose SIR and high dose FBP reconstructions. Numerical simulations of a dynamic CT scan at high and low dose (20:1 ratio) were performed to quantitatively evaluate SIR and FBP performance in terms of flow map accuracy, precision, dose efficiency, and spatial resolution. Results: For in vivo studies, the 500 mA FBP maps gave −88.4%, −96.0%, −76.7%, and −65.8% flow change in the occluded anterior region compared to the open-coronary scans (four animals). The percent changes in the 25 mA SIR maps were in good agreement, measuring −94.7%, −81.6%, −84.0%, and −72.2%. The 25 mA FBP maps gave unreliable flow measurements due to streaks caused by photon starvation (percent changes of +137.4%, +71.0%, −11.8%, and −3.5%). Agreement between 25 mA SIR and 500 mA FBP global flow was −9.7%, 8.8%, −3.1%, and 26.4%. The average variability of flow measurements in a nonoccluded region was 16.3%, 24.1%, and 937

  1. Revisiting the TORT Solutions to the NEA Suite of Benchmarks for 3D Transport Methods and Codes Over a Range in Parameter Space

    International Nuclear Information System (INIS)

    Improved TORT solutions to the suite of benchmarks for 3D transport methods and codes are presented in this study. Preliminary TORT solutions to this benchmark indicated that the majority of benchmark quantities for most benchmark cases were computed with good accuracy, and that accuracy improved with model refinement. However, TORT failed to compute accurate results for some benchmark cases with aspect ratios drastically different from 1, possibly due to ray effects. In this work, we employ the standard approach of splitting the solution to the transport equation into an uncollided flux and a fully collided flux via the code sequence GRTUNCL3D and TORT to mitigate ray effects. The results of this code sequence presented in this paper show that the accuracy of most benchmark cases improved substantially. Furthermore, the iterative convergence problems reported for the preliminary TORT solutions have been resolved by bringing the computational cells' aspect ratios closer to unity and, more importantly, by using 64-bit arithmetic precision in the calculation sequence. Results of this study are also reported

  2. Simple Evaluation Method of Atmospheric Plasma Irradiation Dose using pH of Water

    Science.gov (United States)

    Koga, Kazunori; Sarinont, Thapanut; Amano, Takaaki; Seo, Hyunwoong; Itagaki, Naho; Nakatsu, Yoshimichi; Tanaka, Akiyo; Shiratani, Masaharu

    2015-09-01

    Atmospheric discharge plasmas are promising for agricultural productivity improvements and novel medical therapies, because plasma provides a high flux of short-lifetime reactive species at low temperature, leading to low damage to the living body. For plasma-bio applications, various kinds of plasma systems are employed, so common evaluation methods are needed to compare plasma irradiation dose quantitatively among the systems. Here we offer a simple evaluation method of plasma irradiation dose using the pH of water. Experiments were carried out with a scalable DBD device. 300 μl of deionized water was placed in a quartz 96-microwell plate 3 mm below the electrode. The pH value was measured immediately after 10 minutes of irradiation and evaluated as a function of plasma irradiation dose. Atmospheric air plasma irradiation decreases the pH of water with increasing dose. We also measured concentrations of chemical species such as nitrites, nitrates and H2O2. The results indicate that our method is promising for evaluating plasma irradiation dose quantitatively.
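
    One way to turn such data into a dose meter is to fit a calibration curve of pH against known dose and invert it for an unknown sample; the calibration points and curve shape below are assumptions, since the record only states that pH decreases with dose:

      import numpy as np

      dose = np.array([0.0, 2.0, 4.0, 8.0, 16.0])    # known irradiation doses (assumed units)
      ph   = np.array([6.8, 5.9, 5.2, 4.6, 4.1])     # measured pH (invented)

      # Assume pH ~ a*log(1 + dose) + b and fit the two coefficients.
      a, b = np.polyfit(np.log1p(dose), ph, 1)

      # Invert the fitted curve for an unknown sample.
      unknown_ph = 5.0
      est_dose = np.expm1((unknown_ph - b) / a)
      print(f"estimated dose for pH {unknown_ph}: {est_dose:.2f}")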

  3. Verification of the method of average angular response for dose measurement on different detectors

    International Nuclear Information System (INIS)

    At present most radiation dose meters have serious problems with energy response and angular response. In order to improve the accuracy of dose measurements, a method of average angular response has been proposed. The method can correct not only the energy response but also the angular response. It has been verified on NaI(Tl) (50 mm × 50 mm) scintillation detectors, but not on other types and sizes of detectors. In this paper the method is also verified for LaBr3(Ce) scintillation detectors and an HPGe detector. To apply the method, the five detectors are first simulated with Geant4 and average angular response values are calculated. Experiments are then performed to obtain the count rates of the full-energy peak using standard point sources of 137Cs, 60Co and 152Eu, after which the dose values for the five detectors are calculated with the method of average angular response and compared with experimental results. These results are divided into two groups to analyze the impact of detector type and size. The results for the first group show that the method is appropriate for dose measurement with different types of detector, with deviations of less than 5% from theoretical values. Moreover, the better the detector's energy resolution and the more precisely the count rate of the full-energy peak is determined, the more precisely the dose can be obtained. At the same time, the results for the second group illustrate that the method is also suited to detectors of different sizes, with deviations of less than 8% from theoretical values
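
    The reconstruction step implied above can be sketched as follows; the angle-averaged responses and fluence-to-dose coefficients are invented, and the actual averaging procedure is defined in the paper:

      # Convert full-energy-peak count rates to fluence rates with an
      # angle-averaged response, then fold with fluence-to-dose coefficients.
      peaks = {
          # energy keV: (count rate 1/s, angle-averaged response cm^2,
          #              fluence-to-dose coefficient uSv*cm^2)
          662.0:  (150.0, 9.5, 3.2e-6),
          1332.0: (60.0,  6.1, 5.7e-6),
      }

      dose_rate = 0.0
      for energy, (rate, response, f2d) in peaks.items():
          fluence_rate = rate / response          # photons / (cm^2 s)
          dose_rate += fluence_rate * f2d * 3600  # uSv/h
      print(f"reconstructed dose rate: {dose_rate:.3f} uSv/h")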

  4. Evaluation of Deformable Image Registration Methods for Dose Monitoring in Head and Neck Radiotherapy

    Directory of Open Access Journals (Sweden)

    Bastien Rigaud

    2015-01-01

    In the context of head and neck cancer (HNC) adaptive radiation therapy (ART), the two purposes of the study were to compare the performance of multiple deformable image registration (DIR) methods and to quantify their impact on dose accumulation in healthy structures. Fifteen HNC patients had a planning computed tomography (CT0) and weekly CTs during the 7 weeks of intensity-modulated radiation therapy (IMRT). Ten DIR approaches using different registration methods (demons or B-spline free-form deformation (FFD)), preprocessing, and similarity metrics were tested. Two observers identified 14 landmarks (LM) on each CT scan to compute the LM registration error. The cumulated doses estimated by each method were compared. The two most effective DIR methods were the demons and the FFD, both with the mutual information (MI) metric and the filtered CTs. The corresponding LM registration accuracy (precision) was 2.44 mm (1.30 mm) and 2.54 mm (1.33 mm), respectively. The corresponding LM estimated cumulated dose accuracy (dose precision) was 0.85 Gy (0.93 Gy) and 0.88 Gy (0.95 Gy), respectively. The mean uncertainty (difference between maximal and minimal dose considering all 10 methods) in estimating the cumulated mean dose to the parotid gland (PG) was 4.03 Gy (SD = 2.27 Gy; range: 1.06-8.91 Gy).
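
    The landmark-based error metric can be sketched as follows; coordinates, the true deformation and the DIR's prediction are random placeholders:

      import numpy as np

      rng = np.random.default_rng(4)
      lm_plan = rng.uniform(0, 200, (14, 3))         # 14 landmarks on CT0, mm
      true_shift = np.array([2.0, -1.0, 0.5])        # stand-in anatomical change
      lm_week = lm_plan + true_shift + rng.normal(0, 0.8, (14, 3))

      def dir_map(points):
          # Stand-in for a DIR-predicted mapping (here: a slightly wrong shift).
          return points + np.array([1.6, -0.7, 0.9])

      # LM registration error: distance between mapped and observed landmarks.
      errors = np.linalg.norm(dir_map(lm_plan) - lm_week, axis=1)
      print(f"LM error: {errors.mean():.2f} mm (SD {errors.std(ddof=1):.2f} mm)")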

  5. A benchmark study of the two-dimensional Hubbard model with auxiliary-field quantum Monte Carlo method

    CERN Document Server

    Qin, Mingpu; Zhang, Shiwei

    2016-01-01

    Ground state properties of the Hubbard model on a two-dimensional square lattice are studied by the auxiliary-field quantum Monte Carlo method. Accurate results for energy, double occupancy, effective hopping, magnetization, and momentum distribution are calculated for interaction strengths of U/t from 2 to 8, for a range of densities including half-filling and n = 0.3, 0.5, 0.6, 0.75, and 0.875. At half-filling, the results are numerically exact. Away from half-filling, the constrained path Monte Carlo method is employed to control the sign problem. Our results are obtained with several advances in the computational algorithm, which are described in detail. We discuss the advantages of generalized Hartree-Fock trial wave functions and its connection to pairing wave functions, as well as the interplay with different forms of Hubbard-Stratonovich decompositions. We study the use of different twist angle sets when applying the twist averaged boundary conditions. We propose the use of quasi-random sequences, whi...

  6. Vver-1000 Mox core computational benchmark

    International Nuclear Information System (INIS)

    The NEA Nuclear Science Committee has established an Expert Group that deals with the status and trends of reactor physics, fuel performance and fuel cycle issues related to disposing of weapons-grade plutonium in mixed-oxide fuel. The objectives of the group are to provide NEA member countries with up-to-date information on, and to develop consensus regarding, core and fuel cycle issues associated with burning weapons-grade plutonium in thermal water reactors (PWR, BWR, VVER-1000, CANDU) and fast reactors (BN-600). These issues concern core physics, fuel performance and reliability, and the capability and flexibility of thermal water reactors and fast reactors to dispose of weapons-grade plutonium in standard fuel cycles. The activities of the NEA Expert Group on Reactor-based Plutonium Disposition are carried out in close co-operation (jointly, in most cases) with the NEA Working Party on Scientific Issues in Reactor Systems (WPRS). Benchmark studies form a prominent part of these activities. At the time of preparation of this report, the following benchmarks were completed or in progress: VENUS-2 MOX Core Benchmarks: carried out jointly with the WPRS (formerly the WPPR) (completed); VVER-1000 LEU and MOX Benchmark (completed); KRITZ-2 Benchmarks: carried out jointly with the WPRS (formerly the WPPR) (completed); Hollow and Solid MOX Fuel Behaviour Benchmark (completed); PRIMO MOX Fuel Performance Benchmark (ongoing); VENUS-2 MOX-fuelled Reactor Dosimetry Calculation (ongoing); VVER-1000 In-core Self-powered Neutron Detector Calculational Benchmark (started); MOX Fuel Rod Behaviour in Fast Power Pulse Conditions (started); Benchmark on the VENUS Plutonium Recycling Experiments Configuration 7 (started). This report describes the detailed results of the benchmark investigating the physics of a whole VVER-1000 reactor core using two-thirds low-enriched uranium (LEU) and one-third MOX fuel. It contributes to the computer code certification process and to the

  7. A Blind Test Experiment in Volcano Geodesy: a Benchmark for Inverse Methods of Ground Deformation and Gravity Data

    Science.gov (United States)

    D'Auria, Luca; Fernandez, Jose; Puglisi, Giuseppe; Rivalta, Eleonora; Camacho, Antonio; Nikkhoo, Mehdi; Walter, Thomas

    2016-04-01

    The inversion of ground deformation and gravity data is affected by an intrinsic ambiguity because of the mathematical formulation of the inverse problem. Current methods for the inversion of geodetic data rely on both parametric (i.e. assuming a source geometry) and non-parametric approaches. The former are able to capture the fundamental features of the ground deformation source but, if the assumptions are wrong or oversimplified, they can provide misleading results. On the other hand, the latter class of methods, even if not relying on stringent assumptions, can suffer from artifacts, especially when dealing with poor datasets. In the framework of the EC-FP7 MED-SUV project we aim to compare different inverse approaches to verify how they cope with the basic goals of volcano geodesy: determining the source depth, the source shape (size and geometry), the nature of the source (magmatic/hydrothermal) and hinting at the complexity of the source. Other aspects that are important in volcano monitoring are: volume/mass transfer toward shallow depths, propagation of dikes/sills, and forecasting the opening of eruptive vents. On the basis of similar experiments already done in the fields of seismic tomography and geophysical imaging, we have devised a blind test experiment. Our group was divided into one model design team and several inversion teams. The model design team devised two physical models representing volcanic events at two distinct volcanoes (one stratovolcano and one caldera). They provided the inversion teams with: the topographic reliefs, the calculated deformation field (on a set of simulated GPS stations and as InSAR interferograms) and the gravity change (on a set of simulated campaign stations). The nature of the volcanic events remained unknown to the inversion teams until after the submission of the inversion results. Here we present the preliminary results of this comparison in order to determine which features of the ground deformation and gravity source

  8. Full CI benchmark calculations on N2, NO, and O2 - A comparison of methods for describing multiple bonds

    Science.gov (United States)

    Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.

    1987-01-01

    Full configuration interaction (CI) calculations on the ground states of N2, NO, and O2 using a DZP Gaussian basis are compared with single-reference SDCI and coupled pair functional (CPF) approaches, as well as with CASSCF multireference CI approaches. The CASSCF/MRCI technique is found to describe multiple bonds as well as it does single bonds. Although the coupled pair functional approach gave chemical accuracy (1 kcal/mol) for bonds involving hydrogen, larger errors occur in the CPF approach for the multiply bonded systems considered here. CI studies on the 1Σg+ state of N2, including all single, double, triple, and quadruple excitations, show that triple excitations are very important for the multiple-bond case and account for most of the deficiency in the coupled pair functional methods.

  9. A method to efficiently simulate absorbed dose in radio-sensitive instrumentation components

    International Nuclear Information System (INIS)

    Components installed in tunnels of high-power accelerators are prone to radiation-induced damage and malfunction. Such machines are usually modeled in detail and the radiation cascades are transported through the three-dimensional models in Monte Carlo codes. Very often those codes are used to compute energy deposition in beam components or radiation fields to the public and the environment. However, sensitive components such as electronic boards or insulated cables are less easily simulated, as their small size makes dose scoring a (statistically) inefficient process. Moreover, the process of deciding their location is iterative: to define where they can be safely installed, the dose must be computed, but to compute it the location must be known. This note presents a different approach to indirectly assess the potential absorbed dose of certain components installed within a given radiation field. The method consists of first finding the energy- and particle-dependent absorbed-dose-to-fluence response function, and then programming it into a radiation transport Monte Carlo code, so that fluences in vacuum/air can be automatically converted in real time into potential absorbed doses and then mapped in the same way as fluences or dose equivalent quantities
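
    The folding of a scored fluence spectrum with a response function can be sketched as follows; the response values and spectrum are placeholders, and in practice the response comes from a detailed model of the component itself:

      import numpy as np

      e_grid   = np.array([0.1, 0.5, 1.0, 5.0, 10.0])            # MeV
      response = np.array([2e-12, 6e-12, 1e-11, 3e-11, 5e-11])   # Gy*cm^2 per particle

      def dose_from_fluence(energies, fluences):
          # Interpolate the response log-log, then sum response * fluence.
          r = np.exp(np.interp(np.log(energies), np.log(e_grid), np.log(response)))
          return np.sum(r * fluences)

      e_scored = np.array([0.3, 2.0, 8.0])     # MeV bins of the scored spectrum
      phi      = np.array([1e9, 4e8, 5e7])     # particles/cm^2 in each bin
      print(f"potential absorbed dose: {dose_from_fluence(e_scored, phi):.3e} Gy")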

  10. Determination of gelation doses of gamma-irradiated hydrophilic polymers by different methods

    International Nuclear Information System (INIS)

    Poly(acrylic acid) and poly(vinyl pyrrolidone) are hydrophilic polymers. Poly(acrylic acid) is a polyelectrolyte which ionizes in water to produce an electrically conducting medium. In this study, it has been shown that the gelation dose of poly(acrylic acid) can be determined by conductimetric and titrimetric methods with NaOH and by measuring the pH of aqueous solutions of the γ-irradiated polymer. In order to develop new, simpler and more rapid methods for determining the gelation dose of PVP, its complexation with gallic acid in dilute aqueous solution has been used. The complex formation between gallic acid and irradiated PVP in aqueous solutions is followed by UV-vis spectroscopy. The reliability of the dose values found, 120 kGy for poly(acrylic acid) and 140 kGy for poly(vinyl pyrrolidone), is also verified by viscometric and solubility measurements. (author)

  11. Determination of gelation doses of gamma-irradiated hydrophilic polymers by different methods

    Energy Technology Data Exchange (ETDEWEB)

    Yigit, Fatma; Tekin, Niket; Erkan, Sevin; Gueven, Olgun (Hacettepe Univ., Ankara (Turkey). Dept. of Chemistry)

    1994-04-01

    Poly(acrylic acid) and poly(vinyl pyrrolidone) are hydrophilic polymers. Poly(acrylic acid) is a polyelectrolyte which ionizes in water to produce an electrically conducting medium. In this study, it has been shown that the gelation dose of poly(acrylic acid) can be determined by conductimetric and titrimetric methods with NaOH and by measuring the pH of aqueous solutions of the γ-irradiated polymer. In order to develop new, simpler and more rapid methods for determining the gelation dose of PVP, its complexation with gallic acid in dilute aqueous solution has been used. The complex formation between gallic acid and irradiated PVP in aqueous solutions is followed by UV-vis spectroscopy. The reliability of the dose values found, 120 kGy for poly(acrylic acid) and 140 kGy for poly(vinyl pyrrolidone), is also verified by viscometric and solubility measurements. (author).

  12. Methods used to estimate the collective dose in Denmark from diagnostic radiology

    International Nuclear Information System (INIS)

    According to EU directive 97/43/Euratom, all member states must estimate doses to the public from diagnostic radiology. In Denmark the National Institute of Radiation Hygiene (NIRH) is about to finish a project with the purpose of estimating the collective dose in Denmark from diagnostic radiology. In this paper methods, problems and preliminary results are presented. Patient doses were obtained from x-ray departments, dentists and chiropractors. Information about examination frequencies was collected from each of the Danish hospitals or counties; it was possible to collect information for nearly all of the hospitals. The measurements were done by means of dose area product (DAP) meters in x-ray departments, by thermoluminescent dosimetry at chiropractors and by solid-state detectors at dentists. Twenty hospitals, 3,200 patients and 23,000 radiographs were covered in this study. All data were stored in a database for quick retrieval. The DAP measurements were recorded automatically under the control of PC-based software; the recordings could later be analysed by means of specially designed software and transferred to the database. Data from the chiropractors were obtained by mail: NIRH sent each chiropractor TLDs and registration forms, and the chiropractor performed the measurements himself and afterwards returned the TLDs and forms. On the registration form the height, weight, age, etc. of the patient were noted, together with information about the applied high tension, current-time product and projection. Calculation of the effective dose from the DAP values and the entrance surface dose was done by Monte Carlo techniques. For each radiograph, two pictures of the mathematical phantom were generated to ensure that the x-ray field was properly placed. The program 'diagnostic dose', developed by NIRH, did the Monte Carlo calculations. (author)
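
    The final conversion from measured DAP to effective dose relies on examination-specific conversion coefficients derived from Monte Carlo phantom calculations. A toy version of that bookkeeping, with entirely hypothetical coefficients, DAP values and frequencies, might look like:

      # Hypothetical DAP-to-effective-dose conversion coefficients
      # (mSv per Gy*cm^2) per examination type; real ones come from
      # Monte Carlo phantom calculations.
      COEFF = {"chest PA": 0.18, "abdomen AP": 0.26, "pelvis AP": 0.29}

      def effective_dose_mSv(exam, dap_gycm2):
          return COEFF[exam] * dap_gycm2

      # Collective dose: sum over exam types of mean effective dose times
      # annual frequency. All numbers below are made up for illustration.
      frequencies = {"chest PA": 900_000, "abdomen AP": 120_000}  # exams/year
      mean_dap = {"chest PA": 0.12, "abdomen AP": 1.1}            # Gy*cm^2
      collective_manSv = sum(
          effective_dose_mSv(e, mean_dap[e]) * frequencies[e] / 1e3  # mSv -> Sv
          for e in frequencies
      )
      print(f"collective dose ~ {collective_manSv:.0f} man Sv")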

  13. Benchmarking expert system tools

    Science.gov (United States)

    Riley, Gary

    1988-01-01

    As part of its evaluation of new technologies, the Artificial Intelligence Section of the Mission Planning and Analysis Div. at NASA-Johnson has made timing tests of several expert system building tools. Among the production systems tested were Automated Reasoning Tool, several versions of OPS5, and CLIPS (C Language Integrated Production System), an expert system builder developed by the AI section. Also included in the test was a Zetalisp version of the benchmark, along with four versions of the benchmark written in Knowledge Engineering Environment, an object-oriented, frame-based expert system tool. The benchmarks used for testing are studied.

  14. Application of combined TLD and CR-39 PNTD method for measurement of total dose and dose equivalent on ISS

    International Nuclear Information System (INIS)

    To date, no single passive detector has been found that measures dose equivalent from ionizing radiation exposure in low-Earth orbit. We have developed the I.S.S. Passive Dosimetry System (P.D.S.), utilizing a combination of TLD in the form of the self-contained Pille TLD system and stacks of CR-39 plastic nuclear track detector (P.N.T.D.) oriented in three mutually orthogonal directions, to measure total dose and dose equivalent aboard the International Space Station (I.S.S.). The Pille TLD system, consisting of an on-board reader and a large number of CaSO4:Dy TLD cells, is used to measure absorbed dose. The Pille TLD cells are read out and annealed by the I.S.S. crew on orbit, such that dose information for any time period or condition, e.g. for E.V.A. or following a solar particle event, is immediately available. Near-tissue-equivalent CR-39 P.N.T.D. provides LET spectrum, dose, and dose equivalent from charged particles of LET∞,H2O ≥ 10 keV/μm, including the secondaries produced in interactions with high-energy neutrons. Dose information from CR-39 P.N.T.D. is used to correct the absorbed dose component ≥ 10 keV/μm measured in TLD to obtain total dose. Dose equivalent from CR-39 P.N.T.D. is combined with the dose component <10 keV/μm measured in TLD to obtain total dose equivalent. Dose rates ranging from 165 to 250 μGy/day and dose equivalent rates ranging from 340 to 450 μSv/day were measured aboard I.S.S. during the Expedition 2 mission in 2001. Results from the P.D.S. are consistent with those from other passive detectors tested as part of the ground-based I.C.C.H.I.B.A.N. intercomparison of space radiation dosimeters. (authors)
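
    The combination rule described in the record reduces to simple arithmetic once the TLD and CR-39 components are available; a sketch with made-up daily rates, not mission data:

      # Measured quantities (illustrative values, not Expedition 2 data)
      D_tld = 220.0         # muGy/day, Pille TLD reading, all LET
      D_tld_high = 20.0     # muGy/day, TLD's under-responding reading of the
                            # LET >= 10 keV/um component
      D_cr39_high = 30.0    # muGy/day, CR-39 dose for LET >= 10 keV/um
      H_cr39_high = 180.0   # muSv/day, CR-39 dose equivalent for LET >= 10

      # Total dose: replace the TLD's high-LET part with the CR-39 value
      D_total = D_tld - D_tld_high + D_cr39_high

      # Total dose equivalent: low-LET dose (quality factor ~1) plus CR-39 H
      H_total = (D_tld - D_tld_high) * 1.0 + H_cr39_high
      print(D_total, H_total)   # muGy/day, muSv/day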

  15. Determination of gelation doses of gamma-irradiated hydrophilic polymers by different methods

    Science.gov (United States)

    Yiǧit, Fatma; Tekin, Niket; Erkan, Sevin; Güven, Olgun

    1994-04-01

    Poly(acrylic acid) and poly(vinyl pyrrolidone) are hydrophilic polymers. Poly(acrylic acid) is a polyelectrolyte which ionizes in water to produce an electrically conducting medium. Therefore, the gelation dose of poly(acrylic acid) can be determined by conductometric titration, simple titration and the measurement of pH. The conventional techniques for determining gelation dose are very time- and material-consuming, especially for poly(acrylic acid), and are subject to serious errors due to its electrolytic behavior. In this study, it has been shown that the gelation dose of poly(acrylic acid) can be determined by conductimetric and titrimetric methods with NaOH and by measuring the pH of aqueous solutions of the γ-irradiated polymer. In order to develop new, simpler and more rapid methods for determining the gelation dose of PVP, its complexation with gallic acid in dilute aqueous solution has been used. The complex formation between gallic acid and irradiated PVP in aqueous solutions is followed by UV-vis spectroscopy. The reliability of the dose values found, 120 kGy for poly(acrylic acid) and 140 kGy for poly(vinyl pyrrolidone), is also verified by viscometric and solubility measurements.

  16. Dose calculation method with 60-cobalt gamma rays in total body irradiation

    CERN Document Server

    Scaff, L A M

    2001-01-01

    Physical factors associated with total body irradiation using 60Co gamma-ray beams were studied in order to develop a calculation method for the dose distribution that could be reproduced in any radiotherapy center with good precision. The method is based on considering total body irradiation as a large and irregular field with heterogeneities. To calculate doses, or dose rates, for each area of interest (head, thorax, thigh, etc.), the scattered radiation is determined. It was observed that if demagnified fields were considered to calculate the scattered radiation, the resulting values could be applied on a projection to the real size to obtain the values for dose rate calculations. In parallel, the variation of the dose rate in air was determined for the treatment distance and for points off the central axis, confirming that use of the inverse square law is not valid. An attenuation curve for a broad beam was also determined in order to allow the use of absorbers. In this wo...

  17. Blind method of clustering for the evaluation of the dose received by personnel in two methods of administration of radiopharmaceuticals

    International Nuclear Information System (INIS)

    The difficulty of injecting drugs labelled with radioactive isotopes while the syringe is inside its lead shield means that in many cases staff choose to use the syringe outside the lead shield, thereby increasing the radiation dose they receive. In our service we consider the possibility of using a different methodology: cannulating a vein with a catheter, which allows the drug to be administered, in all cases, with the syringe inside the lead shield. We will check whether significant differences can be seen both in the dose absorbed by the staff and in the time taken to administer the drug using the proposed method compared with injection without the shield. (Author)

  18. Restaurant Energy Use Benchmarking Guideline

    Energy Technology Data Exchange (ETDEWEB)

    Hedrick, R.; Smith, V.; Field, K.

    2011-07-01

    A significant operational challenge for food service operators is defining energy use benchmark metrics to compare against the performance of individual stores. Without metrics, multiunit operators and managers have difficulty identifying which stores in their portfolios require extra attention to bring their energy performance in line with expectations. This report presents a method whereby multiunit operators may use their own utility data to create suitable metrics for evaluating their operations.

  19. A continuous OSL scanning method for analysis of radiation depth-dose profiles in bricks

    DEFF Research Database (Denmark)

    Bøtter-Jensen, L.; Jungner, H.; Poolton, N.R.J.

    1995-01-01

    This article describes the development of a method for directly measuring radiation depth-dose profiles from brick, tile and porcelain cores, without the need for sample separation techniques. For the brick cores, examples are shown of the profiles generated by artificial irradiation using the...

  20. A New System For Recording The Radiological Effective Doses For Patients Investigated by Imaging Methods

    CERN Document Server

    Stanciu, Silviu

    2014-01-01

    In this paper the design of an integrated system for the radiation safety and security of patients investigated by radiological imaging methods is presented. The new system is based on smart cards and a Public Key Infrastructure. It allows storage of effective radiation dose data and more accurate reporting.

  1. A novel dose-based positioning method for CT image-guided proton therapy

    OpenAIRE

    Cheung, Joey P.; Park, Peter C.; Court, Laurence E.; Ronald Zhu, X.; Kudchadker, Rajat J.; Frank, Steven J.; Dong, Lei

    2013-01-01

    Purpose: Proton dose distributions can potentially be altered by anatomical changes in the beam path despite perfect target alignment using traditional image guidance methods. In this simulation study, the authors explored the use of dosimetric factors instead of only anatomy to set up patients for proton therapy using in-room volumetric computed tomographic (CT) images.

  2. Application of the dose rate spectroscopy to the dose-to-curie conversion method using a NaI(Tl) detector

    Science.gov (United States)

    JI, Young-Yong; Chung, Kun Ho; Kim, Chang-Jong; Kang, Mun Ja; Park, Sang Tae

    2015-01-01

    Dose rate spectroscopy is a very useful method for directly calculating individual dose rates from an energy spectrum converted to dose rate using the G-factor, which is related to the response function of the detector used. A dose-to-curie (DTC) conversion method for estimating radioactivity from the dose rate measured from radioactive materials can then be reduced to a simple equation using dose rate spectroscopy. To validate the modified DTC conversion method, experimental verifications using a 3″φx3″ NaI(Tl) detector were conducted for the simple geometry of a point source placed on the detector and for more complex geometries, i.e. the assay of simulated radioactive material. In addition, the linearity of the results from the modified DTC conversion method was estimated by increasing the distance between the source positions and the detector, to confirm the validity of the method over the relevant ranges of energy, dose rate and distance for the gamma nuclides.
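
    The core of dose rate spectroscopy is a channel-by-channel weighting of the measured spectrum with the detector's G-factor. A schematic version with a made-up G(E), standing in for one derived from the real detector response function:

      import numpy as np

      def dose_rate_nGy_per_h(E_keV, counts_per_s, G):
          """Dose rate from a measured spectrum: count rate per channel
          weighted by the detector-specific G-factor G(E)."""
          return np.sum(counts_per_s * G(E_keV))

      # Placeholder G-factor (nGy/h per count/s) for a 3"x3" NaI(Tl);
      # the real one follows from the detector response function.
      G = lambda E: 1.5e-3 * (E / 661.7) ** 1.2

      E = np.linspace(30, 2000, 512)   # channel energies, keV
      cps = np.random.default_rng(0).poisson(5, E.size).astype(float)
      print(f"{dose_rate_nGy_per_h(E, cps, G):.2f} nGy/h")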

  3. Diagnostic Algorithm Benchmarking

    Science.gov (United States)

    Poll, Scott

    2011-01-01

    A poster for the NASA Aviation Safety Program Annual Technical Meeting. It describes empirical benchmarking on diagnostic algorithms using data from the ADAPT Electrical Power System testbed and a diagnostic software framework.

  4. GeodeticBenchmark_GEOMON

    Data.gov (United States)

    Vermont Center for Geographic Information — The GeodeticBenchmark_GEOMON data layer consists of geodetic control monuments (points) that have a known position or spatial reference. The locations of these...

  5. Application of Monte Carlo method for dose calculation in thyroid follicle

    International Nuclear Information System (INIS)

    The Monte Carlo method is an important tool for simulating the interaction of radioactive particles with biological media. Its principal advantage over deterministic methods is the ability to handle complex geometries. Several computational codes use the Monte Carlo method to simulate particle transport, and they can simulate energy deposition in models of organs and tissues, as well as in models of human cells. The calculation of the absorbed dose to thyroid follicles (composed of colloid and follicular cells) is of fundamental importance to dosimetry, because these cells are radiosensitive to ionizing radiation, in particular to iodine radioisotopes, large amounts of which may be released into the environment in nuclear accidents. The goal of this work was to use the particle transport code MCNP4C to calculate absorbed doses in models of thyroid follicles with diameters varying from 30 to 500 μm, for the Auger electrons, internal conversion electrons and beta particles of iodine-131 and the short-lived iodines (132, 133, 134 and 135). The results obtained from simulation with the MCNP4C code showed that, for the colloid, an average of 25% of the total absorbed dose is due to iodine-131 and 75% to the short-lived iodines; for follicular cells, these percentages were 13% for iodine-131 and 87% for the short-lived iodines. The contributions of low-energy particles, such as Auger and internal conversion electrons, should not be neglected when assessing the absorbed dose at the cellular level. Agglomerative hierarchical clustering was used to compare doses obtained by the MCNP4C, EPOTRAN and EGS4 codes and by deterministic methods. (author)
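
    A deliberately simplified Monte Carlo in the spirit of this calculation (monoenergetic electrons emitted uniformly inside a colloid sphere, straight tracks with uniform energy loss over a fixed range, no scattering or straggling) can illustrate how an absorbed fraction is estimated; all parameters are illustrative, and this is not the MCNP4C model:

      import numpy as np

      rng = np.random.default_rng(42)

      def absorbed_fraction(R_um=100.0, track_um=40.0, n=100_000):
          """Fraction of emitted electron energy absorbed inside a colloid
          sphere of radius R_um, for straight tracks of length track_um
          with uniform energy loss (crude CSDA picture)."""
          # uniform emission points inside the sphere
          p = rng.normal(size=(n, 3))
          p *= (R_um * rng.random(n) ** (1 / 3) / np.linalg.norm(p, axis=1))[:, None]
          # isotropic emission directions
          d = rng.normal(size=(n, 3))
          d /= np.linalg.norm(d, axis=1)[:, None]
          # distance from emission point to the sphere surface along d
          b = np.einsum("ij,ij->i", p, d)
          s = -b + np.sqrt(b * b - (np.einsum("ij,ij->i", p, p) - R_um**2))
          # energy deposited inside = min(track length, exit distance) / track
          return np.mean(np.minimum(s, track_um) / track_um)

      print(absorbed_fraction())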

  6. Influence of surgery method and post-operative irradiation dose in case of mastocarcinomas

    International Nuclear Information System (INIS)

    The authors present the therapy results and side effects for 422 patients who had undergone a radical or partially radical operation or ablatio mammae. Post-operative irradiation was carried out under high-voltage conditions with doses between 40 and 50 Gy according to stage. Survival times were influenced neither by the surgical method nor by the post-operative irradiation dose. Compared with radical surgery, arm edemas were significantly less frequent and less serious after a less radical operation. No significant increase in recurrences was observed after partially radical operations. (orig.)

  7. Influence of surgery method and post-operative irradiation dose in case of mastocarcinomas

    Energy Technology Data Exchange (ETDEWEB)

    Koch, H.L.; Voss, A.C.

    1982-03-01

    The authors present the therapy results and side effects for 422 patients who had undergone a radical or partially radical operation or ablatio mammae. Post-operative irradiation was carried out under high-voltage conditions with doses between 40 and 50 Gy according to stage. Survival times were influenced neither by the surgical method nor by the post-operative irradiation dose. Compared with radical surgery, arm edemas were significantly less frequent and less serious after a less radical operation. No significant increase in recurrences was observed after partially radical operations.

  8. Explanation of method of dose estimation using chromosome translocation with fluorescence in situ hybridization

    International Nuclear Information System (INIS)

    The national occupational health standard Method of Dose Estimation Using Chromosome Translocation with Fluorescence in Situ Hybridization has been developed, based on a comprehensive collection, reading and analysis of the relevant domestic and foreign literature, on the existing Chinese diagnostic criteria for radiation diseases, and on repeated experimental verification. The standard is mainly applied to dose estimation for individuals previously exposed to radiation in accidents, and provides a scientific basis for the diagnosis of radiation sickness. To aid understanding and implementation, the contents of the standard are interpreted in this article. (authors)
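
    Dose estimation from a measured translocation yield typically proceeds by inverting a linear-quadratic calibration curve Y = c + αD + βD². A sketch with hypothetical calibration coefficients (the standard itself specifies its own calibration curve):

      import math

      # Hypothetical calibration coefficients for translocation yield
      c, alpha, beta = 0.005, 0.02, 0.06   # background, per Gy, per Gy^2

      def dose_from_yield(Y):
          """Invert Y = c + alpha*D + beta*D**2 for the positive root."""
          disc = alpha**2 + 4 * beta * (Y - c)
          return (-alpha + math.sqrt(disc)) / (2 * beta)

      # e.g. 25 translocations scored in 500 genome-equivalent cells
      Y = 25 / 500
      print(f"estimated dose: {dose_from_yield(Y):.2f} Gy")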

  9. A robustness analysis method with fast estimation of dose uncertainty distributions for carbon-ion therapy treatment planning

    Science.gov (United States)

    Sakama, Makoto; Kanematsu, Nobuyuki; Inaniwa, Taku

    2016-08-01

    A simple and efficient approach is needed for robustness evaluation and optimization of treatment planning in routine clinical particle therapy. Here we propose a robustness analysis method using the dose standard deviation (SD) in possible scenarios as the robustness indicator, and a fast dose warping method, i.e. deformation of dose distributions, taking into account the setup and range errors in carbon-ion therapy. The dose warping method is based on the nominal dose distribution and the water-equivalent path length obtained from planning computed tomography data with a clinically commissioned treatment planning system (TPS). We compared, in a limited number of scenarios at the extreme boundaries of the assumed error, the dose SD distributions obtained by the warping method with those obtained using the TPS dose recalculations. The accuracy of the warping method was examined by the standard-deviation-volume histograms (SDVHs) for varying degrees of setup and range errors for three different tumor sites. Furthermore, the influence of dose fractionation on the combined dose uncertainty, taking into consideration the correlation of setup and range errors between fractions, was evaluated with simple equations using the SDVHs and the mean value of SDs in the defined volume of interest. The results of the proposed method agreed well with those obtained with the dose recalculations in these comparisons, and the effectiveness of dose SD evaluations at the extreme boundaries of given errors was confirmed from the responsivity and DVH analysis of relative SD values for each error. The combined dose uncertainties depended heavily on the number of fractions, assumed errors and tumor sites. The typical computation time of the warping method is approximately 60 times less than that of the full dose calculation method using the TPS. The dose SD distributions and SDVHs with the fractionation effect will be useful indicators for robustness analysis in treatment planning, and the
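
    The robustness indicator described (the voxelwise dose SD over error scenarios, summarized as a standard-deviation-volume histogram, SDVH) can be sketched as follows, with random numbers standing in for recalculated scenario doses:

      import numpy as np

      def sdvh(scenario_doses, roi_mask, sd_grid):
          """Voxelwise dose SD over scenarios, and the fraction of the ROI
          with SD >= each threshold (the SDVH)."""
          sd = np.std(scenario_doses, axis=0)        # SD per voxel
          sd_roi = sd[roi_mask]
          volume_fraction = np.array([(sd_roi >= s).mean() for s in sd_grid])
          return sd, volume_fraction

      rng = np.random.default_rng(1)
      # 8 hypothetical error scenarios on a 40^3 dose grid (made-up values)
      doses = rng.normal(2.0, 0.1, size=(8, 40, 40, 40))
      mask = np.zeros((40, 40, 40), bool)
      mask[15:25, 15:25, 15:25] = True               # toy target region
      sd_map, hist = sdvh(doses, mask, np.linspace(0, 0.3, 7))
      print(hist)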

  10. A robustness analysis method with fast estimation of dose uncertainty distributions for carbon-ion therapy treatment planning.

    Science.gov (United States)

    Sakama, Makoto; Kanematsu, Nobuyuki; Inaniwa, Taku

    2016-08-01

    A simple and efficient approach is needed for robustness evaluation and optimization of treatment planning in routine clinical particle therapy. Here we propose a robustness analysis method using the dose standard deviation (SD) in possible scenarios as the robustness indicator, and a fast dose warping method, i.e. deformation of dose distributions, taking into account the setup and range errors in carbon-ion therapy. The dose warping method is based on the nominal dose distribution and the water-equivalent path length obtained from planning computed tomography data with a clinically commissioned treatment planning system (TPS). We compared, in a limited number of scenarios at the extreme boundaries of the assumed error, the dose SD distributions obtained by the warping method with those obtained using the TPS dose recalculations. The accuracy of the warping method was examined by the standard-deviation-volume histograms (SDVHs) for varying degrees of setup and range errors for three different tumor sites. Furthermore, the influence of dose fractionation on the combined dose uncertainty, taking into consideration the correlation of setup and range errors between fractions, was evaluated with simple equations using the SDVHs and the mean value of SDs in the defined volume of interest. The results of the proposed method agreed well with those obtained with the dose recalculations in these comparisons, and the effectiveness of dose SD evaluations at the extreme boundaries of given errors was confirmed from the responsivity and DVH analysis of relative SD values for each error. The combined dose uncertainties depended heavily on the number of fractions, assumed errors and tumor sites. The typical computation time of the warping method is approximately 60 times less than that of the full dose calculation method using the TPS. The dose SD distributions and SDVHs with the fractionation effect will be useful indicators for robustness analysis in treatment planning, and the

  11. Benchmarking in University Toolbox

    OpenAIRE

    Katarzyna Kuźmicz

    2015-01-01

    In the face of global competition and rising challenges that higher education institutions (HEIs) meet, it is imperative to increase innovativeness and efficiency of their management. Benchmarking can be the appropriate tool to search for a point of reference necessary to assess institution’s competitive position and learn from the best in order to improve. The primary purpose of the paper is to present in-depth analysis of benchmarking application in HEIs worldwide. The study involves indica...

  12. Accelerator shielding benchmark problems

    International Nuclear Information System (INIS)

    Accelerator shielding benchmark problems prepared by Working Group of Accelerator Shielding in the Research Committee on Radiation Behavior in the Atomic Energy Society of Japan were compiled by Radiation Safety Control Center of National Laboratory for High Energy Physics. Twenty-five accelerator shielding benchmark problems are presented for evaluating the calculational algorithm, the accuracy of computer codes and the nuclear data used in codes. (author)

  13. Benchmarking conflict resolution algorithms

    OpenAIRE

    Vanaret, Charlie; Gianazza, David; Durand, Nicolas; Gotteland, Jean-Baptiste

    2012-01-01

    Applying a benchmarking approach to conflict resolution problems is a hard task, as the analytical form of the constraints is not simple. This is especially the case when using realistic dynamics and models, considering accelerating aircraft that may follow flight paths that are not direct. Currently, there is a lack of common problems and data that would allow researchers to compare the performances of several conflict resolution algorithms. The present paper introduces a benchmarking approa...

  14. Accelerator shielding benchmark problems

    Energy Technology Data Exchange (ETDEWEB)

    Hirayama, H.; Ban, S.; Nakamura, T. [and others

    1993-01-01

    Accelerator shielding benchmark problems prepared by Working Group of Accelerator Shielding in the Research Committee on Radiation Behavior in the Atomic Energy Society of Japan were compiled by Radiation Safety Control Center of National Laboratory for High Energy Physics. Twenty-five accelerator shielding benchmark problems are presented for evaluating the calculational algorithm, the accuracy of computer codes and the nuclear data used in codes. (author).

  15. Determination method of inactivating minimal dose of gama radiation for Salmonella typhimurium

    International Nuclear Information System (INIS)

    A method for determining the minimal inactivating dose (MID) for Salmonella typhimurium is presented, offering a more efficient way to improve irradiated vaccines. The MID found for S. typhimurium 6.616 by the binomial test was 0.55 MR. The method used allows a definite value for the MID to be obtained, and requires less material, work and time than the usual procedure.

  16. On the absorbed dose determination method in high energy electrons beams

    International Nuclear Information System (INIS)

    The method for determining absorbed dose in water for electron beams with energies in the range from 1 MeV to 50 MeV is presented herein. The dosimetry equipment for the measurements is composed of a UNIDOS PTW electrometer and different ionization chambers calibrated in terms of air kerma in a 60Co beam. Starting from the code of practice for high-energy electron beams, this paper describes the method adopted by the secondary standard dosimetry laboratory (SSDL) at NILPRP, Bucharest.
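
    The general shape of such an absorbed-dose determination, shown here with the generic correction factors of a modern code-of-practice formalism rather than the specific NILPRP air-kerma procedure, and with placeholder readings:

      # Simplified absorbed-dose-to-water calculation for an electron beam.
      # Factor names follow common codes of practice; all values are
      # placeholders, not the SSDL's data.
      M_raw = 12.35      # electrometer reading, nC
      k_TP  = 1.012      # temperature-pressure correction
      k_pol = 1.001      # polarity correction
      k_s   = 1.003      # ion recombination correction
      N_Dw  = 5.4e-2     # chamber calibration coefficient, Gy/nC (60Co)
      k_Q   = 0.92       # beam quality correction for the electron beam

      M = M_raw * k_TP * k_pol * k_s     # corrected reading
      D_w = M * N_Dw * k_Q               # absorbed dose to water
      print(f"absorbed dose to water: {D_w:.3f} Gy")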

  17. Benchmark solutions for the galactic heavy-ion transport equations with energy and spatial coupling

    Science.gov (United States)

    Ganapol, Barry D.; Townsend, Lawrence W.; Lamkin, Stanley L.; Wilson, John W.

    1991-01-01

    Nontrivial benchmark solutions are developed for the galactic heavy-ion transport equations in the straight-ahead approximation with energy and spatial coupling. Analytical representations of the ion fluxes are obtained for a variety of sources under the assumption that the nuclear interaction parameters are energy independent. The method utilizes an analytical Laplace transform inversion to yield a closed-form representation that is computationally efficient. The flux profiles are then used to predict ion dose profiles, which are important for shield design studies.

  18. A study on the dose analysis of pottery shards by thermoluminescence dating method

    International Nuclear Information System (INIS)

    A method for measuring the archaeological dose of Packjae pottery shards using thermoluminescence dosimetry (TLD) has been studied. TL measurements were made on quartz crystals in the size range of 90 to 125 μm diameter extracted from the pottery shards. The stable temperature region of the TL glow curve, devoid of anomalous fading components, was identified by the plateau test and found to extend from 265 to 300°C. The archaeological dose of the pottery shards was estimated to be 7.43 Gy, using dose calibration curves obtained from sequential irradiation of the samples with a 137Cs gamma source and TL measurements of natural samples.
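
    The archaeological dose is read off a calibration of TL signal against added laboratory dose; a minimal additive-dose extrapolation with made-up readings (chosen to land near the quoted 7.43 Gy), assuming a linear growth curve:

      import numpy as np

      # Added laboratory doses (Gy) and TL signal in the stable 265-300 C
      # plateau region (arbitrary units; illustrative values only)
      added = np.array([0.0, 2.0, 4.0, 8.0])
      signal = np.array([310.0, 395.0, 478.0, 645.0])

      # Linear growth: signal = a * (D_arch + added); extrapolating the fit
      # back to zero signal gives the archaeological dose on the dose axis.
      slope, intercept = np.polyfit(added, signal, 1)
      D_arch = intercept / slope
      print(f"archaeological dose ~ {D_arch:.2f} Gy")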

  19. Optimization of Calibration Method to Determine Exposure Dose on Personal Dosimeter

    International Nuclear Information System (INIS)

    Normally the exposure dose determined with a personal dosimeter can be underestimated due to the time factor in collecting ionizing charges. The aim of this study was to establish a calibration system with minimized error for determining the exposure dose on personal dosimeters. A standard source, 137Cs with an activity of 2.89 Ci, was equipped with a control system. The error in reading the exposure dose caused by the source travelling time was compensated by a reference ionization chamber (IC). The IC unit was connected to an electrometer to collect ionizing charges over a preset time. In order to minimize the error, the standard source travelling time (timer error) was determined and the calculated time difference was applied in the calculation. Results showed that this optimized method can reduce the error associated with the former setup by 1.79%.

  20. Effects of different premature chromosome condensation method on dose-curve of 60Co γ-ray

    International Nuclear Information System (INIS)

    Objective: To study the effect of the traditional and improved premature chromosome condensation (PCC) methods on the dose-effect curve for 60Co γ rays, in order to choose a rapid and accurate biological dose estimation method for accident emergencies. Methods: Cubital venous blood was collected from 3 healthy males (23 to 28 years old) and irradiated with 0, 1.0, 5.0, 10.0, 15.0 and 20.0 Gy of 60Co γ rays (absorbed dose rate: 0.635 Gy/min). The dose-effect relationship was observed for two incubation times (50 hours and 60 hours) with both the traditional and the improved method. The dose-effect curves were then used to verify an exposure of 10.0 Gy (absorbed dose rate: 0.670 Gy/min). Results: (1) With the traditional method and 50-hour culture, the difference in PCC cell counts between 15.0 Gy and 20.0 Gy was not statistically significant, whereas it was significant for the traditional method with 60-hour culture and for the improved method (50-hour and 60-hour culture); the latter three culture methods were used to construct dose curves. (2) For these three culture methods, the correlation coefficients between PCC rings and exposure dose were very close (all greater than 0.996, P<0.05), and the regression lines almost overlap. (3) When the three dose-effect curves were used to estimate the verification irradiation (10.0 Gy), the error was no more than 8%, within the allowable range for biological experiments (15%). Conclusion: The dose-effect curves of the three culture methods can be applied to biological dose estimation for high-dose ionizing radiation injury. The improved method with 50-hour culture is the fastest and should be the first choice in accident emergencies. (authors)

  1. Radiation dose determines the method for quantification of DNA double strand breaks

    International Nuclear Information System (INIS)

    Ionizing radiation induces DNA double strand breaks (DSBs) that trigger phosphorylation of the histone protein H2AX (γH2AX). Immunofluorescent staining visualizes the formation of γH2AX foci, allowing their quantification. As opposed to the Western blot assay and flow cytometry, this method provides more accurate analysis by showing the exact position and intensity of the fluorescent signal in each single cell. In practice, however, there are problems in the quantification of γH2AX. This paper addresses two issues: which technique should be applied for a given radiation dose, and how to analyze fluorescence microscopy images obtained with different microscopes. HTB140 melanoma cells were exposed to γ-rays in the dose range from 1 to 16 Gy. Radiation effects at the DNA level were analyzed at different time intervals after irradiation by Western blot analysis and immunofluorescence microscopy. Immunochemically stained cells were visualized with two types of microscopes: an AxioVision (Zeiss, Germany) microscope comprising ApoTome software, and an AxioImagerA1 microscope (Zeiss, Germany). The results show that the level of γH2AX is time and dose dependent. Immunofluorescence microscopy provided better detection of DSBs for lower irradiation doses, while Western blot analysis was more reliable for higher irradiation doses. The AxioVision microscope containing ApoTome software was more suitable for the detection of γH2AX foci. (author)

  2. Radiation dose determines the method for quantification of DNA double strand breaks

    Energy Technology Data Exchange (ETDEWEB)

    Bulat, Tanja; Keta, Olitija; Korićanac, Lela; Žakula, Jelena; Petrović, Ivan; Ristić-Fira, Aleksandra [University of Belgrade, Vinča Institute of Nuclear Sciences, Belgrade (Serbia); Todorović, Danijela, E-mail: dtodorovic@medf.kg.ac.rs [University of Kragujevac, Faculty of Medical Sciences, Kragujevac (Serbia)

    2016-03-15

    Ionizing radiation induces DNA double strand breaks (DSBs) that trigger phosphorylation of the histone protein H2AX (γH2AX). Immunofluorescent staining visualizes the formation of γH2AX foci, allowing their quantification. As opposed to the Western blot assay and flow cytometry, this method provides more accurate analysis by showing the exact position and intensity of the fluorescent signal in each single cell. In practice, however, there are problems in the quantification of γH2AX. This paper addresses two issues: which technique should be applied for a given radiation dose, and how to analyze fluorescence microscopy images obtained with different microscopes. HTB140 melanoma cells were exposed to γ-rays in the dose range from 1 to 16 Gy. Radiation effects at the DNA level were analyzed at different time intervals after irradiation by Western blot analysis and immunofluorescence microscopy. Immunochemically stained cells were visualized with two types of microscopes: an AxioVision (Zeiss, Germany) microscope comprising ApoTome software, and an AxioImagerA1 microscope (Zeiss, Germany). The results show that the level of γH2AX is time and dose dependent. Immunofluorescence microscopy provided better detection of DSBs for lower irradiation doses, while Western blot analysis was more reliable for higher irradiation doses. The AxioVision microscope containing ApoTome software was more suitable for the detection of γH2AX foci. (author)

  3. Radiation doses in diagnostic radiology and methods for dose reduction. Report of a co-ordinated research programme (1991-1993)

    International Nuclear Information System (INIS)

    It is well recognized that diagnostic radiology is the largest contributor to the collective dose from all man-made sources of radiation. Large differences in radiation doses from the same procedures among different X ray rooms have led to the conclusion that there is a potential for dose reduction. A Co-ordinated Research Programme on Radiation Doses in Diagnostic Radiology and Methods for Dose Reduction, involving Member States with different degrees of development, was launched by the IAEA in co-operation with the CEC. This report summarizes the results of the second and final Research Co-ordination Meeting held in Vienna from 4 to 8 October 1993. 22 refs, 6 figs and tabs

  4. Bundesländer-Benchmarking 2002

    OpenAIRE

    Blancke, Susanne; Hedrich, Horst; Schmid, Josef

    2002-01-01

    The Bundesländer Benchmarking 2002 is based on a study of selected labour market and economic indicators in the German federal states. Three benchmarkings were carried out using the radar chart method: one considering only labour market indicators, one considering only economic indicators, and one combining labour market and economic indicators. The states were compared with one another in cross-section at two points in time –...

  5. Benchmarking Danish Vocational Education and Training Programmes

    DEFF Research Database (Denmark)

    Bogetoft, Peter; Wittrup, Jesper

    This study paper discusses methods whereby Danish vocational education and training colleges can be benchmarked, and presents results from a number of models. It is conceptually complicated to benchmark vocational colleges, as the various colleges in Denmark offer a wide range of course programmes......-related achievement. We attempt to summarise the various effects that the colleges have in two relevant figures, namely retention rates of students and employment rates among students who have completed training programmes....

  6. Benchmark field study of deep neutron penetration

    Science.gov (United States)

    Morgan, J. F.; Sale, K.; Gold, R.; Roberts, J. H.; Preston, C. C.

    1991-06-01

    A unique benchmark neutron field has been established at the Lawrence Livermore National Laboratory (LLNL) to study deep penetration neutron transport. At LLNL, a tandem accelerator is used to generate a monoenergetic neutron source that permits investigation of deep neutron penetration under conditions that are virtually ideal to model, namely the transport of mono-energetic neutrons through a single material in a simple geometry. General features of the Lawrence Tandem (LATAN) benchmark field are described with emphasis on neutron source characteristics and room return background. The single material chosen for the first benchmark, LATAN-1, is a steel representative of Light Water Reactor (LWR) Pressure Vessels (PV). Also included is a brief description of the Little Boy replica, a critical reactor assembly designed to mimic the radiation doses from the atomic bomb dropped on Hiroshima, and its use in neutron spectrometry.

  7. Computational benchmark for deep penetration in iron

    International Nuclear Information System (INIS)

    A benchmark for calculation of neutron transport through iron is now available based upon a rigorous Monte Carlo treatment of ENDF/B-IV and ENDF/B-V cross sections. The currents, flux, and dose (from monoenergetic 2, 14, and 40 MeV sources) have been tabulated at various distances through the slab using a standard energy group structure. This tabulation is available in a Los Alamos Scientific Laboratory report. The benchmark is simple to model and should be useful for verifying the adequacy of one-dimensional transport codes and multigroup libraries for iron. This benchmark also provides useful insights regarding neutron penetration through iron and displays differences in fluxes calculated with ENDF/B-IV and ENDF/B-V data bases

  8. Dose conversion factors for radiation doses at normal operation discharges. F. Methods report; Dosomraekningsfaktorer foer normaldriftutslaepp. F. Metodrapport

    Energy Technology Data Exchange (ETDEWEB)

    Bergstroem, Ulla; Hallberg, Bengt; Karlsson, Sara

    2001-10-01

    A study has been performed in order to develop and extend existing models for dose estimation for emissions of radioactive substances from nuclear facilities in Sweden. This report reviews the different exposure pathways that have been considered in the study. The radioecological data to be used in calculations of radiation doses are based on the actual situation at the nuclear sites. Dose factors for children have been split into different age groups. The exposure pathways have been carefully re-examined, as have the radioecological data, leading to some new pathways (e.g. doses from consumption of forest berries, mushrooms and game) for cesium and strontium. Carbon-14 was given special treatment, using a model for the uptake of carbon by growing plants. For exposure from aquatic emissions, a simplification was made by focusing on the territories of fish species, since consumption of fish is the most important pathway.

  9. A method for calculating Bayesian uncertainties on internal doses resulting from complex occupational exposures

    International Nuclear Information System (INIS)

    Estimating uncertainties on doses from bioassay data is of interest in epidemiology studies that estimate cancer risk from occupational exposures to radionuclides. Bayesian methods provide a logical framework for calculating these uncertainties. However, occupational exposures often consist of many intakes, and this can make the Bayesian calculation computationally intractable. This paper describes a novel strategy for increasing the computational speed of the calculation by simplifying the intake pattern to a single composite intake, termed the complex intake regime (CIR). In order to assess whether this approximation is accurate and fast enough for practical purposes, the method is implemented in the Weighted Likelihood Monte Carlo Sampling (WeLMoS) method and evaluated by comparing its performance with a Markov chain Monte Carlo (MCMC) method. The MCMC method gives the full solution (all intakes are independent), but is very computationally intensive to apply routinely. Posterior distributions of model parameter values, intakes and doses were calculated for a representative sample of plutonium workers from the United Kingdom Atomic Energy cohort using the WeLMoS method with the CIR and the MCMC method. The distributions are in good agreement: posterior means and Q0.025 and Q0.975 quantiles are typically within 20%. Furthermore, the WeLMoS method using the CIR converges quickly: a typical case history takes around 10-20 min on a fast workstation, whereas the MCMC method took around 12 hours. The advantages and disadvantages of the method are discussed. (authors)
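
    The weighted likelihood Monte Carlo idea (sample intakes from a prior, weight each sample by the likelihood of the bioassay data, and read posterior dose quantiles from the weighted sample) can be sketched generically; the toy excretion function, prior and numbers below are illustrative and are not the WeLMoS implementation:

      import numpy as np

      rng = np.random.default_rng(0)

      def retention(t_days):
          # Toy excretion function (fraction of intake per day); a real
          # assessment would use an ICRP biokinetic model.
          return 0.1 * np.exp(-t_days / 50.0)

      t_obs = np.array([30.0, 100.0, 300.0])      # bioassay times, days
      m_obs = np.array([4.1e-3, 2.4e-3, 3.1e-4])  # measured activity, Bq/day

      n = 200_000
      intake = rng.lognormal(mean=np.log(0.05), sigma=2.0, size=n)  # prior, Bq
      pred = intake[:, None] * retention(t_obs)   # model predictions, (n, 3)

      # Lognormal measurement errors with an assumed geometric SD of 1.5
      sg = np.log(1.5)
      logw = -0.5 * np.sum(((np.log(m_obs) - np.log(pred)) / sg) ** 2, axis=1)
      w = np.exp(logw - logw.max())
      w /= w.sum()

      dose = intake * 1.1e-4                      # toy dose coefficient, mSv/Bq
      order = np.argsort(dose)
      cw = np.cumsum(w[order])
      q = lambda p: dose[order][np.searchsorted(cw, p)]
      print(f"posterior dose: mean {np.sum(w * dose):.3g} mSv, "
            f"95% interval [{q(0.025):.3g}, {q(0.975):.3g}] mSv")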

  10. TH-A-19A-03: Impact of Proton Dose Calculation Method On Delivered Dose to Lung Tumors: Experiments in Thorax Phantom and Planning Study in Patient Cohort

    Energy Technology Data Exchange (ETDEWEB)

    Grassberger, C; Daartz, J; Dowdell, S; Ruggieri, T; Sharp, G; Paganetti, H [Massachusetts General Hospital and Harvard Medical School, Boston, MA (United States)

    2014-06-15

    Purpose: Evaluate Monte Carlo (MC) dose calculation and the prediction of the treatment planning system (TPS) in a lung phantom and compare them in a cohort of 20 lung patients treated with protons. Methods: A 2-dimensional array of ionization chambers was used to evaluate the dose across the target in a lung phantom. 20 lung cancer patients on clinical trials were re-simulated using a validated Monte Carlo toolkit (TOPAS) and compared to the TPS. Results: MC increases dose calculation accuracy in lung compared to the clinical TPS significantly and predicts the dose to the target in the phantom within ±2%: the average difference between measured and predicted dose in a plane through the center of the target is 5.6% for the TPS and 1.6% for MC. MC recalculations in patients show a mean dose to the clinical target volume on average 3.4% lower than the TPS, exceeding 5% for small fields. The lower dose correlates significantly with aperture size and the distance of the tumor to the chest wall (Spearman's p=0.0002/0.004). For large tumors MC also predicts consistently higher V5 and V10 to the normal lung, due to a wider lateral penumbra, which was also observed experimentally. Critical structures located distal to the target can show large deviations, though this effect is very patient-specific. Conclusion: Advanced dose calculation techniques, such as MC, would improve treatment quality in proton therapy for lung cancer by avoiding systematic overestimation of target dose and underestimation of dose to normal lung. This would increase the accuracy of the relationships between dose and effect, concerning tumor control as well as normal tissue toxicity. As the role of proton therapy in the treatment of lung cancer continues to be evaluated in clinical trials, this is of ever-increasing importance. This work was supported by National Cancer Institute Grant R01CA111590.

  11. TH-A-19A-03: Impact of Proton Dose Calculation Method On Delivered Dose to Lung Tumors: Experiments in Thorax Phantom and Planning Study in Patient Cohort

    International Nuclear Information System (INIS)

    Purpose: Evaluate Monte Carlo (MC) dose calculation and the prediction of the treatment planning system (TPS) in a lung phantom and compare them in a cohort of 20 lung patients treated with protons. Methods: A 2-dimensional array of ionization chambers was used to evaluate the dose across the target in a lung phantom. 20 lung cancer patients on clinical trials were re-simulated using a validated Monte Carlo toolkit (TOPAS) and compared to the TPS. Results: MC increases dose calculation accuracy in lung compared to the clinical TPS significantly and predicts the dose to the target in the phantom within ±2%: the average difference between measured and predicted dose in a plane through the center of the target is 5.6% for the TPS and 1.6% for MC. MC recalculations in patients show a mean dose to the clinical target volume on average 3.4% lower than the TPS, exceeding 5% for small fields. The lower dose correlates significantly with aperture size and the distance of the tumor to the chest wall (Spearman's p=0.0002/0.004). For large tumors MC also predicts consistently higher V5 and V10 to the normal lung, due to a wider lateral penumbra, which was also observed experimentally. Critical structures located distal to the target can show large deviations, though this effect is very patient-specific. Conclusion: Advanced dose calculation techniques, such as MC, would improve treatment quality in proton therapy for lung cancer by avoiding systematic overestimation of target dose and underestimation of dose to normal lung. This would increase the accuracy of the relationships between dose and effect, concerning tumor control as well as normal tissue toxicity. As the role of proton therapy in the treatment of lung cancer continues to be evaluated in clinical trials, this is of ever-increasing importance. This work was supported by National Cancer Institute Grant R01CA111590

  12. Dose reassessment by using PTTL method in MTS-N (LiF:Mg, Ti) thermoluminescent detectors

    International Nuclear Information System (INIS)

    The thermoluminescence dosimetry (TLD) method is one of the most commonly used for dose measurements in radiation protection dosimetry, and due to its many advantages it is widely applied. However, TLD has one particularly inconvenient disadvantage: the dose information in detectors that have already been read out is erased, so in the routine standard procedure the dose cannot be reassessed. This shortcoming can, however, be eliminated by applying UV radiation: after the first readout, the same detector can be exposed to UV and then read once again to reassess the dose. This method of dose reassessment is based on phototransferred thermoluminescence (PTTL). In an irradiated TL detector, deep traps are not emptied during the first readout; during exposure to UV, electrons are transferred from the deep traps to shallower dosimetric traps. The TL signal emerging during the second readout following UV illumination is called phototransferred thermoluminescence. A method for reassessing the dose in a previously read-out TLD is presented in this work. Experiments show that the method works well in the dose region between 5 and 50 mGy, but could be applied to higher doses as well. The efficiency of dose reassessment reaches about 17 percent of the first readout. The method could be a noticeable improvement in TLD dosimetry, giving more opportunities for better control and reliability of measurements. -- Highlights: ► PTTL method applied in individual dosimetry. ► The optimal wavelength was found. ► Dose reassessment in emergency situations.

  13. Characteristics of radiation dose accumulation and methods of dose calculation for internal inflow of 137Cs into experimental rats body

    International Nuclear Information System (INIS)

    The problem of dose formation following peroral intake of 137Cs by laboratory rats is considered. The retention functions and values of the biokinetic constants were first determined for different organs and tissues. A multicompartment model for describing the biokinetics of radionuclides in the organism is proposed. The advantages of applying this model for the estimation of absorbed doses are discussed in comparison with existing models.
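
    A minimal version of the compartment bookkeeping behind such estimates (organ retention as a sum of exponentials, integrated with radioactive decay to give the number of disintegrations and hence the absorbed dose); all parameters below are illustrative, not the study's fitted constants:

      import numpy as np

      # Toy retention in one organ after a unit activity intake: a sum of
      # exponentials with biological half-times in days (placeholder values).
      fracs = np.array([0.3, 0.7])
      T_bio = np.array([2.0, 60.0])
      lam_phys = np.log(2) / (30.0 * 365.25)      # 137Cs physical decay, per day
      lam_eff = np.log(2) / T_bio + lam_phys      # effective rate constants

      # Total disintegrations per Bq of intake: integral of
      # sum_i f_i * exp(-lam_eff_i * t) dt, converted from Bq*day
      decays = np.sum(fracs / lam_eff) * 86400.0

      E_dep_J = 0.25 * 1.602e-13   # assumed mean energy absorbed per decay, J
      mass_kg = 0.010              # assumed organ mass for a rat, kg
      print(f"absorbed dose: {decays * E_dep_J / mass_kg:.3g} Gy per Bq intake")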

  14. Minimum dose method for walking-path planning of nuclear facilities

    International Nuclear Information System (INIS)

    Highlights: • An environment model is proposed for radiation environments. • A path-planning method is designed for the least-dose walking-path problem. • A virtual–real mixed path-planning simulation program has been developed. • The program can plan walking paths and run simulations. - Abstract: A minimum dose method based on a staff walking road network model was proposed for walking-path planning in nuclear facilities. A virtual-reality simulation program was developed using the C# programming language and the DirectX engine. The simulation program was applied to virtual nuclear facilities. Simulation results indicated that the walking-path planning method was effective in providing safety for people walking in nuclear facilities.
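
    The minimum-dose walking path reduces to a shortest-path search on the road network, with each edge weighted by the dose accumulated while traversing it (mean dose rate along the segment times transit time). A sketch using Dijkstra's algorithm on a toy network; the record does not specify the algorithm used:

      import heapq

      def min_dose_path(graph, start, goal):
          """graph: {node: [(neighbour, dose_mSv), ...]} where each edge dose
          is the mean dose rate along the segment times the walking time."""
          dist = {start: 0.0}
          prev = {}
          pq = [(0.0, start)]
          while pq:
              d, u = heapq.heappop(pq)
              if u == goal:
                  break
              if d > dist.get(u, float("inf")):
                  continue
              for v, w in graph.get(u, []):
                  nd = d + w
                  if nd < dist.get(v, float("inf")):
                      dist[v], prev[v] = nd, u
                      heapq.heappush(pq, (nd, v))
          path, u = [goal], goal
          while u != start:           # walk the predecessor chain back
              u = prev[u]
              path.append(u)
          return path[::-1], dist[goal]

      # Toy network: edge doses in mSv per segment (illustrative)
      g = {"A": [("B", 0.02), ("C", 0.005)], "B": [("D", 0.001)],
           "C": [("D", 0.004)], "D": []}
      print(min_dose_path(g, "A", "D"))   # (['A', 'C', 'D'], 0.009)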

  15. A simple method for the determination of the neutron dose in a phantom

    International Nuclear Information System (INIS)

    A method based on a combination of physical integration and activation threshold detectors was developed to determine the volume-averaged dose equivalent rates produced by 14.1 MeV incident neutrons in a water-filled phantom. To obtain the spectral fluence of neutrons in the phantom, activation threshold detector measurements and a least-squares unfolding code (LSQ) were used. The physical integration was carried out by stirring the phantom solution after irradiation. The method is also suitable for determining the energy-averaged conversion factor between the maximum dose equivalent and the primary fast-neutron fluence measured on the surface of the phantom. The method proposed can be applied to any kind of phantom geometry. (orig.)
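
    The unfolding step (recovering group fluences from activation detector readings through a response matrix) is a constrained least-squares problem. A sketch with a made-up response matrix, using non-negative least squares as a stand-in for the LSQ code:

      import numpy as np
      from scipy.optimize import nnls

      # Response matrix R[i, j]: reaction rate of detector i per unit fluence
      # in energy group j (made-up numbers for illustration)
      R = np.array([[0.90, 0.40, 0.10],
                    [0.20, 0.80, 0.30],
                    [0.05, 0.30, 0.90]])
      true_phi = np.array([2.0, 1.0, 0.5])        # hypothetical group fluences
      readings = R @ true_phi + np.random.default_rng(3).normal(0, 0.01, 3)

      phi, residual = nnls(R, readings)           # non-negative group fluences
      print(phi, residual)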

  16. DocLite: A Docker-Based Lightweight Cloud Benchmarking Tool

    OpenAIRE

    Varghese, Blesson; Subba, Lawan Thamsuhang; Thai, Long; Barker, Adam

    2016-01-01

    Existing benchmarking methods are time consuming processes as they typically benchmark the entire Virtual Machine (VM) in order to generate accurate performance data, making them less suitable for real-time analytics. The research in this paper is aimed to surmount the above challenge by presenting DocLite - Docker Container-based Lightweight benchmarking tool. DocLite explores lightweight cloud benchmarking methods for rapidly executing benchmarks in near real-time. DocLite is built on the D...

  17. A practical method for skin dose estimation in interventional cardiology based on fluorographic DICOM information

    International Nuclear Information System (INIS)

    A practical method for skin dose estimation for interventional cardiology patients has been developed to inform pre-procedure planning and post-procedure patient management. Absorbed dose to the patient skin for certain interventional radiology procedures can exceed thresholds for deterministic skin injury, requiring documentation within the patient notes and appropriate patient follow-up. The primary objective was to reduce uncertainty associated with current methods, particularly surrounding field overlap. This was achieved by considering rectangular field geometry incident on a spherical patient model in a polar coordinate system. The angular size of each field was quantified at the surface of the sphere, i.e. the skin surface. Computer-assisted design software enabled the modelling of a sufficient dataset that was subsequently validated with radiochromic film. Modelled overlap was found to agree with overlap measured using film to within 2.2° ± 2.0°, showing that the overall error associated with the model was <1 %. Mathematical comparison against exposure data extracted from procedural Digital Imaging and Communication in Medicine files was used to generate a graphical skin dose map, demonstrating the dose distribution over a sphere centred at the interventional reference point. Dosimetric accuracy of the software was measured as between 3.5 and 17 % for different variables. (authors)
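
    The mapping idea, accumulating each exposure's reference air kerma into the cells of a theta-phi grid on the sphere that fall inside the field's angular footprint, might be sketched as follows; the footprint is treated as a simple rectangle in angle, obliquity corrections are ignored, and all numbers are made up:

      import numpy as np

      def skin_dose_map(exposures, n_theta=90, n_phi=180):
          """exposures: list of (theta0, phi0, dtheta, dphi, kerma_mGy) with
          angles in degrees: field centre, angular extent at the sphere
          surface, and reference air kerma for that exposure."""
          dose = np.zeros((n_theta, n_phi))
          th = np.linspace(-90, 90, n_theta)
          ph = np.linspace(-180, 180, n_phi)
          TH, PH = np.meshgrid(th, ph, indexing="ij")
          for t0, p0, dt, dp, k in exposures:
              inside = (np.abs(TH - t0) <= dt / 2) & (np.abs(PH - p0) <= dp / 2)
              dose[inside] += k       # overlapping fields add up here
          return dose

      # Two made-up overlapping fields
      m = skin_dose_map([(0, 0, 12, 16, 800.0), (5, 4, 12, 16, 600.0)])
      print(m.max())   # peak skin dose (mGy), including the overlap region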

  18. ARN Training on Advance Methods for Internal Dose Assessment: Application of Ideas Guidelines

    International Nuclear Information System (INIS)

    Dose assessment in cases of internal exposure involves the estimation of committed effective dose based on the interpretation of bioassay measurements and on assumptions about the characteristics of the radioactive material and the time pattern and pathway of intake. The IDEAS Guidelines provide a method to harmonize dose evaluations, using criteria and flow-chart procedures to be followed step by step. The EURADOS Working Group 7 'Internal Dosimetry', in collaboration with the IAEA and the Czech Technical University (CTU) in Prague, promoted the 'EURADOS/IAEA Regional Training Course on Advanced Methods for Internal Dose Assessment: Application of IDEAS Guidelines' to broaden and encourage the use of the IDEAS Guidelines; it took place in Prague (Czech Republic) from 2-6 February 2009. The ARN recognized the relevance of this training and asked for a place to participate in this activity. Afterwards, the first training course in Argentina took place from 24-28 August to train local internal dosimetry experts. This paper summarizes the main characteristics of this activity. (authors)

  19. Method for estimating occupational doses to staff in diagnostic X-ray departments

    International Nuclear Information System (INIS)

    Because of the lack of personal monitoring data, a method for estimating the doses received by diagnostic radiology workers, using the normalized workload, is suggested. The primary mathematical model of the method is as follows: D_i = P Σ_j Σ_k r_k W_ijk, where W_ijk is the workload of individual i working under radiation protection condition k in year j; r_k is the correction coefficient (normalized coefficient) for radiation protection condition k; D_i is the dose received by the radiological worker; and P is the dose received per normalized workload of 10^3 person-times. The value of P was determined by means of personal dose monitoring and was about 26.3 mGy/10^3 person-times. In general, the normalized coefficients are affected by the following factors: the condition of radiation protection, the effective emission quanta of X-rays per person-time (mA·s) in different kinds of examinations, and the quality of the X-rays. The normalized coefficients have been estimated. If the workload of a radiological worker is known, W_ijk can be worked out by dividing the whole workload into the workloads of the different types; otherwise, the average workload for all diagnostic X-ray workers can be used. (Author)
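
    The estimation formula translates directly into code; a sketch using the quoted value of P, with hypothetical workloads and normalized coefficients:

      # D_i = P * sum_j sum_k r_k * W_ijk, with P in mGy per 10^3 person-times
      P = 26.3  # mGy per 10^3 person-times, as quoted in the record

      def dose_mGy(workloads, r):
          """workloads[(j, k)]: workload (in 10^3 person-times) of the worker
          in year j under protection condition k; r[k]: normalized coefficient
          for condition k."""
          return P * sum(r[k] * w for (j, k), w in workloads.items())

      # Hypothetical career: 2 years under 'poor' shielding, 3 under 'good'
      r = {"poor": 1.6, "good": 0.7}                      # made-up coefficients
      W = {(1, "poor"): 1.2, (2, "poor"): 1.0, (3, "good"): 1.1,
           (4, "good"): 0.9, (5, "good"): 1.0}
      print(f"estimated cumulative dose: {dose_mGy(W, r):.1f} mGy")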

  20. A practical method for skin dose estimation in interventional cardiology based on fluorographic DICOM information.

    Science.gov (United States)

    Matthews, Lucy; Dixon, Matthew; Rowles, Nick; Stevens, Greg

    2016-03-01

    A practical method for skin dose estimation for interventional cardiology patients has been developed to inform pre-procedure planning and post-procedure patient management. Absorbed dose to the patient skin for certain interventional radiology procedures can exceed thresholds for deterministic skin injury, requiring documentation within the patient notes and appropriate patient follow-up. The primary objective was to reduce uncertainty associated with current methods, particularly surrounding field overlap. This was achieved by considering rectangular field geometry incident on a spherical patient model in a polar coordinate system. The angular size of each field was quantified at the surface of the sphere, i.e. the skin surface. Computer-assisted design software enabled the modelling of a sufficient dataset that was subsequently validated with radiochromic film. Modelled overlap was found to agree with overlap measured using film to within 2.2° ± 2.0°, showing that the overall error associated with the model was < 1 %. Mathematical comparison against exposure data extracted from procedural Digital Imaging and Communication in Medicine files was used to generate a graphical skin dose map, demonstrating the dose distribution over a sphere centred at the interventional reference point. Dosimetric accuracy of the software was measured as between 3.5 and 17 % for different variables. PMID:25994848

  1. Benchmarking in Mobarakeh Steel Company

    OpenAIRE

    Sasan Ghasemi; Mohammad Nazemi; Mehran Nejati

    2008-01-01

    Benchmarking is considered as one of the most effective ways of improving performance in companies. Although benchmarking in business organizations is a relatively new concept and practice, it has rapidly gained acceptance worldwide. This paper introduces the benchmarking project conducted in Esfahan's Mobarakeh Steel Company, as the first systematic benchmarking project conducted in Iran. It aims to share the process deployed for the benchmarking project in this company and illustrate how th...

  2. Benchmarking the Netherlands. Benchmarking for growth

    International Nuclear Information System (INIS)

    This is the fourth edition of the Ministry of Economic Affairs' publication 'Benchmarking the Netherlands', which aims to assess the competitiveness of the Dutch economy. The methodology and objective of the benchmarking remain the same. The basic conditions for economic activity (institutions, regulation, etc.) in a number of benchmark countries are compared in order to learn from the solutions found by other countries for common economic problems. This publication is devoted entirely to the potential output of the Dutch economy, in other words, its ability to achieve sustainable growth and create work over a longer period without capacity becoming an obstacle. This is important because economic growth is needed to increase prosperity in the broad sense and to meet social needs: prosperity in both a material (per capita GDP) and immaterial (living environment, environment, health, etc.) sense. The economy's potential output is determined by two structural factors: the growth of potential employment and the structural increase in labour productivity. Analysis by the Netherlands Bureau for Economic Policy Analysis (CPB) shows that in recent years the increase in the capacity for economic growth has been realised mainly by increasing the supply of labour and reducing the equilibrium unemployment rate. In view of the ageing of the population in the coming years and decades, the supply of labour is unlikely to continue growing at the pace we have become accustomed to in recent years. According to a number of recent studies, to achieve a respectable rate of sustainable economic growth the aim will therefore have to be to increase labour productivity. To realise this we have to focus on six pillars of economic policy: (1) human capital, (2) functioning of markets, (3) entrepreneurship, (4) spatial planning, (5) innovation, and (6) sustainability. These six pillars set the course for economic policy aimed at higher productivity growth.

  3. Bootstrap resampling: a powerful method of assessing confidence intervals for doses from experimental data

    International Nuclear Information System (INIS)

    Bootstrap resampling provides a versatile and reliable statistical method for estimating the accuracy of quantities which are calculated from experimental data. It is an empirically based method, in which large numbers of simulated datasets are generated by computer from existing measurements, so that approximate confidence intervals of the derived quantities may be obtained by direct numerical evaluation. A simple introduction to the method is given via a detailed example of estimating 95% confidence intervals for cumulated activity in the thyroid following injection of 99mTc-sodium pertechnetate using activity-time data from 23 subjects. The application of the approach to estimating confidence limits for the self-dose to the kidney following injection of 99mTc-DTPA organ imaging agent based on uptake data from 19 subjects is also illustrated. Results are then given for estimates of doses to the foetus following administration of 99mTc-sodium pertechnetate for clinical reasons during pregnancy, averaged over 25 subjects. The bootstrap method is well suited for applications in radiation dosimetry including uncertainty, reliability and sensitivity analysis of dose coefficients in biokinetic models, but it can also be applied in a wide range of other biomedical situations. (author)
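
    The percentile-bootstrap procedure described above can be stated in a few lines of code. The sketch below is a generic illustration with made-up measurements, not the paper's thyroid activity-time data.

```python
import random
import statistics

# Minimal percentile-bootstrap sketch of a 95% confidence interval, illustrating
# the resampling idea described above. The "measurements" are invented numbers.

random.seed(1)
measurements = [2.1, 2.4, 1.9, 2.8, 2.2, 2.6, 2.0, 2.5, 2.3, 1.8]  # arbitrary units

def bootstrap_ci(data, stat=statistics.mean, n_boot=10_000, alpha=0.05):
    """Percentile bootstrap confidence interval for a statistic of the data."""
    replicates = sorted(
        stat(random.choices(data, k=len(data)))  # resample with replacement
        for _ in range(n_boot)
    )
    lo = replicates[int(n_boot * alpha / 2)]
    hi = replicates[int(n_boot * (1 - alpha / 2))]
    return lo, hi

lo, hi = bootstrap_ci(measurements)
print(f"mean = {statistics.mean(measurements):.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")
```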

  4. MCNP neutron benchmarks

    International Nuclear Information System (INIS)

    More than 50 neutron benchmark calculations have recently been completed as part of an ongoing program to validate the MCNP Monte Carlo radiation transport code. The benchmark calculations reported here are part of an ongoing multiyear, multiperson effort to benchmark version 4 of the MCNP code. MCNP is a Monte Carlo three-dimensional general-purpose, continuous-energy neutron, photon, and electron transport code. It is used around the world for many applications including aerospace, oil-well logging, physics experiments, criticality safety, reactor analysis, medical imaging, defense applications, accelerator design, radiation hardening, radiation shielding, health physics, fusion research, and education. The first phase of the benchmark project consisted of analytic and photon problems. The second phase consists of the ENDF/B-V neutron problems reported in this paper and in more detail in the comprehensive report. A cooperative program being carried out at General Electric, San Jose, consists of light water reactor benchmark problems. A subsequent phase focusing on electron problems is planned.

  5. Shielding Benchmark Computational Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Hunter, H.T.; Slater, C.O.; Holland, L.B.; Tracz, G.; Marshall, W.J.; Parsons, J.L.

    2000-09-17

    Over the past several decades, nuclear science has relied on experimental research to verify and validate information about shielding nuclear radiation for a variety of applications. These benchmarks are compared with results from computer code models and are useful for developing more accurate cross-section libraries, improving radiation transport modeling in computer codes, and building accurate tests for miniature shielding mockups of new nuclear facilities. When documenting measurements, one must describe many parts of the experimental results to allow a complete computational analysis. Both old and new benchmark experiments, by any definition, must provide a sound basis for modeling more complex geometries required for quality assurance and cost savings in nuclear project development. Benchmarks may involve one or many materials and thicknesses, types of sources, and measurement techniques. In this paper benchmark experiments of varying complexity are chosen to study the transport properties of some popular materials and thicknesses. These were analyzed using three-dimensional (3-D) models and continuous energy libraries of MCNP4B2, a Monte Carlo code developed at Los Alamos National Laboratory, New Mexico. A shielding benchmark library provided the experimental data and allowed a wide range of choices for source, geometry, and measurement data. The experimental data had often been used in previous analyses by reputable groups such as the Cross Section Evaluation Working Group (CSEWG) and the Organization for Economic Cooperation and Development/Nuclear Energy Agency Nuclear Science Committee (OECD/NEANSC).

  6. Thermal Performance Benchmarking: Annual Report

    Energy Technology Data Exchange (ETDEWEB)

    Moreno, Gilbert

    2016-04-08

    The goal for this project is to thoroughly characterize the performance of state-of-the-art (SOA) automotive power electronics and electric motor thermal management systems. Information obtained from these studies will be used to: Evaluate advantages and disadvantages of different thermal management strategies; establish baseline metrics for the thermal management systems; identify methods of improvement to advance the SOA; increase the publicly available information related to automotive traction-drive thermal management systems; help guide future electric drive technologies (EDT) research and development (R&D) efforts. The performance results combined with component efficiency and heat generation information obtained by Oak Ridge National Laboratory (ORNL) may then be used to determine the operating temperatures for the EDT components under drive-cycle conditions. In FY15, the 2012 Nissan LEAF power electronics and electric motor thermal management systems were benchmarked. Testing of the 2014 Honda Accord Hybrid power electronics thermal management system started in FY15; however, due to time constraints it was not possible to include results for this system in this report. The focus of this project is to benchmark the thermal aspects of the systems. ORNL's benchmarking of electric and hybrid electric vehicle technology reports provide detailed descriptions of the electrical and packaging aspects of these automotive systems.

  7. Prismatic VHTR neutronic benchmark problems

    Energy Technology Data Exchange (ETDEWEB)

    Connolly, Kevin John, E-mail: connolly@gatech.edu [Nuclear and Radiological Engineering and Medical Physics Programs, George W. Woodruff School, Georgia Institute of Technology, Atlanta, GA (United States); Rahnema, Farzad, E-mail: farzad@gatech.edu [Nuclear and Radiological Engineering and Medical Physics Programs, George W. Woodruff School, Georgia Institute of Technology, Atlanta, GA (United States); Tsvetkov, Pavel V. [Department of Nuclear Engineering, Texas A&M University, College Station, TX (United States)

    2015-04-15

    Highlights: • High temperature gas-cooled reactor neutronics benchmark problems. • Description of a whole prismatic VHTR core in its full heterogeneity. • Modeled using continuous energy nuclear data at a representative hot operating temperature. • Benchmark results for core eigenvalue, block-averaged power, and some selected pin fission density results. - Abstract: This paper aims to fill an apparent scarcity of benchmarks based on high temperature gas-cooled reactors. Within is a description of a whole prismatic VHTR core in its full heterogeneity and modeling using continuous energy nuclear data at a representative hot operating temperature. Also included is a core which has been simplified for ease in modeling while attempting to preserve as faithfully as possible the neutron physics of the core. Fuel and absorber pins have been homogenized from the particle level, however, the blocks which construct the core remain strongly heterogeneous. A six group multigroup (discrete energy) cross section set has been developed via Monte Carlo using the original heterogeneous core as a basis. Several configurations of the core have been solved using these two cross section sets; eigenvalue results, block-averaged power results, and some selected pin fission density results are presented in this paper, along with the six-group cross section data, so that method developers may use these problems as a standard reference point.

  8. Prismatic VHTR neutronic benchmark problems

    International Nuclear Information System (INIS)

    Highlights: • High temperature gas-cooled reactor neutronics benchmark problems. • Description of a whole prismatic VHTR core in its full heterogeneity. • Modeled using continuous energy nuclear data at a representative hot operating temperature. • Benchmark results for core eigenvalue, block-averaged power, and some selected pin fission density results. - Abstract: This paper aims to fill an apparent scarcity of benchmarks based on high temperature gas-cooled reactors. Within is a description of a whole prismatic VHTR core in its full heterogeneity and modeling using continuous energy nuclear data at a representative hot operating temperature. Also included is a core which has been simplified for ease in modeling while attempting to preserve as faithfully as possible the neutron physics of the core. Fuel and absorber pins have been homogenized from the particle level, however, the blocks which construct the core remain strongly heterogeneous. A six group multigroup (discrete energy) cross section set has been developed via Monte Carlo using the original heterogeneous core as a basis. Several configurations of the core have been solved using these two cross section sets; eigenvalue results, block-averaged power results, and some selected pin fission density results are presented in this paper, along with the six-group cross section data, so that method developers may use these problems as a standard reference point

  9. Method for determination of ratio of absorbed doses created by different radiations from two sources

    International Nuclear Information System (INIS)

    The proposed method involves determination of ratio of absorbed doses in a mixed radiation field due to radiations from two different sources, provided that both radiations are of different LET, hence of a different quality factor. A detector used in the method is a tissue-equivalent recombination chamber. Shape of saturation curve of such a chamber depends on LET (on radiation quality). If the shapes of saturation curves are known for the radiations from two sources or for both components of a two-component radiation, then the actual ratio of absorbed dose components created simultaneously by these radiations in the mixed radiation field can be determined, performing relatively simple measurements of the ionization current at two different polarizing voltages applied to the chamber.

  10. Method for determination of ratio of absorbed doses created by different radiations from two sources

    Energy Technology Data Exchange (ETDEWEB)

    Gryzinski, Michal A., E-mail: m.gryzinski@cyf.gov.p [Institute of Atomic Energy, 05-400 Otwock-Swierk (Poland); Zielczynski, Mieczyslaw [Institute of Atomic Energy, 05-400 Otwock-Swierk (Poland); Golnik, Natalia [Institute of Metrology and Biomedical Engineering, Warsaw University of Technology, Sw. A. Boboli 8, 02-525 Warsaw (Poland)

    2010-12-15

    The proposed method involves determination of ratio of absorbed doses in a mixed radiation field due to radiations from two different sources, provided that both radiations are of different LET, hence of a different quality factor. A detector used in the method is a tissue-equivalent recombination chamber. Shape of saturation curve of such a chamber depends on LET (on radiation quality). If the shapes of saturation curves are known for the radiations from two sources or for both components of a two-component radiation, then the actual ratio of absorbed dose components created simultaneously by these radiations in the mixed radiation field can be determined, performing relatively simple measurements of the ionization current at two different polarizing voltages applied to the chamber.
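
    The measurement principle lends itself to a compact illustration: if the chamber's relative saturation response per unit dose is known for each radiation component at two polarizing voltages, the two measured ionization currents give a 2×2 linear system for the two dose components. The response values and currents in the following sketch are invented for illustration.

```python
import numpy as np

# Hedged sketch of the two-voltage idea: currents measured at voltages U_a, U_b
# are linear combinations of the two unknown dose-rate components, weighted by
# the known per-unit-dose saturation responses f_k(U). All numbers are invented.

# f[k][v]: relative response of component k at voltage v (per unit dose rate)
f = np.array([
    [1.00, 0.97],   # low-LET component (nearly saturated at both voltages)
    [0.95, 0.70],   # high-LET component (stronger recombination at low voltage)
])  # rows: components; columns: voltages U_a, U_b

i_measured = np.array([1.32, 1.16])  # ionization currents at U_a, U_b (arb. units)

# Solve f^T @ D = i for the dose-rate components D = (D_low, D_high)
D = np.linalg.solve(f.T, i_measured)
print(f"dose components: {D}, ratio D_low/D_high = {D[0] / D[1]:.2f}")
```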

  11. An assessment of methods for monitoring entrance surface dose in fluoroscopically guided interventional procedures

    International Nuclear Information System (INIS)

    In the light of a growing awareness of the risks of inducing skin injuries as a consequence of fluoroscopically guided interventional procedures (FGIPs), this paper compares three methods of monitoring entrance surface dose (ESD). It also reports measurements of ESDs made during the period August 1998 to June 1999 on 137 patients undergoing cardiac, neurological and general FGIPs. Although the sample is small, the results reinforce the need for routine assessments to be made of ESDs in FGIPs. At present, the most reliable and accurate form of ESD measurement would seem to be arrays of TLDs. However, transducer based methods, although likely to be less accurate, have considerable advantages in relation to a continuous monitoring programme. It is also suggested that there may be the potential locally for threshold dose area product (DAP) values to be set for specific procedures. These could be used to provide early warning of the potential for skin injuries. (author)

  12. The MCNP6 Analytic Criticality Benchmark Suite

    Energy Technology Data Exchange (ETDEWEB)

    Brown, Forrest B. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Monte Carlo Codes Group

    2016-06-16

    Analytical benchmarks provide an invaluable tool for verifying computer codes used to simulate neutron transport. Several collections of analytical benchmark problems [1-4] are used routinely in the verification of production Monte Carlo codes such as MCNP® [5,6]. Verification of a computer code is a necessary prerequisite to the more complex validation process. The verification process confirms that a code performs its intended functions correctly. The validation process involves determining the absolute accuracy of code results vs. nature. In typical validations, results are computed for a set of benchmark experiments using a particular methodology (code, cross-section data with uncertainties, and modeling) and compared to the measured results from the set of benchmark experiments. The validation process determines bias, bias uncertainty, and possibly additional margins. Verification is generally performed by the code developers, while validation is generally performed by code users for a particular application space. The VERIFICATION_KEFF suite of criticality problems [1,2] was originally a set of 75 criticality problems found in the literature for which exact analytical solutions are available. Even though the spatial and energy detail is necessarily limited in analytical benchmarks, typically to a few regions or energy groups, the exact solutions obtained can be used to verify that the basic algorithms, mathematics, and methods used in complex production codes perform correctly. The present work has focused on revisiting this benchmark suite. A thorough review of the problems resulted in discarding some of them as not suitable for MCNP benchmarking. For the remaining problems, many of them were reformulated to permit execution in either multigroup mode or in the normal continuous-energy mode for MCNP. Execution of the benchmarks in continuous-energy mode provides a significant advance to MCNP verification methods.

  13. About uncertainties related to the indirect method of measuring radiation doses in paediatric radiography

    International Nuclear Information System (INIS)

    The indirect method of measuring radiation doses in diagnostic radiology has played an important role in large-scale dosimetric surveys of paediatric patients. Determining the uncertainties associated with this method is crucial for comparing the results surveyed in different radiology departments for optimisation purposes. Entrance surface doses (ESD) received by paediatric patients in chest and skull radiographs were estimated by the indirect method in three public hospitals of the city of Belo Horizonte, Brazil: two general hospitals and a children's hospital. Uncertainties of the entrance doses were calculated from the uncertainties of the output measurements, backscatter factors, patient data and technique factors employed, within a 95% confidence limit. In one room of one general hospital, ESD values for diagnostic images of the chest were (74 ± 12%) μGy for a one-year-old child, (92 ± 11%) μGy for a five-year-old child and (135 ± 12%) μGy for a ten-year-old child. ESD values in the two radiographic procedures studied for a five-year-old child were generally lower than those published by the Commission of the European Communities in 1996 and higher than those published by the National Radiological Protection Board in 2000. The uncertainties of the output measurements and of the technique factors employed (a consequence of the non-standardisation of technique factors) were the main contributors to the high uncertainty values found in some rooms. (authors)
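
    A typical way to combine the uncertainty sources listed above is addition in quadrature of the relative standard uncertainties, expanded with a coverage factor k = 2 for the 95% confidence level. The sketch below illustrates this with assumed component uncertainties, not the values from the surveyed rooms.

```python
import math

# Hedged sketch of combining independent relative uncertainties in quadrature,
# as is typical for indirect ESD estimates. The component values are
# illustrative, not data from the surveyed hospitals.

components = {
    "tube output":        0.08,   # relative standard uncertainty
    "backscatter factor": 0.04,
    "technique factors":  0.07,   # kVp / mAs setting spread
    "patient distance":   0.03,
}

u_combined = math.sqrt(sum(u**2 for u in components.values()))
u_expanded = 2.0 * u_combined  # coverage factor k = 2, ~95% confidence

print(f"combined: {100 * u_combined:.1f}%, expanded (k=2): {100 * u_expanded:.1f}%")
```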

  14. The methods of tumors bed localization and boost dose delivery during conservative breast cancer treatment

    International Nuclear Information System (INIS)

    Breast-conserving therapy (BCT) consists of whole breast irradiation, usually at a dose of 50 Gy, and a boost dose of 10-20 Gy to the tumor bed. A decreased risk of local recurrence in patients administered the boost has been confirmed in a large randomized trial. The precise localization of the tumor bed (boost dose volume) is often difficult, while the recommended intraoperative placement of metal markers into the tumor cavity walls is not always performed. It is more common to localize the tumor bed using other available data, such as tumor site at the initial clinical examination or mammograms, skin scar localization, postoperative induration of the mammary gland, and histological examination. All these methods are considered less exact than the volume outlined by surgical clips. Alternative approaches allowing precise delivery of radiotherapy to the tumor bed are intraoperative placement of brachytherapy catheters or intraoperative external beam irradiation. In this review we discuss the methods used of determining the tumor bed and the different radiotherapy boost techniques used in breast cancer patients managed with BCT. We also present guidelines of the American Brachytherapy Society for the use of brachytherapy as a boost method. (author)

  15. Interactive Rapid Dose Assessment Model (IRDAM): reactor-accident assessment methods. Vol. 2

    Energy Technology Data Exchange (ETDEWEB)

    Poeton, R.W.; Moeller, M.P.; Laughlin, G.J.; Desrosiers, A.E.

    1983-05-01

    As part of the continuing emphasis on emergency preparedness, the US Nuclear Regulatory Commission (NRC) sponsored the development of a rapid dose assessment system by Pacific Northwest Laboratory (PNL). This system, the Interactive Rapid Dose Assessment Model (IRDAM) is a micro-computer based program for rapidly assessing the radiological impact of accidents at nuclear power plants. This document describes the technical bases for IRDAM including methods, models and assumptions used in calculations. IRDAM calculates whole body (5-cm depth) and infant thyroid doses at six fixed downwind distances between 500 and 20,000 meters. Radionuclides considered primarily consist of noble gases and radioiodines. In order to provide a rapid assessment capability consistent with the capacity of the Osborne-1 computer, certain simplifying approximations and assumptions are made. These are described, along with default values (assumptions used in the absence of specific input) in the text of this document. Two companion volumes to this one provide additional information on IRDAM. The user's Guide (NUREG/CR-3012, Volume 1) describes the setup and operation of equipment necessary to run IRDAM. Scenarios for Comparing Dose Assessment Models (NUREG/CR-3012, Volume 3) provides the results of calculations made by IRDAM and other models for specific accident scenarios.

  16. Interactive Rapid Dose Assessment Model (IRDAM): reactor-accident assessment methods. Vol.2

    International Nuclear Information System (INIS)

    As part of the continuing emphasis on emergency preparedness, the US Nuclear Regulatory Commission (NRC) sponsored the development of a rapid dose assessment system by Pacific Northwest Laboratory (PNL). This system, the Interactive Rapid Dose Assessment Model (IRDAM) is a micro-computer based program for rapidly assessing the radiological impact of accidents at nuclear power plants. This document describes the technical bases for IRDAM including methods, models and assumptions used in calculations. IRDAM calculates whole body (5-cm depth) and infant thyroid doses at six fixed downwind distances between 500 and 20,000 meters. Radionuclides considered primarily consist of noble gases and radioiodines. In order to provide a rapid assessment capability consistent with the capacity of the Osborne-1 computer, certain simplifying approximations and assumptions are made. These are described, along with default values (assumptions used in the absence of specific input) in the text of this document. Two companion volumes to this one provide additional information on IRDAM. The user's Guide (NUREG/CR-3012, Volume 1) describes the setup and operation of equipment necessary to run IRDAM. Scenarios for Comparing Dose Assessment Models (NUREG/CR-3012, Volume 3) provides the results of calculations made by IRDAM and other models for specific accident scenarios

  17. A point dose method for in vivo range verification in proton therapy

    International Nuclear Information System (INIS)

    Range uncertainty in proton therapy is a recognized concern. For certain treatment sites, less optimal beam directions are used to avoid the potential risk, but also with reduced benefit. In vivo dosimetry, with implanted or intra-cavity dosimeters, has been widely used for treatment verification in photon/electron therapy. The method cannot, however, verify the beam range for proton treatment, unless the treatment is delivered in a different manner. Specifically, the spread-out Bragg peaks in a proton field are split into two separate fields, each delivering a 'sloped' depth-dose distribution rather than the usual plateau of a typical proton field. The two fields are 'sloped' in opposite directions so that the total depth-dose distribution retains the constant dose plateau covering the target volume. By measuring the doses received from both fields and calculating their ratio, the water-equivalent path length to the location of the implanted dosimeter can be verified, thus limiting range uncertainty to only the remaining part of the beam path. Production of such subfields has been tested experimentally with a passive scattering beam delivery system. Phantom measurements have been performed to illustrate the application for in vivo beam range verification. (note)
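
    The inversion step can be illustrated with a simple linear model of the two 'sloped' fields: their sum is flat over the target, while their ratio is monotonic in depth and can therefore be solved for the water-equivalent depth of the dosimeter. The slope, plateau length and doses below are assumed values, not the authors' measured beam data.

```python
# Hedged sketch of the split-field range-verification idea under a linear slope
# model: dose_a rises with depth, dose_b is its mirror image, so their sum is a
# flat plateau while their ratio is monotonic in depth. All numbers are invented.

A0 = 0.5       # dose of field A at the proximal plateau edge (arbitrary units)
SLOPE = 0.1    # dose increase of field A per cm of depth
LENGTH = 10.0  # plateau (SOBP) length in cm

def dose_a(u):  # field A over depth u in [0, LENGTH]
    return A0 + SLOPE * u

def dose_b(u):  # field B: mirror image, so dose_a(u) + dose_b(u) is constant
    return A0 + SLOPE * (LENGTH - u)

def depth_from_ratio(r):
    """Invert r = dose_a(u) / dose_b(u) analytically for the depth u (cm)."""
    return (A0 * (r - 1.0) + r * SLOPE * LENGTH) / (SLOPE * (1.0 + r))

u_true = 4.0
r_meas = dose_a(u_true) / dose_b(u_true)
print(f"measured ratio {r_meas:.3f} -> inferred depth {depth_from_ratio(r_meas):.2f} cm")
```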

  18. Deviating From the Benchmarks

    DEFF Research Database (Denmark)

    Rocha, Vera; Van Praag, Mirjam; Carneiro, Anabela

    This paper studies three related questions: To what extent do otherwise similar startups employ different quantities and qualities of human capital at the moment of entry? How persistent are initial human capital choices over time? And how does deviating from human capital benchmarks influence firm survival? The analysis is based on a matched employer-employee dataset and covers about 17,500 startups in manufacturing and services. We adopt a new procedure to estimate individual benchmarks for the quantity and quality of initial human resources, acknowledging correlations between hiring decisions, founders' human capital, and the ownership structure of startups (solo entrepreneurs versus entrepreneurial teams). We then study the survival implications of exogenous deviations from these benchmarks, based on spline models for survival data. Our results indicate that (especially negative) deviations from...

  19. An empirical method to build up a model of proton dose distribution for a radiotherapy treatment-planning package

    International Nuclear Information System (INIS)

    An empirical method has been developed to build up a three-dimensional proton dose model in a water phantom. The proton beam delivering this dose distribution is constructed as a group of parallel, broad monoenergetic beams. The mean and the standard deviation of their ranges are input to fit the resulting central-axis depth-dose distribution to a pre-specified shape. The model has been used in a radiotherapy treatment-planning package for comparing proton and photon plans. This dose model is approximate and adequate for comparative treatment planning, and is also usable when locally measured proton dose distribution data are not available. (Author)

  20. Prediction of imipramine serum levels in enuretic children by a Bayesian method: comparison with two other conventional dosing methods.

    Science.gov (United States)

    Fernández de Gatta, M M; Tamayo, M; García, M J; Amador, D; Rey, F; Gutiérrez, J R; Domínguez-Gil Hurlé, A

    1989-11-01

    The aim of the present study was to characterize the kinetic behavior of imipramine (IMI) and desipramine in enuretic children and to evaluate the performance of different methods for dosage prediction based on individual and/or population data. The study was carried out in 135 enuretic children (93 boys) ranging in age between 5 and 13 years undergoing treatment with IMI in variable single doses (25-75 mg/day) administered at night. Sampling time was one-half the dosage interval at steady state. The number of data available for each patient varied (1-4) and was essentially limited by clinical criteria. Pharmacokinetic calculations were performed using a simple proportional relationship (method 1) and a multiple nonlinear regression program (MULTI 2 BAYES) with two different options: the ordinary least-squares method (method 2) and the least-squares method based on the Bayesian algorithm (method 3). The results obtained point to a coefficient of variation for the level/dose ratio of the drug (58%) that is significantly lower than that of the metabolite (101.4%). The forecasting capacity of method 1 is deficient both in accuracy [mean prediction error (MPE) = -5.48 ± 69.15] and in precision (root mean squared error = 46.42 ± 51.39). The standard deviation of the MPE (69) makes the method unacceptable from the clinical point of view. The more information that is available concerning the serum levels, the greater the accuracy and precision of methods 2 and 3. With the Bayesian method, less information on drug serum levels is needed to achieve clinically acceptable predictions. PMID:2595743
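
    The two figures of merit quoted above, mean prediction error (accuracy/bias) and root mean squared error (precision), are straightforward to compute; the sketch below does so for an invented set of predicted and measured serum levels.

```python
import math

# Hedged sketch of the two accuracy metrics quoted above: mean prediction error
# (MPE, bias) and root mean squared error (RMSE, precision). The paired values
# are invented for illustration, not the study's serum-level data.

measured  = [42.0, 55.0, 31.0, 60.0, 48.0]   # ng/mL
predicted = [39.0, 61.0, 35.0, 52.0, 50.0]   # ng/mL, e.g. from a Bayesian fit

errors = [p - m for p, m in zip(predicted, measured)]
mpe  = sum(errors) / len(errors)
rmse = math.sqrt(sum(e**2 for e in errors) / len(errors))

print(f"MPE = {mpe:.2f} ng/mL (bias), RMSE = {rmse:.2f} ng/mL (precision)")
```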

  1. Benchmarking for Best Practice

    CERN Document Server

    Zairi, Mohamed

    1998-01-01

    Benchmarking for Best Practice uses up-to-the-minute case-studies of individual companies and industry-wide quality schemes to show how and why implementation has succeeded. For any practitioner wanting to establish best practice in a wide variety of business areas, this book makes essential reading. It is also an ideal textbook on the applications of TQM since it describes concepts, covers definitions and illustrates the applications with first-hand examples. Professor Mohamed Zairi is an international expert and leading figure in the field of benchmarking. His pioneering work in this area...

  2. Remote Sensing Segmentation Benchmark

    Czech Academy of Sciences Publication Activity Database

    Mikeš, Stanislav; Haindl, Michal; Scarpa, G.

    Piscataway, NJ : IEEE Press, 2012, s. 1-4. ISBN 978-1-4673-4960-4. [IAPR Workshop on Pattern Recognition in Remote Sensing (PRRS). Tsukuba Science City (JP), 11.11.2012] R&D Projects: GA ČR GAP103/11/0335; GA ČR GA102/08/0593 Grant ostatní: CESNET(CZ) 409/2011 Keywords : remote sensing * segmentation * benchmark Subject RIV: BD - Theory of Information http://library.utia.cas.cz/separaty/2013/RO/mikes-remote sensing segmentation benchmark.pdf

  3. Comparative study of patient doses calculated with two methods for breast digital tomosynthesis

    International Nuclear Information System (INIS)

    In this study, the average glandular doses (DG) delivered in breast tomosynthesis examinations were estimated for a sample of 150 patients using two different methods. In method 1, the air kerma-to-DG conversion factors were those tabulated by Dance et al.; in method 2, those of Feng et al. The examination protocol followed in the unit of this study consists of two views per breast, each view composed of a 2D acquisition and a tomosynthesis (3D) scan. The resulting DG values from the two methods present statistically significant differences for the 2D modality (p=0.02) and are similar for the 3D scan (p=0.22). The estimated median DG delivered to the most frequent breasts (thicknesses between 50 and 60 mm) in a single 3D acquisition is 1.7 mGy (36% and 17% higher than the value for the 2D mode estimated with each method), which lies far below the tolerances established by the Spanish Protocol of Quality Control in Radiodiagnostics (2011). The total DG for a tomosynthesis examination (6.0 mGy) is a factor of 2.4 higher than the dose delivered in a 2D examination with two views (method 1). (Author)

  4. Calibration and intercomparison methods of dose calibrators used in nuclear medicine facilities; Metodos de calibracao e de intercomparacao de calibradores de dose utilizados em servicos de medicina nuclear

    Energy Technology Data Exchange (ETDEWEB)

    Costa, Alessandro Martins da

    1999-07-01

    Dose calibrators are used in most nuclear medicine facilities to determine the amount of radioactivity administered to a patient in a particular investigation or therapeutic procedure. It is therefore of vital importance that the equipment performs well and is regularly calibrated at an authorized laboratory. This occurs if adequate quality assurance procedures are carried out. Some quality control tests should be performed daily, others biannually or yearly, testing, for example, accuracy and precision, reproducibility and response linearity. In this work a commercial dose calibrator was calibrated with solutions of radionuclides used in nuclear medicine. Simple instrument tests, such as response linearity and the variation of response with source volume at constant source activity concentration, were performed. This instrument can now be used as a working standard for the calibration of other dose calibrators. An intercomparison procedure was proposed as a method of quality control of dose calibrators used in nuclear medicine facilities. (author)

  5. An in vivo dose verification method for SBRT–VMAT delivery using the EPID

    International Nuclear Information System (INIS)

    Purpose: Radiation treatments have become increasingly more complex with the development of volumetric modulated arc therapy (VMAT) and the use of stereotactic body radiation therapy (SBRT). SBRT involves the delivery of substantially larger doses over fewer fractions than conventional therapy. SBRT–VMAT treatments will strongly benefit from in vivo patient dose verification, as any errors in delivery can be more detrimental to the radiobiology of the patient as compared to conventional therapy. Electronic portal imaging devices (EPIDs) are available on most commercial linear accelerators (Linacs) and their documented use for dosimetry makes them valuable tools for patient dose verification. In this work, the authors customize and validate a physics-based model which utilizes on-treatment EPID images to reconstruct the 3D dose delivered to the patient during SBRT–VMAT delivery. Methods: The SBRT Linac head, including jaws, multileaf collimators, and flattening filter, was modeled using Monte Carlo methods and verified with measured data. The simulation provides energy spectrum data that are used by their “forward” model to then accurately predict fluence generated by a SBRT beam at a plane above the patient. This fluence is then transported through the patient and then the dose to the phosphor layer in the EPID is calculated. Their “inverse” model back-projects the EPID measured focal fluence to a plane upstream of the patient and recombines it with the extra-focal fluence predicted by the forward model. This estimate of total delivered fluence is then forward projected onto the patient’s density matrix and a collapsed cone convolution algorithm calculates the dose delivered to the patient. The model was tested by reconstructing the dose for two prostate, three lung, and two spine SBRT–VMAT treatment fractions delivered to an anthropomorphic phantom. It was further validated against actual patient data for a lung and spine SBRT–VMAT plan.

  6. An in vivo dose verification method for SBRT–VMAT delivery using the EPID

    Energy Technology Data Exchange (ETDEWEB)

    McCowan, P. M., E-mail: peter.mccowan@cancercare.mb.ca [Department of Physics and Astronomy, University of Manitoba, Winnipeg, Manitoba R3T 2N2 (Canada); Medical Physics Department, CancerCare Manitoba, 675 McDermot Avenue, Winnipeg, Manitoba R3E 0V9 (Canada); Van Uytven, E.; Van Beek, T.; Asuni, G. [Medical Physics Department, CancerCare Manitoba, 675 McDermot Avenue, Winnipeg, Manitoba R3E 0V9 (Canada); McCurdy, B. M. C. [Department of Physics and Astronomy, University of Manitoba, Winnipeg, Manitoba R3T 2N2 (Canada); Medical Physics Department, CancerCare Manitoba, 675 McDermot Avenue, Winnipeg, Manitoba R3E 0V9 (Canada); Department of Radiology, University of Manitoba, 820 Sherbrook Street, Winnipeg, Manitoba R3A 1R9 (Canada)

    2015-12-15

    Purpose: Radiation treatments have become increasingly more complex with the development of volumetric modulated arc therapy (VMAT) and the use of stereotactic body radiation therapy (SBRT). SBRT involves the delivery of substantially larger doses over fewer fractions than conventional therapy. SBRT–VMAT treatments will strongly benefit from in vivo patient dose verification, as any errors in delivery can be more detrimental to the radiobiology of the patient as compared to conventional therapy. Electronic portal imaging devices (EPIDs) are available on most commercial linear accelerators (Linacs) and their documented use for dosimetry makes them valuable tools for patient dose verification. In this work, the authors customize and validate a physics-based model which utilizes on-treatment EPID images to reconstruct the 3D dose delivered to the patient during SBRT–VMAT delivery. Methods: The SBRT Linac head, including jaws, multileaf collimators, and flattening filter, was modeled using Monte Carlo methods and verified with measured data. The simulation provides energy spectrum data that are used by their “forward” model to then accurately predict fluence generated by a SBRT beam at a plane above the patient. This fluence is then transported through the patient and then the dose to the phosphor layer in the EPID is calculated. Their “inverse” model back-projects the EPID measured focal fluence to a plane upstream of the patient and recombines it with the extra-focal fluence predicted by the forward model. This estimate of total delivered fluence is then forward projected onto the patient’s density matrix and a collapsed cone convolution algorithm calculates the dose delivered to the patient. The model was tested by reconstructing the dose for two prostate, three lung, and two spine SBRT–VMAT treatment fractions delivered to an anthropomorphic phantom. It was further validated against actual patient data for a lung and spine SBRT–VMAT plan.

  7. Improved method to label beta-2 agonists in metered-dose inhalers with technetium-99m

    Energy Technology Data Exchange (ETDEWEB)

    Ballinger, J.R.; Calcutt, L.E.; Hodder, R.V.; Proulx, A.; Gulenchyn, K.Y. (Ottawa Civic Hospital, Ottawa (Canada). Div. of Nuclear Medicine and Respiratory Unit)

    1993-01-01

    Labelling beta-2 agonists in a metered-dose inhaler (MDI) with technetium-99m allows imaging of the deposition of the aerosol in the respiratory tract. We have developed an improved labelling method in which anhydrous pertechnetate is dissolved in a small volume of ethanol, diluted with a fluorocarbon, and introduced into a commercial MDI. Imaging the MDI demonstrated that the 99mTc was associated with the active ingredient, not just the propellant. The method has been used successfully with salbutamol and fenoterol MDIs and should be directly applicable to other MDIs which contain hydrophilic drugs. (Author).

  8. A continuous OSL scanning method for analysis of radiation depth-dose profiles in bricks

    International Nuclear Information System (INIS)

    This article describes the development of a method for directly measuring radiation depth-dose profiles from brick, tile and porcelain cores, without the need for sample separation techniques. For the brick cores, examples are shown of the profiles generated by artificial irradiation using the different photon energies from 137Cs and 60Co gamma sources; comparison is drawn with both the theoretical calculations derived from Monte Carlo simulations, as well as experimental measurements made using more conventional optically stimulated luminescence methods of analysis. (Author)

  9. Substantiation of 25 kGy (by use of VDmax25 method) as the sterilization dose

    International Nuclear Information System (INIS)

    The international standards for radiation sterilization require evidence of the effectiveness of a minimum sterilization dose of 25 kGy but do not provide detailed guidance on how this evidence can be generated. Although many of the procedural elements in the VDmax method are similar to those of Method 1 of ANSI/AAMI/ISO 11137-2, there are differences that require elaboration. In this project, the test procedure of the VDmax25 method was established and then validated under the test conditions of the Radiation Microbiology Laboratory (RML). Besides, this method has been applied successfully for two years as a routine test service to meet the demand of manufacturing firms. In the near future, RML will be the only laboratory in Turkey accredited for the 'validation of radiation sterilization' standard (ANSI/AAMI/ISO 11137-2:2006), which includes the VDmax25 method.

  10. Benchmarking for Cost Improvement. Final report

    Energy Technology Data Exchange (ETDEWEB)

    1993-09-01

    The US Department of Energy's (DOE) Office of Environmental Restoration and Waste Management (EM) conducted the Benchmarking for Cost Improvement initiative with three objectives: Pilot test benchmarking as an EM cost improvement tool; identify areas for cost improvement and recommend actions to address these areas; provide a framework for future cost improvement. The benchmarking initiative featured the use of four principal methods (program classification, nationwide cost improvement survey, paired cost comparison and component benchmarking). Interested parties contributed during both the design and execution phases. The benchmarking initiative was conducted on an accelerated basis. Of necessity, it considered only a limited set of data that may not be fully representative of the diverse and complex conditions found at the many DOE installations. The initiative generated preliminary data about cost differences and it found a high degree of convergence on several issues. Based on this convergence, the report recommends cost improvement strategies and actions. This report describes the steps taken as part of the benchmarking initiative and discusses the findings and recommended actions for achieving cost improvement. The results and summary recommendations, reported below, are organized by the study objectives.

  11. Clinically meaningful performance benchmarks in MS

    Science.gov (United States)

    Motl, Robert W.; Scagnelli, John; Pula, John H.; Sosnoff, Jacob J.; Cadavid, Diego

    2013-01-01

    Objective: Identify and validate clinically meaningful Timed 25-Foot Walk (T25FW) performance benchmarks in individuals living with multiple sclerosis (MS). Methods: A cross-sectional study of 159 MS patients first identified candidate T25FW benchmarks. To characterize the clinical meaningfulness of T25FW benchmarks, we ascertained their relationships to real-life anchors, functional independence, and physiologic measurements of gait and disease progression. Candidate T25FW benchmarks were then prospectively validated in 95 subjects using 13 measures of ambulation and cognition, patient-reported outcomes, and optical coherence tomography. Results: T25FW of 6 to 7.99 seconds was associated with a change in occupation due to MS, occupational disability, walking with a cane, and needing “some help” with instrumental activities of daily living; T25FW ≥8 seconds was associated with collecting Supplemental Security Income and government health care, walking with a walker, and inability to do instrumental activities of daily living. During prospective benchmark validation, we trichotomized data by the T25FW benchmark ranges of performance. PMID:24174581

  12. ''FULL-CORE'' VVER-440 calculation benchmark

    International Nuclear Information System (INIS)

    Because of the difficulties with experimental validation of the pin-by-pin power distribution predicted by macro-codes, we decided to prepare a calculation benchmark named ''FULL-CORE'' VVER-440. This benchmark is a two-dimensional (2D) calculation benchmark based on the VVER-440 reactor core cold-state geometry, taking into account the explicit geometry of the radial reflector. The main task of this benchmark is to test the pin-by-pin power distribution in fuel assemblies predicted by the macro-codes that are used for neutron-physics calculations, especially for VVER-440 reactors. The proposal of this benchmark was presented at the 21st Symposium of AER in 2011. The reference solution has been calculated by the MCNP code using the Monte Carlo method and the results have been published in the AER community. The results of the reference calculation were presented at the 22nd Symposium of AER in 2012. In this paper we compare the available macro-code results for this calculation benchmark.

  13. Action-Oriented Benchmarking: Concepts and Tools

    Energy Technology Data Exchange (ETDEWEB)

    California Energy Commission; Mathew, Paul; Mills, Evan; Mathew, Paul; Piette, Mary Ann; Bourassa, Norman; Brook, Martha

    2008-02-13

    Most energy benchmarking tools provide static feedback on how one building compares to a larger set of loosely similar buildings, without providing information at the end-use level or on what can be done to reduce consumption, cost, or emissions. In this article--Part 1 of a two-part series--we describe an 'action-oriented benchmarking' approach, which extends whole-building energy benchmarking to include analysis of system and component energy use metrics and features. Action-oriented benchmarking thereby allows users to generate more meaningful metrics and to identify, screen and prioritize potential efficiency improvements. This opportunity assessment process can then be used to inform and optimize a full-scale audit or commissioning process. We introduce a new web-based action-oriented benchmarking system and associated software tool-EnergyIQ. The benchmarking methods, visualizations, and user interface design are informed by an end-user needs assessment survey and best-practice guidelines from ASHRAE.

  14. Standardized benchmarking in the quest for orthologs.

    Science.gov (United States)

    Altenhoff, Adrian M; Boeckmann, Brigitte; Capella-Gutierrez, Salvador; Dalquen, Daniel A; DeLuca, Todd; Forslund, Kristoffer; Huerta-Cepas, Jaime; Linard, Benjamin; Pereira, Cécile; Pryszcz, Leszek P; Schreiber, Fabian; da Silva, Alan Sousa; Szklarczyk, Damian; Train, Clément-Marie; Bork, Peer; Lecompte, Odile; von Mering, Christian; Xenarios, Ioannis; Sjölander, Kimmen; Jensen, Lars Juhl; Martin, Maria J; Muffato, Matthieu; Gabaldón, Toni; Lewis, Suzanna E; Thomas, Paul D; Sonnhammer, Erik; Dessimoz, Christophe

    2016-05-01

    Achieving high accuracy in orthology inference is essential for many comparative, evolutionary and functional genomic analyses, yet the true evolutionary history of genes is generally unknown and orthologs are used for very different applications across phyla, requiring different precision-recall trade-offs. As a result, it is difficult to assess the performance of orthology inference methods. Here, we present a community effort to establish standards and an automated web-based service to facilitate orthology benchmarking. Using this service, we characterize 15 well-established inference methods and resources on a battery of 20 different benchmarks. Standardized benchmarking provides a way for users to identify the most effective methods for the problem at hand, sets a minimum requirement for new tools and resources, and guides the development of more accurate orthology inference methods. PMID:27043882

  15. Standardized benchmarking in the quest for orthologs

    DEFF Research Database (Denmark)

    Altenhoff, Adrian M; Boeckmann, Brigitte; Capella-Gutierrez, Salvador;

    2016-01-01

    Achieving high accuracy in orthology inference is essential for many comparative, evolutionary and functional genomic analyses, yet the true evolutionary history of genes is generally unknown and orthologs are used for very different applications across phyla, requiring different precision-recall trade-offs. As a result, it is difficult to assess the performance of orthology inference methods. Here, we present a community effort to establish standards and an automated web-based service to facilitate orthology benchmarking. Using this service, we characterize 15 well-established inference methods and resources on a battery of 20 different benchmarks. Standardized benchmarking provides a way for users to identify the most effective methods for the problem at hand, sets a minimum requirement for new tools and resources, and guides the development of more accurate orthology inference methods.

  16. DOSE MEASUREMENT IN ULTRAVIOLET DISINFECTION OF WATER AND WASTE WATER BY CHEMICAL METHOD

    Directory of Open Access Journals (Sweden)

    F.Vaezi

    1995-06-01

    Full Text Available Chemical methods (actinometry) depend on the measurement of the extent to which a chemical reaction occurs under the influence of UV light. Two chemical actinometers have been used in this research. In one method, mixtures of potassium peroxydisulphate and butanol solutions were irradiated for various time intervals, and pH changes were determined. A linear relationship was observed between these changes and the UV dose applied. In the other method, acidic solutions of ammonium molybdate and ethyl alcohol were irradiated and the intensity of the blue colour developed was determined by titration with potassium permanganate solutions. The volumes of titrant used were then plotted against the UV doses. This showed a linear relationship which could be used for dosimetry. Both of these actinometers proved to be reliable. The first is the method of choice where high accuracy is required; the second is preferred for its feasibility, requiring no special equipment or hard-to-obtain raw materials.
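
    Both actinometers rest on a linear dose-response calibration, which in practice amounts to fitting a line and inverting it for unknown samples. The sketch below shows this for hypothetical titration data; the calibration points are invented.

```python
import numpy as np

# Hedged sketch of a linear actinometer calibration: fit titrant volume vs.
# known UV dose, then invert the line to estimate an unknown dose. The
# calibration points are invented for illustration.

uv_dose = np.array([0.0, 10.0, 20.0, 30.0, 40.0])   # mWs/cm^2 (known doses)
titrant_ml = np.array([0.1, 1.2, 2.2, 3.3, 4.2])    # KMnO4 volume used

slope, intercept = np.polyfit(uv_dose, titrant_ml, 1)  # linear calibration fit

def dose_from_titrant(v_ml):
    """Invert the calibration line to estimate UV dose from titrant volume."""
    return (v_ml - intercept) / slope

print(f"2.8 mL of titrant -> estimated dose {dose_from_titrant(2.8):.1f} mWs/cm^2")
```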

  17. Effects of CT based Voxel Phantoms on Dose Distribution Calculated with Monte Carlo Method

    Institute of Scientific and Technical Information of China (English)

    Chen Chaobin; Huang Qunying; Wu Yican

    2005-01-01

    A few CT-based voxel phantoms were produced to investigate the sensitivity of Monte Carlo simulations of X-ray beam and electron beam to the proportions of elements and the mass densities of the materials used to express the patient's anatomical structure. The human body can be well outlined by air, lung, adipose, muscle, soft bone and hard bone to calculate the dose distribution with Monte Carlo method. The effects of the calibration curves established by using various CT scanners are not clinically significant based on our investigation. The deviation from the values of cumulative dose volume histogram derived from CT-based voxel phantoms is less than 1% for the given target.

  18. Effects of CT based Voxel Phantoms on Dose Distribution Calculated with Monte Carlo Method

    Science.gov (United States)

    Chen, Chaobin; Huang, Qunying; Wu, Yican

    2005-04-01

    A few CT-based voxel phantoms were produced to investigate the sensitivity of Monte Carlo simulations of x-ray beam and electron beam to the proportions of elements and the mass densities of the materials used to express the patient's anatomical structure. The human body can be well outlined by air, lung, adipose, muscle, soft bone and hard bone to calculate the dose distribution with Monte Carlo method. The effects of the calibration curves established by using various CT scanners are not clinically significant based on our investigation. The deviation from the values of cumulative dose volume histogram derived from CT-based voxel phantoms is less than 1% for the given target.
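
    The calibration curves mentioned above map CT numbers to the mass densities (and materials) used to build the voxel phantom for the Monte Carlo calculation. A common implementation is a piecewise-linear lookup, sketched below with illustrative breakpoints rather than scanner-specific data from the paper.

```python
import numpy as np

# Hedged sketch of a CT calibration curve: a piecewise-linear mapping from CT
# number (HU) to mass density for voxel-phantom construction. Breakpoints are
# illustrative assumptions, not data from any particular scanner.

hu_points      = np.array([-1000.0, -100.0, 0.0, 300.0, 1500.0])   # CT numbers
density_points = np.array([0.00121, 0.93, 1.00, 1.10, 1.85])       # g/cm^3

def hu_to_density(hu):
    """Interpolate mass density from CT number (clamped at the table ends)."""
    return np.interp(hu, hu_points, density_points)

for hu in (-800, -50, 40, 900):
    print(f"HU {hu:5d} -> {hu_to_density(hu):.3f} g/cm^3")
```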

  19. A method applicable to effective dose rate estimates for aircrew dosimetry

    CERN Document Server

    Ferrari, A; Rancati, T

    2001-01-01

    The inclusion of cosmic radiation as occupational exposure under ICRP Publication 60 and the European Union Council Directive 96/29/Euratom has highlighted the need to estimate the exposure of aircrew. According to a report of the Group of Experts established under the terms of Article 31 of the European Treaty, the individual estimates of dose for flights below 15 km may be done using an appropriate computer program. In order to calculate the radiation exposure at aircraft altitudes, calculations have been performed by means of the Monte Carlo transport code FLUKA. On the basis of the calculated results, a simple method is proposed for the individual evaluation of effective dose rate due to the galactic component of cosmic radiation as a function of latitude and altitude. (13 refs).
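
    A "simple method ... as a function of latitude and altitude" suggests a tabulated parametrization that is interpolated for each flight. The sketch below shows bilinear interpolation over an assumed table indexed by altitude and vertical cut-off rigidity (a proxy for geomagnetic latitude); the dose-rate values are placeholders, not the FLUKA results.

```python
import numpy as np

# Hedged sketch of interpolating an effective dose-rate table by altitude and
# cut-off rigidity. The tabulated values are invented placeholders.

altitude_km = np.array([8.0, 10.0, 12.0])        # flight altitudes
rigidity_gv = np.array([0.0, 5.0, 10.0, 15.0])   # vertical cut-off rigidity (GV)

# dose rate in uSv/h; rows: altitude, columns: rigidity (assumed values)
dose_rate = np.array([
    [3.0, 2.4, 1.6, 1.2],
    [5.0, 3.9, 2.5, 1.8],
    [7.5, 5.8, 3.6, 2.5],
])

def rate(alt_km, rig_gv):
    """Bilinear interpolation of the dose-rate table."""
    at_each_altitude = [np.interp(rig_gv, rigidity_gv, row) for row in dose_rate]
    return float(np.interp(alt_km, altitude_km, at_each_altitude))

print(f"11 km, 3 GV -> {rate(11.0, 3.0):.2f} uSv/h")
```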

  20. A method for radiobiological investigations in radiation fields with different LET and high dose rates

    International Nuclear Information System (INIS)

    For investigations (1) performed in the field of radiobiology with radiation of different LET and a relatively high background dose rate of one component (e.g. investigations with fast and intermediate reactor neutrons), (2) concerning radiation risk studies over a wide range, and (3) involving irradiations covering a long time period (up to 100 days), a test system is necessary which on the one hand makes it possible to analyze the influence of radiation of different LET and on the other shows relatively radiation-resistant behaviour and allows simple cell-cycle regulation. A survey is given of the installed device for a simple cell observation method, the biological test system used, and the analysis of effects caused by dose, repair and LET. It is possible to analyze the behaviour of the non-surviving cells and to demonstrate different reactions of the test parameters to radiation of different LET. (author)

  1. Radiation Dose Reduction Methods For Use With Fluoroscopic Imaging, Computers And Implications For Image Quality

    Science.gov (United States)

    Edmonds, E. W.; Hynes, D. M.; Rowlands, J. A.; Toth, B. D.; Porter, A. J.

    1988-06-01

    The use of a beam splitting device for medical gastro-intestinal fluoroscopy has demonstrated that clinical images obtained with a 100mm photofluorographic camera, and a 1024 X 1024 digital matrix with pulsed progressive readout acquisition techniques, are identical. In addition, it has been found that clinical images can be obtained with digital systems at dose levels lower than those possible with film. The use of pulsed fluoroscopy with intermittent storage of the fluoroscopic image has also been demonstrated to reduce the fluoroscopy part of the examination to very low dose levels, particularly when low repetition rates of about 2 frames per second (fps) are used. The use of digital methods reduces the amount of radiation required and also the heat generated by the x-ray tube. Images can therefore be produced using a very small focal spot on the x-ray tube, which can produce further improvement in the resolution of the clinical images.

  2. A method for calculation of dose per unit concentration values for aquatic biota

    International Nuclear Information System (INIS)

    A dose per unit concentration database has been generated for application to ecosystem assessments within the FASSET framework. Organisms are represented by ellipsoids of appropriate dimensions, and the proportion of radiation absorbed within the organisms is calculated using a numerical method implemented in a series of spreadsheet-based programs. Energy-dependent absorbed fraction functions have been derived for calculating the total dose per unit concentration of radionuclides present in biota or in the media they inhabit. All radionuclides and reference organism dimensions defined within FASSET for marine and freshwater ecosystems are included. The methodology has been validated against more complex dosimetric models and compared with human dosimetry based on ICRP 72. Ecosystem assessments for aquatic biota within the FASSET framework can now be performed simply, once radionuclide concentrations in target organisms are known, either directly or indirectly by deduction from radionuclide concentrations in the surrounding medium
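
    The core of such a dose-per-unit-concentration calculation is a sum over emissions of energy × yield × absorbed fraction, converted to dose-rate units. The sketch below illustrates the internal-exposure case with rough, invented nuclide data; only the MeV-to-µGy/h conversion factor is physical.

```python
# Hedged sketch of an internal dose-per-unit-concentration (DPUC) calculation in
# the spirit of the ellipsoid-based approach above: sum over emissions of
# energy * yield * absorbed fraction, converted to uGy/h per Bq/kg. The nuclide
# data and absorbed fractions are rough illustrative numbers, not FASSET values.

MEV_PER_DECAY_TO_UGY_H = 5.77e-4  # 1 MeV/decay at 1 Bq/kg is ~5.77e-4 uGy/h

# (energy in MeV, yield per decay, absorbed fraction in the ellipsoid) - assumed
emissions = [
    (0.51, 0.95, 1.00),   # beta (mean energy), essentially fully absorbed
    (0.66, 0.85, 0.30),   # gamma, only partially absorbed in a small organism
]

dpuc = sum(e * y * af for e, y, af in emissions) * MEV_PER_DECAY_TO_UGY_H
print(f"internal DPUC = {dpuc:.2e} uGy/h per Bq/kg")
```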

  3. Absorbed dose measurements in mammography using Monte Carlo method and ZrO2+PTFE dosemeters

    International Nuclear Information System (INIS)

    The mammography examination is a central tool for breast cancer diagnosis. In addition, screening programs are conducted periodically to examine asymptomatic women in certain age groups; these programs have been shown to reduce breast cancer mortality. Early detection of breast cancer is achieved through mammography, which contrasts the glandular and adipose tissue with a probable calcification. The parameters used for mammography are based on the thickness and density of the breast; their values depend on the voltage, current, focal spot and anode-filter combination. To achieve a clear image at minimum dose, appropriate irradiation conditions must be chosen. The risk associated with mammography should not be ignored. This study was performed at General Hospital No. 1 of IMSS in Zacatecas. A glucose phantom was used, and the air kerma at the entrance of the breast was calculated using Monte Carlo methods and measured with ZrO2+PTFE thermoluminescent dosemeters; this calculation was completed by calculating the absorbed dose. (author)

  4. A method for measuring personnel neutron doses through induced changes in molecular weight of cellulose nitrate

    International Nuclear Information System (INIS)

    In this work, a new method for measuring fast neutron doses has been developed, based on the induced changes in the average molecular weight of cellulose nitrate (CN) foils after irradiation with fast neutrons. The mean molecular weight of the irradiated CN samples has been determined by measuring changes in the viscosity of CN solutions in ethyl acetate at different concentrations. An empirical formula for calculating the change in the mean molecular weight of CN after irradiation with fission neutrons, at doses over the range 0.6-10⁴ rad, has been proposed and found to fit the experimental data within ±6.3%. The effect of neutron energy on the induced changes in the mean molecular weight of CN has been studied. Moreover, the fading of these changes after storage at temperatures of 40 and 60 °C for periods up to 48 h has been investigated. (Auth.)
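
    The viscometric step described here is conventionally handled with the Mark-Houwink relation [eta] = K*M^a after extrapolating the reduced viscosity to zero concentration. A sketch of that route; the constants K and a, the efflux times and the concentrations are hypothetical placeholders, not the paper's calibrated empirical formula:

      # Dilute-solution viscometry -> intrinsic viscosity -> mean molecular weight.
      import numpy as np

      K, a = 2.5e-4, 0.90   # hypothetical Mark-Houwink constants ([eta] in dL/g)

      c = np.array([0.2, 0.4, 0.6, 0.8])                   # concentrations, g/dL
      t_solvent = 100.0                                    # ethyl acetate efflux time, s
      t_solution = np.array([128.0, 159.0, 193.0, 230.0])  # solution efflux times, s

      eta_rel = t_solution / t_solvent   # relative viscosity
      eta_sp = eta_rel - 1.0             # specific viscosity
      eta_red = eta_sp / c               # reduced viscosity, dL/g

      # Huggins extrapolation: eta_red ~ [eta] + k_H*[eta]^2*c; intercept at c = 0.
      slope, intrinsic = np.polyfit(c, eta_red, 1)

      M = (intrinsic / K) ** (1.0 / a)   # mean molecular weight, g/mol
      print(f"[eta] = {intrinsic:.3f} dL/g, M = {M:.3e} g/mol")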

  5. Use of rank sum method in identifying high occupational dose jobs for ALARA implementation

    International Nuclear Information System (INIS)

    The cost-effective reduction of occupational radiation exposure (ORE) dose at a nuclear power plant cannot be achieved without an extensive analysis of the accumulated ORE dose data of existing plants. It is necessary to identify which jobs incur high ORE doses so that ALARA measures can be targeted. In this study, the Rank Sum Method (RSM) is used to identify high-ORE jobs. As a case study, the database of ORE-related maintenance and repair jobs for Kori Units 3 and 4 is assessed, and the top twenty high-ORE jobs are identified. The results are verified and validated using the Friedman test, and RSM is found to be a very efficient way of analyzing the data. (author)
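
    A minimal sketch of the rank-sum screening step: rank every job under each exposure-related criterion, sum the ranks, and flag the smallest totals as candidate high-ORE jobs. The job names, criteria and figures are illustrative assumptions, not the Kori Units 3 and 4 database:

      # Rank Sum Method: lower rank sum = higher-priority job for ALARA review.
      from scipy.stats import rankdata  # average ranks for ties

      jobs = {  # collective dose (man-mSv), frequency (per cycle), crew size
          "S/G tube inspection":   (120.0, 1, 25),
          "RCP seal replacement":  (95.0,  2, 12),
          "Valve overhaul":        (40.0,  6, 8),
          "In-core detector work": (60.0,  1, 10),
      }

      names = list(jobs)
      rank_sums = [0.0] * len(names)
      for values in zip(*jobs.values()):          # iterate criterion by criterion
          ranks = rankdata([-v for v in values])  # largest value gets rank 1
          rank_sums = [s + r for s, r in zip(rank_sums, ranks)]

      for name, s in sorted(zip(names, rank_sums), key=lambda x: x[1]):
          print(f"{name:24s} rank sum = {s:.1f}")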

  6. Results of the IAEA-CEC coordinated research programme on radiation doses in diagnostic radiology and methods for reduction

    International Nuclear Information System (INIS)

    In 1991, a Coordinated Research Programme on the assessment of radiation doses in diagnostic radiology and methods for their reduction was started in IAEA Member States in cooperation with the CEC Radiation Protection Research Action. It was agreed to carry out a pilot exercise consisting of assessing patients' entrance surface doses, followed by analysis of the relevant parameters, quality control and corrections, and reassessment of doses where applicable. The results show that dose reduction was achieved without deterioration of the diagnostic information of the images, by applying simple and inexpensive methods. (Author)

  7. Using MCNP and Monte Carlo method for Investigation of dose field of Irradiation facility at Hanoi Irradiation Center

    International Nuclear Information System (INIS)

    The MCNP code, based on the Monte Carlo method, was used to calculate the dose rate in the air space of the irradiation room at Hanoi Irradiation Center. Experimental measurements were also carried out to investigate the actual distribution of the dose field in the air of the irradiator, as well as the distribution of absorbed dose in sample product containers. The results show a deviation between the MCNP calculations and the measurements: MCNP predicts a dose field symmetric about the axes passing through the center of the source rack, whereas the experimental data show that the dose rate increases in the lower part of the space, i.e., the closer to the floor, the higher the dose rate. The same trend was observed in the measurements of absorbed dose in the sample product containers. (author)

  8. Benchmarking the World's Best

    Science.gov (United States)

    Tucker, Marc S.

    2012-01-01

    A century ago, the United States was a world leader in industrial benchmarking. However, after World War II, once no one could compete with the U.S., it became complacent. Many industrialized countries now have higher student achievement and more equitable and efficient education systems. A higher proportion of young people in their workforces…

  9. Benchmarks: WICHE Region 2012

    Science.gov (United States)

    Western Interstate Commission for Higher Education, 2013

    2013-01-01

    Benchmarks: WICHE Region 2012 presents information on the West's progress in improving access to, success in, and financing of higher education. The information is updated annually to monitor change over time and encourage its use as a tool for informed discussion in policy and education communities. To establish a general context for the…

  10. Benchmarking Public Procurement 2016

    OpenAIRE

    World Bank Group

    2015-01-01

    Benchmarking Public Procurement 2016 Report aims to develop actionable indicators which will help countries identify and monitor policies and regulations that impact how private sector companies do business with the government. The project builds on the Doing Business methodology and was initiated at the request of the G20 Anti-Corruption Working Group.

  11. NAS Parallel Benchmarks Results

    Science.gov (United States)

    Saini, Subhash; Bailey, David H.; Lasinski, T. A. (Technical Monitor)

    1995-01-01

    The NAS Parallel Benchmarks (NPB) were developed in 1991 at NASA Ames Research Center to study the performance of parallel supercomputers. The eight benchmark problems are specified in a pencil-and-paper fashion, i.e., the complete details of the problem to be solved are given in a technical document, and, except for a few restrictions, benchmarkers are free to select the language constructs and implementation techniques best suited to a particular system. In this paper, we present new NPB performance results for the following systems: (a) Parallel-Vector Processors: Cray C90, Cray T90 and Fujitsu VPP500; (b) Highly Parallel Processors: Cray T3D, IBM SP2 and IBM SP-TN2 (Thin Nodes 2); (c) Symmetric Multiprocessors: Convex Exemplar SPP1000, Cray J90, DEC AlphaServer 8400 5/300, and SGI Power Challenge XL. We also present sustained performance per dollar for the Class B LU, SP and BT benchmarks, and outline NAS's future plans for the NPB.

  12. Comparison of passive and active radon measurement methods for personal occupational dose assessment

    Directory of Open Access Journals (Sweden)

    Hasanzadeh Elham

    2016-01-01

    To compare the performance of active short-term and passive long-term radon measurement methods, a study was carried out in several closed spaces, including a uranium mine in Iran. For the passive method, solid-state nuclear track detectors based on Lexan polycarbonate were utilized; for the active method, an AlphaGUARD monitor was used. The study focused on the correlation between the results obtained when estimating average indoor radon concentrations and the consequent personal occupational doses in various working places. The repeatability of each method was investigated as well. In addition, it was shown that the radon concentrations in different stations of the continually ventilated uranium mine were comparable to those in ground-floor laboratories or storage rooms (without continual ventilation) and lower than those in underground laboratories.
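
    Both measurement methods ultimately feed the same occupational dose estimate: mean radon concentration times an equilibrium factor, annual exposure time and a dose conversion factor. A sketch using the commonly quoted UNSCEAR-style conversion value; the equilibrium factor, working hours and concentrations are assumptions, not the study's data:

      # Annual effective dose from a mean workplace radon concentration.
      DCF = 9.0e-6   # mSv per (Bq*h/m^3) of equilibrium-equivalent radon
      F = 0.4        # assumed radon-progeny equilibrium factor
      T = 1800.0     # assumed annual working hours in the measured space

      def annual_dose_msv(radon_bq_m3):
          """Annual effective dose (mSv) for a given mean radon concentration."""
          return radon_bq_m3 * F * T * DCF

      for c in (50.0, 200.0, 1000.0):  # e.g. office, storage room, mine station
          print(f"{c:7.0f} Bq/m^3 -> {annual_dose_msv(c):6.2f} mSv/y")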

  13. BN-600 MOX Core Benchmark Analysis. Results from Phases 4 and 6 of a Coordinated Research Project on Updated Codes and Methods to Reduce the Calculational Uncertainties of the LMFR Reactivity Effects

    International Nuclear Information System (INIS)

    For those Member States that have or have had significant fast reactor development programmes, it is of utmost importance that they have validated, up-to-date codes and methods for fast reactor physics analysis in support of R and D and core design activities in the area of actinide utilization and incineration. In particular, some Member States have recently focused on fast reactor systems for minor actinide transmutation and on cores optimized for consuming rather than breeding plutonium, the physics of the breeder reactor cycle having already been widely investigated. Plutonium burning systems may have an important role in managing plutonium stocks until the time when major programmes of self-sufficient fast breeder reactors are established. For assessing the safety of these systems, it is important to determine the prediction accuracy of transient simulations and their associated reactivity coefficients. In response to Member States' expressed interest, the IAEA sponsored a coordinated research project (CRP) on Updated Codes and Methods to Reduce the Calculational Uncertainties of the LMFR Reactivity Effects. The CRP started in November 1999 and, at the first meeting, the members of the CRP endorsed a benchmark on the BN-600 hybrid core for consideration in its first studies. Benchmark analyses of the BN-600 hybrid core were performed during the first three phases of the CRP, investigating different nuclear data and levels of approximation in the calculation of safety related reactivity effects and their influence on uncertainties in transient analysis prediction. In an additional phase of the benchmark studies, experimental data were used for the verification and validation of nuclear data libraries and methods in support of the previous three phases. The results of phases 1, 2, 3 and 5 of the CRP are reported in IAEA-TECDOC-1623, BN-600 Hybrid Core Benchmark Analyses, Results from a Coordinated Research Project on Updated Codes and Methods to Reduce the Calculational Uncertainties of the LMFR Reactivity Effects.

  14. A new method for gamma dose-rate estimation of heterogeneous media in TL dating

    International Nuclear Information System (INIS)

    In this paper we develop a new method for estimating the gamma dose rate of heterogeneous archaeological deposits. The method is based upon a computerised reconstruction of the gamma-irradiating environment of the sample to be dated, applicable to any palaeodosimetric method such as thermoluminescence (TL), optically stimulated luminescence (OSL) and electron spin resonance (ESR). If the deposits overlying the sample to be dated have already been excavated, the missing upper environment (i.e. the relative position, the shape and the size of each lithologic component) is graphically reconstructed using the information recorded in field documents. For this purpose, the space surrounding the dated sample, within a sphere of 50 cm radius, is decomposed into contiguous spherical volume elements centred on the dated sample. Within each volume element, the proportion of each lithologic component is estimated. The K, U and Th contents of each lithologic component are determined, which allows the effective radiochemical composition of any lithologic component to be quantified. The relative weight of each volume element, which reflects the absorption of the γ rays within the radioactive system being studied (i.e. the dated sample and the surrounding environment), is estimated by a computation whose potentialities and limitations are discussed. Both this reconstruction and in situ radioactivity measurements were applied at the cave known as 'Grotte XVI' in Dordogne (southwestern France), in order to assess the γ dose rate of TL-dated burnt sediments extracted from a Mousterian combustion structure. In spite of its complexity, this reconstruction method yields a more suitable and more accurate determination of the environmental dose rate than the classical and/or simplified approaches.
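
    The computation sketched below mirrors the scheme described above: split the 50 cm sphere into weighted volume elements, assign each a mix of lithologic components, and average the components' infinite-matrix gamma dose rates (from their K, U and Th contents) with those weights. The conversion factors are approximate literature values, and the weights and compositions are illustrative placeholders:

      # Weighted reconstruction of the environmental gamma dose rate.
      # Infinite-matrix gamma dose-rate factors, Gy/ka per unit content (approx.).
      GAMMA_K, GAMMA_U, GAMMA_TH = 0.249, 0.112, 0.048  # per % K, ppm U, ppm Th

      components = {  # K (%), U (ppm), Th (ppm) for each lithologic component
          "sediment":   (1.8, 2.5, 9.0),
          "limestone":  (0.2, 1.2, 0.8),
          "burnt rock": (1.1, 1.8, 5.5),
      }

      def matrix_dose_rate(k_pct, u_ppm, th_ppm):
          return GAMMA_K * k_pct + GAMMA_U * u_ppm + GAMMA_TH * th_ppm

      # (weight of volume element, proportions of components within the element)
      elements = [
          (0.45, {"sediment": 0.9, "burnt rock": 0.1}),
          (0.35, {"sediment": 0.5, "limestone": 0.5}),
          (0.20, {"limestone": 1.0}),
      ]

      gamma = sum(
          w * sum(p * matrix_dose_rate(*components[n]) for n, p in mix.items())
          for w, mix in elements
      )
      print(f"reconstructed gamma dose rate = {gamma:.3f} Gy/ka")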

  15. Experimental method for calculation of effective doses in interventional radiology

    Energy Technology Data Exchange (ETDEWEB)

    Herraiz Lblanca, M. D.; Diaz Romero, F.; Casares Magaz, O.; Garrido Breton, C.; Catalan Acosta, A.; Hernandez Armas, J.

    2013-07-01

    This paper proposes a method for calculating the effective dose in any interventional radiology procedure using an Alderson RANDO anthropomorphic phantom and TLD-100 chip dosimeters. The method has been applied to an angiographic procedure: biliary drainage. The stated objectives are: (a) to assemble an experimental method for determining organ doses in order to calculate effective doses in complex procedures, and (b) to apply the method to the calculation of the effective dose from biliary drainage. (Author)
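
    The final step of such a method is the ICRP tissue-weighted sum E = sum_T w_T * H_T over the TLD-derived organ doses. A minimal sketch; the weighting factors are a subset of the ICRP 103 values (so the sum is deliberately partial), and the organ doses are hypothetical:

      # Effective dose as a tissue-weighted sum of measured organ doses.
      W_T = {"lung": 0.12, "stomach": 0.12, "colon": 0.12,
             "breast": 0.12, "liver": 0.04, "remainder": 0.12}  # ICRP 103 subset
      H_T_mSv = {"lung": 0.9, "stomach": 1.6, "colon": 0.7,
                 "breast": 0.3, "liver": 2.4, "remainder": 1.1}  # assumed TLD doses

      E = sum(W_T[t] * H_T_mSv[t] for t in W_T)
      print(f"partial effective dose = {E:.2f} mSv")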

  16. Two dimensional shielding benchmark analysis for sodium

    International Nuclear Information System (INIS)

    Results of the analysis of a shielding benchmark experiment on 'fast reactor source' neutron transport through 1.8 metres of sodium are presented in this paper. The two-dimensional discrete ordinates code DOT and the DLC-37 coupled neutron-gamma multigroup cross-section library were used in the analyses. The calculations are compared with measurements of: (i) the neutron spectral distribution given by activation detector responses, and (ii) gamma-ray doses. The agreement is found to be within ±30 per cent in the fast-spectrum region, and within a factor of 3.5 in the thermal region. For gammas, the calculations overpredict the dose rate by a factor of four. (author)
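
    Benchmark comparisons of this kind are usually summarized as calculated-to-experimental (C/E) ratios; a tiny sketch with invented detector responses chosen only to mirror the agreement factors quoted above:

      # C/E ratios per response; values are illustrative, not the experiment's.
      responses = {  # (calculated, experimental)
          "fast-spectrum activation": (4.2e3, 3.9e3),  # within +-30 per cent
          "thermal activation":       (1.1e2, 2.8e2),  # within a factor of 3.5
          "gamma dose rate":          (0.36, 0.09),    # overpredicted by ~4
      }
      for name, (calc, expt) in responses.items():
          print(f"{name:26s} C/E = {calc / expt:5.2f}")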

  17. Regulatory guide relating to the determination of whole-body doses due to internal radiation exposure (principles and methods)

    International Nuclear Information System (INIS)

    This compilation defines the principles and methods to be applied for determining the doses resulting from internal radiation exposure in persons whose dose levels exceed the critical levels defined in the ''Regulatory guide for health physics controls''. The obligatory procedure is intended to guarantee that measurements and interpretations of personnel doses and intakes are carried out on a standardized basis, so as to obtain comparable results. (orig.)

  18. Benchmarking of hospital information systems – a comparative analysis of German-language benchmarking clusters

    OpenAIRE

    Jahn, Franziska; Baltschukat, Klaus; Buddrus, Uwe; Günther, Uwe; Kutscha, Ansgar; Liebe, Jan-David; Lowitsch, Volker; Schlegel, Helmut; Winter, Alfred

    2015-01-01

    Benchmarking is a method of strategic information management used by many hospitals today. During the last years, several benchmarking clusters have been established within the German-speaking countries. They support hospitals in comparing and positioning their information system’s and information management’s costs, performance and efficiency against other hospitals. In order to differentiate between these benchmarking clusters and to provide decision support in selecting an appropriate benc...

  19. Remarks on a benchmark nonlinear constrained optimization problem

    Institute of Scientific and Technical Information of China (English)

    Luo Yazhong; Lei Yongjun; Tang Guojin

    2006-01-01

    Remarks are made on a benchmark nonlinear constrained optimization problem. Due to a citation error, two entirely different results for the benchmark problem have been obtained by independent researchers. In our study, parallel simulated annealing using the simplex method is employed to solve the benchmark nonlinear constrained problem with the mistaken formula; the best-known solution is obtained, and its optimality is verified against the Kuhn-Tucker conditions.

  20. A method for comparison of animal and human alveolar dose and toxic effect of inhaled ozone

    International Nuclear Information System (INIS)

    Present models for predicting the pulmonary toxicity of O3 in humans from the toxic effects observed in animals rely on dosimetric measurements of O3 mass balance and species comparisons of mechanisms that protect tissue against O3. The goal of the study described here was to identify a method for directly comparing O3 dose and effect in animals and humans using bronchoalveolar lavage fluid markers. The feasibility of estimating the O3 dose to the alveoli of animals and humans was demonstrated through assay of reaction products of 18O-labeled O3 in lung surfactant and macrophage pellets of rabbits. The feasibility of using lung lavage fluid protein measurements to quantify the O3 toxic response in humans was demonstrated by the finding of significantly increased lung lavage protein in 10 subjects exposed to 0.4 ppm O3 for 2 h with intermittent periods of heavy exercise. The validity of using the lavage protein marker to quantify the response in animals has already been established. The positive results obtained in both the 18O3 and the lavage protein studies reported here suggest that it should be possible to obtain a direct comparison of both the alveolar dose and the toxic effect of O3 in the alveoli of animals and humans.