Introduction to benchmark dose methods and U.S. EPA's benchmark dose software (BMDS) version 2.1.1
Traditionally, the No-Observed-Adverse-Effect-Level (NOAEL) approach has been used to determine the point of departure (POD) from animal toxicology data for use in human health risk assessments. However, this approach is subject to substantial limitations that have been well defined, such as strict dependence on the dose selection, dose spacing, and sample size of the study from which the critical effect has been identified. Also, the NOAEL approach fails to take into consideration the shape of the dose-response curve and other related information. The benchmark dose (BMD) method, originally proposed as an alternative to the NOAEL methodology in the 1980s, addresses many of the limitations of the NOAEL method. It is less dependent on dose selection and spacing, and it takes into account the shape of the dose-response curve. In addition, the estimation of a BMD 95% lower bound confidence limit (BMDL) results in a POD that appropriately accounts for study quality (i.e., sample size). With the recent advent of user-friendly BMD software programs, including the U.S. Environmental Protection Agency's (U.S. EPA) Benchmark Dose Software (BMDS), BMD has become the method of choice for many health organizations world-wide. This paper discusses the BMD methods and corresponding software (i.e., BMDS version 2.1.1) that have been developed by the U.S. EPA, and includes a comparison with recently released European Food Safety Authority (EFSA) BMD guidance.
Bayesian Benchmark Dose Analysis
Fang, Qijun; Piegorsch, Walter W.; Barnes, Katherine Y.
2014-01-01
An important objective in environmental risk assessment is estimation of minimum exposure levels, called Benchmark Doses (BMDs) that induce a pre-specified Benchmark Response (BMR) in a target population. Established inferential approaches for BMD analysis typically involve one-sided, frequentist confidence limits, leading in practice to what are called Benchmark Dose Lower Limits (BMDLs). Appeal to Bayesian modeling and credible limits for building BMDLs is far less developed, however. Indee...
Dose Rate Experiment at JET for Benchmarking the Calculation Direct One Step Method
Neutrons produced by D-D and D-T plasmas induce the activation of tokamak materials and of components. The development of reliable methods to assess dose rates is a key issue for maintenance and operating nuclear machines, in normal and off-normal conditions. In the frame of the EFDA Fusion Technology work programme, a computational tool based upon MCNP Monte Carlo code has been developed to predict the dose rate after shutdown: it is called Direct One Step Method (D1S). The D1S is an innovative approach in which the decay gammas are coupled to the neutrons as in the prompt case and they are transported in one single step in the same run. Benchmarking of this new tool with experimental data taken in a complex geometry like that of a tokamak is a fundamental step to test the reliability of the D1S method. A dedicated benchmark experiment was proposed for the 2005-2006 experimental campaign of JET. Two irradiation positions have been selected for the benchmark: one inner position inside the vessel, not far from the plasma, called the 2 upper irradiation end (IE2), where neutron fluence is relatively high. The second position is just outside a vertical port in an external position (EX). Here the neutron flux is lower and the dose rate to be measured is not very far from the residual background. Passive detectors are used for in-vessel measurements: the high sensitivity Thermo Luminescent Dosimeters (TLDs) GR-200A (natural LiF), which ensure measurements down to environmental dose level. An active detector of Geiger-Muller (GM) type is used for out of vessel dose rate measurement. Before their use the detectors were calibrated in a secondary gamma-ray standard (Cs-137 and Co-60) facility in term of air-kerma. The background measurement was carried-out in the period July -September 2005 in the outside position EX using the GM tube and in September 2005 inside the vacuum vessel using TLD detectors located in the 2 Upper irradiation end IE2. In the present work
Improvement and benchmarking of the new shutdown dose estimation method by Monte Carlo code
In the ITER (international thermonuclear experimental reactor,) project calculations of the dose rate after shutdown are very important and their results are critical for the machine design. A new method has been proposed which makes use of MCNP also for decay gamma-ray transport calculations. The objective is to have an easy tool giving results affected by low uncertainty due to the modeling or simplifications in the flux shape assumptions. Further improvements to this method are here presented. This methodology has been developed, in the ITER frame, in a limited case in which the radioactivity comes only from the Vacuum Vessel (made of stainless steel) till a time around few days after ITER shutdown. Further improvement is required to make it applicable to more general cases (at different times and/or with different materials). Some benchmark results are shown. Discrepancies between the different methods are due mainly to the different cross section used. Agreement with available ad hoc experiment is very good. (orig.)
Wu, Xiaosheng; Wei, Shuai; Wei, Yimin; Guo, Boli; Yang, Mingqi; Zhao, Duoyong; Liu, Xiaoling; Cai, Xianfeng
2012-08-01
Pigs were exposed to cadmium (Cd) (in the form of CdCl(2)) concentrations ranging from 0 to 32mg Cd/kg feed for 100 days. Urinary cadmium (U-Cd) and blood cadmium (B-Cd) levels were determined as indicators of Cd exposure. Urinary levels of β(2)-microglobulin (β(2)-MG), α(1)-microglobulin (α(1)-MG), N-acetyl-β-D-glucosaminidase (NAG), cadmium-metallothionein (Cd-MT), and retinol binding protein (RBP) were determined as biomarkers of tubular dysfunction. U-Cd concentrations were increased linearly with time and dose, whereas B-Cd reached two peaks at 40 days and 100 days in the group exposed to 32mg Cd/kg. Hyper-metallothionein-urinary (HyperMTuria) and hyper-N-acetyl-β-D-glucosaminidase-urinary (hyperNAGuria) emerged from 80 days onwards in the group exposed to 32mg Cd/kg feed, followed by hyper-β2-microglobulin-urinary (hyperβ2-MGuria) and hyper-retinol-binding-protein-urinary (hyperRBPuria) from 100 days onwards. The relationships between the Cd exposure dose and biomarkers of exposure (as well as the biomarkers of effect) were examined, and significant correlations were found between them (except for α(1)-MG). Dose-response relationships between Cd exposure dose and biomarkers of tubular dysfunction were studied. The critical concentration of Cd exposure dose was calculated by the benchmark dose (BMD) method. The BMD(10)/BMDL(10) was estimated to be 1.34/0.67, 1.21/0.88, 2.75/1.00, and 3.73/3.08mg Cd/kg feed based on urinary RBP, NAG, Cd-MT, and β(2)-MG, respectively. The calculated tolerable weekly intake of Cd for humans was 1.4 μg/kg body weight based on a safety factor of 100. This value is lower than the currently available values set by several different countries. This indicates a need for further studies on the effects of Cd and a re-evaluation of the human health risk assessment for the metal. PMID:22610606
Entropy-based benchmarking methods
Temurshoev, Umed
2012-01-01
We argue that benchmarking sign-volatile series should be based on the principle of movement and sign preservation, which states that a bench-marked series should reproduce the movement and signs in the original series. We show that the widely used variants of Denton (1971) method and the growth preservation method of Causey and Trager (1981) may violate this principle, while its requirements are explicitly taken into account in the pro-posed entropy-based benchmarking methods. Our illustrati...
Effects of exposure imprecision on estimation of the benchmark dose
Budtz-Jørgensen, Esben; Keiding, Niels; Grandjean, Philippe
2004-01-01
, then the benchmark approach produces results that are biased toward higher and less protective levels. It is therefore important to take exposure measurement error into account when calculating benchmark doses. Methods that allow this adjustment are described and illustrated in data from an epidemiological study......In regression analysis failure to adjust for imprecision in the exposure variable is likely to lead to underestimation of the exposure effect. However, the consequences of exposure error for determination of safe doses of toxic substances have so far not received much attention. The benchmark...... approach is one of the most widely used methods for development of exposure limits. An important advantage of this approach is that it can be applied to observational data. However, in this type of data, exposure markers are seldom measured without error. It is shown that, if the exposure error is ignored...
Method and system for benchmarking computers
Gustafson, John L.
1993-09-14
A testing system and method for benchmarking computer systems. The system includes a store containing a scalable set of tasks to be performed to produce a solution in ever-increasing degrees of resolution as a larger number of the tasks are performed. A timing and control module allots to each computer a fixed benchmarking interval in which to perform the stored tasks. Means are provided for determining, after completion of the benchmarking interval, the degree of progress through the scalable set of tasks and for producing a benchmarking rating relating to the degree of progress for each computer.
Benchmarking Learning and Teaching: Developing a Method
Henderson-Smart, Cheryl; Winning, Tracey; Gerzina, Tania; King, Shalinie; Hyde, Sarah
2006-01-01
Purpose: To develop a method for benchmarking teaching and learning in response to an institutional need to validate a new program in Dentistry at the University of Sydney, Australia. Design/methodology/approach: After a collaborative partner, University of Adelaide, was identified, the areas of teaching and learning to be benchmarked, PBL…
Numerical methods: Analytical benchmarking in transport theory
Numerical methods applied to reactor technology have reached a high degree of maturity. Certainly one- and two-dimensional neutron transport calculations have become routine, with several programs available on personal computer and the most widely used programs adapted to workstation and minicomputer computational environments. With the introduction of massive parallelism and as experience with multitasking increases, even more improvement in the development of transport algorithms can be expected. Benchmarking an algorithm is usually not a very pleasant experience for the code developer. Proper algorithmic verification by benchmarking involves the following considerations: (1) conservation of particles, (2) confirmation of intuitive physical behavior, and (3) reproduction of analytical benchmark results. By using today's computational advantages, new basic numerical methods have been developed that allow a wider class of benchmark problems to be considered
Purpose: The Linear Boltzmann Transport Equation (LBTE) solved through statistical Monte Carlo (MC) method provides the accurate dose calculation in radiotherapy. This work is to investigate the alternative way for accurately solving LBTE using deterministic numerical method due to its possible advantage in computational speed from MC. Methods: Instead of using traditional spherical harmonics to approximate angular scattering kernel, our deterministic numerical method directly computes angular scattering weights, based on a new angular discretization method that utilizes linear finite element method on the local triangulation of unit angular sphere. As a Result, our angular discretization method has the unique advantage in positivity, i.e., to maintain all scattering weights nonnegative all the time, which is physically correct. Moreover, our method is local in angular space, and therefore handles the anisotropic scattering well, such as the forward-peaking scattering. To be compatible with image-guided radiotherapy, the spatial variables are discretized on the structured grid with the standard diamond scheme. After discretization, the improved sourceiteration method is utilized for solving the linear system without saving the linear system to memory. The accuracy of our 3D solver is validated using analytic solutions and benchmarked with Geant4, a popular MC solver. Results: The differences between Geant4 solutions and our solutions were less than 1.5% for various testing cases that mimic the practical cases. More details are available in the supporting document. Conclusion: We have developed a 3D LBTE solver based on a new angular discretization method that guarantees the positivity of scattering weights for physical correctness, and it has been benchmarked with Geant4 for photon dose calculation
On the Extrapolation with the Denton Proportional Benchmarking Method
Marco Marini; Tommaso Di Fonzo
2012-01-01
Statistical offices have often recourse to benchmarking methods for compiling quarterly national accounts (QNA). Benchmarking methods employ quarterly indicator series (i) to distribute annual, more reliable series of national accounts and (ii) to extrapolate the most recent quarters not yet covered by annual benchmarks. The Proportional First Differences (PFD) benchmarking method proposed by Denton (1971) is a widely used solution for distribution, but in extrapolation it may suffer when the...
Quality Assurance Testing of Version 1.3 of U.S. EPA Benchmark Dose Software (Presentation)
EPA benchmark dose software (BMDS) issued to evaluate chemical dose-response data in support of Agency risk assessments, and must therefore be dependable. Quality assurance testing methods developed for BMDS were designed to assess model dependability with respect to curve-fitt...
Performance Benchmarking of Fast Multipole Methods
Al-Harthi, Noha A.
2013-06-01
The current trends in computer architecture are shifting towards smaller byte/flop ratios, while available parallelism is increasing at all levels of granularity – vector length, core count, and MPI process. Intel’s Xeon Phi coprocessor, NVIDIA’s Kepler GPU, and IBM’s BlueGene/Q all have a Byte/flop ratio close to 0.2, which makes it very difficult for most algorithms to extract a high percentage of the theoretical peak flop/s from these architectures. Popular algorithms in scientific computing such as FFT are continuously evolving to keep up with this trend in hardware. In the meantime it is also necessary to invest in novel algorithms that are more suitable for computer architectures of the future. The fast multipole method (FMM) was originally developed as a fast algorithm for ap- proximating the N-body interactions that appear in astrophysics, molecular dynamics, and vortex based fluid dynamics simulations. The FMM possesses have a unique combination of being an efficient O(N) algorithm, while having an operational intensity that is higher than a matrix-matrix multiplication. In fact, the FMM can reduce the requirement of Byte/flop to around 0.01, which means that it will remain compute bound until 2020 even if the cur- rent trend in microprocessors continues. Despite these advantages, there have not been any benchmarks of FMM codes on modern architectures such as Xeon Phi, Kepler, and Blue- Gene/Q. This study aims to provide a comprehensive benchmark of a state of the art FMM code “exaFMM” on the latest architectures, in hopes of providing a useful reference for deciding when the FMM will become useful as the computational engine in a given application code. It may also serve as a warning to certain problem size domains areas where the FMM will exhibit insignificant performance improvements. Such issues depend strongly on the asymptotic constants rather than the asymptotics themselves, and therefore are strongly implementation and hardware
A unified framework for benchmark dose estimation applied to mixed models and model averaging
Ritz, Christian; Gerhard, Daniel; Hothorn, Ludwig A.
2013-01-01
This article develops a framework for benchmark dose estimation that allows intrinsically nonlinear dose-response models to be used for continuous data in much the same way as is already possible for quantal data. This means that the same dose-response model equations may be applied to both...
Benchmarking of Remote Sensing Segmentation Methods
Mikeš, Stanislav; Haindl, Michal; Scarpa, G.; Gaetano, R.
2015-01-01
Roč. 8, č. 5 (2015), s. 2240-2248. ISSN 1939-1404 R&D Projects: GA ČR(CZ) GA14-10911S Institutional support: RVO:67985556 Keywords : benchmark * remote sensing segmentation * unsupervised segmentation * supervised segmentation Subject RIV: BD - Theory of Information Impact factor: 3.026, year: 2014 http://library.utia.cas.cz/separaty/2015/RO/haindl-0445995.pdf
Issues in benchmarking human reliability analysis methods : a literature review.
Lois, Erasmia (US Nuclear Regulatory Commission); Forester, John Alan; Tran, Tuan Q. (Idaho National Laboratory, Idaho Falls, ID); Hendrickson, Stacey M. Langfitt; Boring, Ronald L. (Idaho National Laboratory, Idaho Falls, ID)
2008-04-01
There is a diversity of human reliability analysis (HRA) methods available for use in assessing human performance within probabilistic risk assessment (PRA). Due to the significant differences in the methods, including the scope, approach, and underlying models, there is a need for an empirical comparison investigating the validity and reliability of the methods. To accomplish this empirical comparison, a benchmarking study is currently underway that compares HRA methods with each other and against operator performance in simulator studies. In order to account for as many effects as possible in the construction of this benchmarking study, a literature review was conducted, reviewing past benchmarking studies in the areas of psychology and risk assessment. A number of lessons learned through these studies are presented in order to aid in the design of future HRA benchmarking endeavors.
Issues in benchmarking human reliability analysis methods: A literature review
There is a diversity of human reliability analysis (HRA) methods available for use in assessing human performance within probabilistic risk assessments (PRA). Due to the significant differences in the methods, including the scope, approach, and underlying models, there is a need for an empirical comparison investigating the validity and reliability of the methods. To accomplish this empirical comparison, a benchmarking study comparing and evaluating HRA methods in assessing operator performance in simulator experiments is currently underway. In order to account for as many effects as possible in the construction of this benchmarking study, a literature review was conducted, reviewing past benchmarking studies in the areas of psychology and risk assessment. A number of lessons learned through these studies is presented in order to aid in the design of future HRA benchmarking endeavors.
Measurement Methods in the field of benchmarking
István Szűts
2004-05-01
Full Text Available In benchmarking we often come across with parameters being difficultto measure while executing comparisons or analyzing performance, yet they haveto be compared and measured so as to be able to choose the best practices. Thesituation is similar in the case of complex, multidimensional evaluation as well,when the relative importance and order of different dimensions, parameters to beevaluated have to be determined or when the range of similar performanceindicators have to be decreased with regard to simpler comparisons. In suchcases we can use the ordinal or interval scales of measurement elaborated by S.S.Stevens.
Benchmark calculations of neutron dose rates at transport and storage casks
The application of numerical calculations methods for demonstration of sufficient radiation shielding of radioactive waste transport and storage casks requires a validation based on appropriate measurements of gamma and neutron sources. The results of the comparison of measured data and calculations using the Monte Carlo program MCNP show deviations dependent on the loading of the cask within the standard deviation which is dominated by the measuring method. Considering the neutrons scattered at the salt MCNP (in case of disposal in the salt) tends to underestimate the nominal values, but still within the double standard deviation. This accuracy is not reached with MAVRIC. Based on AHE (active handling experiments) data benchmark calculations were performed that can be used as reference value. The total accuracy results from the accuracy of the source term and the measurement of the neutron dose rate with a deviation of 15%.
A heterogeneous analytical benchmark for particle transport methods development
A heterogeneous analytical benchmark has been designed to provide a quality control measure for large-scale neutral particle computational software. Assurance that particle transport methods are efficiently implemented and that current codes are adequately maintained for reactor and weapons applications is a major task facing today's transport code developers. An analytical benchmark, as used here, refers to a highly accurate evaluation of an analytical solution to the neutral particle transport equation. Because of the requirement of an analytical solution, however, only relatively limited transport scenarios can be treated. To some this may seem to be a major disadvantage of analytical benchmarks. However, to the code developer, simplicity by no means diminishes the usefulness of these benchmarks since comprehensive transport codes must perform adequately for simple as well as comprehensive transport scenarios
Aircraft Engine Gas Path Diagnostic Methods: Public Benchmarking Results
Simon, Donald L.; Borguet, Sebastien; Leonard, Olivier; Zhang, Xiaodong (Frank)
2013-01-01
Recent technology reviews have identified the need for objective assessments of aircraft engine health management (EHM) technologies. To help address this issue, a gas path diagnostic benchmark problem has been created and made publicly available. This software tool, referred to as the Propulsion Diagnostic Method Evaluation Strategy (ProDiMES), has been constructed based on feedback provided by the aircraft EHM community. It provides a standard benchmark problem enabling users to develop, evaluate and compare diagnostic methods. This paper will present an overview of ProDiMES along with a description of four gas path diagnostic methods developed and applied to the problem. These methods, which include analytical and empirical diagnostic techniques, will be described and associated blind-test-case metric results will be presented and compared. Lessons learned along with recommendations for improving the public benchmarking processes will also be presented and discussed.
Grandjean, Philippe; Budtz-Joergensen, Esben
2013-01-01
follow-up of a Faroese birth cohort were used. Serum-PFC concentrations were measured at age 5 years, and serum antibody concentrations against tetanus and diphtheria toxoids were obtained at ages 7 years. Benchmark dose results were calculated in terms of serum concentrations for 431 children with...
A biosegmentation benchmark for evaluation of bioimage analysis methods
Kvilekval Kristian
2009-11-01
Full Text Available Abstract Background We present a biosegmentation benchmark that includes infrastructure, datasets with associated ground truth, and validation methods for biological image analysis. The primary motivation for creating this resource comes from the fact that it is very difficult, if not impossible, for an end-user to choose from a wide range of segmentation methods available in the literature for a particular bioimaging problem. No single algorithm is likely to be equally effective on diverse set of images and each method has its own strengths and limitations. We hope that our benchmark resource would be of considerable help to both the bioimaging researchers looking for novel image processing methods and image processing researchers exploring application of their methods to biology. Results Our benchmark consists of different classes of images and ground truth data, ranging in scale from subcellular, cellular to tissue level, each of which pose their own set of challenges to image analysis. The associated ground truth data can be used to evaluate the effectiveness of different methods, to improve methods and to compare results. Standard evaluation methods and some analysis tools are integrated into a database framework that is available online at http://bioimage.ucsb.edu/biosegmentation/. Conclusion This online benchmark will facilitate integration and comparison of image analysis methods for bioimages. While the primary focus is on biological images, we believe that the dataset and infrastructure will be of interest to researchers and developers working with biological image analysis, image segmentation and object tracking in general.
Application of Benchmark Dose (BMD) in Estimating Biological Exposure Limit (BEL) to Cadmium
无
2007-01-01
Objective To estimate the biological exposure limit (BEL) using benchmark dose (BMD) based on two sets of data from occupational epidemiology. Methods Cadmium-exposed workers were selected from a cadmium smelting factory and a zinc product factory. Doctors, nurses or shop assistants living in the same area served as a control group. Urinary cadmium (UCd) was used as an exposure biomarker and urinary β2-microgloburin (B2M), N-acetyl-β-D-glucosaminidase (NAG) and albumin (ALB) as effect biomarkers. All urine parameters were adjusted by urinary creatinine. Software of BMDS (Version 1.3.2, EPA.U.S.A) was used to calculate BMD. Results The cut-off point (abnormal values) was determined based on the upper limit of 95% of effect biomarkers in control group. There was a significant dose response relationship between the effect biomarkers (urinary B2M, NAG, and ALB) and exposure biomarker (UCd). BEL value was 5 μg/g creatinine for UB2M as an effect biomarker, consistent with the recommendation of WHO. BEL could be estimated by using the method of BMD. BEL value was 3 μg/g creatinine for UNAG as an effect biomarker. The more sensitive the used biomarker is, the more occupational population will be protected. Conclusion BMD can be used in estimating the biological exposure limit (BEL). UNAG is a sensitive biomarker for estimating BEL after cadmium exposure.
D.C. Blitz (David)
2011-01-01
textabstractBenchmarking benchmarks is a bundle of six studies that are inspired by the prevalence of benchmarking in academic finance research as well as in investment practice. Three studies examine if current benchmark asset pricing models adequately describe the cross-section of stock returns. W
Benchmark of GW methods for azabenzenes
Marom, Noa; Caruso, Fabio; Ren, Xinguo; Rubio Secades, Ángel; Scheffler, Matthias; Rinke, Patrick
2012-01-01
Many-body perturbation theory in the GW approximation is a useful method for describing electronic properties associated with charged excitations. A hierarchy of GW methods exists, starting from non-self-consistent G0W0, through partial self-consistency in the eigenvalues (ev-scGW) and in the Green function (scGW0), to fully self-consistent GW (scGW). Here, we assess the performance of these methods for benzene, pyridine, and the diazines. The quasiparticle spectra are compared to photoemissi...
BENCHMARKING UPGRADED HOTSPOT DOSE CALCULATIONS AGAINST MACCS2 RESULTS
Brotherton, Kevin
2009-04-30
The radiological consequence of interest for a documented safety analysis (DSA) is the centerline Total Effective Dose Equivalent (TEDE) incurred by the Maximally Exposed Offsite Individual (MOI) evaluated at the 95th percentile consequence level. An upgraded version of HotSpot (Version 2.07) has been developed with the capabilities to read site meteorological data and perform the necessary statistical calculations to determine the 95th percentile consequence result. These capabilities should allow HotSpot to join MACCS2 (Version 1.13.1) and GENII (Version 1.485) as radiological consequence toolbox codes in the Department of Energy (DOE) Safety Software Central Registry. Using the same meteorological data file, scenarios involving a one curie release of {sup 239}Pu were modeled in both HotSpot and MACCS2. Several sets of release conditions were modeled, and the results compared. In each case, input parameter specifications for each code were chosen to match one another as much as the codes would allow. The results from the two codes are in excellent agreement. Slight differences observed in results are explained by algorithm differences.
Benchmarking of methods for genomic taxonomy
Larsen, Mette Voldby; Cosentino, Salvatore; Lukjancenko, Oksana;
2014-01-01
One of the first issues that emerges when a prokaryotic organism of interest is encountered is the question of what it is--that is, which species it is. The 16S rRNA gene formed the basis of the first method for sequence-based taxonomy and has had a tremendous impact on the field of microbiology......; (ii) Reads2Type that searches for species-specific 50-mers in either the 16S rRNA gene or the gyrB gene (for the Enterobacteraceae family); (iii) the ribosomal multilocus sequence typing (rMLST) method that samples up to 53 ribosomal genes; (iv) TaxonomyFinder, which is based on species...
Kenneth T. Bogen
2010-01-01
Benchmark Dose Model software (BMDS), developed by the U.S. Environmental Protection Agency, involves a growing suite of models and decision rules now widely applied to assess noncancer and cancer risk, yet its statistical performance has never been examined systematically. As typically applied, BMDS also ignores the possibility of reduced risk at low doses (“hormesis”). A simpler, proposed Generic Hockey-Stick (GHS) model also estimates benchmark dose and potency, and additionally characteri...
Modeling the emetic potencies of food-borne trichothecenes by benchmark dose methodology.
Male, Denis; Wu, Wenda; Mitchell, Nicole J; Bursian, Steven; Pestka, James J; Wu, Felicia
2016-08-01
Trichothecene mycotoxins commonly co-contaminate cereal products. They cause immunosuppression, anorexia, and emesis in multiple species. Dietary exposure to such toxins often occurs in mixtures. Hence, if it were possible to determine their relative toxicities and assign toxic equivalency factors (TEFs) to each trichothecene, risk management and regulation of these mycotoxins could become more comprehensive and simple. We used a mink emesis model to compare the toxicities of deoxynivalenol, 3-acetyldeoxynivalenol, 15-acetyldeoxynivalenol, nivalenol, fusarenon-X, HT-2 toxin, and T-2 toxin. These toxins were administered to mink via gavage and intraperitoneal injection. The United States Environmental Protection Agency (EPA) benchmark dose software was used to determine benchmark doses for each trichothecene. The relative potencies of each of these toxins were calculated as the ratios of their benchmark doses to that of DON. Our results showed that mink were more sensitive to orally administered toxins than to toxins administered by IP. T-2 and HT-2 toxins caused the greatest emetic responses, followed by FX, and then by DON, its acetylated derivatives, and NIV. Although these results provide key information on comparative toxicities, there is still a need for more animal based studies focusing on various endpoints and combined effects of trichothecenes before TEFs can be established. PMID:27292944
Benchmarking Methods in the Regulation of Electricity Distribution System Operators
Janda, Karel; Krska, Stepan
2014-01-01
This paper examines the regulation of distribution system operators (DSOs) focused on the Czech electricity market. It presents an international benchmarking study based on data of 15 regional DSOs including two Czech operators. The study examines the application of yardstick methods using data envelopment analysis (DEA) and stochastic frontier analysis (SFA). We find that the cost efficiency of each of the Czech DSOs is different, which indicates a suitability of introduction of individual e...
Budtz-Jørgensen, Esben; Bellinger, David; Lanphear, Bruce; Grandjean, Philippe
2013-01-01
Lead is a recognized neurotoxicant, but estimating effects at the lowest measurable levels is difficult. An international pooled analysis of data from seven cohort studies reported an inverse and supra-linear relationship between blood lead concentrations and IQ scores in children. The lack of a...... clear threshold presents a challenge to the identification of an acceptable level of exposure. The benchmark dose (BMD) is defined as the dose that leads to a specific known loss. As an alternative to elusive thresholds, the BMD is being used increasingly by regulatory authorities. Using the pooled data...... fitting models yielding lower confidence limits (BMDLs) of about 0.1-1.0 μ g/dL for the dose leading to a loss of one IQ point. We conclude that current allowable blood lead concentrations need to be lowered and further prevention efforts are needed to protect children from lead toxicity....
Uncertainties in Monte Carlo-based absorbed dose calculations for an experimental benchmark
There is a need to verify the accuracy of general purpose Monte Carlo codes like EGSnrc, which are commonly employed for investigations of dosimetric problems in radiation therapy. A number of experimental benchmarks have been published to compare calculated values of absorbed dose to experimentally determined values. However, there is a lack of absolute benchmarks, i.e. benchmarks without involved normalization which may cause some quantities to be cancelled. Therefore, at the Physikalisch-Technische Bundesanstalt a benchmark experiment was performed, which aimed at the absolute verification of radiation transport calculations for dosimetry in radiation therapy. A thimble-type ionization chamber in a solid phantom was irradiated by high-energy bremsstrahlung and the mean absorbed dose in the sensitive volume was measured per incident electron of the target. The characteristics of the accelerator and experimental setup were precisely determined and the results of a corresponding Monte Carlo simulation with EGSnrc are presented within this study. For a meaningful comparison, an analysis of the uncertainty of the Monte Carlo simulation is necessary. In this study uncertainties with regard to the simulation geometry, the radiation source, transport options of the Monte Carlo code and specific interaction cross sections are investigated, applying the general methodology of the Guide to the expression of uncertainty in measurement. Besides studying the general influence of changes in transport options of the EGSnrc code, uncertainties are analyzed by estimating the sensitivity coefficients of various input quantities in a first step. Secondly, standard uncertainties are assigned to each quantity which are known from the experiment, e.g. uncertainties for geometric dimensions. Data for more fundamental quantities such as photon cross sections and the I-value of electron stopping powers are taken from literature. The significant uncertainty contributions are identified as
Uncertainties in Monte Carlo-based absorbed dose calculations for an experimental benchmark
Renner, F.; Wulff, J.; Kapsch, R.-P.; Zink, K.
2015-10-01
There is a need to verify the accuracy of general purpose Monte Carlo codes like EGSnrc, which are commonly employed for investigations of dosimetric problems in radiation therapy. A number of experimental benchmarks have been published to compare calculated values of absorbed dose to experimentally determined values. However, there is a lack of absolute benchmarks, i.e. benchmarks without involved normalization which may cause some quantities to be cancelled. Therefore, at the Physikalisch-Technische Bundesanstalt a benchmark experiment was performed, which aimed at the absolute verification of radiation transport calculations for dosimetry in radiation therapy. A thimble-type ionization chamber in a solid phantom was irradiated by high-energy bremsstrahlung and the mean absorbed dose in the sensitive volume was measured per incident electron of the target. The characteristics of the accelerator and experimental setup were precisely determined and the results of a corresponding Monte Carlo simulation with EGSnrc are presented within this study. For a meaningful comparison, an analysis of the uncertainty of the Monte Carlo simulation is necessary. In this study uncertainties with regard to the simulation geometry, the radiation source, transport options of the Monte Carlo code and specific interaction cross sections are investigated, applying the general methodology of the Guide to the expression of uncertainty in measurement. Besides studying the general influence of changes in transport options of the EGSnrc code, uncertainties are analyzed by estimating the sensitivity coefficients of various input quantities in a first step. Secondly, standard uncertainties are assigned to each quantity which are known from the experiment, e.g. uncertainties for geometric dimensions. Data for more fundamental quantities such as photon cross sections and the I-value of electron stopping powers are taken from literature. The significant uncertainty contributions are identified as
An adaptive nonparametric method in benchmark analysis for bioassay and environmental studies
Bhattacharya, Rabi; Lin, Lizhen
2010-01-01
We present a novel nonparametric method for bioassay and benchmark analysis in risk assessment, which averages isotonic MLEs based on disjoint subgroups of dosages. The asymptotic theory for the methodology is derived, showing that the MISEs (mean integrated squared error) of the estimates of both the dose-response curve F and its inverse F−1 achieve the optimal rate O(N−4/5). Also, we compute the asymptotic distribution of the estimate ζ~p of the effective dosage ζp = F−1 (p) which is shown ...
Methodical Fundamentals of Consumer Cooperatives Trade Enterprises Benchmarking Management
Dvirko Yuriy V.
2012-01-01
The article deals with organizational and methodical fundamentals of Ukrainian consumer cooperatives trade enterprises benchmarking management. There was offered the author’s approach upon essence, objects, aims, principals, tasks and benchmarking management models under conditions of modern business.В статье исследованы организационно-методические основы управления бенчмаркингом торговых предприятий потребительской кооперации Украины. Предложено авторское видение сущности, объектов, целей, п...
On request of the Netherlands government FEL-TNO is developing a decision support system with the acronym RAMBOS for the assessment of the off-site consequences of an accident with hazardous materials. This is a user friendly interactive computer program, which uses very sophisticated graphical means. RAMBOS supports the emergency planning organization in two ways. Firstly, the risk to the residents in the surroundings of the accident is quantified in terms of severity and magnitude (number of casualties, etc.). Secondly, the consequences of countermeasures, such as sheltering and evacuation, are predicted. By evaluating several countermeasures the user can determine an optimum policy to reduce the impact of the accident. Within the framework of the EC project 'Benchmark exercise on dose estimation in a regulatory context' on request of the Ministry of Housing, Physical Planning and Environment calculations were carried out with the RAMBOS system. This report contains the results of these calculations. 3 refs.; 2 figs.; 10 tabs
Purpose: The aim of this study was to collect CT dose index data from adult head exams to establish benchmarks based on either: (a) values pooled from all head exams or (b) values for specific protocols. One part of this was to investigate differences in scan frequency and CT dose index data for inpatients versus outpatients. Methods: We collected CT dose index data (CTDIvol) from adult head CT examinations performed at our medical facilities from Jan 1st to Dec 31th, 2014. Four of these scanners were used for inpatients, the other five were used for outpatients. All scanners used Tube Current Modulation. We used X-ray dose management software to mine dose index data and evaluate CTDIvol for 15807 inpatients and 4263 outpatients undergoing Routine Brain, Sinus, Facial/Mandible, Temporal Bone, CTA Brain and CTA Brain-Neck protocols, and combined across all protocols. Results: For inpatients, Routine Brain series represented 84% of total scans performed. For outpatients, Sinus scans represented the largest fraction (36%). The CTDIvol (mean ± SD) across all head protocols was 39 ± 30 mGy (min-max: 3.3–540 mGy). The CTDIvol for Routine Brain was 51 ± 6.2 mGy (min-max: 36–84 mGy). The values for Sinus were 24 ± 3.2 mGy (min-max: 13–44 mGy) and for Facial/Mandible were 22 ± 4.3 mGy (min-max: 14–46 mGy). The mean CTDIvol for inpatients and outpatients was similar across protocols with one exception (CTA Brain-Neck). Conclusion: There is substantial dose variation when results from all protocols are pooled together; this is primarily a function of the differences in technical factors of the protocols themselves. When protocols are analyzed separately, there is much less variability. While analyzing pooled data affords some utility, reviewing protocols segregated by clinical indication provides greater opportunity for optimization and establishing useful benchmarks
Fujii, K [Graduate School of Medicine, Nagoya University, Nagoya, JP (Japan); UCLA School of Medicine, Los Angeles, CA (United States); Bostani, M; Cagnon, C; McNitt-Gray, M [UCLA School of Medicine, Los Angeles, CA (United States)
2015-06-15
Purpose: The aim of this study was to collect CT dose index data from adult head exams to establish benchmarks based on either: (a) values pooled from all head exams or (b) values for specific protocols. One part of this was to investigate differences in scan frequency and CT dose index data for inpatients versus outpatients. Methods: We collected CT dose index data (CTDIvol) from adult head CT examinations performed at our medical facilities from Jan 1st to Dec 31th, 2014. Four of these scanners were used for inpatients, the other five were used for outpatients. All scanners used Tube Current Modulation. We used X-ray dose management software to mine dose index data and evaluate CTDIvol for 15807 inpatients and 4263 outpatients undergoing Routine Brain, Sinus, Facial/Mandible, Temporal Bone, CTA Brain and CTA Brain-Neck protocols, and combined across all protocols. Results: For inpatients, Routine Brain series represented 84% of total scans performed. For outpatients, Sinus scans represented the largest fraction (36%). The CTDIvol (mean ± SD) across all head protocols was 39 ± 30 mGy (min-max: 3.3–540 mGy). The CTDIvol for Routine Brain was 51 ± 6.2 mGy (min-max: 36–84 mGy). The values for Sinus were 24 ± 3.2 mGy (min-max: 13–44 mGy) and for Facial/Mandible were 22 ± 4.3 mGy (min-max: 14–46 mGy). The mean CTDIvol for inpatients and outpatients was similar across protocols with one exception (CTA Brain-Neck). Conclusion: There is substantial dose variation when results from all protocols are pooled together; this is primarily a function of the differences in technical factors of the protocols themselves. When protocols are analyzed separately, there is much less variability. While analyzing pooled data affords some utility, reviewing protocols segregated by clinical indication provides greater opportunity for optimization and establishing useful benchmarks.
Calculations of EURACOS iron benchmark experiment using the HYBRID method
In this paper, the HYBRID method is used in the calculations of the iron benchmark experiment at the EURACOS-II device. The saturation activities of the 32S(n,p)32P reaction at different depths in an iron block are computed with ENDF/B-IV data to compare with the measurements. At the outer layers of the iron block, the HYBRID calculation gives increasingly higher results than the VITAMIN-C multigroup calculation. With the adjustment of the two- to one-dimensional ratios, the HYBRID results agree with the measurements to within 10% at most penetration depths, a considerable improvement over the VITAMIN-C multigroup results. The development of a collapsing method for the HYBRID cross sections provides a more direct and practical way of using the HYBRID method in the two-dimensional calculations. It is observed that half of the window effect is smeared in the collapsing treatment, but it still provides a better cross-section set than the VITAMIN-C cross sections for the deep-penetration calculations
Puton, T.; Kozlowski, L. P.; Rother, K. M.; Bujnicki, J. M.
2013-01-01
We present a continuous benchmarking approach for the assessment of RNA secondary structure prediction methods implemented in the CompaRNA web server. As of 3 October 2012, the performance of 28 single-sequence and 13 comparative methods has been evaluated on RNA sequences/structures released weekly by the Protein Data Bank. We also provide a static benchmark generated on RNA 2D structures derived from the RNAstrand database. Benchmarks on both data sets offer insight into the relative perfor...
Continuous responses (e.g. body weight) are widely used in risk assessment for determining the benchmark dose (BMD) which is used to derive a U.S. EPA reference dose. One critical question that is not often addressed in dose–response assessments is whether to model the continuous data as normally or log-normally distributed. Additionally, if lognormality is assumed, and only summarized response data (i.e., mean ± standard deviation) are available as is usual in the peer-reviewed literature, the BMD can only be approximated. In this study, using the “hybrid” method and relative deviation approach, we first evaluate six representative continuous dose–response datasets reporting individual animal responses to investigate the impact on BMD/BMDL estimates of (1) the distribution assumption and (2) the use of summarized versus individual animal data when a log-normal distribution is assumed. We also conduct simulation studies evaluating model fits to various known distributions to investigate whether the distribution assumption has influence on BMD/BMDL estimates. Our results indicate that BMDs estimated using the hybrid method are more sensitive to the distribution assumption than counterpart BMDs estimated using the relative deviation approach. The choice of distribution assumption has limited impact on the BMD/BMDL estimates when the within dose-group variance is small, while the lognormality assumption is a better choice for relative deviation method when data are more skewed because of its appropriateness in describing the relationship between mean and standard deviation. Additionally, the results suggest that the use of summarized data versus individual response data to characterize log-normal distributions has minimal impact on BMD estimates. - Highlights: • We investigate to what extent the distribution assumption can affect BMD estimates. • Both real data analysis and simulation study are conducted. • BMDs estimated using hybrid method are more
Comparison of Benchmarking Methods with and without a Survey Error Model
Chen, Zhao-Guo; Ho Wu, Ka
2006-01-01
For a target socio-economic variable, two sources of data with different precisions and collecting frequencies may be available. Typically, the less frequent data (e.g., annual report or census) are more reliable and are considered as benchmarks. The process of using them to adjust the more frequent and less reliable data (e.g., repeated monthly surveys) is called benchmarking. ¶ In this paper, we show the relationship among three types of benchmarking methods in the literature, namely the De...
A design of benchmarking method for assessing performance of e-Government systems
Mushi, Cleopa John
2008-01-01
This paper is an initial work towards developing an e-Government benchmarking model that is user-centric. To achieve the goal then, public service delivery is discussed first including the transition to online public service delivery and the need for providing public services using electronic media. Two major e-Government benchmarking methods are critically discussed and the need to develop a standardized benchmarking model that is user-centric is presented. To properly articulate user requir...
Piloting a Process Maturity Model as an e-Learning Benchmarking Method
Petch, Jim; Calverley, Gayle; Dexter, Hilary; Cappelli, Tim
2007-01-01
As part of a national e-learning benchmarking initiative of the UK Higher Education Academy, the University of Manchester is carrying out a pilot study of a method to benchmark e-learning in an institution. The pilot was designed to evaluate the operational viability of a method based on the e-Learning Maturity Model developed at the University of…
Benchmarking Methods and Data Sets for Ligand Enrichment Assessment in Virtual Screening
Xia, Jie; Tilahun, Ermias Lemma; Reid, Terry-Elinor; Zhang, Liangren; Wang, Xiang Simon
2014-01-01
Retrospective small-scale virtual screening (VS) based on benchmarking data sets has been widely used to estimate ligand enrichments of VS approaches in the prospective (i.e. real-world) efforts. However, the intrinsic differences of benchmarking sets to the real screening chemical libraries can cause biased assessment. Herein, we summarize the history of benchmarking methods as well as data sets and highlight three main types of biases found in benchmarking sets, i.e. “analogue bias”, “artif...
Using the fuzzy linear regression method to benchmark the energy efficiency of commercial buildings
Highlights: ► Fuzzy linear regression method is used for developing benchmarking systems. ► The systems can be used to benchmark energy efficiency of commercial buildings. ► The resulting benchmarking model can be used by public users. ► The resulting benchmarking model can capture the fuzzy nature of input–output data. -- Abstract: Benchmarking systems from a sample of reference buildings need to be developed to conduct benchmarking processes for the energy efficiency of commercial buildings. However, not all benchmarking systems can be adopted by public users (i.e., other non-reference building owners) because of the different methods in developing such systems. An approach for benchmarking the energy efficiency of commercial buildings using statistical regression analysis to normalize other factors, such as management performance, was developed in a previous work. However, the field data given by experts can be regarded as a distribution of possibility. Thus, the previous work may not be adequate to handle such fuzzy input–output data. Consequently, a number of fuzzy structures cannot be fully captured by statistical regression analysis. This present paper proposes the use of fuzzy linear regression analysis to develop a benchmarking process, the resulting model of which can be used by public users. An illustrative example is given as well.
Three anisotropic benchmark problems for adaptive finite element methods
Šolín, Pavel; Čertík, O.; Korous, L.
2013-01-01
Roč. 219, č. 13 (2013), s. 7286-7295. ISSN 0096-3003 R&D Projects: GA AV ČR IAA100760702 Institutional support: RVO:61388998 Keywords : benchmark problem * anisotropic solution * boundary layer Subject RIV: BA - General Mathematics Impact factor: 1.600, year: 2013
SMORN-III benchmark test on reactor noise analysis methods
A computational benchmark test was performed in conjunction with the Third Specialists Meeting on Reactor Noise (SMORN-III) which was held in Tokyo, Japan in October 1981. This report summarizes the results of the test as well as the works made for preparation of the test. (author)
Methodical aspects of benchmarking using in Consumer Cooperatives trade enterprises activity
Yu.V. Dvirko
2013-03-01
Full Text Available The aim of the article. The aim of this article is substantiation of benchmarking main types in Consumer Cooperatives trade enterprises activity; flashlighting of main advantages and drawbacks of benchmarking using; presentation of the authors view upon expediency of flashlighted forms of benchmarking organization using in Consumer Cooperatives in Ukraine trade enterprises activity.The results of the analysis. Under modern conditions of economic relations development and business globalization big companies, enterprises, organizations realize the necessity of the thorough and profound research of the best achievements of market subjects relations with their further using in their own activity. Benchmarking is the process of competitive advantages borrowing and competitiveness increasing of Consumer Cooperatives trade enterprises at the expense of research leaning and adapting the best methods of business processes realization with the purpose to increase their functioning affectivity and best satisfaction of societal needs.The main goals of benchmarking using in Consumer Cooperatives are the following: increasing of needs satisfaction level at the expense of products quality increasing, transportation goods term shortening, service quality increasing; enterprise potential strengthening, competitiveness strengthening, image improvement; generation and new ideas and innovative decisions implementation in trade enterprise activity. The advantages of benchmarking using in Consumer Cooperatives trade enterprises activity are the following: adapting the parameters of enterprise functioning to market demands; gradual defining and removing inadequacies which obstacle enterprise development; borrowing the best methods of further enterprise development; competitive advantages gaining; technological innovations; employees motivation. Authors classification of benchmarking is represented by the following components: by cycle durability strategic, operative
Benchmark Experiment of Dose Rate Distributions Around the Gamma Knife Medical Apparatus
Dose rate measurements around a gamma knife apparatus were performed by using an ionization chamber. Analyses have been performed by using the Monte Carlo code MCNP-5. The nuclear library used for the dose rate distribution of 60Co was MCPLIB04. The calculation model was prepared with a high degree of fidelity, such as the position of each Cobalt source and shielding materials. Comparisons between measured results and calculated ones were performed, and a very good agreement was observed. It is concluded that the Monte Carlo calculation method with its related nuclear data library is very effective for such a complicated radiation oncology apparatus
Benchmark Experiment of Dose Rate Distributions Around the Gamma Knife Medical Apparatus
Oishi, K.; Kosako, K.; Kobayashi, Y.; Sonoki, I.
2014-06-01
Dose rate measurements around a gamma knife apparatus were performed by using an ionization chamber. Analyses have been performed by using the Monte Carlo code MCNP-5. The nuclear library used for the dose rate distribution of 60Co was MCPLIB04. The calculation model was prepared with a high degree of fidelity, such as the position of each Cobalt source and shielding materials. Comparisons between measured results and calculated ones were performed, and a very good agreement was observed. It is concluded that the Monte Carlo calculation method with its related nuclear data library is very effective for such a complicated radiation oncology apparatus.
Benchmark Experiment of Dose Rate Distributions Around the Gamma Knife Medical Apparatus
Oishi, K., E-mail: koji_oishi@shimz.co.jp [Institute of Technology, Shimizu Corporation, Tokyo (Japan); Kosako, K. [Institute of Technology, Shimizu Corporation, Tokyo (Japan); Kobayashi, Y.; Sonoki, I. [Giken Kogyo Co., Ltd., Tokyo (Japan)
2014-06-15
Dose rate measurements around a gamma knife apparatus were performed by using an ionization chamber. Analyses have been performed by using the Monte Carlo code MCNP-5. The nuclear library used for the dose rate distribution of {sup 60}Co was MCPLIB04. The calculation model was prepared with a high degree of fidelity, such as the position of each Cobalt source and shielding materials. Comparisons between measured results and calculated ones were performed, and a very good agreement was observed. It is concluded that the Monte Carlo calculation method with its related nuclear data library is very effective for such a complicated radiation oncology apparatus.
Benchmarking Gas Path Diagnostic Methods: A Public Approach
Simon, Donald L.; Bird, Jeff; Davison, Craig; Volponi, Al; Iverson, R. Eugene
2008-01-01
Recent technology reviews have identified the need for objective assessments of engine health management (EHM) technology. The need is two-fold: technology developers require relevant data and problems to design and validate new algorithms and techniques while engine system integrators and operators need practical tools to direct development and then evaluate the effectiveness of proposed solutions. This paper presents a publicly available gas path diagnostic benchmark problem that has been developed by the Propulsion and Power Systems Panel of The Technical Cooperation Program (TTCP) to help address these needs. The problem is coded in MATLAB (The MathWorks, Inc.) and coupled with a non-linear turbofan engine simulation to produce "snap-shot" measurements, with relevant noise levels, as if collected from a fleet of engines over their lifetime of use. Each engine within the fleet will experience unique operating and deterioration profiles, and may encounter randomly occurring relevant gas path faults including sensor, actuator and component faults. The challenge to the EHM community is to develop gas path diagnostic algorithms to reliably perform fault detection and isolation. An example solution to the benchmark problem is provided along with associated evaluation metrics. A plan is presented to disseminate this benchmark problem to the engine health management technical community and invite technology solutions.
Bogen, Kenneth T
2011-01-01
Benchmark Dose Model software (BMDS), developed by the U.S. Environmental Protection Agency, involves a growing suite of models and decision rules now widely applied to assess noncancer and cancer risk, yet its statistical performance has never been examined systematically. As typically applied, BMDS also ignores the possibility of reduced risk at low doses ("hormesis"). A simpler, proposed Generic Hockey-Stick (GHS) model also estimates benchmark dose and potency, and additionally characterizes and tests objectively for hormetic trend. Using 100 simulated dichotomous-data sets (5 dose groups, 50 animals/group), sampled from each of seven risk functions, GHS estimators performed about as well or better than BMDS estimators, and a surprising observation was that BMDS mis-specified all of six non-hormetic sampled risk functions most or all of the time. When applied to data on rodent tumors induced by the genotoxic chemical carcinogen anthraquinone (AQ), the GHS model yielded significantly negative estimates of net potency exhibited by the combined rodent data, suggesting that-consistent with the anti-leukemogenic properties of AQ and structurally similar quinones-environmental AQ exposures do not likely increase net cancer risk. In addition to its simplicity and flexibility, the GHS approach offers a unified, consistent approach to quantifying environmental chemical risk. PMID:21731536
High integrity reference trajectory for benchmarking land navigation data fusion methods
Betaille, David; CHAPELON, Antoine; Lusetti, Benoît; KAIS, Mikaël; MILLESCAMPS, Damien
2007-01-01
AbstractIn the framework of a joint initiative of several French laboratories that investigate land navigation, the authors have designed an architecture and tests protocol for benchmarking altogether data fusion methods applied on a collection of sensors covering the complete range of quality. Special attention has been given to sensors data timestamping since the benchmarking is based on the comparison of computed trajectories with the reference trajectory, so called because its compu...
Review of California and National Methods for Energy PerformanceBenchmarking of Commercial Buildings
Matson, Nance E.; Piette, Mary Ann
2005-09-05
This benchmarking review has been developed to support benchmarking planning and tool development under discussion by the California Energy Commission (CEC), Lawrence Berkeley National Laboratory (LBNL) and others in response to the Governor's Executive Order S-20-04 (2004). The Executive Order sets a goal of benchmarking and improving the energy efficiency of California's existing commercial building stock. The Executive Order requires the CEC to propose ''a simple building efficiency benchmarking system for all commercial buildings in the state''. This report summarizes and compares two currently available commercial building energy-benchmarking tools. One tool is the U.S. Environmental Protection Agency's Energy Star National Energy Performance Rating System, which is a national regression-based benchmarking model (referred to in this report as Energy Star). The second is Lawrence Berkeley National Laboratory's Cal-Arch, which is a California-based distributional model (referred to as Cal-Arch). Prior to the time Cal-Arch was developed in 2002, there were several other benchmarking tools available to California consumers but none that were based solely on California data. The Energy Star and Cal-Arch benchmarking tools both provide California with unique and useful methods to benchmark the energy performance of California's buildings. Rather than determine which model is ''better'', the purpose of this report is to understand and compare the underlying data, information systems, assumptions, and outcomes of each model.
Framework for benchmarking online retailing performance using fuzzy AHP and TOPSIS method
M. Ahsan Akhtar Hasin
2012-08-01
Full Text Available Due to increasing penetration of internet connectivity, on-line retail is growing from the pioneer phase to increasing integration within people's lives and companies' normal business practices. In the increasingly competitive environment, on-line retail service providers require systematic and structured approach to have cutting edge over the rival. Thus, the use of benchmarking has become indispensable to accomplish superior performance to support the on-line retail service providers. This paper uses the fuzzy analytic hierarchy process (FAHP approach to support a generic on-line retail benchmarking process. Critical success factors for on-line retail service have been identified from a structured questionnaire and literature and prioritized using fuzzy AHP. Using these critical success factors, performance levels of the ORENET an on-line retail service provider is benchmarked along with four other on-line service providers using TOPSIS method. Based on the benchmark, their relative ranking has also been illustrated.
Dose estimation by biological methods
The human being is exposed to strong artificial radiation sources, mainly of two forms: the first is referred to the occupationally exposed personnel (POE) and the second, to the persons that require radiological treatment. A third form less common is by accidents. In all these conditions it is very important to estimate the absorbed dose. The classical biological dosimetry is based in the dicentric analysis. The present work is part of researches to the process to validate the In situ Fluorescent hybridation (FISH) technique which allows to analyse the aberrations on the chromosomes. (Author)
Biological dosimetry - Dose estimation method using biomakers
The individual radiation dose estimation is an important step in the radiation risk assessment. In case of radiation incident or radiation accident, sometime, physical dosimetry method can not be used for calculating the individual radiation dose, the other complement method such as biological dosimetry is very necessary. This method is based on the quantitative specific biomarkers induced by ionizing radiation, such as dicentric chromosomes, translocations, micronuclei... in human peripheral blood lymphocytes. The basis of the biological dosimetry method is the close relationship between the biomarkers and absorbed dose or dose rate; the effects of in vitro and in vivo are similar, so it is able to generate the calibration dose-effect curve in vitro for in vivo assessment. Possibilities and perspectives for performing biological dosimetry method in radiation protection area are presented in this report. (author)
Benchmarking with the multigroup diffusion high-order response matrix method
The benchmarking capabilities of the high-order response matrix eigenvalue method, which was developed more than a decade ago, are demonstrated by means of the numerical analysis of a variety of two-dimensional Cartesian geometry light-water reactor test problems. These problems are typical of those generally used for the benchmarking of coarse-mesh (nodal) diffusion methods and the numerical results show that the high-order response matrix eigenvalue method is well suited to be used as an alternative to fine-mesh finite-difference and refined mesh nodal methods for the purpose of generating reference solutions to such problems. (author)
Simplified CCSD(T)-F12 methods: theory and benchmarks.
Knizia, Gerald; Adler, Thomas B; Werner, Hans-Joachim
2009-02-01
The simple and efficient CCSD(T)-F12x approximations (x = a,b) we proposed in a recent communication [T. B. Adler, G. Knizia, and H.-J. Werner, J. Chem. Phys. 127, 221106 (2007)] are explained in more detail and extended to open-shell systems. Extensive benchmark calculations are presented, which demonstrate great improvements in basis set convergence for a wide variety of applications. These include reaction energies of both open- and closed-shell reactions, atomization energies, electron affinities, ionization potentials, equilibrium geometries, and harmonic vibrational frequencies. For all these quantities, results better than the AV5Z quality are obtained already with AVTZ basis sets, and usually AVDZ treatments reach at least the conventional AVQZ quality. For larger molecules, the additional cost for these improvements is only a few percent of the time for a standard CCSD(T) calculation. For the first time ever, total reaction energies with chemical accuracy are obtained using valence-double-zeta basis sets. PMID:19206955
Methods of bone marrow dose calculation
Several methods of bone marrow dose calculation for photon irradiation were analised. After a critical analysis, the author proposes the adoption, by the Instituto de Radioprotecao e Dosimetria/CNEN, of Rosenstein's method for dose calculations in Radiodiagnostic examinations and Kramer's method in case of occupational irradiation. It was verified by Eckerman and Simpson that for monoenergetic gamma emitters uniformly distributed within the bone mineral of the skeleton the dose in the bone surface can be several times higher than dose in skeleton. In this way, is also proposed the Calculation of tissue-air ratios for bone surfaces in some irradiation geometries and photon energies to be included in the Rosenstein's method for organ dose calculation in Radiodiagnostic examinations. (Author)
In 2001, an international cooperation on the 3D radiation transport benchmarks for simple geometries with void region was performed under the leadership of E. Sartori of OECD/NEA. There were contributions from eight institutions, where 6 contributions were by the discrete ordinate method and only two were by the spherical harmonics method. The 3D spherical harmonics program FFT3 by the finite Fourier transformation method has been improved for this presentation, and benchmark solutions for the 2D and 3D simple geometries with void region by the FFT2 and FFT3 are given showing fairly good accuracy. (authors)
Gamma dose estimation with the thermoluminescence method
Kumamoto, Yoshikazu [National Inst. of Radiological Sciences, Chiba (Japan)
1994-03-01
Absorbed dose in radiation accidents can be estimated with the aid of materials which have the ability of dose recording and were exposed during the accidents. Quartz in bricks and tiles used to construct the buildings has the thermoluminescent properties. These materials exposed to radiations emit light when heated. Quartz and ruby have been used for the estimation of dose. The requirements for such dosemeters include; (1)the high kiln temperature at which all thermoluminescent energies accrued from natural radiations are erased. (2)the negligible fading of thermoluminescent energies after the exposure to radiations. (3)the determination of dose from natural radiations after the making of the matcrials. (4)the geometry of the place at which materials are collected. Bricks or tiles are crushed in the motar, sieved into sizes, washed with HF, HCl, alchol, aceton and water, and given with a known calibration dose. The pre-dose method and high-temperature method are used. In the former, glow curves with and without calibration dose are recorded. In the latter, glow peaks at 110degC with and without calibration dose are recorded after the heating of quartz up to 500degC. In this report, the method of the sample preparation, the measurement procedures and the results of dose estimation in the atomic bombing, iridium-192 and Chernobyl accident are described. (author).
The recently developed Hybrid Diffusion-Transport Spatial Homogenization (DTH) Method was previously tested on a benchmark problem typical of a boiling water reactor. In this paper, the DTH method is tested in a 1-D benchmark problem based on the Japanese High Temperature Test Reactor (HTTR). This acts as a verification of the method for a reactor that is optically thinner than the original BWR test benchmark. (author)
Methods of assessing total doses integrated across pathways
future years. C) Construct Individuals with high rates of consumption or occupancy across all pathways are used to derive rates for each pathway. These are applied in future years. D) Top-Two High and average consumption and occupancy rates for each pathway are derived. Doses can be calculated for all combinations where two pathways are considered at high rates and the remainder as average E) Profiling A profile is derived by calculating consumption and occupancy rates for each pathway for individuals who exhibit high rates for a single pathway. Other profiles may be built by repeating for other pathways. Total dose is the highest dose for any profile, and that profile becomes known as the critical group. Method A was used as a benchmark, with methods B -E compared according to the previously specified criteria. Overall the profiling method of total dose calculation was adopted due to its favourable overall comparison with the individual method and the homogeneity of the critical group selected. (authors)
Benchmark calculations for evaluation methods of gas volumetric leakage rate
A containment function of radioactive materials transport casks is essential for safe transportation to prevent the radioactive materials from being released into environment. Regulations such as IAEA standard determined the limit of radioactivity to be released. Since is not practical for the leakage tests to measure directly the radioactivity release from a package, as gas volumetric leakages rates are proposed in ANSI N14.5 and ISO standards. In our previous works, gas volumetric leakage rates for several kinds of gas from various leaks were measured and two evaluation methods, 'a simple evaluation method' and 'a strict evaluation method', were proposed based on the results. The simple evaluation method considers the friction loss of laminar flow with expansion effect. The strict evaluating method considers an exit loss in addition to the friction loss. In this study, four worked examples were completed for on assumed large spent fuel transport cask (Type B Package) with wet or dry capacity and at three transport conditions; normal transport with intact fuels or failed fuels, and an accident in transport. The standard leakage rates and criteria for two kinds of leak test were calculated for each example by each evaluation method. The following observations are made based upon the calculations and evaluations: the choked flow model of ANSI method greatly overestimates the criteria for tests ; the laminar flow models of both ANSI and ISO methods slightly overestimate the criteria for tests; the above two results are within the design margin for ordinary transport condition and all methods are useful for the evaluation; for severe condition such as failed fuel transportation, it should pay attention to apply a choked flow model of ANSI method. (authors)
Benchmarking the inelastic neutron scattering soil carbon method
The herein described inelastic neutron scattering (INS) method of measuring soil carbon was based on a new procedure for extracting the net carbon signal (NCS) from the measured gamma spectra and determination of the average carbon weight percent (AvgCw%) in the upper soil layer (~8 cm). The NCS ext...
Benchmark measurements and simulations of dose perturbations due to metallic spheres in proton beams
Monte Carlo simulations are increasingly used for dose calculations in proton therapy due to its inherent accuracy. However, dosimetric deviations have been found using Monte Carlo code when high density materials are present in the proton beamline. The purpose of this work was to quantify the magnitude of dose perturbation caused by metal objects. We did this by comparing measurements and Monte Carlo predictions of dose perturbations caused by the presence of small metal spheres in several clinical proton therapy beams as functions of proton beam range and drift space. Monte Carlo codes MCNPX, GEANT4 and Fast Dose Calculator (FDC) were used. Generally good agreement was found between measurements and Monte Carlo predictions, with the average difference within 5% and maximum difference within 17%. The modification of multiple Coulomb scattering model in MCNPX code yielded improvement in accuracy and provided the best overall agreement with measurements. Our results confirmed that Monte Carlo codes are well suited for predicting multiple Coulomb scattering in proton therapy beams when short drift spaces are involved. - Highlights: • We compared measurements and Monte Carlo predictions of dose perturbations caused by the metal objects in proton beams. • Different Monte Carlo codes were used, including MCNPX, GEANT4 and Fast Dose Calculator. • Good agreement was found between measurements and Monte Carlo simulations. • The modification of multiple Coulomb scattering model in MCNPX code yielded improved accuracy. • Our results confirmed that Monte Carlo codes are well suited for predicting multiple Coulomb scattering in proton therapy
Benchmarking ortholog identification methods using functional genomics data
Hulsen, T.; Huynen, M.A.; de Vlieg, J; Groenen, P.M.A.
2006-01-01
BACKGROUND: The transfer of functional annotations from model organism proteins to human proteins is one of the main applications of comparative genomics. Various methods are used to analyze cross-species orthologous relationships according to an operational definition of orthology. Often the definition of orthology is incorrectly interpreted as a prediction of proteins that are functionally equivalent across species, while in fact it only defines the existence of a common ancestor for a gene...
Some benchmark shielding problems solved by the finite element method
Some of the test cases on bulk shields for the two-dimensional codes MARC, TRIMOM and FELICIT are described. These codes use spherical harmonic expansions for neutron directions and a finite element grid over space. MARC was developed primarily as a reactor physics code with a finite element option and it assumes isotropic scattering. TRIMOM is being developed as a general purpose shielding code for anisotropic scatterers. FELICIT is being developed as a module of TRIMOM for cylindrical systems. All three codes employ continuous trial functions at present. Exploratory work on the use of discontinuous trial functions is described. Discontinuous trial functions permit the splicing of methods which use different angular expansions, so that, for example, transport theory can be used where it is necessary and diffusion theory can be used elsewhere. (author)
Dral, Pavlo O; Wu, Xin; Spörkel, Lasse; Koslowski, Axel; Thiel, Walter
2016-03-01
The semiempirical orthogonalization-corrected OMx methods (OM1, OM2, and OM3) go beyond the standard MNDO model by including additional interactions in the electronic structure calculation. When augmented with empirical dispersion corrections, the resulting OMx-Dn approaches offer a fast and robust treatment of noncovalent interactions. Here we evaluate the performance of the OMx and OMx-Dn methods for a variety of ground-state properties using a large and diverse collection of benchmark sets from the literature, with a total of 13035 original and derived reference data. Extensive comparisons are made with the results from established semiempirical methods (MNDO, AM1, PM3, PM6, and PM7) that also use the NDDO (neglect of diatomic differential overlap) integral approximation. Statistical evaluations show that the OMx and OMx-Dn methods outperform the other methods for most of the benchmark sets. PMID:26771261
DFT methods for conjugated materials: From benchmarks to functionals
Sears, John; Bredas, Jean-Luc
2012-02-01
From a theoretical standpoint, many of the problems of interest in the study of pi-conjugated materials for organic electronics applications pose a particular challenge for many modern density functional theory methods. Systematic errors have been observed, for instance, in the description of charge-transfer excitations at donor/acceptor interfaces, in linear and non-linear polarizabilites, as well as in the geometric and electronic properties of conjugated polymers [1,2]. We will discuss recent results in our lab aimed at: (i) understanding the sources of error for some of these problems; (ii) addressing these errors using tuned long-range corrected functionals; and (iii) using these results to guide the development of state-of-the-art methodologies in a new open-source DFT code. [4pt] [1] J. S. Sears, T. Korzdorfer, C. R. Zhang, and J. L. Bredas, J. Chem. Phys. 135 151103 (2011)[0pt] [2] T. Korzdorfer, J. S. Sears, C. Sutton, and J. L. Bredas, J. Chem. Phys., accepted.
Solution of the WFNDEC 2015 eddy current benchmark with surface integral equation method
Demaldent, Edouard; Miorelli, Roberto; Reboud, Christophe; Theodoulidis, Theodoros
2016-02-01
In this paper, a numerical solution of WFNDEC 2015 eddy current benchmark is presented. In particular, the Surface Integral Equation (SIE) method has been employed for numerically solving the benchmark problem. The SIE method represent an effective and efficient alternative to standard numerical solver like Finite Element Method (FEM) when electromagnetic fields need to be calculated in problems involving homogeneous media. The formulation of SIE method allows to properly solve the electromagnetic problem by meshing the surface of the media instead to the complete media volume as done in FEM. The surface meshing enables to describe the problem with a smaller number of unknowns with respect to FEM. This property is directly translated in an obvious gain in terms of CPU time efficiency.
Highlights: • The MCNP4C was used to calculate the gamma ray dose rate spatial distribution in for the SGIF. • Measurement of the gamma ray dose rate spatial distribution using the Chlorobenzene dosimeter was conducted as well. • Good agreements were noticed between the calculated and measured results. • The maximum relative differences were less than 7%, 4% and 4% in the x, y and z directions respectively. - Abstract: A three dimensional model for the Syrian gamma irradiation facility (SGIF) is developed in this paper to calculate the gamma ray dose rate spatial distribution in the irradiation room at the 60Co source board using the MCNP-4C code. Measurement of the gamma ray dose rate spatial distribution using the Chlorobenzene dosimeter is conducted as well to compare the calculated and measured results. Good agreements are noticed between the calculated and measured results with maximum relative differences less than 7%, 4% and 4% in the x, y and z directions respectively. This agreement indicates that the established model is an accurate representation of the SGIF and can be used in the future to make the calculation design for a new irradiation facility
ABSTRACT The ability to anchor chemical class-based gene expression changes to phenotypic lesions and to describe these changes as a function of dose and time informs mode of action determinations and improves quantitative risk assessments. Previous transcription-based microarra...
The reliability of calculation tools to evaluate and calculate dose rates appearing behind multi-layered shields is important with regard to the certification of transport and storage casks. Actual benchmark databases like SINBAD do not offer such configurations because they were developed for reactor and accelerator purposes. Due to this, a bench-mark-suite based on own experiments that contain dose rates measured in different distances and levels from a transport and storage cask and on a public benchmark to validate Monte-Carlo-transport-codes has been developed. The analysed and summarised experiments include a 60Co point-source located in a cylindrical cask, a 252Cf line-source shielded by iron and polyethylene (PE) and a bare 252Cf source moderated by PE in a concrete-labyrinth with different inserted shielding materials to quantify neutron streaming effects on measured dose rates. In detail not only MCNPTM (version 5.1.6) but also MAVRIC, included in the SCALE 6.1 package, have been compared for photon and neutron transport. Aiming at low deviations between calculation and measurement requires precise source term specification and exact measurements of the dose rates which have been evaluated carefully including known uncertainties. In MAVRIC different source-descriptions with respect to the group-structure of the nuclear data library are analysed for the calculation of gamma dose rates because the energy lines of 60Co can only be modelled in groups. In total the comparison shows that MCNPTM fits very wall to the measurements within up to two standard deviations and that MAVRIC behaves similarly under the prerequisite that the source-model can be optimized. (author)
Germain, Pierre-Luc; Vitriolo, Alessandro; Adamo, Antonio; Laise, Pasquale; Das, Vivek; Testa, Giuseppe
2016-06-20
RNA sequencing (RNAseq) has become the method of choice for transcriptome analysis, yet no consensus exists as to the most appropriate pipeline for its analysis, with current benchmarks suffering important limitations. Here, we address these challenges through a rich benchmarking resource harnessing (i) two RNAseq datasets including ERCC ExFold spike-ins; (ii) Nanostring measurements of a panel of 150 genes on the same samples; (iii) a set of internal, genetically-determined controls; (iv) a reanalysis of the SEQC dataset; and (v) a focus on relative quantification (i.e. across-samples). We use this resource to compare different approaches to each step of RNAseq analysis, from alignment to differential expression testing. We show that methods providing the best absolute quantification do not necessarily provide good relative quantification across samples, that count-based methods are superior for gene-level relative quantification, and that the new generation of pseudo-alignment-based software performs as well as established methods, at a fraction of the computing time. We also assess the impact of library type and size on quantification and differential expression analysis. Finally, we have created a R package and a web platform to enable the simple and streamlined application of this resource to the benchmarking of future methods. PMID:27190234
Germain, Pierre-Luc
2016-06-20
RNA sequencing (RNAseq) has become the method of choice for transcriptome analysis, yet no consensus exists as to the most appropriate pipeline for its analysis, with current benchmarks suffering important limitations. Here, we address these challenges through a rich benchmarking resource harnessing (i) two RNAseq datasets including ERCC ExFold spike-ins; (ii) Nanostring measurements of a panel of 150 genes on the same samples; (iii) a set of internal, genetically-determined controls; (iv) a reanalysis of the SEQC dataset; and (v) a focus on relative quantification (i.e. across-samples). We use this resource to compare different approaches to each step of RNAseq analysis, from alignment to differential expression testing. We show that methods providing the best absolute quantification do not necessarily provide good relative quantification across samples, that count-based methods are superior for gene-level relative quantification, and that the new generation of pseudo-alignment-based software performs as well as established methods, at a fraction of the computing time. We also assess the impact of library type and size on quantification and differential expression analysis. Finally, we have created a R package and a web platform to enable the simple and streamlined application of this resource to the benchmarking of future methods.
A time-implicit numerical method and benchmarks for the relativistic Vlasov–Ampere equations
We present a time-implicit numerical method to solve the relativistic Vlasov–Ampere system of equations on a two dimensional phase space grid. The time-splitting algorithm we use allows the generalization of the work presented here to higher dimensions keeping the linear aspect of the resulting discrete set of equations. The implicit method is benchmarked against linear theory results for the relativistic Landau damping for which analytical expressions using the Maxwell-Jüttner distribution function are derived. We note that, independently from the shape of the distribution function, the relativistic treatment features collective behaviours that do not exist in the nonrelativistic case. The numerical study of the relativistic two-stream instability completes the set of benchmarking tests
A time-implicit numerical method and benchmarks for the relativistic Vlasov–Ampere equations
Carrié, Michael, E-mail: mcarrie2@unl.edu; Shadwick, B. A., E-mail: shadwick@mailaps.org [Department of Physics and Astronomy, University of Nebraska-Lincoln, Lincoln, Nebraska 68588 (United States)
2016-01-15
We present a time-implicit numerical method to solve the relativistic Vlasov–Ampere system of equations on a two dimensional phase space grid. The time-splitting algorithm we use allows the generalization of the work presented here to higher dimensions keeping the linear aspect of the resulting discrete set of equations. The implicit method is benchmarked against linear theory results for the relativistic Landau damping for which analytical expressions using the Maxwell-Jüttner distribution function are derived. We note that, independently from the shape of the distribution function, the relativistic treatment features collective behaviours that do not exist in the nonrelativistic case. The numerical study of the relativistic two-stream instability completes the set of benchmarking tests.
Benchmarking and Performance Measurement.
Town, J. Stephen
This paper defines benchmarking and its relationship to quality management, describes a project which applied the technique in a library context, and explores the relationship between performance measurement and benchmarking. Numerous benchmarking methods contain similar elements: deciding what to benchmark; identifying partners; gathering…
Household Electricity Demand Forecasting -- Benchmarking State-of-the-Art Methods
Veit, Andreas; Goebel, Christoph; Tidke, Rohit; Doblander, Christoph; Jacobsen, Hans-Arno
2014-01-01
The increasing use of renewable energy sources with variable output, such as solar photovoltaic and wind power generation, calls for Smart Grids that effectively manage flexible loads and energy storage. The ability to forecast consumption at different locations in distribution systems will be a key capability of Smart Grids. The goal of this paper is to benchmark state-of-the-art methods for forecasting electricity demand on the household level across different granularities and time scales ...
Bemis, Jeffrey C; Wills, John W; Bryce, Steven M; Torous, Dorothea K; Dertinger, Stephen D; Slob, Wout
2016-05-01
The application of flow cytometry as a scoring platform for both in vivo and in vitro micronucleus (MN) studies has enabled the efficient generation of high quality datasets suitable for comprehensive assessment of dose-response. Using this information, it is possible to obtain precise estimates of the clastogenic potency of chemicals. We illustrate this by estimating the in vivo and the in vitro potencies of seven model clastogenic agents (melphalan, chlorambucil, thiotepa, 1,3-propane sultone, hydroxyurea, azathioprine and methyl methanesulfonate) by deriving BMDs using freely available BMD software (PROAST). After exposing male rats for 3 days with up to nine dose levels of each individual chemical, peripheral blood samples were collected on Day 4. These chemicals were also evaluated for in vitro MN induction by treating TK6 cells with up to 20 concentrations in quadruplicate. In vitro MN frequencies were determined via flow cytometry using a 96-well plate autosampler. The estimated in vitro and in vivo BMDs were found to correlate to each other. The correlation showed considerable scatter, as may be expected given the complexity of the whole animal model versus the simplicity of the cell culture system. Even so, the existence of the correlation suggests that information on the clastogenic potency of a compound can be derived from either whole animal studies or cell culture-based models of chromosomal damage. We also show that the choice of the benchmark response, i.e. the effect size associated with the BMD, is not essential in establishing the correlation between both systems. Our results support the concept that datasets derived from comprehensive genotoxicity studies can provide quantitative dose-response metrics. Such investigational studies, when supported by additional data, might then contribute directly to product safety investigations, regulatory decision-making and human risk assessment. PMID:26049158
Calculation of the IAEA ADS neutronics benchmark (stage-1) (2D discrete coordinate method)
To study the neutronics for the ADS system, a set of computation software based on discrete ordinate method is selected and established. The set is tested through an IAEA benchmark. In the test process, the understanding and using of this software set are improved. The benchmark is analyzed. The calculations include the effective multiplication factor keff , the required strength of the spallation neutron source for 1.5 GW thermal power, the distribution of power density and the spectrum index, and the void effect at the beginning of life, BOL; the spatial and time-dependent density distribution of various nuclides (actinides and fission products) for burn-up process. The results are given in figures and tables and are consistent with calculations made abroad. The conclusion is that this software set can be applied to the optimization of design study for the ADS system
Implementation of convergence judgment method to OECD/NEA benchmark problems
To improve the slow convergence of fission source distribution, fission source acceleration methods have been so far developed and incorporated in the ordinary Monte Carlo calculations by using the fission matrix eigenvector. In ICNC2003, a convergence judgment method involving an effective initial acceleration procedure is proposed by the authors in the poster presentation, and here, the proposed convergence judgment method is practically applied to the OECD/NEA source convergence benchmark problems. In this application process, some difficulties arise and are investigated to derive a solution to them. (author)
A fission collision separation method has been recently developed to significantly improve the computational efficiency of the COMET response coefficient generator. In this work, the accuracy and efficiency of the new response coefficient generation method is tested in 3D HTTR benchmark problems at both lattice and core levels. In lattice calculations, the surface-to-surface and fission density response coefficients computed by the new method are compared with those directly calculated by the Monte Carlo method. In whole core calculations, the eigenvalues and bundle/pin fission densities predicated by COMET based on the response coefficient libraries generated by the fission collision separation method are compared with those based on the interpolation method as well as the Monte Carlo reference solutions. These comparisons have shown that the new response coefficient generation method is significantly (about 3 times) faster than the interpolations method while its accuracy is close to that of the interpolation method. (author)
Determining the sensitivity of Data Envelopment Analysis method used in airport benchmarking
Mircea BOSCOIANU
2013-03-01
Full Text Available In the last decade there were some important changes in the airport industry, caused by the liberalization of the air transportation market. Until recently airports were considered infrastructure elements, and they were evaluated only by traffic values or their maximum capacity. Gradual orientation towards commercial led to the need of finding another ways of evaluation, more efficiency oriented. The existing methods for assessing efficiency used for other production units were not suitable to be used in case of airports due to specific features and high complexity of airport operations. In the last years there were some papers that proposed the Data Envelopment Analysis as a method for assessing the operational efficiency in order to conduct the benchmarking. This method offers the possibility of dealing with a large number of variables of different types, which represents the main advantage of this method and also recommends it as a good benchmarking tool for the airports management. This paper goal is to determine the sensitivity of this method in relation with its inputs and outputs. A Data Envelopment Analysis is conducted for 128 airports worldwide, in both input- and output-oriented measures, and the results are analysed against some inputs and outputs variations. Possible weaknesses of using DEA for assessing airports performance are revealed and analysed against this method advantages.
J. F. Vijay
2009-01-01
Full Text Available Problem statement: Estimation models in software engineering are used to predict some important attributes of future entities such as development effort, software reliability and programmers productivity. Among these models, those estimating software effort have motivated considerable research in recent years. Approach: In this study we discussed an available work on the effort estimation methods and also proposed a hybrid method for effort estimation process. As an initial approach to hybrid technology, we developed a simple approach to SEE based on use case models: The "use case point's method". This method is not new, but has not become popular although it is easy to understand and implement. We therefore investigated this promising method, which was inspired by function points analysis. Results: Reliable estimates can be calculated by using our method in a short time with the aid of a spreadsheet. Conclusion: We are planning to extend its applicability to estimate risk and benchmarking measures.
Physical methods for dose determinations in mammography
There is small but significant risk of radiation induced carcinogenesis associated with mammography. High quality mammography is the best method of early breast cancer detection. Besides, image as a basic requirement for an effective diagnosis, radiation protection principles require the radiation dose to the imaged tissue to be as low as compatible with required image quality. Glandular tissues is the most radiosensitive, thus the evaluation of Mean Glandular Dose (MGD) is the most relevant factor for estimation of radiation risk as well as the comparison of performance at different mammographic machines. MGD was estimated using Entrance Surface Air KERMA at the breast surface Kf measured free in air and appropriate conversation factors. Under evaluation were eight mammographic machines at institute of radiology, Skopje and mammographic machines at the Health's centers in Vevchani, Bitola, Prilep, Negotino and Shtip. Estimated values of MGD do not exceed the European reference level (<2mGy), but it can not be generally concluded for all mammography units in Macedonia, until their examination. In the near future all mammography units will be subject of Q C tests and dose measurements. (Author)
Reliable B cell epitope predictions: impacts of method development and improved benchmarking.
Jens Vindahl Kringelum
Full Text Available The interaction between antibodies and antigens is one of the most important immune system mechanisms for clearing infectious organisms from the host. Antibodies bind to antigens at sites referred to as B-cell epitopes. Identification of the exact location of B-cell epitopes is essential in several biomedical applications such as; rational vaccine design, development of disease diagnostics and immunotherapeutics. However, experimental mapping of epitopes is resource intensive making in silico methods an appealing complementary approach. To date, the reported performance of methods for in silico mapping of B-cell epitopes has been moderate. Several issues regarding the evaluation data sets may however have led to the performance values being underestimated: Rarely, all potential epitopes have been mapped on an antigen, and antibodies are generally raised against the antigen in a given biological context not against the antigen monomer. Improper dealing with these aspects leads to many artificial false positive predictions and hence to incorrect low performance values. To demonstrate the impact of proper benchmark definitions, we here present an updated version of the DiscoTope method incorporating a novel spatial neighborhood definition and half-sphere exposure as surface measure. Compared to other state-of-the-art prediction methods, Discotope-2.0 displayed improved performance both in cross-validation and in independent evaluations. Using DiscoTope-2.0, we assessed the impact on performance when using proper benchmark definitions. For 13 proteins in the training data set where sufficient biological information was available to make a proper benchmark redefinition, the average AUC performance was improved from 0.791 to 0.824. Similarly, the average AUC performance on an independent evaluation data set improved from 0.712 to 0.727. Our results thus demonstrate that given proper benchmark definitions, B-cell epitope prediction methods achieve
Woutersen, R A; Jonker, D; Stevenson, H; te Biesebeek, J D; Slob, W
2001-07-01
The OECD study design, aimed at obtaining a no-observed-adverse-effect level (NOAEL), may be suboptimal for deriving a benchmark dose. Therefore the present subacute (28-day) study was carried out to evaluate a multiple dose study design and to compare the results with the common OECD design. Seven groups of 10 female rats each were intragastrically administered corn oil without (controls) or with 50, 150, 300, 450, 600 or 750 mg Rhodorsil Silane/kg body weight/day, once daily (7 days/week) for 4 weeks. From the complete dataset, two subsets were selected, one representing a study design with seven dose groups of five animals (7 x 5 design), the other representing a study design with four dose groups of 10 animals (4 x 10 design). Under the conditions of the present study, the NOAEL for Rhodorsil Silane 198 was assessed at 50 mg/kg body weight/day, based on the data of the 4 x 10 design. The benchmark approach resulted in a benchmark dose of 19 mg/kg body weight/day, based on the data of the 7 x 5 design. Comparison of the results demonstrated that the multiple dose (7 x 5) design led to a more reliable result than the OECD (4 x 10) design, despite the smaller total number of animals. The dose-response analysis showed that at "the NOAEL" the effect on relative spleen weight was larger than 10%, illustrating that at the NOAEL, adverse effects may occur. PMID:11397516
An energy transfer method for 4D Monte Carlo dose calculation.
Siebers, Jeffrey V; Zhong, Hualiang
2008-09-01
This article presents a new method for four-dimensional Monte Carlo dose calculations which properly addresses dose mapping for deforming anatomy. The method, called the energy transfer method (ETM), separates the particle transport and particle scoring geometries: Particle transport takes place in the typical rectilinear coordinate system of the source image, while energy deposition scoring takes place in a desired reference image via use of deformable image registration. Dose is the energy deposited per unit mass in the reference image. ETM has been implemented into DOSXYZnrc and compared with a conventional dose interpolation method (DIM) on deformable phantoms. For voxels whose contents merge in the deforming phantom, the doses calculated by ETM are exactly the same as an analytical solution, contrasting to the DIM which has an average 1.1% dose discrepancy in the beam direction with a maximum error of 24.9% found in the penumbra of a 6 MV beam. The DIM error observed persists even if voxel subdivision is used. The ETM is computationally efficient and will be useful for 4D dose addition and benchmarking alternative 4D dose addition algorithms. PMID:18841862
Benchmarking of epithermal methods in the lattice-physics code EPRI-CELL
The epithermal cross section shielding methods used in the lattice physics code EPRI-CELL (E-C) have been extensively studied to determine its major approximations and to examine the sensitivity of computed results to these approximations. The study has resulted in several improvements in the original methodology. These include: treatment of the external moderator source with intermediate resonance (IR) theory, development of a new Dancoff factor expression to account for clad interactions, development of a new method for treating resonance interference, and application of a generalized least squares method to compute best-estimate values for the Bell factor and group-dependent IR parameters. The modified E-C code with its new ENDF/B-V cross section library is tested for several numerical benchmark problems. Integral parameters computed by EC are compared with those obtained with point-cross section Monte Carlo calculations, and E-C fine group cross sections are benchmarked against point-cross section descrete ordinates calculations. It is found that the code modifications improve agreement between E-C and the more sophisticated methods. E-C shows excellent agreement on the integral parameters and usually agrees within a few percent on fine-group, shielded cross sections
Bezler, P.; Hartzman, M.; Reich, M.
1980-08-01
A set of benchmark problems and solutions have been developed for verifying the adequacy of computer programs used for dynamic analysis and design of nuclear piping systems by the Response Spectrum Method. The problems range from simple to complex configurations which are assumed to experience linear elastic behavior. The dynamic loading is represented by uniform support motion, assumed to be induced by seismic excitation in three spatial directions. The solutions consist of frequencies, participation factors, nodal displacement components and internal force and moment components. Solutions to associated anchor point motion static problems are not included.
Fast neutron fluence calculation benchmark analysis based on 3D MC-SN bidirectional coupling method
The Monte Carlo (MC)-discrete ordinates (SN) bidirectional coupling method is an efficient approach to solve shielding calculation of the large complex nuclear facility. The test calculation was taken by the application of the MC-SN bidirectional coupling method on the shielding calculation of the large PWR nuclear facility. Based on the characteristics of NUREG/CR-6115 PWR benchmark model issued by the NRC, 3D Monte Carlo code was employed to accurately simulate the structure from the core to the thermal shield and the dedicated model of the calculation parts locating in the pressure vessel, while the TORT was used for the calculation from the thermal shield to the second down-comer region. The transform between particle probability distribution of MC and angular flux density of SN was realized by the interface program to achieve the coupling calculation. The calculation results were compared with MCNP and DORT solutions of benchmark report and satisfactory agreements were obtained. The preliminary validity of feasibility by using the method to solve shielding problem of a large complex nuclear device was proved. (authors)
A simple analytical method for heterogeneity corrections in low dose rate prostate brachytherapy
Hueso-González, Fernando; Vijande, Javier; Ballester, Facundo; Perez-Calatayud, Jose; Siebert, Frank-André
2015-07-01
In low energy brachytherapy, the presence of tissue heterogeneities contributes significantly to the discrepancies observed between treatment plan and delivered dose. In this work, we present a simplified analytical dose calculation algorithm for heterogeneous tissue. We compare it with Monte Carlo computations and assess its suitability for integration in clinical treatment planning systems. The algorithm, named as RayStretch, is based on the classic equivalent path length method and TG-43 reference data. Analytical and Monte Carlo dose calculations using Penelope2008 are compared for a benchmark case: a prostate patient with calcifications. The results show a remarkable agreement between simulation and algorithm, the latter having, in addition, a high calculation speed. The proposed analytical model is compatible with clinical real-time treatment planning systems based on TG-43 consensus datasets for improving dose calculation and treatment quality in heterogeneous tissue. Moreover, the algorithm is applicable for any type of heterogeneities.
The MIRD method of estimating absorbed dose
Weber, D.A.
1991-01-01
The estimate of absorbed radiation dose from internal emitters provides the information required to assess the radiation risk associated with the administration of radiopharmaceuticals for medical applications. The MIRD (Medical Internal Radiation Dose) system of dose calculation provides a systematic approach to combining the biologic distribution data and clearance data of radiopharmaceuticals and the physical properties of radionuclides to obtain dose estimates. This tutorial presents a review of the MIRD schema, the derivation of the equations used to calculate absorbed dose, and shows how the MIRD schema can be applied to estimate dose from radiopharmaceuticals used in nuclear medicine.
Methodical aspects of benchmarking using in Consumer Cooperatives trade enterprises activity
Yu.V. Dvirko
2013-01-01
The aim of the article. The aim of this article is substantiation of benchmarking main types in Consumer Cooperatives trade enterprises activity; flashlighting of main advantages and drawbacks of benchmarking using; presentation of the authors view upon expediency of flashlighted forms of benchmarking organization using in Consumer Cooperatives in Ukraine trade enterprises activity.The results of the analysis. Under modern conditions of economic relations development and business globalizatio...
A simple method for solar energetic particle event dose forecasting
Bayesian, non-linear regression models or artificial neural networks are used to make predictions of dose and dose rate time profiles using calculated dose and/or dose rates soon after event onset. Both methods match a new event to similar historical events before making predictions for the new events. The currently developed Bayesian method categorizes a new event based on calculated dose rates up to 5 h (categorization window) after event onset. Categories are determined using ranges of dose rates from previously observed SEP events. These categories provide a range of predicted asymptotic dose for the new event. The model then goes on to make predictions of dose and dose rate time profiles out to 120 h beyond event onset. We know of no physical significance to our 5 h categorization window. In this paper, we focus on the efficacy of a simple method for SEP event asymptotic dose forecasting. Instead of making temporal predictions of dose and dose rate, we investigate making predictions of ranges of asymptotic dose using only dose rates at times prior to 5 h after event onset. A range of doses may provide sufficient information to make operational decisions such as taking emergency shelter or commencing/canceling extra-vehicular operations. Specifically, predicted ranges of doses that are found to be insignificant for the effect of interest would be ignored or put on a watch list while predicted ranges of greater significance would be used in the operational decision making progress
Some high-quality reactor physics benchmark experiments are being re-evaluated with today's state-of-the-art methods, particularly using that of detailed 3-dimensional models. One experiment analysed in the framework of the International Reactor Physics Benchmark Experiments (IRPhE) project is SNEAK-7A. This assembly is characterised by a Pu-fuelled fast critical assembly in the Karlsruhe Fast Critical Facility for the purpose of testing cross section data and calculational methods. As the detailed information on the SNEAK-7A benchmark experiment becomes available, the purpose of this paper is to model this experiment as closely as possible to the configuration as it existed in the critical facility. The experimental keff was determined to be 1.0010, which is 29.6 cents supercritical. The realistic modelling of the SNEAK-7A assembly was performed using the DANTSYS code capability for X-Y-Z geometry. The calculated core eigenvalue from THREEDANT is 1.00975. With corrections applied for core plate cell heterogeneity and mesh sizes, the best-estimate core criticality with JEF-2.2-based cross-sections turns out to be 1.01137. While the plate heterogeneity effect from flux redistribution was at first estimated to be as large as 387 pcm from plate cell calculations, it proves to be 142 pcm when the core-wide heterogeneity effects are accounted for. In order to figure out the over-prediction of core eigenvalue, spectral indices are investigated, which led to projecting that the 238U capture cross-sections are being underestimated. This fact is confirmed in the comparison of the central material worth of 238U with the measured value. When the sensitivity of core eigenvalue to the cross section is used, the newly estimated core eigenvalue is 1.00175, which is very close to the measured core eigenvalue, when the 238U capture cross-section was assumed to increase by 5% implied from the comparison of spectral indices. Once the details in the old critical experiments are
Kim, S.J. [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Kodeli, I.; Sartori, E. [OECD NEA DataBank, 92 - Issy les Moulineaux (France)
2003-07-01
Some high-quality reactor physics benchmark experiments are being re-evaluated with today's state-of-the-art methods, particularly using that of detailed 3-dimensional models. One experiment analysed in the framework of the International Reactor Physics Benchmark Experiments (IRPhE) project is SNEAK-7A. This assembly is characterised by a Pu-fuelled fast critical assembly in the Karlsruhe Fast Critical Facility for the purpose of testing cross section data and calculational methods. As the detailed information on the SNEAK-7A benchmark experiment becomes available, the purpose of this paper is to model this experiment as closely as possible to the configuration as it existed in the critical facility. The experimental keff was determined to be 1.0010, which is 29.6 cents supercritical. The realistic modelling of the SNEAK-7A assembly was performed using the DANTSYS code capability for X-Y-Z geometry. The calculated core eigenvalue from THREEDANT is 1.00975. With corrections applied for core plate cell heterogeneity and mesh sizes, the best-estimate core criticality with JEF-2.2-based cross-sections turns out to be 1.01137. While the plate heterogeneity effect from flux redistribution was at first estimated to be as large as 387 pcm from plate cell calculations, it proves to be 142 pcm when the core-wide heterogeneity effects are accounted for. In order to figure out the over-prediction of core eigenvalue, spectral indices are investigated, which led to projecting that the {sup 238}U capture cross-sections are being underestimated. This fact is confirmed in the comparison of the central material worth of {sup 238}U with the measured value. When the sensitivity of core eigenvalue to the cross section is used, the newly estimated core eigenvalue is 1.00175, which is very close to the measured core eigenvalue, when the {sup 238}U capture cross-section was assumed to increase by 5% implied from the comparison of spectral indices. Once the details in the old critical
Molecular Line Emission from Multifluid Shock Waves. I. Numerical Methods and Benchmark Tests
Ciolek, Glenn E
2013-01-01
We describe a numerical scheme for studying time-dependent, multifluid, magnetohydrodynamic shock waves in weakly ionized interstellar clouds and cores. Shocks are modeled as propagating perpendicular to the magnetic field and consist of a neutral molecular fluid plus a fluid of ions and electrons. The scheme is based on operator splitting, wherein time integration of the governing equations is split into separate parts. In one part independent homogeneous Riemann problems for the two fluids are solved using Godunov's method. In the other equations containing the source terms for transfer of mass, momentum, and energy between the fluids are integrated using standard numerical techniques. We show that, for the frequent case where the thermal pressures of the ions and electrons are << magnetic pressure, the Riemann problems for the neutral and ion-electron fluids have a similar mathematical structure which facilitates numerical coding. Implementation of the scheme is discussed and several benchmark tests ...
Methodic of dose planning for WWER-1000 power units
Methods of minimization of dose loads for Zaporozhe NPP personnel were studied. They are aimed decrease the dose limits for reactor personnel to 20 mSv/year on the base of organization and technical improvements and ALARA principle
MOLECULAR LINE EMISSION FROM MULTIFLUID SHOCK WAVES. I. NUMERICAL METHODS AND BENCHMARK TESTS
We describe a numerical scheme for studying time-dependent, multifluid, magnetohydrodynamic shock waves in weakly ionized interstellar clouds and cores. Shocks are modeled as propagating perpendicular to the magnetic field and consist of a neutral molecular fluid plus a fluid of ions and electrons. The scheme is based on operator splitting, wherein time integration of the governing equations is split into separate parts. In one part, independent homogeneous Riemann problems for the two fluids are solved using Godunov's method. In the other, equations containing the source terms for transfer of mass, momentum, and energy between the fluids are integrated using standard numerical techniques. We show that, for the frequent case where the thermal pressures of the ions and electrons are << magnetic pressure, the Riemann problems for the neutral and ion-electron fluids have a similar mathematical structure which facilitates numerical coding. Implementation of the scheme is discussed and several benchmark tests confirming its accuracy are presented, including (1) MHD wave packets ranging over orders of magnitude in length- and timescales, (2) early evolution of multifluid shocks caused by two colliding clouds, and (3) a multifluid shock with mass transfer between the fluids by cosmic-ray ionization and ion-electron recombination, demonstrating the effect of ion mass loading on magnetic precursors of MHD shocks. An exact solution to an MHD Riemann problem forming the basis for an approximate numerical solver used in the homogeneous part of our scheme is presented, along with derivations of the analytic benchmark solutions and tests showing the convergence of the numerical algorithm.
MOLECULAR LINE EMISSION FROM MULTIFLUID SHOCK WAVES. I. NUMERICAL METHODS AND BENCHMARK TESTS
Ciolek, Glenn E.; Roberge, Wayne G., E-mail: cioleg@rpi.edu, E-mail: roberw@rpi.edu [New York Center for Astrobiology (United States); Department of Physics, Applied Physics, and Astronomy, Rensselaer Polytechnic Institute, 110 8th Street, Troy, NY 12180 (United States)
2013-05-01
We describe a numerical scheme for studying time-dependent, multifluid, magnetohydrodynamic shock waves in weakly ionized interstellar clouds and cores. Shocks are modeled as propagating perpendicular to the magnetic field and consist of a neutral molecular fluid plus a fluid of ions and electrons. The scheme is based on operator splitting, wherein time integration of the governing equations is split into separate parts. In one part, independent homogeneous Riemann problems for the two fluids are solved using Godunov's method. In the other, equations containing the source terms for transfer of mass, momentum, and energy between the fluids are integrated using standard numerical techniques. We show that, for the frequent case where the thermal pressures of the ions and electrons are << magnetic pressure, the Riemann problems for the neutral and ion-electron fluids have a similar mathematical structure which facilitates numerical coding. Implementation of the scheme is discussed and several benchmark tests confirming its accuracy are presented, including (1) MHD wave packets ranging over orders of magnitude in length- and timescales, (2) early evolution of multifluid shocks caused by two colliding clouds, and (3) a multifluid shock with mass transfer between the fluids by cosmic-ray ionization and ion-electron recombination, demonstrating the effect of ion mass loading on magnetic precursors of MHD shocks. An exact solution to an MHD Riemann problem forming the basis for an approximate numerical solver used in the homogeneous part of our scheme is presented, along with derivations of the analytic benchmark solutions and tests showing the convergence of the numerical algorithm.
Molecular Line Emission from Multifluid Shock Waves. I. Numerical Methods and Benchmark Tests
Ciolek, Glenn E.; Roberge, Wayne G.
2013-05-01
We describe a numerical scheme for studying time-dependent, multifluid, magnetohydrodynamic shock waves in weakly ionized interstellar clouds and cores. Shocks are modeled as propagating perpendicular to the magnetic field and consist of a neutral molecular fluid plus a fluid of ions and electrons. The scheme is based on operator splitting, wherein time integration of the governing equations is split into separate parts. In one part, independent homogeneous Riemann problems for the two fluids are solved using Godunov's method. In the other, equations containing the source terms for transfer of mass, momentum, and energy between the fluids are integrated using standard numerical techniques. We show that, for the frequent case where the thermal pressures of the ions and electrons are Lt magnetic pressure, the Riemann problems for the neutral and ion-electron fluids have a similar mathematical structure which facilitates numerical coding. Implementation of the scheme is discussed and several benchmark tests confirming its accuracy are presented, including (1) MHD wave packets ranging over orders of magnitude in length- and timescales, (2) early evolution of multifluid shocks caused by two colliding clouds, and (3) a multifluid shock with mass transfer between the fluids by cosmic-ray ionization and ion-electron recombination, demonstrating the effect of ion mass loading on magnetic precursors of MHD shocks. An exact solution to an MHD Riemann problem forming the basis for an approximate numerical solver used in the homogeneous part of our scheme is presented, along with derivations of the analytic benchmark solutions and tests showing the convergence of the numerical algorithm.
Pan-specific MHC class I predictors: A benchmark of HLA class I pan-specific prediction methods
Zhang, Hao; Lundegaard, Claus; Nielsen, Morten
2009-01-01
MHCpan methods. Conclusions: The benchmark demonstrated that pan-specific methods do provide accurate predictions also for previously uncharacterized MHC molecules. The NetMHCpan method trained to predict actual binding affinities was consistently top ranking both on quantitative (affinity) and binary (ligand......) data. However, the KISS method trained to predict binary data was one of the best performing when benchmarked on binary data. Finally, a consensus method integrating predictions from the two best-performing methods was shown to improve the prediction accuracy. Associate Editor: Prof. Thomas Lengauer....... emerging pathogens. Methods have recently been published that are able to predict peptide binding to any human MHC class I molecule. In contrast to conventional allele-specific methods, these methods do allow for extrapolation to un-characterized MHC molecules. These pan-specific HLA predictors have not...
Benchmarking of a novel contactless characterisation method for micro thermoelectric modules (μTEMs)
Significant challenges exist in the thermal control of Photonics Integrated Circuits (PICs) for use in optical communications. Increasing component density coupled with greater functionality is leading to higher device-level heat fluxes, stretching the capabilities of conventional cooling methods using thermoelectric modules (TEMs). A tailored thermal control solution incorporating micro thermoelectric modules (μTEMs) to individually address hotspots within PICs could provide an energy efficient alternative to existing control methods. Performance characterisation is required to establish the suitability of commercially-available μTEMs for the operating conditions in current and next generation PICs. The objective of this paper is to outline a novel method for the characterisation of thermoelectric modules (TEMs), which utilises infra-red (IR) heat transfer and temperature measurement to obviate the need for mechanical stress on the upper surface of low compression tolerance (∼0.5N) μTEMs. The method is benchmarked using a commercially-available macro scale TEM, comparing experimental data to the manufacturer's performance data sheet.
Shutdown dose rate (SDDR) inside and around the diagnostics ports of ITER is performed at PPPL/UCLA using the 3-D, FEM, Discrete Ordinates code, ATTILA, along with its updated FORNAX transmutation/decay gamma library. Other ITER partners assess SDDR using codes based on the Monte Carlo (MC) approach (e.g. MCNP code) for transport calculation and the radioactivity inventory code FISPACT or other equivalent decay data libraries for dose rate assessment. To reveal the range of discrepancies in the results obtained by various analysts, an extensive experimental and calculation benchmarking effort has been undertaken to validate the capability of ATTILA for dose rate assessment. On the experimental validation front, the comparison was performed using the measured data from two SDDR experiments performed at the FNG facility, Italy. Comparison was made to the experimental data and to MC results obtained by other analysts. On the calculation validation front, the ATTILA's predictions were compared to other results at key locations inside a calculation benchmark whose configuration duplicates an upper diagnostics port plug (UPP) in ITER. Both serial and parallel version of ATTILA-7.1.0 are used in the PPPL/UCLA analysis performed with FENDL-2.1/FORNAX databases. In the FNG 1st experimental, it was shown that ATTILA's dose rates are largely over estimated (by ∼30–60%) with the ANSI/ANS-6.1.1 flux-to-dose factors whereas the ICRP-74 factors give better agreement (10–20%) with the experimental data and with the MC results at all cooling times. In the 2nd experiment, there is an under estimation in SDDR calculated by both MCNP and ATTILA based on ANSI/ANS-6.1.1 for cooling times up to ∼4 days after irradiation. Thereafter, an over estimation is observed (∼5–10% with MCNP and ∼10–15% with ATTILA). As for the calculation benchmark, the agreement is much better based on ICRP-74 1996 data. The divergence among all dose rate results at ∼11 days cooling time is no
A comprehensive benchmark of kernel methods to extract protein-protein interactions from literature.
Domonkos Tikk
Full Text Available The most important way of conveying new findings in biomedical research is scientific publication. Extraction of protein-protein interactions (PPIs reported in scientific publications is one of the core topics of text mining in the life sciences. Recently, a new class of such methods has been proposed - convolution kernels that identify PPIs using deep parses of sentences. However, comparing published results of different PPI extraction methods is impossible due to the use of different evaluation corpora, different evaluation metrics, different tuning procedures, etc. In this paper, we study whether the reported performance metrics are robust across different corpora and learning settings and whether the use of deep parsing actually leads to an increase in extraction quality. Our ultimate goal is to identify the one method that performs best in real-life scenarios, where information extraction is performed on unseen text and not on specifically prepared evaluation data. We performed a comprehensive benchmarking of nine different methods for PPI extraction that use convolution kernels on rich linguistic information. Methods were evaluated on five different public corpora using cross-validation, cross-learning, and cross-corpus evaluation. Our study confirms that kernels using dependency trees generally outperform kernels based on syntax trees. However, our study also shows that only the best kernel methods can compete with a simple rule-based approach when the evaluation prevents information leakage between training and test corpora. Our results further reveal that the F-score of many approaches drops significantly if no corpus-specific parameter optimization is applied and that methods reaching a good AUC score often perform much worse in terms of F-score. We conclude that for most kernels no sensible estimation of PPI extraction performance on new text is possible, given the current heterogeneity in evaluation data. Nevertheless, our study
Lothar Eysn
2015-05-01
Full Text Available In this study, eight airborne laser scanning (ALS-based single tree detection methods are benchmarked and investigated. The methods were applied to a unique dataset originating from different regions of the Alpine Space covering different study areas, forest types, and structures. This is the first benchmark ever performed for different forests within the Alps. The evaluation of the detection results was carried out in a reproducible way by automatically matching them to precise in situ forest inventory data using a restricted nearest neighbor detection approach. Quantitative statistical parameters such as percentages of correctly matched trees and omission and commission errors are presented. The proposed automated matching procedure presented herein shows an overall accuracy of 97%. Method based analysis, investigations per forest type, and an overall benchmark performance are presented. The best matching rate was obtained for single-layered coniferous forests. Dominated trees were challenging for all methods. The overall performance shows a matching rate of 47%, which is comparable to results of other benchmarks performed in the past. The study provides new insight regarding the potential and limits of tree detection with ALS and underlines some key aspects regarding the choice of method when performing single tree detection for the various forest types encountered in alpine regions.
Xia, Jie; Jin, Hongwei; Liu, Zhenming; Zhang, Liangren; Wang, Xiang Simon
2014-01-01
Benchmarking data sets have become common in recent years for the purpose of virtual screening, though the main focus had been placed on the structure-based virtual screening (SBVS) approaches. Due to the lack of crystal structures, there is great need for unbiased benchmarking sets to evaluate various ligand-based virtual screening (LBVS) methods for important drug targets such as G protein-coupled receptors (GPCRs). To date these ready-to-apply data sets for LBVS are fairly limited, and the...
A new multidimensional semi-analytical benchmark capability is developed. The key feature in the solution is the point kernel formulation. The 3D nature of the source is inherited in the flux making this a true multidimensional test. In addition, an efficient numerical scheme, called iterative interpolation, is used to evaluate the required point kernel solution and maintain benchmark accuracy. The EVENT finite element transport algorithm is compared to the point source solution as the first step of embedding the benchmark directly with the EVENT code. Additional code comparisons will be presented. (authors)
Bak, Brian Lau Verndal; Lindgaard, Esben; Turon, A.;
2015-01-01
A novel computational method for simulating fatigue-driven delamination cracks in composite laminated structures under cyclic loading based on a cohesive zone model [2] and new benchmark studies with four other comparable methods [3-6] are presented. The benchmark studies describe and compare the...... traction-separation response in the cohesive zone and the transition phase from quasistatic to fatigue loading for each method. Furthermore, the accuracy of the predicted crack growth rate is studied and compared for each method. It is shown that the method described in [2] is significantly more accurate...... than the other methods [3-6]. Finally, studies are presented of the dependency and sensitivity to the change in different quasi-static material parameters and model specific fitting parameters. It is shown that all the methods except [2] rely on different parameters which are not possible to determine...
Šimkanin, Ján; Hejda, Pavel
2009-01-01
Roč. 53, č. 1 (2009), s. 99-110. ISSN 0039-3169 R&D Projects: GA AV ČR IAA300120704 Institutional research plan: CEZ:AV0Z30120515 Keywords : hydromagnetic dynamos * control volume method * numerical dynamo benchmark * efficiency of parallelization Subject RIV: DE - Earth Magnetism, Geodesy, Geography Impact factor: 1.000, year: 2009
Sebastian Schunert; Yousry Y. Azmy
2011-05-01
The quantification of the discretization error associated with the spatial discretization of the Discrete Ordinate(DO) equations in multidimensional Cartesian geometries is the central problem in error estimation of spatial discretization schemes for transport theory as well as computer code verification. Traditionally fine mesh solutions are employed as reference, because analytical solutions only exist in the absence of scattering. This approach, however, is inadequate when the discretization error associated with the reference solution is not small compared to the discretization error associated with the mesh under scrutiny. Typically this situation occurs if the mesh of interest is only a couple of refinement levels away from the reference solution or if the order of accuracy of the numerical method (and hence the reference as well) is lower than expected. In this work we present a Method of Manufactured Solutions (MMS) benchmark suite with variable order of smoothness of the underlying exact solution for two-dimensional Cartesian geometries which provides analytical solutions aver- aged over arbitrary orthogonal meshes for scattering and non-scattering media. It should be emphasized that the developed MMS benchmark suite first eliminates the aforementioned limitation of fine mesh reference solutions since it secures knowledge of the underlying true solution and second that it allows for an arbitrary order of smoothness of the underlying ex- act solution. The latter is of importance because even for smooth parameters and boundary conditions the DO equations can feature exact solution with limited smoothness. Moreover, the degree of smoothness is crucial for both the order of accuracy and the magnitude of the discretization error for any spatial discretization scheme.
The quantification of the discretization error associated with the spatial discretization of the Discrete Ordinate(DO) equations in multidimensional Cartesian geometries is the central problem in error estimation of spatial discretization schemes for transport theory as well as computer code verification. Traditionally ne mesh solutions are employed as reference, because analytical solutions only exist in the absence of scattering. This approach, however, is inadequate when the discretization error associated with the reference solution is not small compared to the discretization error associated with the mesh under scrutiny. Typically this situation occurs if the mesh of interest is only a couple of refinement levels away from the reference solution or if the order of accuracy of the numerical method (and hence the reference as well) is lower than expected. In this work we present a Method of Manufactured Solutions (MMS) benchmark suite with variable order of smoothness of the underlying exact solution for two-dimensional Cartesian geometries which provides analytical solutions aver- aged over arbitrary orthogonal meshes for scattering and non-scattering media. It should be emphasized that the developed MMS benchmark suite rst eliminates the aforementioned limitation of ne mesh reference solutions since it secures knowledge of the underlying true solution and second that it allows for an arbitrary order of smoothness of the underlying ex- act solution. The latter is of importance because even for smooth parameters and boundary conditions the DO equations can feature exact solution with limited smoothness. Moreover, the degree of smoothness is crucial for both the order of accuracy and the magnitude of the discretization error for any spatial discretization scheme. (author)
The KMAT: Benchmarking Knowledge Management.
de Jager, Martha
Provides an overview of knowledge management and benchmarking, including the benefits and methods of benchmarking (e.g., competitive, cooperative, collaborative, and internal benchmarking). Arthur Andersen's KMAT (Knowledge Management Assessment Tool) is described. The KMAT is a collaborative benchmarking tool, designed to help organizations make…
Shutdown Dose Rate Analysis Using the Multi-Step CADIS Method
The Multi-Step Consistent Adjoint Driven Importance Sampling (MS-CADIS) hybrid Monte Carlo (MC)/deterministic radiation transport method was proposed to speed up the shutdown dose rate (SDDR) neutron MC calculation using an importance function that represents the neutron importance to the final SDDR. This work applied the MS-CADIS method to the ITER SDDR benchmark problem. The MS-CADIS method was also used to calculate the SDDR uncertainty resulting from uncertainties in the MC neutron calculation and to determine the degree of undersampling in SDDR calculations because of the limited ability of the MC method to tally detailed spatial and energy distributions. The analysis that used the ITER benchmark problem compared the efficiency of the MS-CADIS method to the traditional approach of using global MC variance reduction techniques for speeding up SDDR neutron MC calculation. Compared to the standard Forward-Weighted-CADIS (FW-CADIS) method, the MS-CADIS method increased the efficiency of the SDDR neutron MC calculation by 69%. The MS-CADIS method also increased the fraction of nonzero scoring mesh tally elements in the space-energy regions of high importance to the final SDDR
The Joint Research Centre of the European Commission has organised a Human Factors Reliability Benchmark Exercise (HF-RBE) with the aim to assess the state-of-the-art in human reliability modelling and assessment. Fifteen teams from eleven countries, representing industry, utilities, licensing organisations and research institutes, participate in the HF-RBE, which is organised around two study cases: (1) analysis of routine functional test and maintenance procedures, with the aim to assess the probability of test-induced failures, the probability of failures to remain unrevealed, and the potential to initiate transients because of errors performed in the test; and (2) analysis of human actions during an operational transient, with the aim to assess the probability that the operators will correctly diagnose the malfunctions and take proper corrective action. The paper briefly reports how the HF-RBE was structured and gives an overview of the methods that have been used for predicting human reliability in both study cases. The experience in applying these methods is discussed and the results obtained are compared. (author)
Study on shielding design methods for fusion reactors using benchmark experiments
In this study, a series of engineering benchmark experiments have been performed on the critical issues of shielding designs for DT fusion reactors. Based on the experiments, calculational accuracy of shielding design methods used in the ITER conceptual design, discrete ordinates code DOT3.5 and Monte Carlo code MCNP-3, have been estimated, and difficulties on calculational methods have been revealed. Furthermore, the feasibility for shielding designs have been examined with respect to a discrete ordinates code system BERMUDA which is developed to attain high accuracy of calculation. As for neutron streaming in an off-set narrow gap experimental assembly made of stainless steel, DOT3.5 and MCNP-3 codes reproduced the experiments within the accuracy presumed in the ITER conceptual design. DOT3.5 and MCNP-3 codes are available for secondary γ ray nuclear heating in a type 316L stainless steel assembly and neutron streaming in a multi-layered slit experimental assembly, respectively. Moreover, BERMUDA-2DN code is an effective tool as to neutron deep penetration in a type 316L stainless steel assembly and the neutron behavior in a large cavity experimental assembly. (author)
Study on shielding design methods for fusion reactors using benchmark experiments
Nakashima, Hiroshi (Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment)
1992-02-01
In this study, a series of engineering benchmark experiments have been performed on the critical issues of shielding designs for DT fusion reactors. Based on the experiments, calculational accuracy of shielding design methods used in the ITER conceptual design, discrete ordinates code DOT3.5 and Monte Carlo code MCNP-3, have been estimated, and difficulties on calculational methods have been revealed. Furthermore, the feasibility for shielding designs have been examined with respect to a discrete ordinates code system BERMUDA which is developed to attain high accuracy of calculation. As for neutron streaming in an off-set narrow gap experimental assembly made of stainless steel, DOT3.5 and MCNP-3 codes reproduced the experiments within the accuracy presumed in the ITER conceptual design. DOT3.5 and MCNP-3 codes are available for secondary {gamma} ray nuclear heating in a type 316L stainless steel assembly and neutron streaming in a multi-layered slit experimental assembly, respectively. Moreover, BERMUDA-2DN code is an effective tool as to neutron deep penetration in a type 316L stainless steel assembly and the neutron behavior in a large cavity experimental assembly. (author).
Application of a heterogeneous coarse mesh transport method to a MOX benchmark problem
Recently, a coarse mesh transport method was extended to 2-D geometry by coupling Monte Carlo response function calculations to deterministic sweeps for converging the partial currents on the coarse mesh boundaries. More extensive testing of the new method has been performed with the previously published continuous energy benchmark problem, as well as the multigroup C5G7 MOX problem. The effect of the partial current representation in space, for the MOX problem, and in space and energy, for the smaller problem, on the accuracy of the results is the focus of this paper. For the MOX problem, accurate results were obtained with the assumption that the partial currents are piecewise-constant on four spatial segments per coarse mesh interface. Specifically, the errors in the system multiplication factor and the average absolute pin power were 0.12% and 0.68%, respectively. The root mean square and the mean relative pin power errors were 1.15% and 0.56%, respectively. (authors)
Re-analysis of Alaskan benchmark glacier mass-balance data using the index method
Van Beusekom, Ashely E.; O'Nell, Shad R.; March, Rod S.; Sass, Louis C.; Cox, Leif H.
2010-01-01
At Gulkana and Wolverine Glaciers, designated the Alaskan benchmark glaciers, we re-analyzed and re-computed the mass balance time series from 1966 to 2009 to accomplish our goal of making more robust time series. Each glacier's data record was analyzed with the same methods. For surface processes, we estimated missing information with an improved degree-day model. Degree-day models predict ablation from the sum of daily mean temperatures and an empirical degree-day factor. We modernized the traditional degree-day model and derived new degree-day factors in an effort to match the balance time series more closely. We estimated missing yearly-site data with a new balance gradient method. These efforts showed that an additional step needed to be taken at Wolverine Glacier to adjust for non-representative index sites. As with the previously calculated mass balances, the re-analyzed balances showed a continuing trend of mass loss. We noted that the time series, and thus our estimate of the cumulative mass loss over the period of record, was very sensitive to the data input, and suggest the need to add data-collection sites and modernize our weather stations.
Benchmarking as a method of assessment of region’s intellectual potential
P.G. Pererva
2015-12-01
innovative development of regions. It is asked to assess the intellectual potential of the region using benchmarking technology. Evaluation of potential IR regions and its impact on development remains a meaningful and necessary in terms more adequate to the real practice of development of economic systems diagrams, algorithms, models and methods of analysis, forecasting and designing the future. In the recognized global market leaders the largest international image companies constantly and consistently there is an active innovative process where in parallel, as the operational policy evolutionary upgrade policy and strategic developments of radical innovations with significant period from idea to its realization. This second direction uses benchmarking as the research methodology with useful results. Conclusions and directions of further researches. A scientific review of the nature and methods of the use of intellectual capital at this stage of economic development of Ukraine has its own challenges, among which are such areas of study as the structure of this capital, capacity assessment at the level of enterprises, regions and the country as a whole, identify the key factors influencing intellectual capital, evaluation of investment efficiency in support of it. Further studies relate to such unsolved issues as the relationship of the IR with the mechanisms of management of innovative development, efficiency of investments in IR, the need to ensure the intellectual development, assessment of intellectual potential of subjects of economic activity.
Weterings, Peter J J M; Loftus, Christine; Lewandowski, Thomas A
2016-08-22
Potential adverse effects of chemical substances on thyroid function are usually examined by measuring serum levels of thyroid-related hormones. Instead, recent risk assessments for thyroid-active chemicals have focussed on iodine uptake inhibition, an upstream event that by itself is not necessarily adverse. Establishing the extent of uptake inhibition that can be considered de minimis, the chosen benchmark response (BMR), is therefore critical. The BMR values selected by two international advisory bodies were 5% and 50%, a difference that had correspondingly large impacts on the estimated risks and health-based guidance values that were established. Potential treatment-related inhibition of thyroidal iodine uptake is usually determined by comparing thyroidal uptake of radioactive iodine (RAIU) during treatment with a single pre-treatment RAIU value. In the present study it is demonstrated that the physiological intra-individual variation in iodine uptake is much larger than 5%. Consequently, in-treatment RAIU values, expressed as a percentage of the pre-treatment value, have an inherent variation, that needs to be considered when conducting dose-response analyses. Based on statistical and biological considerations, a BMR of 20% is proposed for benchmark dose analysis of human thyroidal iodine uptake data, to take the inherent variation in relative RAIU data into account. Implications for the tolerated daily intakes for perchlorate and chlorate, recently established by the European Food Safety Authority (EFSA), are discussed. PMID:27268963
Benchmarking local public libraries using non-parametric frontier methods: A case study of Flanders
Stroobants, Jesse; Bouckaert, Geert
2014-01-01
Being faced with significant budget cuts and continual pressure to do more with less, issues of efficiency and effectiveness became a priority for local governments in most countries. In this context, benchmarking is widely acknowledged as a powerful tool for local performance management and for improving the efficiency and effectiveness of local service delivery. Performance benchmarking exercises are regularly carried out using ratio analysis, by comparing single indicators. Since this appr...
Anomaly detection in OECD Benchmark data using co-variance methods
OECD Benchmark data distributed for the SMORN VI Specialists Meeting in Reactor Noise were investigated for anomaly detection in artificially generated reactor noise benchmark analysis. It was observed that statistical features extracted from covariance matrix of frequency components are very sensitive in terms of the anomaly detection level. It is possible to create well defined alarm levels. (R.P.) 5 refs.; 23 figs.; 1 tab
Maria de Fátima Castro
2015-09-01
Full Text Available Since the last decade of the twentieth century, the healthcare industry is paying attention to the environmental impact of their buildings and therefore new regulations, policy goals, and Building Sustainability Assessment (HBSA methods are being developed and implemented. At the present, healthcare is one of the most regulated industries and it is also one of the largest consumers of energy per net floor area. To assess the sustainability of healthcare buildings it is necessary to establish a set of benchmarks related with their life-cycle performance. They are both essential to rate the sustainability of a project and to support designers and other stakeholders in the process of designing and operating a sustainable building, by allowing the comparison to be made between a project and the conventional and best market practices. This research is focused on the methodology to set the benchmarks for resources consumption, waste production, operation costs and potential environmental impacts related to the operational phase of healthcare buildings. It aims at contributing to the reduction of the subjectivity found in the definition of the benchmarks used in Building Sustainability Assessment (BSA methods, and it is applied in the Portuguese context. These benchmarks will be used in the development of a Portuguese HBSA method.
Deterministic absorbed dose estimation in computed tomography using a discrete ordinates method
Norris, Edward T.; Liu, Xin, E-mail: xinliu@mst.edu [Nuclear Engineering, Missouri University of Science and Technology, Rolla, Missouri 65409 (United States); Hsieh, Jiang [GE Healthcare, Waukesha, Wisconsin 53188 (United States)
2015-07-15
Purpose: Organ dose estimation for a patient undergoing computed tomography (CT) scanning is very important. Although Monte Carlo methods are considered gold-standard in patient dose estimation, the computation time required is formidable for routine clinical calculations. Here, the authors instigate a deterministic method for estimating an absorbed dose more efficiently. Methods: Compared with current Monte Carlo methods, a more efficient approach to estimating the absorbed dose is to solve the linear Boltzmann equation numerically. In this study, an axial CT scan was modeled with a software package, Denovo, which solved the linear Boltzmann equation using the discrete ordinates method. The CT scanning configuration included 16 x-ray source positions, beam collimators, flat filters, and bowtie filters. The phantom was the standard 32 cm CT dose index (CTDI) phantom. Four different Denovo simulations were performed with different simulation parameters, including the number of quadrature sets and the order of Legendre polynomial expansions. A Monte Carlo simulation was also performed for benchmarking the Denovo simulations. A quantitative comparison was made of the simulation results obtained by the Denovo and the Monte Carlo methods. Results: The difference in the simulation results of the discrete ordinates method and those of the Monte Carlo methods was found to be small, with a root-mean-square difference of around 2.4%. It was found that the discrete ordinates method, with a higher order of Legendre polynomial expansions, underestimated the absorbed dose near the center of the phantom (i.e., low dose region). Simulations of the quadrature set 8 and the first order of the Legendre polynomial expansions proved to be the most efficient computation method in the authors’ study. The single-thread computation time of the deterministic simulation of the quadrature set 8 and the first order of the Legendre polynomial expansions was 21 min on a personal computer
Deterministic absorbed dose estimation in computed tomography using a discrete ordinates method
Purpose: Organ dose estimation for a patient undergoing computed tomography (CT) scanning is very important. Although Monte Carlo methods are considered gold-standard in patient dose estimation, the computation time required is formidable for routine clinical calculations. Here, the authors instigate a deterministic method for estimating an absorbed dose more efficiently. Methods: Compared with current Monte Carlo methods, a more efficient approach to estimating the absorbed dose is to solve the linear Boltzmann equation numerically. In this study, an axial CT scan was modeled with a software package, Denovo, which solved the linear Boltzmann equation using the discrete ordinates method. The CT scanning configuration included 16 x-ray source positions, beam collimators, flat filters, and bowtie filters. The phantom was the standard 32 cm CT dose index (CTDI) phantom. Four different Denovo simulations were performed with different simulation parameters, including the number of quadrature sets and the order of Legendre polynomial expansions. A Monte Carlo simulation was also performed for benchmarking the Denovo simulations. A quantitative comparison was made of the simulation results obtained by the Denovo and the Monte Carlo methods. Results: The difference in the simulation results of the discrete ordinates method and those of the Monte Carlo methods was found to be small, with a root-mean-square difference of around 2.4%. It was found that the discrete ordinates method, with a higher order of Legendre polynomial expansions, underestimated the absorbed dose near the center of the phantom (i.e., low dose region). Simulations of the quadrature set 8 and the first order of the Legendre polynomial expansions proved to be the most efficient computation method in the authors’ study. The single-thread computation time of the deterministic simulation of the quadrature set 8 and the first order of the Legendre polynomial expansions was 21 min on a personal computer
Measurement of neutron flux spectra in a tungsten benchmark by neutron foil activation method
The nuclear designs of fusion devices such as ITER (international thermonuclear experimental reactor), which is an experimental fusion reactor based on the ''tokamak'' concept, rely on the results of neutron physical calculations. These depend on the knowledge of the neutron and photon flux spectra which is particularly important because it permits to anticipate the possible answers of the whole structure to phenomena such as nuclear heating, tritium breeding, atomic displacements, radiation shielding, power generation and material activation. The flux spectra can be calculated with transport codes, but validating measurements are also required. An important constituent of structural materials and divertor areas of fusion reactors is tungsten. This thesis deals with the measurement of the neutron fluence and neutron energy spectrum in a tungsten assembly by means of multiple foil neutron activation technique. In order to check and qualify the experimental tools and the codes to be used in the tungsten benchmark experiment, test measurements in the D-T and D-D neutron fields of the neutron generator at Technische Universitaet Dresden were performed. The characteristics of the D-D and D-T reactions, used to produce monoenergetic neutrons, together with the selection of activation reactions suitable for fusion applications and details of the activation measurements are presented. Corrections related to the neutron irradiation process and those to the sample counting process are discussed, too. The neutron fluence and its energy distribution in a tungsten benchmark, irradiated at the frascati neutron generator with 14 MeV neutrons produced by the T(d,n)4He reaction, are then derived from the measurements of the neutron induced γ-ray activity in the foils using the STAYNL unfolding code, based on the linear least-squares-errors method, together with the IRDF-90.2 (international reactor dosimetry file) cross section library. The differences between the neutron flux
Combining and benchmarking methods of foetal ECG extraction without maternal or scalp electrode data
Despite significant advances in adult clinical electrocardiography (ECG) signal processing techniques and the power of digital processors, the analysis of non-invasive foetal ECG (NI-FECG) is still in its infancy. The Physionet/Computing in Cardiology Challenge 2013 addresses some of these limitations by making a set of FECG data publicly available to the scientific community for evaluation of signal processing techniques. The abdominal ECG signals were first preprocessed with a band-pass filter in order to remove higher frequencies and baseline wander. A notch filter to remove power interferences at 50 Hz or 60 Hz was applied if required. The signals were then normalized before applying various source separation techniques to cancel the maternal ECG. These techniques included: template subtraction, principal/independent component analysis, extended Kalman filter and a combination of a subset of these methods (FUSE method). Foetal QRS detection was performed on all residuals using a Pan and Tompkins QRS detector and the residual channel with the smoothest foetal heart rate time series was selected. The FUSE algorithm performed better than all the individual methods on the training data set. On the validation and test sets, the best Challenge scores obtained were E1 = 179.44, E2 = 20.79, E3 = 153.07, E4 = 29.62 and E5 = 4.67 for events 1–5 respectively using the FUSE method. These were the best Challenge scores for E1 and E2 and third and second best Challenge scores for E3, E4 and E5 out of the 53 international teams that entered the Challenge. The results demonstrated that existing standard approaches for foetal heart rate estimation can be improved by fusing estimators together. We provide open source code to enable benchmarking for each of the standard approaches described. (paper)
Benchmarking DFT and semiempirical methods on structures and lattice energies for ten ice polymorphs
Brandenburg, Jan Gerit; Maas, Tilo; Grimme, Stefan
2015-03-01
Water in different phases under various external conditions is very important in bio-chemical systems and for material science at surfaces. Density functional theory methods and approximations thereof have to be tested system specifically to benchmark their accuracy regarding computed structures and interaction energies. In this study, we present and test a set of ten ice polymorphs in comparison to experimental data with mass densities ranging from 0.9 to 1.5 g/cm3 and including explicit corrections for zero-point vibrational and thermal effects. London dispersion inclusive density functionals at the generalized gradient approximation (GGA), meta-GGA, and hybrid level as well as alternative low-cost molecular orbital methods are considered. The widely used functional of Perdew, Burke and Ernzerhof (PBE) systematically overbinds and overall provides inconsistent results. All other tested methods yield reasonable to very good accuracy. BLYP-D3atm gives excellent results with mean absolute errors for the lattice energy below 1 kcal/mol (7% relative deviation). The corresponding optimized structures are very accurate with mean absolute relative deviations (MARDs) from the reference unit cell volume below 1%. The impact of Axilrod-Teller-Muto (atm) type three-body dispersion and of non-local Fock exchange is small but on average their inclusion improves the results. While the density functional tight-binding model DFTB3-D3 performs well for low density phases, it does not yield good high density structures. As low-cost alternative for structure related problems, we recommend the recently introduced minimal basis Hartree-Fock method HF-3c with a MARD of about 3%.
Behar, Joachim; Oster, Julien; Clifford, Gari D
2014-08-01
Despite significant advances in adult clinical electrocardiography (ECG) signal processing techniques and the power of digital processors, the analysis of non-invasive foetal ECG (NI-FECG) is still in its infancy. The Physionet/Computing in Cardiology Challenge 2013 addresses some of these limitations by making a set of FECG data publicly available to the scientific community for evaluation of signal processing techniques.The abdominal ECG signals were first preprocessed with a band-pass filter in order to remove higher frequencies and baseline wander. A notch filter to remove power interferences at 50 Hz or 60 Hz was applied if required. The signals were then normalized before applying various source separation techniques to cancel the maternal ECG. These techniques included: template subtraction, principal/independent component analysis, extended Kalman filter and a combination of a subset of these methods (FUSE method). Foetal QRS detection was performed on all residuals using a Pan and Tompkins QRS detector and the residual channel with the smoothest foetal heart rate time series was selected.The FUSE algorithm performed better than all the individual methods on the training data set. On the validation and test sets, the best Challenge scores obtained were E1 = 179.44, E2 = 20.79, E3 = 153.07, E4 = 29.62 and E5 = 4.67 for events 1-5 respectively using the FUSE method. These were the best Challenge scores for E1 and E2 and third and second best Challenge scores for E3, E4 and E5 out of the 53 international teams that entered the Challenge. The results demonstrated that existing standard approaches for foetal heart rate estimation can be improved by fusing estimators together. We provide open source code to enable benchmarking for each of the standard approaches described. PMID:25069410
Monitoring methods for skin dose in interventional radiology
Abdulhamid Chaikh
2015-03-01
Full Text Available Interventional radiology makes an increasing use of X-ray for diagnostic and therapeutic procedures. The dose received by the patient sometime exceeds the threshold value of deterministic effects, and this requires monitoring of the dose delivered to the patients. Delivered dose could be assessed through either direct or indirect methods. The direct methods use dosimeters that are placed on the skin during the procedure, whereas, the indirect methods are based on measured quantities derived from the equipment itself. Each method has its own limitations; however, the main concern is the ability to measure the dose more accurately due to complexity of the anatomical structures of the patient and the variable course of each procedure. This review article summarizes the principle and main advantages and disadvantages of each method. A comparison of the performances of each method for interventional fluoroscopy and radiography in its ability to monitor the patient’s skin dose is provided.
The objective of the Post-BEMUSE Reflood Model Input Uncertainty Methods (PREMIUM) benchmark is to progress on the issue of the quantification of the uncertainty of the physical models in system thermal-hydraulic codes by considering a concrete case: the physical models involved in the prediction of core reflooding. The PREMIUM benchmark consists of five phases. This report presents the results of Phase II dedicated to the identification of the uncertain code parameters associated with physical models used in the simulation of reflooding conditions. This identification is made on the basis of the Test 216 of the FEBA/SEFLEX programme according to the following steps: - identification of influential phenomena; - identification of the associated physical models and parameters, depending on the used code; - quantification of the variation range of identified input parameters through a series of sensitivity calculations. A procedure for the identification of potentially influential code input parameters has been set up in the Specifications of Phase II of PREMIUM benchmark. A set of quantitative criteria has been as well proposed for the identification of influential IP and their respective variation range. Thirteen participating organisations, using 8 different codes (7 system thermal-hydraulic codes and 1 sub-channel module of a system thermal-hydraulic code) submitted Phase II results. The base case calculations show spread in predicted cladding temperatures and quench front propagation that has been characterized. All the participants, except one, predict a too fast quench front progression. Besides, the cladding temperature time trends obtained by almost all the participants show oscillatory behaviour which may have numeric origins. Adopted criteria for identification of influential input parameters differ between the participants: some organisations used the set of criteria proposed in Specifications 'as is', some modified the quantitative thresholds
Walsh, Jonathan A.; Forget, Benoit; Smith, Kord S.; Brown, Forrest B.
2016-03-01
In this work we describe the development and application of computational methods for processing neutron cross section data in the unresolved resonance region (URR). These methods are integrated with a continuous-energy Monte Carlo neutron transport code, thereby enabling their use in high-fidelity analyses. Enhanced understanding of the effects of URR evaluation representations on calculated results is then obtained through utilization of the methods in Monte Carlo integral benchmark simulations of fast spectrum critical assemblies. First, we present a so-called on-the-fly (OTF) method for calculating and Doppler broadening URR cross sections. This method proceeds directly from ENDF-6 average unresolved resonance parameters and, thus, eliminates any need for a probability table generation pre-processing step in which tables are constructed at several energies for all desired temperatures. Significant memory reduction may be realized with the OTF method relative to a probability table treatment if many temperatures are needed. Next, we examine the effects of using a multi-level resonance formalism for resonance reconstruction in the URR. A comparison of results obtained by using the same stochastically-generated realization of resonance parameters in both the single-level Breit-Wigner (SLBW) and multi-level Breit-Wigner (MLBW) formalisms allows for the quantification of level-level interference effects on integrated tallies such as keff and energy group reaction rates. Though, as is well-known, cross section values at any given incident energy may differ significantly between single-level and multi-level formulations, the observed effects on integral results are minimal in this investigation. Finally, we demonstrate the calculation of true expected values, and the statistical spread of those values, through independent Monte Carlo simulations, each using an independent realization of URR cross section structure throughout. It is observed that both probability table
Walsh Jonathan A.
2016-01-01
Full Text Available In this work we describe the development and application of computational methods for processing neutron cross section data in the unresolved resonance region (URR. These methods are integrated with a continuous-energy Monte Carlo neutron transport code, thereby enabling their use in high-fidelity analyses. Enhanced understanding of the effects of URR evaluation representations on calculated results is then obtained through utilization of the methods in Monte Carlo integral benchmark simulations of fast spectrum critical assemblies. First, we present a so-called on-the-fly (OTF method for calculating and Doppler broadening URR cross sections. This method proceeds directly from ENDF-6 average unresolved resonance parameters and, thus, eliminates any need for a probability table generation pre-processing step in which tables are constructed at several energies for all desired temperatures. Significant memory reduction may be realized with the OTF method relative to a probability table treatment if many temperatures are needed. Next, we examine the effects of using a multi-level resonance formalism for resonance reconstruction in the URR. A comparison of results obtained by using the same stochastically-generated realization of resonance parameters in both the single-level Breit-Wigner (SLBW and multi-level Breit-Wigner (MLBW formalisms allows for the quantification of level-level interference effects on integrated tallies such as keff and energy group reaction rates. Though, as is well-known, cross section values at any given incident energy may differ significantly between single-level and multi-level formulations, the observed effects on integral results are minimal in this investigation. Finally, we demonstrate the calculation of true expected values, and the statistical spread of those values, through independent Monte Carlo simulations, each using an independent realization of URR cross section structure throughout. It is observed that both
A method for classifying benchmark results of criticality calculations according to similarity was proposed in this paper. After formulation of the method utilizing correlation coefficients, it was applied to burnup credit criticality benchmarks Phase III-A and II-A, which were conducted by the Expert Group on Burnup Credit Criticality Safety under auspices of the Nuclear Energy Agency of the Organisation for Economic Cooperation and Development (OECD/NEA). Phase III-A benchmark was a series of criticality calculations for irradiated Boiling Water Reactor (BWR) fuel assemblies, whereas Phase II-A benchmark was a suite of criticality calculations for irradiated Pressurized Water Reactor (PWR) fuel pins. These benchmark problems and their results were summarized. The correlation coefficients were calculated and sets of benchmark calculation results were classified according to the criterion that the values of the correlation coefficients were no less than 0.15 for Phase III-A and 0.10 for Phase II-A benchmarks. When a couple of benchmark calculation results belonged to the same group, one calculation result was found predictable from the other. An example was shown for each of the Benchmarks. While the evaluated nuclear data seemed the main factor for the classification, further investigations were required for finding other factors. (author)
Ford, Donald J.
1993-01-01
Discusses benchmarking, the continuous process of measuring one's products, services, and practices against those recognized as leaders in that field to identify areas for improvement. Examines ways in which benchmarking can benefit human resources functions. (JOW)
Andrea Furková
2007-06-01
Full Text Available This paper explores the aplication of parametric and non-parametric benchmarking methods in measuring cost efficiency of Slovak and Czech electricity distribution companies. We compare the relative cost efficiency of Slovak and Czech distribution companies using two benchmarking methods: the non-parametric Data Envelopment Analysis (DEA and the Stochastic Frontier Analysis (SFA as the parametric approach. The first part of analysis was based on DEA models. Traditional cross-section CCR and BCC model were modified to cost efficiency estimation. In further analysis we focus on two versions of stochastic frontier cost functioin using panel data: MLE model and GLS model. These models have been applied to an unbalanced panel of 11 (Slovakia 3 and Czech Republic 8 regional electricity distribution utilities over a period from 2000 to 2004. The differences in estimated scores, parameters and ranking of utilities were analyzed. We observed significant differences between parametric methods and DEA approach.
Campbell, Akiko
2016-01-01
Benchmarking is a process of comparison between performance characteristics of separate, often competing organizations intended to enable each participant to improve its own performance in the marketplace (Kay, 2007). Benchmarking sets organizations’ performance standards based on what “others” are achieving. Most widely adopted approaches are quantitative and reveal numerical performance gaps where organizations lag behind benchmarks; however, quantitative benchmarking on its own rarely yi...
The Hanford Dose Overview Program is a Hanford site-wide service established to provide a method of assuring the consistency of Hanford-related environmental dose assessments. This document serves as a guide to the Hanford contractors for obtaining or performing Hanford-related environmental dose calculations. The program serves as a focal point for Hanford environmental dose calculation activities and provides a number of services for Hanford contractors involved in calculation of environmental doses. Site specific input data and assumptions have been compiled and are maintained for use by the contractors in calculating Hanford environmental doses. The data and assumptions, to the extent they apply, should be used in Hanford calculations. These data are not all inclusive and will be modified should additional or more appropriate information become available
A highly heterogeneous 3D PWR core benchmark: deterministic and Monte Carlo method comparison
Physical analyses of the LWR potential performances with regards to the fuel utilization require an important part of the work dedicated to the validation of the deterministic models used for theses analyses. Advances in both codes and computer technology give the opportunity to perform the validation of these models on complex 3D core configurations closed to the physical situations encountered (both steady-state and transient configurations). In this paper, we used the Monte Carlo Transport code TRIPOLI-4 to describe a whole 3D large-scale and highly-heterogeneous LWR core. The aim of this study is to validate the deterministic CRONOS2 code to Monte Carlo code TRIPOLI-4 in a relevant PWR core configuration. As a consequence, a 3D pin by pin model with a consistent number of volumes (4.3 millions) and media (around 23.000) is established to precisely characterize the core at equilibrium cycle, namely using a refined burn-up and moderator density maps. The configuration selected for this analysis is a very heterogeneous PWR high conversion core with fissile (MOX fuel) and fertile zones (depleted uranium). Furthermore, a tight pitch lattice is selected (to increase conversion of 238U in 239Pu) that leads to harder neutron spectrum compared to standard PWR assembly. This benchmark shows 2 main points. First, independent replicas are an appropriate method to achieve a fare variance estimation when dominance ratio is near 1. Secondly, the diffusion operator with 2 energy groups gives satisfactory results compared to TRIPOLI-4 even with a highly heterogeneous neutron flux map and an harder spectrum
Dosimetric validation of Acuros XB with Monte Carlo methods for photon dose calculations
Purpose: The dosimetric accuracy of the recently released Acuros XB advanced dose calculation algorithm (Varian Medical Systems, Palo Alto, CA) is investigated for single radiation fields incident on homogeneous and heterogeneous geometries, and a comparison is made to the analytical anisotropic algorithm (AAA). Methods: Ion chamber measurements for the 6 and 18 MV beams within a range of field sizes (from 4.0x4.0 to 30.0x30.0 cm2) are used to validate Acuros XB dose calculations within a unit density phantom. The dosimetric accuracy of Acuros XB in the presence of lung, low-density lung, air, and bone is determined using BEAMnrc/DOSXYZnrc calculations as a benchmark. Calculations using the AAA are included for reference to a current superposition/convolution standard. Results: Basic open field tests in a homogeneous phantom reveal an Acuros XB agreement with measurement to within ±1.9% in the inner field region for all field sizes and energies. Calculations on a heterogeneous interface phantom were found to agree with Monte Carlo calculations to within ±2.0%(σMC=0.8%) in lung (ρ=0.24 g cm-3) and within ±2.9%(σMC=0.8%) in low-density lung (ρ=0.1 g cm-3). In comparison, differences of up to 10.2% and 17.5% in lung and low-density lung were observed in the equivalent AAA calculations. Acuros XB dose calculations performed on a phantom containing an air cavity (ρ=0.001 g cm-3) were found to be within the range of ±1.5% to ±4.5% of the BEAMnrc/DOSXYZnrc calculated benchmark (σMC=0.8%) in the tissue above and below the air cavity. A comparison of Acuros XB dose calculations performed on a lung CT dataset with a BEAMnrc/DOSXYZnrc benchmark shows agreement within ±2%/2mm and indicates that the remaining differences are primarily a result of differences in physical material assignments within a CT dataset. Conclusions: By considering the fundamental particle interactions in matter based on theoretical interaction cross sections, the Acuros XB algorithm is
Benchmark of Machine Learning Methods for Classification of a SENTINEL-2 Image
Pirotti, F.; Sunar, F.; Piragnolo, M.
2016-06-01
Thanks to mainly ESA and USGS, a large bulk of free images of the Earth is readily available nowadays. One of the main goals of remote sensing is to label images according to a set of semantic categories, i.e. image classification. This is a very challenging issue since land cover of a specific class may present a large spatial and spectral variability and objects may appear at different scales and orientations. In this study, we report the results of benchmarking 9 machine learning algorithms tested for accuracy and speed in training and classification of land-cover classes in a Sentinel-2 dataset. The following machine learning methods (MLM) have been tested: linear discriminant analysis, k-nearest neighbour, random forests, support vector machines, multi layered perceptron, multi layered perceptron ensemble, ctree, boosting, logarithmic regression. The validation is carried out using a control dataset which consists of an independent classification in 11 land-cover classes of an area about 60 km2, obtained by manual visual interpretation of high resolution images (20 cm ground sampling distance) by experts. In this study five out of the eleven classes are used since the others have too few samples (pixels) for testing and validating subsets. The classes used are the following: (i) urban (ii) sowable areas (iii) water (iv) tree plantations (v) grasslands. Validation is carried out using three different approaches: (i) using pixels from the training dataset (train), (ii) using pixels from the training dataset and applying cross-validation with the k-fold method (kfold) and (iii) using all pixels from the control dataset. Five accuracy indices are calculated for the comparison between the values predicted with each model and control values over three sets of data: the training dataset (train), the whole control dataset (full) and with k-fold cross-validation (kfold) with ten folds. Results from validation of predictions of the whole dataset (full) show the random
This document serves as a guide to Hanford contractors for obtaining or performing Hanford-related environmental dose calculations. Because environmental dose estimation techniques are state-of-the-art and are continually evolving, the data and standard methods presented herein will require periodic revision. This document is scheduled to be updated annually, but actual changes to the program will be made more frequently if required. For this reason, PNL's Occupational and Environmental Protection Department should be contacted before any Hanford-related environmental dose calculation is performed. This revision of the Hanford Dose Overview Program Report primarily reflects changes made to the data and models used in calculating atmospheric dispersion of airborne effluents at Hanford. The modified data and models are described in detail. In addition, discussions of dose calculation methods and the review of calculation results have been expanded to provide more explicit guidance to the Hanford contractors. 19 references, 30 tables
Benchmarking passive seismic methods of estimating the depth of velocity interfaces down to ~300 m
Czarnota, Karol; Gorbatov, Alexei
2016-04-01
In shallow passive seismology it is generally accepted that the spatial autocorrelation (SPAC) method is more robust than the horizontal-over-vertical spectral ratio (HVSR) method at resolving the depth to surface-wave velocity (Vs) interfaces. Here we present results of a field test of these two methods over ten drill sites in western Victoria, Australia. The target interface is the base of Cenozoic unconsolidated to semi-consolidated clastic and/or carbonate sediments of the Murray Basin, which overlie Paleozoic crystalline rocks. Depths of this interface intersected in drill holes are between ~27 m and ~300 m. Seismometers were deployed in a three-arm spiral array, with a radius of 250 m, consisting of 13 Trillium Compact 120 s broadband instruments. Data were acquired at each site for 7-21 hours. The Vs architecture beneath each site was determined through nonlinear inversion of HVSR and SPAC data using the neighbourhood algorithm, implemented in the geopsy modelling package (Wathelet, 2005, GRL v35). The HVSR technique yielded depth estimates of the target interface (Vs > 1000 m/s) generally within ±20% error. Successful estimates were even obtained at a site with an inverted velocity profile, where Quaternary basalts overlie Neogene sediments which in turn overlie the target basement. Half of the SPAC estimates showed significantly higher errors than were obtained using HVSR. Joint inversion provided the most reliable estimates but was unstable at three sites. We attribute the surprising success of HVSR over SPAC to a low content of transient signals within the seismic record caused by low levels of anthropogenic noise at the benchmark sites. At a few sites SPAC waveform curves showed clear overtones suggesting that more reliable SPAC estimates may be obtained utilizing a multi-modal inversion. Nevertheless, our study indicates that reliable basin thickness estimates in the Australian conditions tested can be obtained utilizing HVSR data from a single
The US EPA’s N-Methyl Carbamate (NMC) Cumulative Risk assessment was based on the effect on acetylcholine esterase (AChE) activity of exposure to 10 NMC pesticides through dietary, drinking water, and residential exposures, assuming the effects of joint exposure to NMCs is dose-...
Benchmarked Empirical Bayes Methods in Multiplicative Area-level Models with Risk Evaluation
Ghosh, Malay; Kubokawa, Tatsuya; Kawakubo, Yuki
2014-01-01
The paper develops empirical Bayes and benchmarked empirical Bayes estimators of positive small area means under multiplicative models. A simple example will be estimation of per capita income for small areas. It is now well-understood that small area estimation needs explicit, or at least implicit use of models. One potential difficulty with model-based estimators is that the overall estimator for a larger geographical area based on (weighted) sum of the model-based estimators is not necessa...
Boldyreva, Anna
2014-01-01
This bachelor's thesis is focused on financial benchmarking of TULIPA PRAHA s.r.o. The aim of this work is to evaluate financial situation of the company, identify its strengths and weaknesses and to find out how efficient is the performance of this company in comparison with top companies within the same field by using INFA benchmarking diagnostic system of financial indicators. The theoretical part includes the characteristic of financial analysis, which financial benchmarking is based on a...
A performance geodynamo benchmark
Matsui, H.; Heien, E. M.
2014-12-01
In the last ten years, a number of numerical dynamo models have successfully represented basic characteristics of the geomagnetic field. However, to approach the parameters regime of the Earth's outer core, we need massively parallel computational environment for extremely large spatial resolutions. Local methods are expected to be more suitable for massively parallel computation because the local methods needs less data communication than the spherical harmonics expansion, but only a few groups have reported results of the dynamo benchmark using local methods (Harder and Hansen, 2005; Matsui and Okuda, 2005; Chan et al., 2007) because of the difficulty treating magnetic boundary conditions based on the local methods. On the other hand, some numerical dynamo models using spherical harmonics expansion has performed successfully with thousands of processes. We perform benchmark tests to asses various numerical methods to asses the next generation of geodynamo simulations. The purpose of the present benchmark test is to assess numerical geodynamo models on a massively parallel computational platform. To compare among many numerical methods as possible, we consider the model with the insulated magnetic boundary by Christensen et al. (2001) and with the pseudo vacuum magnetic boundary, because the pseudo vacuum boundaries are implemented easier by using the local method than the magnetic insulated boundaries. In the present study, we consider two kinds of benchmarks, so-called accuracy benchmark and performance benchmark. In the present study, we will report the results of the performance benchmark. We perform the participated dynamo models under the same computational environment (XSEDE TACC Stampede), and investigate computational performance. To simplify the problem, we choose the same model and parameter regime as the accuracy benchmark test, but perform the simulations with much finer spatial resolutions as possible to investigate computational capability (e
Methods for monitoring patient dose in dental radiology
Different types of X-ray equipment are used in dental radiology, such as intra-oral, panoramic, cephalo-metric, cone-beam computed tomography (CBCT) and multi-slice computed tomography (MSCT) units. Digital receptors have replaced film and screen-film systems and other technical developments have been made. The radiation doses arising from different types of examination are sparsely documented and often expressed in different radiation quantities. In order to allow the comparison of radiation doses using conventional techniques, i.e. intra-oral, panoramic and cephalo-metric units, with those obtained using, CBCT or MSCT techniques, the same quantities and units of dose must be used. Dose determination should be straightforward and reproducible, and data should be stored for each image and clinical examination. It is shown here that air kerma-area product (PKA) values can be used to monitor the radiation doses used in all types of dental examinations including CBCT and MSCT. However, for the CBCT and MSCT techniques, the methods for the estimation of dose must be more thoroughly investigated. The values recorded can be used to determine the diagnostic standard doses and to set diagnostic reference levels for each type of clinical examination and equipment used. It should also be possible to use these values for the estimation and documentation of organ or effective doses. (authors)
A method of estimating fetal dose during brain radiation therapy
Purpose: To develop a simple method of estimating fetal dose during brain radiation therapy. Methods and Materials: An anthropomorphic phantom was modified to simulate pregnancy at 12 and 24 weeks of gestation. Fetal dose measurements were carried out using thermoluminescent dosimeters. Brain radiation therapy was performed with two lateral and opposed fields using 6 MV photons. Three sheets of lead, 5.1-cm-thick, were positioned over the phantom's abdomen to reduce fetal exposure. Linear and nonlinear regression analysis was used to investigate the dependence of radiation dose to an unshielded and/or shielded fetus upon field size and distance from field isocenter. Results: Formulas describing the exponential decrease of radiation dose to an unshielded and/or shielded fetus with distance from the field isocenter are presented. All fitted parameters of the above formulas can be easily derived using a set of graphs showing their correlation with field size. Conclusion: This study describes a method of estimating fetal dose during brain radiotherapy, accounting for the effects of gestational age, field size and distance from field isocenter. Accurate knowledge of absorbed dose to the fetus before treatment course allows for the selection of the proper irradiation technique in order to achieve the maximum patient benefit with the least risk to the fetus
Hougaard, Jens Leth; Tvede, Mich
2002-01-01
Within a production theoretic framework, this paper considers an axiomatic approach to benchmark selection. It is shown that two simple and weak axioms; efficiency and comprehensive monotonicity characterize a natural family of benchmarks which typically becomes unique. Further axioms are added in...... order to obtain a unique selection...
Lawson, Lartey; Nielsen, Kurt
2005-01-01
We discuss individual learning by interactive benchmarking using stochastic frontier models. The interactions allow the user to tailor the performance evaluation to preferences and explore alternative improvement strategies by selecting and searching the different frontiers using directional...... suggested benchmarking tool. The study investigates how different characteristics on dairy farms influences the technical efficiency....
Recently, there has been a new word added to our vocabulary - benchmarking. Because of benchmarking, our colleagues travel to power plants all around the world and guests from the European power plants visit us. We asked Marek Niznansky from the Nuclear Safety Department in Jaslovske Bohunice NPP to explain us this term. (author)
Shielding benchmark problems, (2)
Shielding benchmark problems prepared by Working Group of Assessment of Shielding Experiments in the Research Committee on Shielding Design in the Atomic Energy Society of Japan were compiled by Shielding Laboratory in Japan Atomic Energy Research Institute. Fourteen shielding benchmark problems are presented newly in addition to twenty-one problems proposed already, for evaluating the calculational algorithm and accuracy of computer codes based on discrete ordinates method and Monte Carlo method and for evaluating the nuclear data used in codes. The present benchmark problems are principally for investigating the backscattering and the streaming of neutrons and gamma rays in two- and three-dimensional configurations. (author)
Toxicological Benchmarks for Wildlife
Sample, B.E. Opresko, D.M. Suter, G.W.
1993-01-01
-tailed hawk, osprey) (scientific names for both the mammalian and avian species are presented in Appendix B). [In this document, NOAEL refers to both dose (mg contaminant per kg animal body weight per day) and concentration (mg contaminant per kg of food or L of drinking water)]. The 20 wildlife species were chosen because they are widely distributed and provide a representative range of body sizes and diets. The chemicals are some of those that occur at U.S. Department of Energy (DOE) waste sites. The NOAEL-based benchmarks presented in this report represent values believed to be nonhazardous for the listed wildlife species; LOAEL-based benchmarks represent threshold levels at which adverse effects are likely to become evident. These benchmarks consider contaminant exposure through oral ingestion of contaminated media only. Exposure through inhalation and/or direct dermal exposure are not considered in this report.
Calculation method for gamma-dose rates from spherical puffs
The Lagrangian puff-models are widely used for calculation of the dispersion of atmospheric releases. Basic output from such models are concentrations of material in the air and on the ground. The most simple method for calculation of the gamma dose from the concentration of airborne activity is based on semi-infinite cloud model. This method is however only applicable for points far away from the release point. The exact calculation of the cloud dose using the volume integral requires significant computer time. The volume integral for the gamma dose could be approximated by using the semi-infinite cloud model combined with correction factors. This type of calculation procedure is very fast, but usually the accuracy is poor due to the fact that the same correction factors are used for all isotopes. The authors describe a more elaborate correction method. This method uses precalculated values of the gamma-dose rate as a function of the puff dispersion parameter (δp) and the distance from the puff centre for four energy groups. The release of energy for each radionuclide in each energy group has been calculated and tabulated. Based on these tables and a suitable interpolation procedure the calculation of gamma doses takes very short time and is almost independent of the number of radionuclides. (au) (7 tabs., 7 ills., 12 refs.)
A unique manual method for emergency offsite dose calculations
This paper describes a manual method developed for performance of emergency offsite dose calculations for PP and L's Susquehanna Steam Electric Station. The method is based on a three-part carbonless form. The front page guides the user through selection of the appropriate accident case and inclusion of meteorological and effluent data data. By circling the applicable accident descriptors, the user circles the dose factors on pages 2 and 3 which are then simply multiplied to yield the whole body and thyroid dose rates at the plant boundary, two, five, and ten miles. The process used to generate the worksheet is discussed, including the method used to incorporate the observed terrain effects on airflow patterns caused by the Susquehanna River Valley topography
Under the auspices of the U.S. Nuclear Regulatory Commission (NRC), Brookhaven National Laboratory (BNL) developed a comprehensive program to evaluate state-of-the-art methods and computer programs for seismic analysis of typical coupled nuclear power plant (NPP) systems with non-classical damping. In this program, four benchmark models of coupled building-piping/equipment systems with different damping characteristics were developed and analyzed by BNL for a suite of earthquakes. The BNL analysis was carried out by the Wilson-θ time domain integration method with the system-damping matrix computed using a synthesis formulation as presented in a companion paper [Nucl. Eng. Des. (2002)]. These benchmark problems were subsequently distributed to and analyzed by program participants applying their uniquely developed methods and computer programs. This paper is intended to offer a glimpse at the program, and provide a summary of major findings and principle conclusions with some representative results. The participant's analysis results established using complex modal time history methods showed good comparison with the BNL solutions, while the analyses produced with either complex-mode response spectrum methods or classical normal-mode response spectrum method, in general, produced more conservative results, when averaged over a suite of earthquakes. However, when coupling due to damping is significant, complex-mode response spectrum methods performed better than the classical normal-mode response spectrum method. Furthermore, as part of the program objectives, a parametric assessment is also presented in this paper, aimed at evaluation of the applicability of various analysis methods to problems with different dynamic characteristics unique to coupled NPP systems. It is believed that the findings and insights learned from this program will be useful in developing new acceptance criteria and providing guidance for future regulatory activities involving license
Verification and validation benchmarks.
Oberkampf, William Louis; Trucano, Timothy Guy
2007-02-01
Verification and validation (V&V) are the primary means to assess the accuracy and reliability of computational simulations. V&V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V&V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the level of
The lead cooled fast reactor benchmark Brest-300: analysis with sensitivity method
Lead cooled fast neutrons reactor is one of the most interesting candidates for the development of atomic energy. BREST-300 is a 300 MWe lead cooled fast reactor developed by the NIKIET (Russia) with a deterministic safety approach which aims to exclude reactivity margins greater than the delayed neutron fraction. The development of innovative reactors (lead coolant, nitride fuel...) and fuel cycles with new constraints such as cycle closure or actinide burning, requires new technologies and new nuclear data. In this connection, the tool and neutron data used for the calculational analysis of reactor characteristics requires thorough validation. NIKIET developed a reactor benchmark fitting of design type calculational tools (including neutron data). In the frame of technical exchanges between NIKIET and EDF (France), results of this benchmark calculation concerning the principal parameters of fuel evolution and safety parameters has been inter-compared, in order to estimate the uncertainties and validate the codes for calculations of this new kind of reactors. Different codes and cross-sections data have been used, and sensitivity studies have been performed to understand and quantify the uncertainties sources.The comparison of results shows that the difference on keff value between ERANOS code with ERALIB1 library and the reference is of the same order of magnitude than the delayed neutron fraction. On the other hand, the discrepancy is more than twice bigger if JEF2.2 library is used with ERANOS. Analysis of discrepancies in calculation results reveals that the main effect is provided by the difference of nuclear data, namely U238, Pu239 fission and capture cross sections and lead inelastic cross sections
Comparison of the dose evaluation methods for criticality accident
The improvement of the dose evaluation method for criticality accidents is important to rationalize design of the nuclear fuel cycle facilities. The source spectrums of neutron and gamma ray of a criticality accident depend on the condition of the source, its materials, moderation, density and so on. The comparison of the dose evaluation methods for a criticality accident is made. Some methods, which are combination of criticality calculation and shielding calculation, are proposed. Prompt neutron and gamma ray doses from nuclear criticality of some uranium systems have been evaluated as the Nuclear Criticality Slide Rule. The uranium metal source (unmoderated system) and the uranyl nitrate solution source (moderated system) in the rule are evaluated by some calculation methods, which are combinations of code and cross section library, as follows: (a) SAS1X (ENDF/B-IV), (b) MCNP4C (ENDF/B-VI)-ANISN (DLC23E or JSD120), (c) MCNP4C-MCNP4C (ENDF/B-VI). They have consisted of criticality calculation and shielding calculation. These calculation methods are compared about the tissue absorbed dose and the spectrums at 2 m from the source. (author)
Dose calculation of 6 MV Truebeam using Monte Carlo method
The purpose of this work is to simulate 6 MV Varian Truebeam linac dosimeter characteristics using Monte Carlo method and to investigate the availability of phase space file and the accuracy of the simulation. With the phase space file at linac window supplied by Varian to be a source, the patient-dependent part was simulated. Dose distributions in a water phantom with a 10 cm × 10 cm field were calculated and compared with measured data for validation. Evident time reduction was obtained from 4-5 h which a whole simulation cost on the same computer to around 48 minutes. Good agreement between simulations and measurements in water was observed. Dose differences are less than 3% for depth doses in build-up region and also for dose profiles inside the 80% field size, and the effect in penumbra is good. It demonstrate that the simulation using existing phase space file as the EGSnrc source is efficient. Dose differences between calculated data and measured data could meet the requirements for dose calculation. (authors)
Effect of radon measurement methods on dose estimation
Different radon measurement methods were applied in the old and new buildings of the Turkish bath of Eger (Hungary), in order to elaborate a radon measurement protocol. Besides, measurements were also made concerning the radon and thoron short-lived decay products, gamma dose from external sources and water radon. The most accurate results for dose estimation were provided by the application of personal radon meters. Estimated annual effective doses from radon and its short-lived decay products in the old and new buildings, using 0.2 and 0.1 measured equilibrium factors, were 0.83 and 0.17 mSv, respectively. The effective dose from thoron short-lived decay products was only 5 % of these values. The respective external gamma radiation effective doses were 0.19 and 0.12 mSvy-1. Effective dose from the consumption of tap water containing radon was 0.05 mSvy-1, while in the case of spring water, it was 0.14 mSvy-1. (authors)
Effect of radon measurement methods on dose estimation.
Kávási, Norbert; Kobayashi, Yosuke; Kovács, Tibor; Somlai, János; Jobbágy, Viktor; Nagy, Katalin; Deák, Eszter; Berhés, István; Bender, Tamás; Ishikawa, Tetsuo; Tokonami, Shinji; Vaupotic, Janja; Yoshinaga, Shinji; Yonehara, Hidenori
2011-05-01
Different radon measurement methods were applied in the old and new buildings of the Turkish bath of Eger, Hungary, in order to elaborate a radon measurement protocol. Besides, measurements were also made concerning the radon and thoron short-lived decay products, gamma dose from external sources and water radon. The most accurate results for dose estimation were provided by the application of personal radon meters. Estimated annual effective doses from radon and its short-lived decay products in the old and new buildings, using 0.2 and 0.1 measured equilibrium factors, were 0.83 and 0.17 mSv, respectively. The effective dose from thoron short-lived decay products was only 5 % of these values. The respective external gamma radiation effective doses were 0.19 and 0.12 mSv y(-1). Effective dose from the consumption of tap water containing radon was 0.05 mSv y(-1), while in the case of spring water, it was 0.14 mSv y(-1). PMID:21450699
Comparison of organ dosimetry methods and effective dose calculation methods for paediatric CT
Computed tomography (CT) is the single biggest ionising radiation risk from anthropogenic exposure. Reducing unnecessary carcinogenic risks from this source requires the determination of organ and tissue absorbed doses to estimate detrimental stochastic effects. In addition, effective dose can be used to assess comparative risk between exposure situations and facilitate dose reduction through optimisation. Children are at the highest risk from radiation induced carcinogenesis and therefore dosimetry for paediatric CT recipients is essential in addressing the ionising radiation health risks of CT scanning. However, there is no well-defined method in the clinical environment for routinely and reliably performing paediatric CT organ dosimetry and there are numerous methods utilised for estimating paediatric CT effective dose. Therefore, in this study, eleven computational methods for organ dosimetry and/or effective dose calculation were investigated and compared with absorbed doses measured using thermoluminescent dosemeters placed in a physical anthropomorphic phantom representing a 10 year old child. Three common clinical paediatric CT protocols including brain, chest and abdomen/pelvis examinations were evaluated. Overall, computed absorbed doses to organs and tissues fully and directly irradiated demonstrated better agreement (within approximately 50 %) with the measured absorbed doses than absorbed doses to distributed organs or to those located on the periphery of the scan volume, which showed up to a 15-fold dose variation. The disparities predominantly arose from differences in the phantoms used. While the ability to estimate CT dose is essential for risk assessment and radiation protection, identifying a simple, practical dosimetry method remains challenging.
McIntosh, Chris; McNiven, Andrea; Jaffray, David A; Purdie, Thomas G
2016-01-01
Recent works in automated radiotherapy treatment planning have used machine learning based on historical treatment plans to infer the spatial dose distribution for a novel patient directly from the planning image. We present an atlas-based approach which learns a dose prediction model for each patient (atlas) in a training database, and then learns to match novel patients to the most relevant atlases. The method creates a spatial dose objective, which specifies the desired dose-per-voxel, and therefore replaces any requirement for specifying dose-volume objectives for conveying the goals of treatment planning. A probabilistic dose distribution is inferred from the most relevant atlases, and is scalarized using a conditional random field to determine the most likely spatial distribution of dose to yield a specific dose prior (histogram) for relevant regions of interest. Voxel-based dose mimicking then converts the predicted dose distribution to a deliverable treatment plan dose distribution. In this study, we ...
Calculation method for gamma dose rates from Gaussian puffs
The Lagrangian puff models are widely used for calculation of the dispersion of releases to the atmosphere. Basic output from such models is concentration of material in the air and on the ground. The most simple method for calculation of the gamma dose from the concentration of airborne activity is based on the semi-infinite cloud model. This method is however only applicable for puffs with large dispersion parameters, i.e. for receptors far away from the release point. The exact calculation of the cloud dose using volume integral requires large computer time usually exceeding what is available for real time calculations. The volume integral for gamma doses could be approximated by using the semi-infinite cloud model combined with correction factors. This type of calculation procedure is very fast, but usually the accuracy is poor because only a few of the relevant parameters are considered. A multi-parameter method for calculation of gamma doses is described here. This method uses precalculated values of the gamma dose rates as a function of Eγ, σy, the asymmetry factor - σy/σz, the height of puff center - H and the distance from puff center Rxy. To accelerate the calculations the release energy, for each significant radionuclide in each energy group, has been calculated and tabulated. Based on the precalculated values and suitable interpolation procedure the calculation of gamma doses needs only short computing time and it is almost independent of the number of radionuclides considered. (au) 2 tabs., 15 ills., 12 refs
The IAEA-WIMS Library Update Project (WLUP) is on the end stage. The final library will be released on 2002. It is a result of research and development made by more than ten investigators during 10 years. The organization of benchmarks for testing and choosing the best set of data has been coordinated by the author of this paper. It is presented the organization, name conventions, contents and documentation of WLUP benchmarks, and an updated list of the main parameters for all cases. First, the benchmarks objectives and types are given. Then, comparisons of results from different WIMSD libraries are included. Finally it is described the program QVALUE for analysis and plot of results. Some examples are given. The set of benchmarks implemented on this work is a fundamental tool for testing new multigroup libraries. (author)
A simplified method to estimate gamma dose from atmospheric releases
Computation of gamma dose due to atmospheric releases is a tedious and time consuming process needing a large and fast computer. A simple approximate procedure is evolved which circumvents the need of a large body of precalculated data. Error analysis of the method is also presented. (author)
A systematic benchmark method for analysis and comparison of IMRT treatment planning algorithms
Tools and procedures for evaluating and comparing different intensity-modulated radiation therapy (IMRT) systems are presented. IMRT is increasingly in demand and there are numerous systems available commercially. These programs introduce significantly different software to dosimetrists and physicists than conventional planning systems, and the options often seem initially overwhelmingly complex to the user. By creating geometric target volumes and critical normal tissues, the characteristics of the algorithms may be investigated, and the influence of the different parameters explored. Overall optimization strategies of the algorithm may be characterized by treating a square target volume (TV) with 2 perpendicular beams, with and without heterogeneities. A half-donut (hemi-annulus) TV with a 'donut hole' (central cylinder) critical normal tissue (CNT) on a CT of a simulated quality assurance phantom is suggested as a good geometry to explore the IMRT algorithm parameters. Using this geometry, the order of varying parameters is suggested. First is to determine the effects of the number of stratifications of optimized intensity fluence on the resulting dose distribution, and selecting a fixed number of stratifications for further studies. To characterize the dose distributions, a dose-homogeneity index (DHI) is defined as the ratio of the dose received by 90% of the volume to the minimum dose received by the ''hottest'' 10% of the volume. The next step is to explore the effects of priority and penalty on both the TV and the CNT. Then, choosing and fixing these parameters, the effects of varying the number of beams can be looked at. As well as evaluating the dose distributions (and DHI), the number of subfields and the number of monitor units required for different numbers of stratifications and beams can be evaluated
Computerized simulation methods for dose reduction, in radiodiagnosis
The present work presents computational methods that allow the simulation of any situation encountered in diagnostic radiology. Parameters of radiographic techniques that yield a standard radiographic image, previously chosen, and so could compare the dose of radiation absorbed by the patient is studied. Initially the method was tested on a simple system composed of 5.0 cm of water and 1.0 mm of aluminium and, after verifying experimentally its validity, it was applied in breast and arm fracture radiographs. It was observed that the choice of the filter material is not an important factor, because analogous behaviours were presented by aluminum, iron, copper, gadolinium, and other filters. A method of comparison of materials based on the spectral match is shown. Both the results given by this simulation method and the experimental measurements indicate an equivalence of brass and copper, both more efficient than aluminium, in terms of exposition time, but not of dose. (author)
Dosing method of physical activity in aerobics classes for students
Beliak Yu. I.; Zinchenko N.M.
2014-01-01
Purpose : reasons for the method of dosing of physical activity in aerobics classes for students. The basis of the method is the evaluation of the metabolic cost of funds used in them. Material : experiment involved the assessment of the pulse response of students to load complexes classical and step aerobics (n = 47, age 20-23 years). In complexes used various factors regulating the intensity: perform combinations of basic steps, involvement of movements with his hands, holding in hands dumb...
Řezáč, Jan; Huang, Yuanhang; Hobza, Pavel; Beran, Gregory J O
2015-07-14
Many-body noncovalent interactions are increasingly important in large and/or condensed-phase systems, but the current understanding of how well various models predict these interactions is limited. Here, benchmark complete-basis set coupled cluster singles, doubles, and perturbative triples (CCSD(T)) calculations have been performed to generate a new test set for three-body intermolecular interactions. This "3B-69" benchmark set includes three-body interaction energies for 69 total trimer structures, consisting of three structures from each of 23 different molecular crystals. By including structures that exhibit a variety of intermolecular interactions and packing arrangements, this set provides a stringent test for the ability of electronic structure methods to describe the correct physics involved in the interactions. Both MP2.5 (the average of second- and third-order Møller-Plesset perturbation theory) and spin-component-scaled CCSD for noncovalent interactions (SCS-MI-CCSD) perform well. MP2 handles the polarization aspects reasonably well, but it omits three-body dispersion. In contrast, many widely used density functionals corrected with three-body D3 dispersion correction perform comparatively poorly. The primary difficulty stems from the treatment of exchange and polarization in the functionals rather than from the dispersion correction, though the three-body dispersion may also be moderately underestimated by the D3 correction. PMID:26575743
Agrell, Per Joakim; Bogetoft, Peter
2013-01-01
Benchmarking methods, and in particular Data Envelopment Analysis (DEA), have become well-established and informative tools for economic regulation. DEA is now routinely used by European regulators to set reasonable revenue caps for energy transmission and distribution system operators. The application of benchmarking in regulation, however, requires specific steps in terms of data validation, model specification and outlier detection that are not systematically documented in open publication...
Methods of determining the effective dose in dental radiology.
Thilander-Klang, Anne; Helmrot, Ebba
2010-01-01
A wide variety of X-ray equipment is used today in dental radiology, including intra-oral, orthopantomographic, cephalometric, cone-beam computed tomography (CBCT) and computed tomography (CT). This raises the question of how the radiation risks resulting from different kinds of examinations should be compared. The risk to the patient is usually expressed in terms of effective dose. However, it is difficult to determine its reliability, and it is difficult to make comparisons, especially when different modalities are used. The classification of the new CBCT units is also problematic as they are sometimes classified as CT units. This will lead to problems in choosing the best dosimetric method, especially when the examination geometry resembles more on an ordinary orthopantomographic examination, as the axis of rotation is not at the centre of the patient, and small radiation field sizes are used. The purpose of this study was to present different methods for the estimation of the effective dose from the equipment currently used in dental radiology, and to discuss their limitations. The methods are compared based on commonly used measurable and computable dose quantities, and their reliability in the estimation of the effective dose. PMID:20211918
Methods of determining the effective dose in dental radiology
A wide variety of X-ray equipment is used today in dental radiology, including intra-oral, ortho-pan-tomographic, cephalo-metric, cone-beam computed tomography (CBCT) and computed tomography (CT). This raises the question of how the radiation risks resulting from different kinds of examinations should be compared. The risk to the patient is usually expressed in terms of effective dose. However, it is difficult to determine its reliability, and it is difficult to make comparisons, especially when different modalities are used. The classification of the new CBCT units is also problematic as they are sometimes classified as CT units. This will lead to problems in choosing the best dosimetric method, especially when the examination geometry resembles more on an ordinary ortho-pan-tomographic examination, as the axis of rotation is not at the centre of the patient, and small radiation field sizes are used. The purpose of this study was to present different methods for the estimation of the effective dose from the equipment currently used in dental radiology, and to discuss their limitations. The methods are compared based on commonly used measurable and computable dose quantities, and their reliability in the estimation of the effective dose. (authors)
Banfield, J. E. [Dept. of Nuclear Engineering, Univ. of Tennessee, Knoxville, TN 37996-2300 (United States); Clarno, K. T.; Hamilton, S. P. [Oak Ridge National Laboratory, Oak Ridge, TN 37831 (United States); Maldonado, G. I. [Dept. of Nuclear Engineering, Univ. of Tennessee, Knoxville, TN 37996-2300 (United States); Philip, B.; Baird, M. L. [Oak Ridge National Laboratory, Oak Ridge, TN 37831 (United States)
2012-07-01
The key physics involved in accurate prediction of reactor-fuel-element behavior includes neutron transport and thermal hydraulics. The thermal hydraulic feedback mechanism is primarily provided through cross sections to the neutron transport that are temperature and density dependent. Historically, this coupling was primarily seen only in reactor simulators, which are well suited to model the reactor core, giving only a coarse treatment to individual fuel pins as well as simple models for thermal distribution calculations. This poor resolution on the primary coupling mechanisms can lead to conservatisms that should be removed to improve fuel design and performance. This work seeks to address the resolution of space-time-dependent neutron kinetics with thermal feedback within the fuel pin scale in the multi-physics framework. The specific application of this new capability is transient performance analysis of space-time-dependent temperature distribution of fuel elements. The coupling between the neutron transport and the thermal feedback is extremely important in this highly coupled problem, primarily applicable to reactivity-initiated- accidents (RIAs) and loss-of-coolant-accidents (LOCAs). The capability developed will include the coupling of the time-dependent neutron transport with the time-dependent thermal diffusion capability. An improvement in resolution and coupling is proposed by developing neutron transport models that are internally coupled with high fidelity within fuel pin thermal calculations in a multi-physics framework. Good agreement is shown with benchmarks and problems from the literature of RIAs and LOCAs for the tools used. (authors)
The key physics involved in accurate prediction of reactor-fuel-element behavior includes neutron transport and thermal hydraulics. The thermal hydraulic feedback mechanism is primarily provided through cross sections to the neutron transport that are temperature and density dependent. Historically, this coupling was primarily seen only in reactor simulators, which are well suited to model the reactor core, giving only a coarse treatment to individual fuel pins as well as simple models for thermal distribution calculations. This poor resolution on the primary coupling mechanisms can lead to conservatisms that should be removed to improve fuel design and performance. This work seeks to address the resolution of space-time-dependent neutron kinetics with thermal feedback within the fuel pin scale in the multiphysics framework. The specific application of this new capability is transient performance analysis of space-time-dependent temperature distribution of fuel elements. The coupling between the neutron transport and the thermal feedback is extremely important in this highly coupled problem, primarily applicable to reactivity-initiated-accidents (RIAs) and loss-of-coolant-accidents (LOCAs). The capability developed will include the coupling of the time-dependent neutron transport with the time-dependent thermal diffusion capability. An improvement in resolution and coupling is proposed by developing neutron transport models that are internally coupled with high fidelity within fuel pin thermal calculations in a multiphysics framework. Good agreement is shown with benchmarks and problems from the literature of RIAs and LOCAs for the tools used.
The key physics involved in accurate prediction of reactor-fuel-element behavior includes neutron transport and thermal hydraulics. The thermal hydraulic feedback mechanism is primarily provided through cross sections to the neutron transport that are temperature and density dependent. Historically, this coupling was primarily seen only in reactor simulators, which are well suited to model the reactor core, giving only a coarse treatment to individual fuel pins as well as simple models for thermal distribution calculations. This poor resolution on the primary coupling mechanisms can lead to conservatisms that should be removed to improve fuel design and performance. This work seeks to address the resolution of space-time-dependent neutron kinetics with thermal feedback within the fuel pin scale in the multi-physics framework. The specific application of this new capability is transient performance analysis of space-time-dependent temperature distribution of fuel elements. The coupling between the neutron transport and the thermal feedback is extremely important in this highly coupled problem, primarily applicable to reactivity-initiated- accidents (RIAs) and loss-of-coolant-accidents (LOCAs). The capability developed will include the coupling of the time-dependent neutron transport with the time-dependent thermal diffusion capability. An improvement in resolution and coupling is proposed by developing neutron transport models that are internally coupled with high fidelity within fuel pin thermal calculations in a multi-physics framework. Good agreement is shown with benchmarks and problems from the literature of RIAs and LOCAs for the tools used. (authors)
Comparison of dose calculation methods for brachytherapy of intraocular tumors
Rivard, Mark J.; Chiu-Tsao, Sou-Tung; Finger, Paul T.; Meigooni, Ali S.; Melhus, Christopher S.; Mourtada, Firas; Napolitano, Mary E.; Rogers, D. W. O.; Thomson, Rowan M.; Nath, Ravinder [Department of Radiation Oncology, Tufts University School of Medicine, Boston, Massachusetts 02111 (United States); Quality MediPhys LLC, Denville, New Jersey 07834 (United States); New York Eye Cancer Center, New York, New York 10065 (United States); Department of Radiation Oncology, Comprehensive Cancer Center of Nevada, Las Vegas, Nevada 89169 (United States); Department of Radiation Oncology, Tufts University School of Medicine, Boston, Massachusetts 02111 (United States); Department of Radiation Physics, University of Texas, M.D. Anderson Cancer Center, Houston, Texas 77030 (United States) and Department of Experimental Diagnostic Imaging, University of Texas, M.D. Anderson Cancer Center, Houston, Texas 77030 (United States); Physics, Elekta Inc., Norcross, Georgia 30092 (United States); Department of Physics, Carleton University, Ottawa, Ontario K1S 5B6 (Canada); Department of Therapeutic Radiology, Yale University School of Medicine, New Haven, Connecticut 06520 (United States)
2011-01-15
Purpose: To investigate dosimetric differences among several clinical treatment planning systems (TPS) and Monte Carlo (MC) codes for brachytherapy of intraocular tumors using {sup 125}I or {sup 103}Pd plaques, and to evaluate the impact on the prescription dose of the adoption of MC codes and certain versions of a TPS (Plaque Simulator with optional modules). Methods: Three clinical brachytherapy TPS capable of intraocular brachytherapy treatment planning and two MC codes were compared. The TPS investigated were Pinnacle v8.0dp1, BrachyVision v8.1, and Plaque Simulator v5.3.9, all of which use the AAPM TG-43 formalism in water. The Plaque Simulator software can also handle some correction factors from MC simulations. The MC codes used are MCNP5 v1.40 and BrachyDose/EGSnrc. Using these TPS and MC codes, three types of calculations were performed: homogeneous medium with point sources (for the TPS only, using the 1D TG-43 dose calculation formalism); homogeneous medium with line sources (TPS with 2D TG-43 dose calculation formalism and MC codes); and plaque heterogeneity-corrected line sources (Plaque Simulator with modified 2D TG-43 dose calculation formalism and MC codes). Comparisons were made of doses calculated at points-of-interest on the plaque central-axis and at off-axis points of clinical interest within a standardized model of the right eye. Results: For the homogeneous water medium case, agreement was within {approx}2% for the point- and line-source models when comparing between TPS and between TPS and MC codes, respectively. For the heterogeneous medium case, dose differences (as calculated using the MC codes and Plaque Simulator) differ by up to 37% on the central-axis in comparison to the homogeneous water calculations. A prescription dose of 85 Gy at 5 mm depth based on calculations in a homogeneous medium delivers 76 Gy and 67 Gy for specific {sup 125}I and {sup 103}Pd sources, respectively, when accounting for COMS-plaque heterogeneities. For off
Shutdown dose rate assessment with the Advanced D1S method: Development, applications and validation
Villari, R., E-mail: rosaria.villari@enea.it [Associazione EURATOM-ENEA sulla Fusione, Via Enrico Fermi 45, 00044 Frascati, Rome (Italy); Fischer, U. [Karlsruhe Institute of Technology KIT, Institute for Neutron Physics and Reactor Technology, Hermann-von-Helmholtz-Platz 1, 76344 Eggenstein-Leopoldshafen (Germany); Moro, F. [Associazione EURATOM-ENEA sulla Fusione, Via Enrico Fermi 45, 00044 Frascati, Rome (Italy); Pereslavtsev, P. [Karlsruhe Institute of Technology KIT, Institute for Neutron Physics and Reactor Technology, Hermann-von-Helmholtz-Platz 1, 76344 Eggenstein-Leopoldshafen (Germany); Petrizzi, L. [European Commission, DG Research and Innovation K5, CDMA 00/030, B-1049 Brussels (Belgium); Podda, S. [Associazione EURATOM-ENEA sulla Fusione, Via Enrico Fermi 45, 00044 Frascati, Rome (Italy); Serikov, A. [Karlsruhe Institute of Technology KIT, Institute for Neutron Physics and Reactor Technology, Hermann-von-Helmholtz-Platz 1, 76344 Eggenstein-Leopoldshafen (Germany)
2014-10-15
Highlights: Development of Advanced-D1S for shutdown dose rate calculations; Recent applications of the tool to tokamaks; Summary of the results of benchmarking with measurements and R2S calculations; Limitations and further development. Abstract: The present paper addresses the recent developments and applications of Advanced-D1S to the calculations of shutdown dose rate in tokamak devices. Results of benchmarking with measurements and Rigorous 2-Step (R2S) calculations are summarized and discussed as well as limitations and further developments. The outcomes confirm the essential role of the Advanced-D1S methodology and the evidence for its complementary use with the R2Smesh approach for the reliable assessment of shutdown dose rates and related statistical uncertainties in present and future fusion devices.
Shutdown dose rate assessment with the Advanced D1S method: Development, applications and validation
Highlights: •Development of Advanced-D1S for shutdown dose rate calculations. •Recent applications of the tool to tokamaks. •Summary of the results of benchmarking with measurements and R2S calculations. •Limitations and further development. -- Abstract: The present paper addresses the recent developments and applications of Advanced-D1S to the calculations of shutdown dose rate in tokamak devices. Results of benchmarking with measurements and Rigorous 2-Step (R2S) calculations are summarized and discussed as well as limitations and further developments. The outcomes confirm the essential role of the Advanced-D1S methodology and the evidence for its complementary use with the R2Smesh approach for the reliable assessment of shutdown dose rates and related statistical uncertainties in present and future fusion devices
Suwazono, Yasushi, E-mail: suwa@faculty.chiba-u.jp [Department of Occupational and Environmental Medicine, Graduate School of Medicine, Chiba University, 1-8-1 Inohana, Chuoku, Chiba 260-8670 (Japan); Nogawa, Kazuhiro; Uetani, Mirei [Department of Occupational and Environmental Medicine, Graduate School of Medicine, Chiba University, 1-8-1 Inohana, Chuoku, Chiba 260-8670 (Japan); Nakada, Satoru [Safety and Health Organization, Chiba University, 1-33 Yayoicho, Inageku, Chiba 263-8522 (Japan); Kido, Teruhiko [Department of Community Health Nursing, Kanazawa University School of Health Sciences, 5-11-80 Kodatsuno, Kanazawa, Ishikawa 920-0942 (Japan); Nakagawa, Hideaki [Department of Epidemiology and Public Health, Kanazawa Medical University, 1-1 Daigaku, Uchnada, Ishikawa 920-0293 (Japan)
2011-02-15
Objectives: The aim of this study was to evaluate the reference level of urinary cadmium (Cd) that caused renal effects. An updated hybrid approach was used to estimate the benchmark doses (BMDs) and their 95% lower confidence limits (BMDL) in subjects with a wide range of exposure to Cd. Methods: The total number of subjects was 1509 (650 men and 859 women) in non-polluted areas and 3103 (1397 men and 1706 women) in the environmentally exposed Kakehashi river basin. We measured urinary cadmium (U-Cd) as a marker of long-term exposure, and {beta}2-microglobulin ({beta}2-MG) as a marker of renal effects. The BMD and BMDL that corresponded to an additional risk (BMR) of 5% were calculated with background risk at zero exposure set at 5%. Results: The U-Cd BMDL for {beta}2-MG was 3.5 {mu}g/g creatinine in men and 3.7 {mu}g/g creatinine in women. Conclusions: The BMDL values for a wide range of U-Cd were generally within the range of values measured in non-polluted areas in Japan. This indicated that the hybrid approach is a robust method for different ranges of cadmium exposure. The present results may contribute further to recent discussions on health risk assessment of Cd exposure.
Objectives: The aim of this study was to evaluate the reference level of urinary cadmium (Cd) that caused renal effects. An updated hybrid approach was used to estimate the benchmark doses (BMDs) and their 95% lower confidence limits (BMDL) in subjects with a wide range of exposure to Cd. Methods: The total number of subjects was 1509 (650 men and 859 women) in non-polluted areas and 3103 (1397 men and 1706 women) in the environmentally exposed Kakehashi river basin. We measured urinary cadmium (U-Cd) as a marker of long-term exposure, and β2-microglobulin (β2-MG) as a marker of renal effects. The BMD and BMDL that corresponded to an additional risk (BMR) of 5% were calculated with background risk at zero exposure set at 5%. Results: The U-Cd BMDL for β2-MG was 3.5 μg/g creatinine in men and 3.7 μg/g creatinine in women. Conclusions: The BMDL values for a wide range of U-Cd were generally within the range of values measured in non-polluted areas in Japan. This indicated that the hybrid approach is a robust method for different ranges of cadmium exposure. The present results may contribute further to recent discussions on health risk assessment of Cd exposure.
We present the TORT solutions to the 3D transport codes' suite of benchmarks exercise. An overview of benchmark configurations is provided, followed by a description of the TORT computational model we developed to solve the cases comprising the benchmark suite. In the numerical experiments reported in this paper, we chose to refine the spatial and angular discretizations simultaneously, from the coarsest model (40 x 40 x 40, 200 angles) to the finest model (160 x 160 x 160, 800 angles). The MCNP reference solution is used for evaluating the effect of model-refinement on the accuracy of the TORT solutions. The presented results show that the majority of benchmark quantities are computed with good accuracy by TORT, and that the accuracy improves with model refinement. However, this deliberately severe test has exposed some deficiencies in both deterministic and stochastic solution approaches. Specifically, TORT fails to converge the inner iterations in some benchmark configurations while MCNP produces zero tallies, or drastically poor statistics for some benchmark quantities. We conjecture that TORT's failure to converge is driven by ray effects in configurations with low scattering ratio and/or highly skewed computational cells, i.e. aspect ratio far from unity. The failure of MCNP occurs in quantities tallied over a very small area or volume in physical space, or quantities tallied many (∼25) mean free paths away from the source. Hence automated, robust, and reliable variance reduction techniques are essential for obtaining high quality reference values of the benchmark quantities. Preliminary results of the benchmark exercise indicate that the occasionally poor performance of TORT is shared with other deterministic codes. Armed with this information, method developers can now direct their attention to regions in parameter space where such failures occur and design alternative solution approaches for such instances
Quality assurance methods in the German dose rate measurement network
The result of the gamma dose rate measurements in the context of the surveillance of environmental radioactivity depends strongly on the physical properties of the counting probes, but also on meteorological effects and the characteristics of the site in the vicinity of the dose rate probes. In the German gamma dose rate measurement network (ODL-monitoring net) substantial quality assurance efforts have been undertaken to ensure that the measured data are representative. These are, in particular, measures to determine the specific physical properties of the deployed monitors (background count rate, dependence on the cosmic radiation), an individual on-site test of the detector efficiency in the context of so-called ''method of repetitive tests'', methods for on-line correlation of precipitation and dose rate display and description of the monitor environment in context of the background capability as well as the validity in case of a contamination situation. Respective investigations have been performed in recent years and, based on the substantial amount of data, an optimisation of the practicability of the measurement data for decision support systems derived. (orig.)
Small-sample reactivity experiments are relevant to provide accurate information on the integral cross sections of materials. One of the specificities of these experiments is that the measured reactivity worth generally ranges between 1 and 10 pcm, which precludes the use of Monte Carlo for the analysis. As a consequence, several papers have been devoted to deterministic calculation routes, implying spatial and/or energetic discretization which could involve calculation bias. Within the Expert Group on Burn-Up Credit of the OECD/NEA, a benchmark was proposed to compare different calculation codes and methods for the analysis of these experiments. In four Sub-Phases with geometries ranging from a single cell to a full 3D core model, participants were asked to evaluate the reactivity worth due to the addition of small quantities of separated fission products and actinides into a UO2 fuel. Fourteen institutes using six different codes have participated in the Benchmark. For reactivity worth of more than a few tens of pcm, the Monte-Carlo approach based on the eigen-value difference method appears clearly as the reference method. However, in the case of reactivity worth as low as 1 pcm, it is concluded that the deterministic approach based on the exact perturbation formalism is more accurate and should be preferred. Promising results have also been reported using the newly available exact perturbation capability, developed in the Monte Carlo code TRIPOLI4, based on the calculation of a continuous energy adjoint flux in the reference situation, convoluted to the forward flux of the perturbed situation. (author)
Comparison between calculation methods of dose rates in gynecologic brachytherapy
In treatments with radiations for gynecologic tumors is necessary to evaluate the quality of the results obtained by different calculation methods for the dose rates on the points of clinical interest (A, rectal, vesicle). The present work compares the results obtained by two methods. The Manual Calibration Method (MCM) tri dimensional (Vianello E., et.al. 1998), using orthogonal radiographs for each patient in treatment, and the Theraplan/T P-11 planning system (Thratonics International Limited 1990) this last one verified experimentally (Vianello et.al. 1996). The results show that MCM can be used in the physical-clinical practice with a percentile difference comparable at the computerized programs. (Author)
Method of simulation of low dose rate for total dose effect in 0.18 {mu}m CMOS technology
He Baoping; Yao Zhibin; Guo Hongxia; Luo Yinhong; Zhang Fengqi; Wang Yuanming; Zhang Keying, E-mail: baopinghe@126.co [Northwest Institute of Nuclear Technology, Xi' an 710613 (China)
2009-07-15
Three methods for simulating low dose rate irradiation are presented and experimentally verified by using 0.18 {mu}m CMOS transistors. The results show that it is the best way to use a series of high dose rate irradiations, with 100 {sup 0}C annealing steps in-between irradiation steps, to simulate a continuous low dose rate irradiation. This approach can reduce the low dose rate testing time by as much as a factor of 45 with respect to the actual 0.5 rad (Si)/s dose rate irradiation. The procedure also provides detailed information on the behavior of the test devices in a low dose rate environment.
Method of simulation of low dose rate for total dose effect in 0.18 μm CMOS technology
Three methods for simulating low dose rate irradiation are presented and experimentally verified by using 0.18 μm CMOS transistors. The results show that it is the best way to use a series of high dose rate irradiations, with 100 0C annealing steps in-between irradiation steps, to simulate a continuous low dose rate irradiation. This approach can reduce the low dose rate testing time by as much as a factor of 45 with respect to the actual 0.5 rad (Si)/s dose rate irradiation. The procedure also provides detailed information on the behavior of the test devices in a low dose rate environment.
XIAO Hai; LI Jun
2008-01-01
Benchmark calculations on the molar atomization enthalpy, geometry, and vibrational frequencies of uranium hexafluoride (UF6) have been performed by using relativistic density functional theory (DFT) with various levels of relativistic effects, different types of basis sets, and exchange-correlation functionals. Scalar relativistic effects are shown to be critical for the structural properties. The spin-orbit coupling effects are important for the calculated energies, but are much less important for other calculated ground-state properties of closed-shell UF6. We conclude through systematic investigations that ZORA- and RECP-based relativistic DFT methods are both appropriate for incorporating relativistic effects. Comparisons of different types of basis sets (Slater, Gaussian, and plane-wave types) and various levels of theoretical approximation of the exchange-correlation functionals were also made.
Iterative methods for dose reduction and image enhancement in tomography
Miao, Jianwei; Fahimian, Benjamin Pooya
2012-09-18
A system and method for creating a three dimensional cross sectional image of an object by the reconstruction of its projections that have been iteratively refined through modification in object space and Fourier space is disclosed. The invention provides systems and methods for use with any tomographic imaging system that reconstructs an object from its projections. In one embodiment, the invention presents a method to eliminate interpolations present in conventional tomography. The method has been experimentally shown to provide higher resolution and improved image quality parameters over existing approaches. A primary benefit of the method is radiation dose reduction since the invention can produce an image of a desired quality with a fewer number projections than seen with conventional methods.
The purpose of this study was to validate a novel approach of applying a partial volume correction factor (PVCF) using a limited number of MOSFET detectors in the effective dose (E) calculation. The results of the proposed PVCF method were compared to the results from both the point dose (PD) method and a commercial CT dose estimation software (CT-Expo). To measure organ doses, an adult female anthropomorphic phantom was loaded with 20 MOSFET detectors and was scanned using the non-contrast and 2 phase contrast-enhanced parathyroid imaging protocols on a 64-slice multi-detector computed tomography scanner. E was computed by three methods: the PD method, the PVCF method, and the CT-Expo method. The E (in mSv) for the PD method, the PVCF method, and CT-Expo method was 2.6 ± 0.2, 1.3 ± 0.1, and 1.1 for the non-contrast scan, 21.9 ± 0.4, 13.9 ± 0.2, and 14.6 for the 1st phase of the contrast-enhanced scan, and 15.5 ± 0.3, 9.8 ± 0.1, and 10.4 for the 2nd phase of the contrast-enhanced scan, respectively. The E with the PD method differed from the PVCF method by 66.7% for the non-contrast scan, by 44.9% and by 45.5% respectively for the 1st and 2nd phases of the contrast-enhanced scan. The E with PVCF was comparable to the results from the CT-Expo method with percent differences of 15.8%, 5.0%, and 6.3% for the non-contrast scan and the 1st and 2nd phases of the contrast-enhanced scan, respectively. To conclude, the PVCF method estimated E within 16% difference as compared to 50–70% in the PD method. In addition, the results demonstrate that E can be estimated accurately from a limited number of detectors. (paper)
Reliable B cell epitope predictions: impacts of method development and improved benchmarking
Kringelum, Jens Vindahl; Lundegaard, Claus; Lund, Ole;
2012-01-01
biomedical applications such as; rational vaccine design, development of disease diagnostics and immunotherapeutics. However, experimental mapping of epitopes is resource intensive making in silico methods an appealing complementary approach. To date, the reported performance of methods for in silico mapping...
Lutsker, V; Aradi, B; Niehaus, T A
2015-11-14
Bridging the gap between first principles methods and empirical schemes, the density functional based tight-binding method (DFTB) has become a versatile tool in predictive atomistic simulations over the past years. One of the major restrictions of this method is the limitation to local or gradient corrected exchange-correlation functionals. This excludes the important class of hybrid or long-range corrected functionals, which are advantageous in thermochemistry, as well as in the computation of vibrational, photoelectron, and optical spectra. The present work provides a detailed account of the implementation of DFTB for a long-range corrected functional in generalized Kohn-Sham theory. We apply the method to a set of organic molecules and compare ionization potentials and electron affinities with the original DFTB method and higher level theory. The new scheme cures the significant overpolarization in electric fields found for local DFTB, which parallels the functional dependence in first principles density functional theory (DFT). At the same time, the computational savings with respect to full DFT calculations are not compromised as evidenced by numerical benchmark data. PMID:26567646
Bridging the gap between first principles methods and empirical schemes, the density functional based tight-binding method (DFTB) has become a versatile tool in predictive atomistic simulations over the past years. One of the major restrictions of this method is the limitation to local or gradient corrected exchange-correlation functionals. This excludes the important class of hybrid or long-range corrected functionals, which are advantageous in thermochemistry, as well as in the computation of vibrational, photoelectron, and optical spectra. The present work provides a detailed account of the implementation of DFTB for a long-range corrected functional in generalized Kohn-Sham theory. We apply the method to a set of organic molecules and compare ionization potentials and electron affinities with the original DFTB method and higher level theory. The new scheme cures the significant overpolarization in electric fields found for local DFTB, which parallels the functional dependence in first principles density functional theory (DFT). At the same time, the computational savings with respect to full DFT calculations are not compromised as evidenced by numerical benchmark data
The motivation to conduct this benchmark exercise, a summary of the results, and a discussion of and conclusions from the intercomparison are given in Section 5.2. This section contains further details of the results of the calculations and intercomparisons, illustrated by tables and figures, but avoiding repetition of Section 5.2 as far as possible. (author)
Clausen, Philip T. L. C.; Zankari, Ea; Aarestrup, Frank Møller;
2016-01-01
two different methods in current use for identification of antibiotic resistance genes in bacterial WGS data. A novel method, KmerResistance, which examines the co-occurrence of k-mers between the WGS data and a database of resistance genes, was developed. The performance of this method was compared...... with two previously described methods; ResFinder and SRST2, which use an assembly/BLAST method and BWA, respectively, using two datasets with a total of 339 isolates, covering five species, originating from the Oxford University Hospitals NHS Trust and Danish pig farms. The predicted resistance was...... compared with the observed phenotypes for all isolates. To challenge further the sensitivity of the in silico methods, the datasets were also down-sampled to 1% of the reads and reanalysed. The best results were obtained by identification of resistance genes by mapping directly against the raw reads. This...
Cancer patients with implanted cardiac pacemaker occasionally require radiotherapy. Pacemaker may be damaged or malfunction during radiotherapy due to ionizing radiation or electromagnetic interference. Although radiotherapy should be planned to keep the dose to pacemaker as low as possible not to malfunction ideally, current radiation treatment planning (RTP) system does not accurately calculate deposited dose to adjacent field border or area beyond irradiated fields. In terms of beam delivery techniques using multiple intensity modulated fields, dosimetric effect of scattered radiation in high energy photon beams is required to be detailed analyzed based on measurement data. The aim of this study is to evaluate dose discrepancies of pacemaker in a RTP system as compared to measured doses. We also designed dose reduction strategy limited value of 2 Gy for radiation treatment patients with cardiac implanted pacemaker. Total accumulated dose of 145 cGy based on in-vivo dosimetry was satisfied with the recommendation criteria to prevent malfunction of pacemaker in SS technique. However, the 2 mm lead shielder enabled the scattered doses to reduce up to 60% and 40% in the patient and the phantom, respectively. The SS technique with the lead shielding could reduce the accumulated scattered doses less than 100 cGy. Calculated and measured doses were not greatly affected by the beam delivery techniques. In-vivo and measured doses on pacemaker position showed critical dose discrepancies reaching up to 4 times as compared to planned doses in RTP. The current SS technique could deliver lower scattered doses than recommendation criteria, but use of 2 mm lead shielder contributed to reduce scattered doses by 60%. The tertiary lead shielder can be useful to prevent malfunction or electrical damage of implanted pacemakers during radiotherapy. It is required to estimate more accurate scattered doses of the patient or medical device in RTP to design proper dose reduction strategy.
The Monte Carlo (MC)-discrete ordinates (SN) coupled method is an efficient approach to solve shielding calculations of nuclear device with complex geometries and deep penetration. The 3D MC-SN coupled method has been used for PWR shielding calculation for the first time. According to characteristics of NUREG/CR-6115 PWR model, the thermal shield is specified as the common surface to link the Monte Carlo complex geometrical model and the deep penetration SN model. 3D Monte Carlo code is employed to accurately simulate the structure from core to thermal shield. The neutron tracks crossing the thermal shield inner surface are recorded by MC code. The SN boundary source is generated by the interface program and used by the 3D SN code to treat the calculation from thermal shield to pressure vessel. The calculation results include the circular distributions of fast neutron flux at pressure vessel inner wall, pressure vessel T/4 and lower weld locations. The calculation results are performed with comparison to MCNP and DORT solutions of benchmark report and satisfactory agreements are obtained. The validity of the method and the correctness of the programs are proved. (authors)
Benchmarking the invariant embedding method against analytical solutions in model transport problems
Wahlberg Malin; Pázsit Imre
2006-01-01
The purpose of this paper is to demonstrate the use of the invariant embedding method in a few model transport problems for which it is also possible to obtain an analytical solution. The use of the method is demonstrated in three different areas. The first is the calculation of the energy spectrum of sputtered particles from a scattering medium without absorption, where the multiplication (particle cascade) is generated by recoil production. Both constant and energy dependent cross-sections ...
Kolman, Radek; Cho, S.S.; Park, K.C.
Atheny : National Technical University of Athens, 2015 - (Papadrakakis, M.; Papadopoulos, V.). C 620 ISBN 978-960-99994-7-2. [International Conference on Computational Method s in Structural Dynamics and Earthquake Engineering /5./. 25.05.2015-27.05.2015, Crete] R&D Projects: GA ČR(CZ) GAP101/12/2315; GA TA ČR(CZ) TH01010772 Institutional support: RVO:61388998 Keywords : wave propagation * spurious oscillations * finite element method Subject RIV: BI - Acoustics
Computer–based method of bite mark analysis: A benchmark in forensic dentistry?
Pallam, Nandita Kottieth; Boaz, Karen; Natrajan, Srikant; Raj, Minu; Manaktala, Nidhi; Lewis, Amitha J.
2016-01-01
Aim: The study aimed to determine the technique with maximum accuracy in production of bite mark overlay. Materials and Methods: Thirty subjects (10 males and 20 females; all aged 20–30 years) with complete set of natural upper and lower anterior teeth were selected for this study after obtaining approval from the Institutional Ethical Committee. The upper and lower alginate impressions were taken and die stone models were obtained from each impression; overlays were produced from the biting surfaces of six upper and six lower anterior teeth by hand tracing from study casts, hand tracing from wax impressions of the bite surface, radiopaque wax impression method, and xerographic method. These were compared with the original overlay produced digitally. Results: Xerographic method was the most accurate of the four techniques, with the highest reproducibility for bite mark analysis. The methods of wax impression were better for producing overlay of tooth away from the occlusal plane. Conclusions: Various techniques are used in bite mark analysis and the choice of technique depends largely on personal preference. No single technique has been shown to be better than the others and very little research has been carried out to compare different methods. This study evaluated the accuracy of direct comparisons between suspect's models and bite marks with indirect comparisons in the form of conventional traced overlays of suspects and found the xerographic technique to be the best. PMID:27051221
Model Averaging Software for Dichotomous Dose Response Risk Estimation
Matthew W. Wheeler
2008-02-01
Full Text Available Model averaging has been shown to be a useful method for incorporating model uncertainty in quantitative risk estimation. In certain circumstances this technique is computationally complex, requiring sophisticated software to carry out the computation. We introduce software that implements model averaging for risk assessment based upon dichotomous dose-response data. This software, which we call Model Averaging for Dichotomous Response Benchmark Dose (MADr-BMD, ﬁts the quantal response models, which are also used in the US Environmental Protection Agency benchmark dose software suite, and generates a model-averaged dose response model to generate benchmark dose and benchmark dose lower bound estimates. The software fulﬁlls a need for risk assessors, allowing them to go beyond one single model in their risk assessments based on quantal data by focusing on a set of models that describes the experimental data.
Dosing method of physical activity in aerobics classes for students
Beliak Yu.I.
2014-10-01
Full Text Available Purpose : reasons for the method of dosing of physical activity in aerobics classes for students. The basis of the method is the evaluation of the metabolic cost of funds used in them. Material : experiment involved the assessment of the pulse response of students to load complexes classical and step aerobics (n = 47, age 20-23 years. In complexes used various factors regulating the intensity: perform combinations of basic steps, involvement of movements with his hands, holding in hands dumbbells weighing 1kg increase in the rate of musical accompaniment, varying heights step platform. Results . on the basis of the relationship between heart rate and oxygen consumption was determined by the energy cost of each admission control load intensity. This indicator has been used to justify the intensity and duration of multiplicity aerobics. Figure correspond to the level of physical condition and motor activity deficits students. Conclusions : the estimated component of this method of dosing load makes it convenient for use in automated computer programs. Also it can be easily modified to dispense load other types of recreational fitness.
Over the past several years, plant-life extension programs have been implemented at many U.S. plants. One method of pressure vessel (PV) fluence rate reduction being used in several of the older reactors involves partial replacement of the oxide fuel with metallic rods in those peripheral assemblies located at critical azimuths. This substitution extends axially over a region that depends on the individual plant design, but covers the most critical PV weld and plate locations, which may be subject to pressurized thermal shock. In order to analyze the resulting PV dosimetry using these partial-length shield assemblies (PLSA), a relatively simple but accurate method needs to be formulated and qualified that treats the axially asymmetric core leakage. Accordingly, an experiment was devised and performed at the VENUS critical facility in Mol, Belgium. The success of the proposed method bodes well for the accuracy of future analyses of on-line plants using PLSAs
CCSDTQ interaction energies as a benchmark for CCSDT-level methods
Hobza, Pavel; Řezáč, Jan; Šimová, Lucia
New Orleans: American Chemical Society, 2013. 31PHYS. ISSN 0065-7727. [National Spring Meeting of the American Chemical Society /245./. 07.04.2013-11.04.2013, New Orleans] Institutional support: RVO:61388963 Keywords : CCSDTQ * interaction energies * CCSDT- level methods Subject RIV: CF - Physical ; Theoretical Chemistry
This paper describes the philosophy, the objectives and the lesson leaned from the Reliability Benchmark Exercises (RBE), organized by the Joint Research Center (JRC) Ispra of the Commission of the European Communities, and carried out over some years within a worldwide community of users and developers of Probabilistic Safety Assessment (PSA) methods and applications. The causes of uncertainties and the importance of the modelling uncertainties, revealed by the exercises, lead to a variety of observations also on the use of reliability methods for the definition of the technical specifications, including the limiting conditions for operation, the requirements of surveillance testing, the safety system set point limits and the administrative controls. In particular, it is argued that the use of PSA techniques, as source of information for a safe operability of the plant, requires validated system models which might be better achieved by means of computerised analysis tools. These are helpful both in the design phase and during operations, when the operator or the surveyor has to define, case by case, the boundary conditions for the case at hand. In this sense, the study and the development of computerised analysis tools is being developed within the JRC Ispra with the objective of ameliorating and exploiting further the application of appropriate reliability analyses of plants. The results so far obtained are presented and finally the perspectives of this work are discussed in terms of advantages, needs and characteristics of the information system for the optimization of the plant management and control
Additive dose methods commonly used in electron paramagnetic resonance (EPR) dosimetry are time consuming and labor intensive. We have developed a mathematical approach for determining optimal spacing of applied doses and the number of spectra which should be taken at each dose level. Expected uncertainitites in the data points are assumed to be normally distributed with a fixed standard deviation and linearity of dose response is also assumed. The optimum spacing and number of points necessary for the minimal error can be estimated, as can the likely error in the resulting estimate. When low doses are being estimated for tooth enamel samples the optimal spacing is shown to be a concentration of points near the zero dose value with fewer spectra taken at a single high dose value within the range of known linearity. Optimization of the analytical process results in increased accuracy and sample throughput
Hayes, R.B.; Haskell, E.H.; Kenner, G.H. [Utah Univ., Salt Lake City, UT (United States)
1996-01-01
Additive dose methods commonly used in electron paramagnetic resonance (EPR) dosimetry are time consuming and labor intensive. We have developed a mathematical approach for determining optimal spacing of applied doses and the number of spectra which should be taken at each dose level. Expected uncertainitites in the data points are assumed to be normally distributed with a fixed standard deviation and linearity of dose response is also assumed. The optimum spacing and number of points necessary for the minimal error can be estimated, as can the likely error in the resulting estimate. When low doses are being estimated for tooth enamel samples the optimal spacing is shown to be a concentration of points near the zero dose value with fewer spectra taken at a single high dose value within the range of known linearity. Optimization of the analytical process results in increased accuracy and sample throughput.
Complex absorbing potentials within EOM-CC family of methods: Theory, implementation, and benchmarks
A production-level implementation of equation-of-motion coupled-cluster singles and doubles (EOM-CCSD) for electron attachment and excitation energies augmented by a complex absorbing potential (CAP) is presented. The new method enables the treatment of metastable states within the EOM-CC formalism in a similar manner as bound states. The numeric performance of the method and the sensitivity of resonance positions and lifetimes to the CAP parameters and the choice of one-electron basis set are investigated. A protocol for studying molecular shape resonances based on the use of standard basis sets and a universal criterion for choosing the CAP parameters are presented. Our results for a variety of π* shape resonances of small to medium-size molecules demonstrate that CAP-augmented EOM-CCSD is competitive relative to other theoretical approaches for the treatment of resonances and is often able to reproduce experimental results
Computer–based method of bite mark analysis: A benchmark in forensic dentistry?
Nandita Kottieth Pallam; Karen Boaz; Srikant Natrajan; Minu Raj; Nidhi Manaktala; Lewis, Amitha J
2016-01-01
Aim: The study aimed to determine the technique with maximum accuracy in production of bite mark overlay. Materials and Methods: Thirty subjects (10 males and 20 females; all aged 20–30 years) with complete set of natural upper and lower anterior teeth were selected for this study after obtaining approval from the Institutional Ethical Committee. The upper and lower alginate impressions were taken and die stone models were obtained from each impression; overlays were produced from the biting ...
Kolman, Radek; Cho, S.S.; Červ, Jan; Park, K.C.
Praha : Institute of Thermomechanics AS CR, 2014 - (Zolotarev, I.; Pešek, L.), s. 31-36 ISBN 978-80-87012-54-3. [DYMAMESI 2014. Praha (CZ), 25.11.2014-26.11.2014] R&D Projects: GA ČR(CZ) GAP101/11/0288 Institutional support: RVO:61388998 Keywords : stress wave propagation * finite element method * explicit time integrator * spurious oscillations * stress discontinuities Subject RIV: JR - Other Machinery
Benchmarking the invariant embedding method against analytical solutions in model transport problems
The purpose of this paper is to demonstrate the use of the invariant embedding method in a series of model transport problems, for which it is also possible to obtain an analytical solution. Due to the non-linear character of the embedding equations, their solution can only be obtained numerically. However, this can be done via a robust and effective iteration scheme. In return, the domain of applicability is far wider than the model problems investigated in this paper. The use of the invariant embedding method is demonstrated in three different areas. The first is the calculation of the energy spectrum of reflected (sputtered) particles from a multiplying medium, where the multiplication arises from recoil production. Both constant and energy dependent cross sections with a power law dependence were used in the calculations. The second application concerns the calculation of the path length distribution of reflected particles from a medium without multiplication. This is a relatively novel and unexpected application, since the embedding equations do not resolve the depth variable. The third application concerns the demonstration that solutions in an infinite medium and a half-space are interrelated through embedding-like integral equations, by the solution of which the reflected flux from a half-space can be reconstructed from solutions in an infinite medium or vice versa. In all cases the invariant embedding method proved to be robust, fast and monotonically converging to the exact solutions. (authors)
Benchmarking the invariant embedding method against analytical solutions in model transport problems
Wahlberg Malin
2006-01-01
Full Text Available The purpose of this paper is to demonstrate the use of the invariant embedding method in a few model transport problems for which it is also possible to obtain an analytical solution. The use of the method is demonstrated in three different areas. The first is the calculation of the energy spectrum of sputtered particles from a scattering medium without absorption, where the multiplication (particle cascade is generated by recoil production. Both constant and energy dependent cross-sections with a power law dependence were treated. The second application concerns the calculation of the path length distribution of reflected particles from a medium without multiplication. This is a relatively novel application, since the embedding equations do not resolve the depth variable. The third application concerns the demonstration that solutions in an infinite medium and in a half-space are interrelated through embedding-like integral equations, by the solution of which the flux reflected from a half-space can be reconstructed from solutions in an infinite medium or vice versa. In all cases, the invariant embedding method proved to be robust, fast, and monotonically converging to the exact solutions.
In order to evaluate criticality accident analysis codes, a criticality accident benchmark problem was made based on the TRACY experiment. It is evaluated by the contributors of the expert group on criticality excursion analysis, a group of criticality safety WP of OECD/NEA/NSC. This paper reports the detail of TRACY Benchmark I and II, and preliminary results of its analysis using AGNES code. (author)
Electricity distribution is a natural local monopoly. In many countries, the regulators of this sector apply frontier methods such as data envelopment analysis (DEA) or stochastic frontier analysis (SFA) to estimate the efficient cost of operation. In Finland, a new StoNED method was adopted in 2012. This paper compares DEA, SFA and StoNED in the context of regulating electricity distribution. Using data from Finland, we compare the impacts of methodological choices on cost efficiency estimates and acceptable cost. While the efficiency estimates are highly correlated, the cost targets reveal major differences. In addition, we examine performance of the methods by Monte Carlo simulations. We calibrate the data generation process (DGP) to closely match the empirical data and the model specification of the regulator. We find that the StoNED estimator yields a root mean squared error (RMSE) of 4% with the sample size 100. Precision improves as the sample size increases. The DEA estimator yields an RMSE of approximately 10%, but performance deteriorates as the sample size increases. The SFA estimator has an RMSE of 144%. The poor performance of SFA is due to the wrong functional form and multicollinearity. - Highlights: • We compare DEA, SFA and StoNED methods in the context of regulation of electricity distribution. • Both empirical comparisons and Monte Carlo simulations are presented. • Choice of benchmarking method has a significant economic impact on the regulatory outcomes. • StoNED yields the most precise results in the Monte Carlo simulations. • Five lessons concerning heterogeneity, noise, frontier, simulations, and implementation
This report is a compilation of the information submitted by AECL, CIAE, JAERI, ORNL and Siemens in response to a need identified at the 'Workshop on R and D Needs' at the IGORR-3 meeting. The survey compiled information on the national standards applied to the Safety Quality Assurance (SQA) programs undertaken by the participants. Information was assembled for the computer codes and nuclear data libraries used in accident and safety analyses for research reactors and the methods used to verify and validate the codes and libraries. Although the survey was not comprehensive, it provides a basis for exchanging information of common interest to the research reactor community
Knight, Joseph W; Wang, Xiaopeng; Gallandi, Lukas; Dolgounitcheva, Olga; Ren, Xinguo; Ortiz, J Vincent; Rinke, Patrick; Körzdörfer, Thomas; Marom, Noa
2016-02-01
The performance of different GW methods is assessed for a set of 24 organic acceptors. Errors are evaluated with respect to coupled cluster singles, doubles, and perturbative triples [CCSD(T)] reference data for the vertical ionization potentials (IPs) and electron affinities (EAs), extrapolated to the complete basis set limit. Additional comparisons are made to experimental data, where available. We consider fully self-consistent GW (scGW), partial self-consistency in the Green's function (scGW0), non-self-consistent G0W0 based on several mean-field starting points, and a "beyond GW" second-order screened exchange (SOSEX) correction to G0W0. We also describe the implementation of the self-consistent Coulomb hole with screened exchange method (COHSEX), which serves as one of the mean-field starting points. The best performers overall are G0W0+SOSEX and G0W0 based on an IP-tuned long-range corrected hybrid functional with the former being more accurate for EAs and the latter for IPs. Both provide a balanced treatment of localized vs delocalized states and valence spectra in good agreement with photoemission spectroscopy (PES) experiments. PMID:26731609
Statistical methods used for code-to-code comparisons in the OECD/NRC PWR MSLB benchmark
The ongoing pressurized water reactor (PWR) main steam line break (MSLB) benchmark problem, sponsored by the Office for Economic Cooperation and Development (OECD), the United States Nuclear Regulatory Commission (US NRC), and the Pennsylvania State University (PSU) consists of three exercises, whose combined purpose is to verify the capability of system codes to analyze complex transients with coupled core/plant interactions; to test fully the 3D neutronics/thermal-hydraulic coupling; and to evaluate discrepancies between the predictions of coupled codes in best-estimate transient simulations. Exercise two is intended to test core response to imposed system thermal-hydraulic conditions. For this exercise, the participants are provided with transient boundary conditions and two cross-section libraries. Results are submitted for six steady-state cases and two transient scenarios. The boundary conditions, the details for each case, and the output requested are described in the final specifications for the benchmark problem. To fully analyze the data for comparison in the final report, a suite of statistical methods has been developed, to serve as a reference in the absence of experimental data. A corrected arithmetical mean and standard deviation are calculated for all data types: single-value parameters, 1D axial distributions, 2D radial distributions, and time histories. Each participant's deviation from the mean and a corresponding figure-of-merit are reported for the purposes of comparison and discussion. Selected mean values and standard deviations are presented in this paper for several parameters at specific points of interest: for the initial steady-state 2, at hot-full power, radial and axial power distributions are presented, along with effective multiplication factor, power peaking factors, and axial offset. For the snapshot taken at the time of highest return-to-power in transient Scenario 2, parameters presented include axial and radial power
Benchmark experiments for nuclear data
Benchmark experiments offer the most direct method for validation of nuclear data. Benchmark experiments for several areas of application of nuclear data were specified by CSEWG. These experiments are surveyed and tests of recent versions of ENDF/B are presented. (U.S.)
Claudio Ferone
2014-08-01
Full Text Available Efficient systems for high performance buildings are required to improve the integration of renewable energy sources and to reduce primary energy consumption from fossil fuels. This paper is focused on sensible heat thermal energy storage (SHTES systems using solid media and numerical simulation of their transient behavior using the finite element method (FEM. Unlike other papers in the literature, the numerical model and simulation approach has simultaneously taken into consideration various aspects: thermal properties at high temperature, the actual geometry of the repeated storage element and the actual storage cycle adopted. High-performance thermal storage materials from the literatures have been tested and used here as reference benchmarks. Other materials tested are lightweight concretes with recycled aggregates and a geopolymer concrete. Their thermal properties have been measured and used as inputs in the numerical model to preliminarily evaluate their application in thermal storage. The analysis carried out can also be used to optimize the storage system, in terms of thermal properties required to the storage material. The results showed a significant influence of the thermal properties on the performances of the storage elements. Simulation results have provided information for further scale-up from a single differential storage element to the entire module as a function of material thermal properties.
The track length estimator (TLE) method, an 'on-the-fly' fluence tally in Monte Carlo (MC) simulations, recently implemented in GATE 6.2, is known as a powerful tool to accelerate dose calculations in the domain of low-energy X-ray irradiations using the kerma approximation. Overall efficiency gains of the TLE with respect to analogous MC were reported in the literature for regions of interest in various applications (photon beam radiation therapy, X-ray imaging). The behaviour of the TLE method in terms of statistical properties, dose deposition patterns, and computational efficiency compared to analogous MC simulations was investigated. The statistical properties of the dose deposition were first assessed. Derivations of the variance reduction factor of TLE versus analogous MC were carried out, starting from the expression of the dose estimate variance in the TLE and analogous MC schemes. Two test cases were chosen to benchmark the TLE performance in comparison with analogous MC: (i) a small animal irradiation under stereotactic synchrotron radiation therapy conditions and (ii) the irradiation of a human pelvis during a cone beam computed tomography acquisition. Dose distribution patterns and efficiency gain maps were analysed. The efficiency gain exhibits strong variations within a given irradiation case, depending on the geometrical (voxel size, ballistics) and physical (material and beam properties) parameters on the voxel scale. Typical values lie between 10 and 103, with lower levels in dense regions (bone) outside the irradiated channels (scattered dose only), and higher levels in soft tissues directly exposed to the beams.
Baldacci, F.; Delaire, F.; Letang, J.M.; Sarrut, D.; Smekens, F.; Freud, N. [Lyon-1 Univ. - CREATIS, CNRS UMR5220, Inserm U1044, INSA-Lyon, Centre Leon Berard (France); Mittone, A.; Coan, P. [LMU Munich (Germany). Dept. of Physics; LMU Munich (Germany). Faculty of Medicine; Bravin, A.; Ferrero, C. [European Synchrotron Radiation Facility, Grenoble (France); Gasilov, S. [LMU Munich (Germany). Dept. of Physics
2015-05-01
The track length estimator (TLE) method, an 'on-the-fly' fluence tally in Monte Carlo (MC) simulations, recently implemented in GATE 6.2, is known as a powerful tool to accelerate dose calculations in the domain of low-energy X-ray irradiations using the kerma approximation. Overall efficiency gains of the TLE with respect to analogous MC were reported in the literature for regions of interest in various applications (photon beam radiation therapy, X-ray imaging). The behaviour of the TLE method in terms of statistical properties, dose deposition patterns, and computational efficiency compared to analogous MC simulations was investigated. The statistical properties of the dose deposition were first assessed. Derivations of the variance reduction factor of TLE versus analogous MC were carried out, starting from the expression of the dose estimate variance in the TLE and analogous MC schemes. Two test cases were chosen to benchmark the TLE performance in comparison with analogous MC: (i) a small animal irradiation under stereotactic synchrotron radiation therapy conditions and (ii) the irradiation of a human pelvis during a cone beam computed tomography acquisition. Dose distribution patterns and efficiency gain maps were analysed. The efficiency gain exhibits strong variations within a given irradiation case, depending on the geometrical (voxel size, ballistics) and physical (material and beam properties) parameters on the voxel scale. Typical values lie between 10 and 103, with lower levels in dense regions (bone) outside the irradiated channels (scattered dose only), and higher levels in soft tissues directly exposed to the beams.
Baldacci, F; Mittone, A; Bravin, A; Coan, P; Delaire, F; Ferrero, C; Gasilov, S; Létang, J M; Sarrut, D; Smekens, F; Freud, N
2015-03-01
The track length estimator (TLE) method, an "on-the-fly" fluence tally in Monte Carlo (MC) simulations, recently implemented in GATE 6.2, is known as a powerful tool to accelerate dose calculations in the domain of low-energy X-ray irradiations using the kerma approximation. Overall efficiency gains of the TLE with respect to analogous MC were reported in the literature for regions of interest in various applications (photon beam radiation therapy, X-ray imaging). The behaviour of the TLE method in terms of statistical properties, dose deposition patterns, and computational efficiency compared to analogous MC simulations was investigated. The statistical properties of the dose deposition were first assessed. Derivations of the variance reduction factor of TLE versus analogous MC were carried out, starting from the expression of the dose estimate variance in the TLE and analogous MC schemes. Two test cases were chosen to benchmark the TLE performance in comparison with analogous MC: (i) a small animal irradiation under stereotactic synchrotron radiation therapy conditions and (ii) the irradiation of a human pelvis during a cone beam computed tomography acquisition. Dose distribution patterns and efficiency gain maps were analysed. The efficiency gain exhibits strong variations within a given irradiation case, depending on the geometrical (voxel size, ballistics) and physical (material and beam properties) parameters on the voxel scale. Typical values lie between 10 and 10(3), with lower levels in dense regions (bone) outside the irradiated channels (scattered dose only), and higher levels in soft tissues directly exposed to the beams. PMID:24973309
The meeting of the Radiation Energy Spectra Unfolding Workshop organized by the Radiation Shielding Information Center is discussed. The plans of the unfolding code benchmarking effort to establish methods of standardization for both the few channel neutron and many channel gamma-ray and neutron spectroscopy problems are presented
During the past several years, near-infrared (near-IR/NIR) spectroscopy has increasingly been adopted as an analytical tool in various fields from petroleum to biomedical sectors. The NIR spectrum (above 4000 cm-1) of a sample is typically measured by modern instruments at a few hundred of wavelengths. Recently, considerable effort has been directed towards developing procedures to identify variables (wavelengths) that contribute useful information. Variable selection (VS) or feature selection, also called frequency selection or wavelength selection, is a critical step in data analysis for vibrational spectroscopy (infrared, Raman, or NIRS). In this paper, we compare the performance of 16 different feature selection methods for the prediction of properties of biodiesel fuel, including density, viscosity, methanol content, and water concentration. The feature selection algorithms tested include stepwise multiple linear regression (MLR-step), interval partial least squares regression (iPLS), backward iPLS (BiPLS), forward iPLS (FiPLS), moving window partial least squares regression (MWPLS), (modified) changeable size moving window partial least squares (CSMWPLS/MCSMWPLSR), searching combination moving window partial least squares (SCMWPLS), successive projections algorithm (SPA), uninformative variable elimination (UVE, including UVE-SPA), simulated annealing (SA), back-propagation artificial neural networks (BP-ANN), Kohonen artificial neural network (K-ANN), and genetic algorithms (GAs, including GA-iPLS). Two linear techniques for calibration model building, namely multiple linear regression (MLR) and partial least squares regression/projection to latent structures (PLS/PLSR), are used for the evaluation of biofuel properties. A comparison with a non-linear calibration model, artificial neural networks (ANN-MLP), is also provided. Discussion of gasoline, ethanol-gasoline (bioethanol), and diesel fuel data is presented. The results of other spectroscopic techniques
Absorbed dose determination in photon fields using the tandem method
The purpose of this work is to develop an alternative method to determine the absorbed dose and effective energy of photons with unknown spectral distributions. It includes a 'tandem' system that consists of two thermoluminescent dosemeters with different energetic dependence. LiF: Mg, Ti, CaF2: Dy thermoluminescent dosemeters and a Harshaw 3500 reading system are employed. Dosemeters are characterized with 90Sr-90Y, calibrated with the energy of 60Co and irradiated with seven different qualities of x-ray beams, suggested by ANSI No. 13 and ISO 4037. The answers of each type of dosemeter are adjusted to a function that depends on the effective energy of photons. The adjustment is carried out by means of the Rosenbrock minimization algorithm. The mathematical model used for this function includes five parameters and has a gauss and a straight line. Results show that the analytical functions reproduce the experimental data of the answers, with a margin of error of less than 5%. The reason of the answers of the CaF2: Dy and LiF: Mg, Ti, according to the energy of the radiation, allows us to establish the effective energy of photons and the absorbed dose, with a margin of error of less than 10% and 20% respectively
Selecting benchmarks for reactor calculations
Criticality, reactor physics, fusion and shielding benchmarks are expected to play important roles in GENIV design, safety analysis and in the validation of analytical tools used to design these reactors. For existing reactor technology, benchmarks are used to validate computer codes and test nuclear data libraries. However the selection of these benchmarks are usually done by visual inspection which is dependent on the expertise and the experience of the user and thereby resulting in a user bias in the process. In this paper we present a method for the selection of these benchmarks for reactor applications and uncertainty reduction based on Total Monte Carlo (TMC) method. Similarities between an application case and one or several benchmarks are quantified using the correlation coefficient. Based on the method, we also propose two approaches for reducing nuclear data uncertainty using integral benchmark experiments as an additional constrain in the TMC method: a binary accept/reject method and a method of uncertainty reduction using weights. Finally, the methods were applied to a full Lead Fast Reactor core and a set of criticality benchmarks. (author)
Siregar, S.; Pouw, M E; Moons, K G M; Versteegh, M. I. M.; Bots, M. L.; van der Graaf, Y; Kalkman, C.J.; van Herwerden, L.A.; Groenwold, R. H. H.
2013-01-01
Objective To compare the accuracy of data from hospital administration databases and a national clinical cardiac surgery database and to compare the performance of the Dutch hospital standardised mortality ratio (HSMR) method and the logistic European System for Cardiac Operative Risk Evaluation, for the purpose of benchmarking of mortality across hospitals. Methods Information on all patients undergoing cardiac surgery between 1 January 2007 and 31 December 2010 in 10 centres was extracted f...
Kohno, Ryosuke; Hotta, Kenji; Matsuura, Taeko; Matsubara, Kana; Nishioka, Shie; Nishio, Teiji; Kawashima, Mitsuhiko; Ogino, Takashi
2011-01-01
We experimentally evaluated the proton beam dose reproducibility, sensitivity, angular dependence and depth-dose relationships for a new Metal Oxide Semiconductor Field Effect Transistor (MOSFET) detector. The detector was fabricated with a thinner oxide layer and was operated at high-bias voltages. In order to accurately measure dose distributions, we developed a practical method for correcting the MOSFET response to proton beams. The detector was tested by examining lateral dose profiles formed by protons passing through an L-shaped bolus. The dose reproducibility, angular dependence and depth-dose response were evaluated using a 190 MeV proton beam. Depth-output curves produced using the MOSFET detectors were compared with results obtained using an ionization chamber (IC). Since accurate measurements of proton dose distribution require correction for LET effects, we developed a simple dose-weighted correction method. The correction factors were determined as a function of proton penetration depth, or residual range. The residual proton range at each measurement point was calculated using the pencil beam algorithm. Lateral measurements in a phantom were obtained for pristine and SOBP beams. The reproducibility of the MOSFET detector was within 2%, and the angular dependence was less than 9%. The detector exhibited a good response at the Bragg peak (0.74 relative to the IC detector). For dose distributions resulting from protons passing through an L-shaped bolus, the corrected MOSFET dose agreed well with the IC results. Absolute proton dosimetry can be performed using MOSFET detectors to a precision of about 3% (1 sigma). A thinner oxide layer thickness improved the LET in proton dosimetry. By employing correction methods for LET dependence, it is possible to measure absolute proton dose using MOSFET detectors. PMID:21587191
The variance-covariance method: Microdosimetry in time-varying low dose-rate radiation fields
Breckow, Joachim; Wenning, A.; Roos, H; Kellerer, Albrecht M.
1988-01-01
The variance-covariance method is employed at low doses and in radiation fields of low dose rates from an241Am (4 nGy/s) and a90Sr (300 nGy/s) source. The preliminary applications and results illustrate some of the potential of the method, and show that the dose average of lineal energy or energy imparted can be determined over a wide range of doses and dose rates. The dose averages obtained with the variance-covariance method in time-varying fields, for which the conventional variance method...
A benchmark test on common cause failures (CCF) was performed giving interested institutions in Germany the opportunity of demonstrating and justifying their interpretations of events, their methods and models for analyzed CCF. The participants of this benchmark test belonged to expert and consultant organisations and to industrial institutions. The task for the benchmark test was to analyze two typical groups of motor-operated valves in German nuclear power plants. The benchmark test was carried out in two steps. In the first step the participants were to assess in a qualitative way some 200 event-reports on isolation valves. They then were to establish, quantitatively, the reliability parameters for the CCF in the two groups of motor-operated valves using their own methods and their own calculation models. In a second step the reliability parameters were to be recalculated on the basis of a common reference of well defined events, chosen from all given events, in order to analyze the influence of the calculation models on the reliability parameters. (orig.)
Purpose: To investigate the correlation of size-specific dose estimate (SSDE) with absorbed organ dose, and to develop a simple methodology for estimating patient organ dose in a pediatric population (5–55 kg). Methods: Four physical anthropomorphic phantoms representing a range of pediatric body habitus were scanned with metal oxide semiconductor field effect transistor (MOSFET) dosimeters placed at 23 organ locations to determine absolute organ dose. Phantom absolute organ dose was divided by phantom SSDE to determine correlation between organ dose and SSDE. Organ dose correlation factors (CFSSDEorgan) were then multiplied by patient-specific SSDE to estimate patient organ dose. The CFSSDEorgan were used to retrospectively estimate individual organ doses from 352 chest and 241 abdominopelvic pediatric CT examinations, where mean patient weight was 22 kg ± 15 (range 5–55 kg), and mean patient age was 6 yrs ± 5 (range 4 months to 23 yrs). Patient organ dose estimates were compared to published pediatric Monte Carlo study results. Results: Phantom effective diameters were matched with patient population effective diameters to within 4 cm; thus, showing appropriate scalability of the phantoms across the entire pediatric population in this study. IndividualCFSSDEorgan were determined for a total of 23 organs in the chest and abdominopelvic region across nine weight subcategories. For organs fully covered by the scan volume, correlation in the chest (average 1.1; range 0.7–1.4) and abdominopelvic region (average 0.9; range 0.7–1.3) was near unity. For organ/tissue that extended beyond the scan volume (i.e., skin, bone marrow, and bone surface), correlation was determined to be poor (average 0.3; range: 0.1–0.4) for both the chest and abdominopelvic regions, respectively. A means to estimate patient organ dose was demonstrated. Calculated patient organ dose, using patient SSDE and CFSSDEorgan, was compared to previously published pediatric patient doses that
One of the chief sources of uncertainty in the comparison of patient dosimetry data is the influence of patient size on dose. Dose has been shown to relate closely to the equivalent diameter of the patient. This concept has been used to derive a prospective, phantom based method for determining size correction factors for measurements of dose-area product. The derivation of the size correction factor has been demonstrated mathematically, and the appropriate factor determined for a number of different X-ray sets. The use of phantom measurements enables the effect of patient size to be isolated from other factors influencing patient dose. The derived factors agree well with those determined retrospectively from patient dose survey data. Size correction factors have been applied to the results of a large scale patient dose survey, and this approach has been compared with the method of selecting patients according to their weight. For large samples of data, mean dose-area product values are independent of the analysis method used. The chief advantage of using size correction factors is that it allows all patient data to be included in a survey, whereas patient selection has been shown to exclude approximately half of all patients. (author)
The aim of the present work is to compare and discuss the three of the most advanced two dimensional transport methods, the finite difference and nodal discrete ordinates and surface flux method, incorporated into the transport codes TWODANT, TWOTRAN-NODAL, MULTIMEDIUM and SURCU. For intercomparison the eigenvalue and the neutron flux distribution are calculated using these codes in the LWR pool reactor benchmark problem. Additionally the results are compared with some results obtained by French collision probability transport codes MARSYAS and TRIDENT. Because the transport solution of this benchmark problem is close to its diffusion solution some results obtained by the finite element diffusion code FINELM and the finite difference diffusion code DIFF-2D are included
Systems loaded with plutonium in the form of mixed-oxide (MOX) fuel show somewhat different neutronic characteristics compared with those using conventional uranium fuels. In order to maintain adequate safety standards, it is essential to accurately predict the characteristics of MOX-fuelled systems and to further validate both the nuclear data and the computation methods used. A computation benchmark on power distribution within fuel assemblies to compare different techniques used in production codes for fine flux prediction in systems partially loaded with MOX fuel was carried out at an international level. It addressed first the numerical schemes for pin power reconstruction, then investigated the global performance including cross-section data reduction methods. This report provides the detailed results of this second phase of the benchmark. The analysis of the results revealed that basic data still need to be improved, primarily for higher plutonium isotopes and minor actinides. (author)
Kesharwani, Manoj K; Karton, Amir; Martin, Jan M L
2016-01-12
The relative energies of the YMPJ conformer database of the 20 proteinogenic amino acids, with N- and C-termination, have been re-evaluated using explicitly correlated coupled cluster methods. Lower-cost ab initio methods such as MP2-F12 and CCSD-F12b actually are outperformed by double-hybrid DFT functionals; in particular, the DSD-PBEP86-NL double hybrid performs well enough to serve as a secondary standard. Among range-separated hybrids, ωB97X-V performs well, while B3LYP-D3BJ does surprisingly well among traditional DFT functionals. Treatment of dispersion is important for the DFT functionals; for the YMPJ set, D3BJ generally works as well as the NL nonlocal dispersion functional. Basis set sensitivity for DFT calculations on these conformers is weak enough that def2-TZVP is generally adequate. For conformer corrections to heats of formation, B3LYP-D3BJ and especially DSD-PBEP86-D3BJ or DSD-PBEP86-NL are adequate for all but the most exacting applications. The revised geometries and energetics for the YMPJ database have been made available as Supporting Information and should be useful in the parametrization and validation of molecular mechanics force fields and other low-cost methods. The very recent dRPA75 method yields good performance, without resorting to an empirical dispersion correction, but is still outperformed by DSD-PBEP86-D3BJ and particularly DSD-PBEP86-NL. Core-valence corrections are comparable in importance to improvements beyond CCSD(T*)/cc-pVDZ-F12 in the valence treatment. PMID:26653705
Dose conversion factors for radiation doses at normal operation discharges. F. Methods report
A study has been performed in order to develop and extend existing models for dose estimations at emissions of radioactive substances from nuclear facilities in Sweden. This report gives a review of the different exposure pathways that have been considered in the study. Radioecological data that should be used in calculations of radiation doses are based on the actual situation at the nuclear sites. Dose factors for children have been split in different age groups. The exposure pathways have been carefully re-examined, like the radioecological data; leading to some new pathways (e.g. doses from consumption of forest berries, mushrooms and game) for cesium and strontium. Carbon 14 was given a special treatment by using a model for uptake of carbon by growing plants. For exposure from aquatic emissions, a simplification was done by focussing on the territory for fish species, since consumption of fish is the most important pathway
Benchmarking and Performance Management
Adrian TANTAU
2010-12-01
Full Text Available The relevance of the chosen topic is explained by the meaning of the firm efficiency concept - the firm efficiency means the revealed performance (how well the firm performs in the actual market environment given the basic characteristics of the firms and their markets that are expected to drive their profitability (firm size, market power etc.. This complex and relative performance could be due to such things as product innovation, management quality, work organization, some other factors can be a cause even if they are not directly observed by the researcher. The critical need for the management individuals/group to continuously improve their firm/company’s efficiency and effectiveness, the need for the managers to know which are the success factors and the competitiveness determinants determine consequently, what performance measures are most critical in determining their firm’s overall success. Benchmarking, when done properly, can accurately identify both successful companies and the underlying reasons for their success. Innovation and benchmarking firm level performance are critical interdependent activities. Firm level variables, used to infer performance, are often interdependent due to operational reasons. Hence, the managers need to take the dependencies among these variables into account when forecasting and benchmarking performance. This paper studies firm level performance using financial ratio and other type of profitability measures. It uses econometric models to describe and then propose a method to forecast and benchmark performance.
Absorbed dose determination in photon fields using the tandem method
Marques-Pachas, J F
1999-01-01
The purpose of this work is to develop an alternative method to determine the absorbed dose and effective energy of photons with unknown spectral distributions. It includes a 'tandem' system that consists of two thermoluminescent dosemeters with different energetic dependence. LiF: Mg, Ti, CaF sub 2 : Dy thermoluminescent dosemeters and a Harshaw 3500 reading system are employed. Dosemeters are characterized with sup 9 sup 0 Sr- sup 9 sup 0 Y, calibrated with the energy of sup 6 sup 0 Co and irradiated with seven different qualities of x-ray beams, suggested by ANSI No. 13 and ISO 4037. The answers of each type of dosemeter are adjusted to a function that depends on the effective energy of photons. The adjustment is carried out by means of the Rosenbrock minimization algorithm. The mathematical model used for this function includes five parameters and has a gauss and a straight line. Results show that the analytical functions reproduce the experimental data of the answers, with a margin of error of less than ...
Purpose/Objective: The potential for on-line error detection using electronic portal images (EPIs) has stimulated the investigation of computer-based methods for matching portal images with reference or 'gold standard' images. The lack of absolute truth for clinical images is a major obstacle to the evaluation of these methods. The purpose of this investigation was to create a set of realistic test EPIs with known setup errors for use as a benchmark for evaluation and intercomparison of computer-based methods, including automatic and user-guided techniques, for EPI analysis. Materials and Methods: Digitally reconstructed electronic portal images (DREPIs) were computed using the visible male CT data set from the National Library of Medicine (NLM). (DREPIs are computed using high energy attenuation coefficients to simulate megavoltage images.) The NLM CT data set comprises 512x512x1 mm contiguous slices from the tip of the head to below the knees. The subject was frozen and scanned very soon after non-traumatizing death, and thus the visualized anatomy closely resembles that of a living person, but without breathing and other motion artifacts. Also since dose was not a consideration the signal-to-noise ratio is higher compared with typical 1 mm slices obtained on a living person. Because of the quality of the CT data, the quality of the DREPIs had to be degraded, and modified in other ways, to create realistic test cases. Modifications included: 1) contrast histogram matching to actual EPIs, 2) addition of structured noise by blending an 'open field' EPI image with the DREPI, 3) addition of random unstructured noise, and 4) Gaussian blurring to simulate patient motion and head scatter effects. (It is important to note that there is no standard appearance or quality for EPIs. The appearance of EPIs is quite variable, especially across EPIDs from different manufacturers. Even for a given system, EPIs are quite sensitive to system calibration and acquisition parameters